diff --git a/src/Doc/Isar_Ref/Generic.thy b/src/Doc/Isar_Ref/Generic.thy --- a/src/Doc/Isar_Ref/Generic.thy +++ b/src/Doc/Isar_Ref/Generic.thy @@ -1,1827 +1,1827 @@ (*:maxLineLen=78:*) theory Generic imports Main Base begin chapter \Generic tools and packages \label{ch:gen-tools}\ section \Configuration options \label{sec:config}\ text \ Isabelle/Pure maintains a record of named configuration options within the theory or proof context, with values of type \<^ML_type>\bool\, \<^ML_type>\int\, \<^ML_type>\real\, or \<^ML_type>\string\. Tools may declare options in ML, and then refer to these values (relative to the context). Thus global reference variables are easily avoided. The user may change the value of a configuration option by means of an associated attribute of the same name. This form of context declaration works particularly well with commands such as @{command "declare"} or @{command "using"} like this: \ (*<*)experiment begin(*>*) declare [[show_main_goal = false]] notepad begin note [[show_main_goal = true]] end (*<*)end(*>*) text \ \begin{matharray}{rcll} @{command_def "print_options"} & : & \context \\ \\ \end{matharray} \<^rail>\ @@{command print_options} ('!'?) ; @{syntax name} ('=' ('true' | 'false' | @{syntax int} | @{syntax float} | @{syntax name}))? \ \<^descr> @{command "print_options"} prints the available configuration options, with names, types, and current values; the ``\!\'' option indicates extra verbosity. \<^descr> \name = value\ as an attribute expression modifies the named option, with the syntax of the value depending on the option's type. For \<^ML_type>\bool\ the default value is \true\. Any attempt to change a global option in a local context is ignored. \ section \Basic proof tools\ subsection \Miscellaneous methods and attributes \label{sec:misc-meth-att}\ text \ \begin{matharray}{rcl} @{method_def unfold} & : & \method\ \\ @{method_def fold} & : & \method\ \\ @{method_def insert} & : & \method\ \\[0.5ex] @{method_def erule}\\<^sup>*\ & : & \method\ \\ @{method_def drule}\\<^sup>*\ & : & \method\ \\ @{method_def frule}\\<^sup>*\ & : & \method\ \\ @{method_def intro} & : & \method\ \\ @{method_def elim} & : & \method\ \\ @{method_def fail} & : & \method\ \\ @{method_def succeed} & : & \method\ \\ @{method_def sleep} & : & \method\ \\ \end{matharray} \<^rail>\ (@@{method fold} | @@{method unfold} | @@{method insert}) @{syntax thms} ; (@@{method erule} | @@{method drule} | @@{method frule}) ('(' @{syntax nat} ')')? @{syntax thms} ; (@@{method intro} | @@{method elim}) @{syntax thms}? ; @@{method sleep} @{syntax real} \ \<^descr> @{method unfold}~\a\<^sub>1 \ a\<^sub>n\ and @{method fold}~\a\<^sub>1 \ a\<^sub>n\ expand (or fold back) the given definitions throughout all goals; any chained facts provided are inserted into the goal and subject to rewriting as well. Unfolding works in two stages: first, the given equations are used directly for rewriting; second, the equations are passed through the attribute @{attribute_ref abs_def} before rewriting --- to ensure that definitions are fully expanded, regardless of the actual parameters that are provided. \<^descr> @{method insert}~\a\<^sub>1 \ a\<^sub>n\ inserts theorems as facts into all goals of the proof state. Note that current facts indicated for forward chaining are ignored. 
\<^descr> @{method erule}~\a\<^sub>1 \ a\<^sub>n\, @{method drule}~\a\<^sub>1 \ a\<^sub>n\, and @{method frule}~\a\<^sub>1 \ a\<^sub>n\ are similar to the basic @{method rule} method (see \secref{sec:pure-meth-att}), but apply rules by elim-resolution, destruct-resolution, and forward-resolution, respectively @{cite "isabelle-implementation"}. The optional natural number argument (default 0) specifies additional assumption steps to be performed here. Note that these methods are improper ones, mainly serving for experimentation and tactic script emulation. Different modes of basic rule application are usually expressed in Isar at the proof language level, rather than via implicit proof state manipulations. For example, a proper single-step elimination would be done using the plain @{method rule} method, with forward chaining of current facts.
\<^descr> @{method intro} and @{method elim} repeatedly refine some goal by intro- or elim-resolution, after having inserted any chained facts. Exactly the rules given as arguments are taken into account; this allows fine-tuned decomposition of a proof problem, in contrast to common automated tools.
\<^descr> @{method fail} yields an empty result sequence; it is the identity of the ``\|\'' method combinator (cf.\ \secref{sec:proof-meth}).
\<^descr> @{method succeed} yields a single (unchanged) result; it is the identity of the ``\,\'' method combinator (cf.\ \secref{sec:proof-meth}).
\<^descr> @{method sleep}~\s\ succeeds after a real-time delay of \s\ seconds. This is occasionally useful for demonstration and testing purposes.
\begin{matharray}{rcl} @{attribute_def tagged} & : & \attribute\ \\ @{attribute_def untagged} & : & \attribute\ \\[0.5ex] @{attribute_def THEN} & : & \attribute\ \\ @{attribute_def unfolded} & : & \attribute\ \\ @{attribute_def folded} & : & \attribute\ \\ @{attribute_def abs_def} & : & \attribute\ \\[0.5ex] @{attribute_def rotated} & : & \attribute\ \\ @{attribute_def (Pure) elim_format} & : & \attribute\ \\ @{attribute_def no_vars}\\<^sup>*\ & : & \attribute\ \\ \end{matharray}
\<^rail>\ @@{attribute tagged} @{syntax name} @{syntax name} ; @@{attribute untagged} @{syntax name} ; @@{attribute THEN} ('[' @{syntax nat} ']')? @{syntax thm} ; (@@{attribute unfolded} | @@{attribute folded}) @{syntax thms} ; @@{attribute rotated} @{syntax int}? \
\<^descr> @{attribute tagged}~\name value\ and @{attribute untagged}~\name\ add and remove \<^emph>\tags\ of some theorem. Tags may be any list of string pairs that serve as formal comment. The first string is considered the tag name, the second its value. Note that @{attribute untagged} removes any tags of the same name.
\<^descr> @{attribute THEN}~\a\ composes rules by resolution; it resolves with the first premise of \a\ (an alternative position may also be specified). See also \<^ML_op>\RS\ in @{cite "isabelle-implementation"}.
\<^descr> @{attribute unfolded}~\a\<^sub>1 \ a\<^sub>n\ and @{attribute folded}~\a\<^sub>1 \ a\<^sub>n\ expand and fold back again the given definitions throughout a rule.
\<^descr> @{attribute abs_def} turns an equation of the form \<^prop>\f x y \ t\ into \<^prop>\f \ \x y. t\, which ensures that @{method simp} steps always expand it. This also works for object-logic equality.
\<^descr> @{attribute rotated}~\n\ rotates the premises of a theorem by \n\ (default 1).
\<^descr> @{attribute (Pure) elim_format} turns a destruction rule into elimination rule format, by resolving with the rule \<^prop>\PROP A \ (PROP A \ PROP B) \ PROP B\.
Note that the Classical Reasoner (\secref{sec:classical}) provides its own version of this operation.
\<^descr> @{attribute no_vars} replaces schematic variables by free ones; this is mainly for tuning output of pretty printed theorems. \
subsection \Low-level equational reasoning\
text \ \begin{matharray}{rcl} @{method_def subst} & : & \method\ \\ @{method_def hypsubst} & : & \method\ \\ @{method_def split} & : & \method\ \\ \end{matharray}
\<^rail>\ @@{method subst} ('(' 'asm' ')')? \ ('(' (@{syntax nat}+) ')')? @{syntax thm} ; @@{method split} @{syntax thms} \
These methods provide low-level facilities for equational reasoning that are intended for specialized applications only. Normally, single step calculations would be performed in a structured text (see also \secref{sec:calculation}), while the Simplifier methods provide the canonical way for automated normalization (see \secref{sec:simplifier}).
\<^descr> @{method subst}~\eq\ performs a single substitution step using rule \eq\, which may be either a meta or object equality.
\<^descr> @{method subst}~\(asm) eq\ substitutes in an assumption.
\<^descr> @{method subst}~\(i \ j) eq\ performs several substitutions in the conclusion. The numbers \i\ to \j\ indicate the positions to substitute at. Positions are ordered from the top of the term tree moving down from left to right. For example, in \(a + b) + (c + d)\ there are three positions where commutativity of \+\ is applicable: 1 refers to \a + b\, 2 to the whole term, and 3 to \c + d\. If the positions in the list \(i \ j)\ are non-overlapping (e.g.\ \(2 3)\ in \(a + b) + (c + d)\) you may assume all substitutions are performed simultaneously. Otherwise the behaviour of \subst\ is not specified.
\<^descr> @{method subst}~\(asm) (i \ j) eq\ performs the substitutions in the assumptions. The positions refer to the assumptions in order from left to right. For example, given a goal of the form \P (a + b) \ P (c + d) \ \\, position 1 of commutativity of \+\ is the subterm \a + b\ and position 2 is the subterm \c + d\.
\<^descr> @{method hypsubst} performs substitution using some assumption; this only works for equations of the form \x = t\ where \x\ is a free or bound variable.
\<^descr> @{method split}~\a\<^sub>1 \ a\<^sub>n\ performs single-step case splitting using the given rules. Splitting is performed in the conclusion or some assumption of the subgoal, depending on the structure of the rule. Note that the @{method simp} method already involves repeated application of split rules as declared in the current context, using @{attribute split}, for example. \
section \The Simplifier \label{sec:simplifier}\
text \ The Simplifier performs conditional and unconditional rewriting and uses contextual information: rule declarations in the background theory or local proof context are taken into account, as well as chained facts and subgoal premises (``local assumptions''). There are several general hooks that allow one to modify the simplification strategy, or incorporate other proof tools that solve sub-problems, produce rewrite rules on demand etc. The rewriting strategy is always strictly bottom up, except for congruence rules, which are applied while descending into a term. Conditions in conditional rewrite rules are solved recursively before the rewrite rule is applied. The default Simplifier setup of major object logics (HOL, HOLCF, FOL, ZF) makes the Simplifier ready for immediate use, without engaging with the internal structures.
Thus it serves as a general-purpose proof tool with the main focus on equational reasoning, and a bit more than that. \
subsection \Simplification methods \label{sec:simp-meth}\
text \ \begin{tabular}{rcll} @{method_def simp} & : & \method\ \\ @{method_def simp_all} & : & \method\ \\ \Pure.\@{method_def (Pure) simp} & : & \method\ \\ \Pure.\@{method_def (Pure) simp_all} & : & \method\ \\ @{attribute_def simp_depth_limit} & : & \attribute\ & default \100\ \\ \end{tabular}
\<^medskip>
\<^rail>\ (@@{method simp} | @@{method simp_all}) opt? (@{syntax simpmod} * ) ; opt: '(' ('no_asm' | 'no_asm_simp' | 'no_asm_use' | 'asm_lr' ) ')' ; @{syntax_def simpmod}: ('add' | 'del' | 'flip' | 'only' | 'split' (() | '!' | 'del') | 'cong' (() | 'add' | 'del')) ':' @{syntax thms} \
\<^descr> @{method simp} invokes the Simplifier on the first subgoal, after inserting chained facts as additional goal premises; further rule declarations may be included via \(simp add: facts)\. The proof method fails if the subgoal remains unchanged after simplification. Note that the original goal premises and chained facts are subject to simplification themselves, while declarations via \add\/\del\ merely follow the policies of the object-logic to extract rewrite rules from theorems, without further simplification. This may lead to slightly different behavior in either case, which might be required precisely like that in some boundary situations to perform the intended simplification step!
\<^medskip> Modifier \flip\ deletes the following theorems from the simpset and adds their symmetric version (i.e.\ lhs and rhs exchanged). No warning is shown if the original theorem was not present.
\<^medskip> The \only\ modifier first removes all other rewrite rules, looper tactics (including split rules), congruence rules, and then behaves like \add\. Implicit solvers remain, which means that trivial rules like reflexivity or introduction of \True\ are available to solve the simplified subgoals, but also non-trivial tools like linear arithmetic in HOL. The latter may lead to some surprise about the meaning of ``only'' in Isabelle/HOL compared to English!
\<^medskip> The \split\ modifiers add or delete rules for the Splitter (see also \secref{sec:simp-strategies} on the looper). This works only if the Simplifier method has been properly set up to include the Splitter (all major object logics such as HOL, HOLCF, FOL, ZF do this already). The \!\ option causes the split rules to be used aggressively: after each application of a split rule in the conclusion, the \safe\ tactic of the classical reasoner (see \secref{sec:classical:partial}) is applied to the new goal. The net effect is that the goal is split into the different cases. This option can speed up simplification of goals with many nested conditional or case expressions significantly. There is also a separate @{method_ref split} method available for single-step case splitting. The effect of repeatedly applying \(split thms)\ can be imitated by ``\(simp only: split: thms)\''.
\<^medskip> The \cong\ modifiers add or delete Simplifier congruence rules (see also \secref{sec:simp-rules}); the default is to add.
\<^descr> @{method simp_all} is similar to @{method simp}, but acts on all goals, working backwards from the last to the first one as usual in Isabelle.\<^footnote>\The order is irrelevant for goals without schematic variables, so simplification might actually be performed in parallel here.\ Chained facts are inserted into all subgoals, before the simplification process starts.
Further rule declarations are the same as for @{method simp}. The proof method fails if all subgoals remain unchanged after simplification.
\<^descr> @{attribute simp_depth_limit} limits the number of recursive invocations of the Simplifier during conditional rewriting.
By default the Simplifier methods above take local assumptions fully into account, using equational assumptions in the subsequent normalization process, or simplifying assumptions themselves. Further options allow one to fine-tune the behavior of the Simplifier in this respect, corresponding to a variety of ML tactics as follows.\<^footnote>\Unlike the corresponding Isar proof methods, the ML tactics do not insist on changing the goal state.\
\begin{center} \small \begin{tabular}{|l|l|p{0.3\textwidth}|} \hline Isar method & ML tactic & behavior \\\hline \(simp (no_asm))\ & \<^ML>\simp_tac\ & assumptions are ignored completely \\\hline \(simp (no_asm_simp))\ & \<^ML>\asm_simp_tac\ & assumptions are used in the simplification of the conclusion but are not themselves simplified \\\hline \(simp (no_asm_use))\ & \<^ML>\full_simp_tac\ & assumptions are simplified but are not used in the simplification of each other or the conclusion \\\hline \(simp)\ & \<^ML>\asm_full_simp_tac\ & assumptions are used in the simplification of the conclusion and to simplify other assumptions \\\hline \(simp (asm_lr))\ & \<^ML>\asm_lr_simp_tac\ & compatibility mode: an assumption is only used for simplifying assumptions which are to the right of it \\\hline \end{tabular} \end{center}
\<^medskip> In Isabelle/Pure, proof methods @{method (Pure) simp} and @{method (Pure) simp_all} only know about meta-equality \\\. Any new object-logic needs to re-define these methods via \<^ML>\Simplifier.method_setup\ in ML: Isabelle/FOL or Isabelle/HOL may serve as blueprints. \
subsubsection \Examples\
text \ We consider basic algebraic simplifications in Isabelle/HOL. The rather trivial goal \<^prop>\0 + (x + 0) = x + 0 + 0\ looks like a good candidate to be solved by a single call of @{method simp}: \
lemma "0 + (x + 0) = x + 0 + 0" apply simp? oops
text \ The above attempt \<^emph>\fails\, because \<^term>\0\ and \<^term>\(+)\ in the HOL library are declared as generic type class operations, without stating any algebraic laws yet. More specific types are required to get access to certain standard simplifications of the theory context, e.g.\ like this:\
lemma fixes x :: nat shows "0 + (x + 0) = x + 0 + 0" by simp
lemma fixes x :: int shows "0 + (x + 0) = x + 0 + 0" by simp
lemma fixes x :: "'a :: monoid_add" shows "0 + (x + 0) = x + 0 + 0" by simp
text \ \<^medskip> In many cases, assumptions of a subgoal are also needed in the simplification process. For example: \
lemma fixes x :: nat shows "x = 0 \ x + x = 0" by simp
lemma fixes x :: nat assumes "x = 0" shows "x + x = 0" apply simp oops
lemma fixes x :: nat assumes "x = 0" shows "x + x = 0" using assms by simp
text \ As seen above, local assumptions that shall contribute to simplification need to be part of the subgoal already, or indicated explicitly for use by the subsequent method invocation. Both too little and too much information can make simplification fail, for different reasons. In the next example the malicious assumption \<^prop>\\x::nat. f x = g (f (g x))\ does not contribute to solving the problem, but makes the default @{method simp} method loop: the rewrite rule \f ?x \ g (f (g ?x))\ extracted from the assumption does not terminate.
The Simplifier notices certain simple forms of nontermination, but not this one. The problem can be solved nonetheless, by ignoring assumptions via special options as explained before: \
lemma "(\x::nat. f x = g (f (g x))) \ f 0 = f 0 + 0" by (simp (no_asm))
text \ The latter form is typical of long unstructured proof scripts, where the control over the goal content is limited. In structured proofs it is usually better to avoid pushing too many facts into the goal state in the first place. Assumptions in the Isar proof context do not intrude on the reasoning if not used explicitly. This is illustrated for a toplevel statement and a local proof body as follows: \
lemma assumes "\x::nat. f x = g (f (g x))" shows "f 0 = f 0 + 0" by simp
notepad begin assume "\x::nat. f x = g (f (g x))" have "f 0 = f 0 + 0" by simp end
text \ \<^medskip> Because assumptions may simplify each other, there can be very subtle cases of nontermination. For example, the regular @{method simp} method applied to \<^prop>\P (f x) \ y = x \ f x = f y \ Q\ gives rise to the infinite reduction sequence \[ \P (f x)\ \stackrel{\f x \ f y\}{\longmapsto} \P (f y)\ \stackrel{\y \ x\}{\longmapsto} \P (f x)\ \stackrel{\f x \ f y\}{\longmapsto} \cdots \] whereas applying the same to \<^prop>\y = x \ f x = f y \ P (f x) \ Q\ terminates (without solving the goal): \
lemma "y = x \ f x = f y \ P (f x) \ Q" apply simp oops
text \ See also \secref{sec:simp-trace} for options to enable Simplifier trace mode, which often helps to diagnose problems with rewrite systems. \
subsection \Declaring rules \label{sec:simp-rules}\
text \ \begin{matharray}{rcl} @{attribute_def simp} & : & \attribute\ \\ @{attribute_def split} & : & \attribute\ \\ @{attribute_def cong} & : & \attribute\ \\ @{command_def "print_simpset"}\\<^sup>*\ & : & \context \\ \\ \end{matharray}
\<^rail>\ (@@{attribute simp} | @@{attribute cong}) (() | 'add' | 'del') | @@{attribute split} (() | '!' | 'del') ; @@{command print_simpset} ('!'?) \
\<^descr> @{attribute simp} declares rewrite rules, by adding them to or deleting them from the simpset within the theory or proof context. Rewrite rules are theorems expressing some form of equality, for example: \Suc ?m + ?n = ?m + Suc ?n\ \\ \?P \ ?P \ ?P\ \\ \?A \ ?B \ {x. x \ ?A \ x \ ?B}\
\<^medskip> Conditional rewrites such as \?m < ?n \ ?m div ?n = 0\ are also permitted; the conditions can be arbitrary formulas.
\<^medskip> Internally, all rewrite rules are translated into Pure equalities, theorems with conclusion \lhs \ rhs\. The simpset contains a function for extracting equalities from arbitrary theorems, which is usually installed when the object-logic is configured initially. For example, \\ ?x \ {}\ could be turned into \?x \ {} \ False\. Theorems that are declared as @{attribute simp} and local assumptions within a goal are treated uniformly in this respect.
The Simplifier accepts the following formats for the \lhs\ term:
\<^enum> First-order patterns, considering the sublanguage of application of constant operators to variable operands, without \\\-abstractions or functional variables. For example: \(?x + ?y) + ?z \ ?x + (?y + ?z)\ \\ \f (f ?x ?y) ?z \ f ?x (f ?y ?z)\
\<^enum> Higher-order patterns in the sense of @{cite "nipkow-patterns"}. These are terms in \\\-normal form (this will always be the case unless you have done something strange) where each occurrence of an unknown is of the form \?F x\<^sub>1 \ x\<^sub>n\, where the \x\<^sub>i\ are distinct bound variables. For example, \(\x. ?P x \ ?Q x) \ (\x. ?P x) \ (\x.
?Q x)\ or its symmetric form, since the \rhs\ is also a higher-order pattern.
\<^enum> Physical first-order patterns over raw \\\-term structure without \\\\\-equality; abstractions and bound variables are treated like quasi-constant term material. For example, the rule \?f ?x \ range ?f = True\ rewrites the term \g a \ range g\ to \True\, but will fail to match \g (h b) \ range (\x. g (h x))\. However, offending subterms (in our case \?f ?x\, which is not a pattern) can be replaced by adding new variables and conditions like this: \?y = ?f ?x \ ?y \ range ?f = True\ is acceptable as a conditional rewrite rule of the second category since conditions can be arbitrary terms.
\<^descr> @{attribute split} declares case split rules.
\<^descr> @{attribute cong} declares congruence rules to the Simplifier context. Congruence rules are equalities of the form @{text [display] "\ \ f ?x\<^sub>1 \ ?x\<^sub>n = f ?y\<^sub>1 \ ?y\<^sub>n"} This controls the simplification of the arguments of \f\. For example, some arguments can be simplified under additional assumptions: @{text [display] "?P\<^sub>1 \ ?Q\<^sub>1 \ (?Q\<^sub>1 \ ?P\<^sub>2 \ ?Q\<^sub>2) \ (?P\<^sub>1 \ ?P\<^sub>2) \ (?Q\<^sub>1 \ ?Q\<^sub>2)"} Given this rule, the Simplifier assumes \?Q\<^sub>1\ and extracts rewrite rules from it when simplifying \?P\<^sub>2\. Such local assumptions are effective for rewriting formulae such as \x = 0 \ y + x = y\.
%FIXME
%The local assumptions are also provided as theorems to the solver;
%see \secref{sec:simp-solver} below.
\<^medskip> The following congruence rule for bounded quantifiers also supplies contextual information --- about the bound variable: @{text [display] "(?A = ?B) \ (\x. x \ ?B \ ?P x \ ?Q x) \ (\x \ ?A. ?P x) \ (\x \ ?B. ?Q x)"}
\<^medskip> This congruence rule for conditional expressions can supply contextual information for simplifying the arms: @{text [display] "?p = ?q \ (?q \ ?a = ?c) \ (\ ?q \ ?b = ?d) \ (if ?p then ?a else ?b) = (if ?q then ?c else ?d)"}
A congruence rule can also \<^emph>\prevent\ simplification of some arguments. Here is an alternative congruence rule for conditional expressions that conforms to non-strict functional evaluation: @{text [display] "?p = ?q \ (if ?p then ?a else ?b) = (if ?q then ?a else ?b)"}
Only the first argument is simplified; the others remain unchanged. This can make simplification much faster, but may require an extra case split over the condition \?q\ to prove the goal.
\<^descr> @{command "print_simpset"} prints the collection of rules declared to the Simplifier, which is also known as ``simpset'' internally; the ``\!\'' option indicates extra verbosity.
The implicit simpset of the theory context is propagated monotonically through the theory hierarchy: forming a new theory, the union of the simpsets of its imports is taken as starting point. Also note that definitional packages like @{command "datatype"}, @{command "primrec"}, @{command "fun"} routinely declare Simplifier rules to the target context, while plain @{command "definition"} is an exception in \<^emph>\not\ declaring anything.
\<^medskip> It is up to the user to manipulate the current simpset further by explicitly adding or deleting theorems as simplification rules, or installing other tools via simplification procedures (\secref{sec:simproc}). Good simpsets are hard to design. Rules that obviously simplify, like \?n + 0 \ ?n\, are good candidates for the implicit simpset, unless a special non-normalizing behavior of certain operations is intended.
More specific rules (such as distributive laws, which duplicate subterms) should be added only for specific proof steps. Conversely, sometimes a rule needs to be deleted just for some part of a proof. The need for frequent additions or deletions may indicate a poorly designed simpset.
\begin{warn} The union of simpsets from theory imports (as described above) is not always a good starting point for the new theory. If some ancestors have deleted simplification rules because they are no longer wanted, while others have left those rules in, then the union will contain the unwanted rules, which then have to be deleted again in the theory body. \end{warn} \
subsection \Ordered rewriting with permutative rules\
text \ A rewrite rule is \<^emph>\permutative\ if the left-hand side and right-hand side are equal up to renaming of variables. The most common permutative rule is commutativity: \?x + ?y = ?y + ?x\. Other examples include \(?x - ?y) - ?z = (?x - ?z) - ?y\ in arithmetic and \insert ?x (insert ?y ?A) = insert ?y (insert ?x ?A)\ for sets. Such rules are common enough to merit special attention.
Because ordinary rewriting loops given such rules, the Simplifier employs a special strategy, called \<^emph>\ordered rewriting\. Permutative rules are detected and only applied if the rewriting step decreases the redex wrt.\ a given term ordering. For example, commutativity rewrites \b + a\ to \a + b\, but then stops, because the redex cannot be decreased further in the sense of the term ordering.
The default is lexicographic ordering of term structure, but this can also be changed locally for special applications via @{index_ML Simplifier.set_term_ord} in Isabelle/ML.
\<^medskip> Permutative rewrite rules are declared to the Simplifier just like other rewrite rules. Their special status is recognized automatically, and their application is guarded by the term ordering accordingly. \
subsubsection \Rewriting with AC operators\
text \ Ordered rewriting is particularly effective in the case of associative-commutative operators. (Associativity by itself is not permutative.) When dealing with an AC-operator \f\, keep the following points in mind:
\<^item> The associative law must always be oriented from left to right, namely \f (f x y) z = f x (f y z)\. The opposite orientation, if used with commutativity, leads to looping in conjunction with the standard term order.
\<^item> To complete your set of rewrite rules, you must add not just associativity (A) and commutativity (C) but also a derived rule \<^emph>\left-commutativity\ (LC): \f x (f y z) = f y (f x z)\.
Ordered rewriting with the combination of A, C, and LC sorts a term lexicographically --- the rewriting engine imitates bubble-sort. \
experiment fixes f :: "'a \ 'a \ 'a" (infix "\" 60) assumes assoc: "(x \ y) \ z = x \ (y \ z)" assumes commute: "x \ y = y \ x" begin
lemma left_commute: "x \ (y \ z) = y \ (x \ z)" proof - have "(x \ y) \ z = (y \ x) \ z" by (simp only: commute) then show ?thesis by (simp only: assoc) qed
lemmas AC_rules = assoc commute left_commute
text \ Thus the Simplifier is able to establish equalities with arbitrary permutations of subterms, by normalizing to a common standard form.
For example: \ lemma "(b \ c) \ a = xxx" apply (simp only: AC_rules) txt \\<^subgoals>\ oops lemma "(b \ c) \ a = a \ (b \ c)" by (simp only: AC_rules) lemma "(b \ c) \ a = c \ (b \ a)" by (simp only: AC_rules) lemma "(b \ c) \ a = (c \ b) \ a" by (simp only: AC_rules) end text \ Martin and Nipkow @{cite "martin-nipkow"} discuss the theory and give many examples; other algebraic structures are amenable to ordered rewriting, such as Boolean rings. The Boyer-Moore theorem prover @{cite bm88book} also employs ordered rewriting. \ subsubsection \Re-orienting equalities\ text \Another application of ordered rewriting uses the derived rule @{thm [source] eq_commute}: @{thm [source = false] eq_commute} to reverse equations. This is occasionally useful to re-orient local assumptions according to the term ordering, when other built-in mechanisms of reorientation and mutual simplification fail to apply.\ subsection \Simplifier tracing and debugging \label{sec:simp-trace}\ text \ \begin{tabular}{rcll} @{attribute_def simp_trace} & : & \attribute\ & default \false\ \\ @{attribute_def simp_trace_depth_limit} & : & \attribute\ & default \1\ \\ @{attribute_def simp_debug} & : & \attribute\ & default \false\ \\ @{attribute_def simp_trace_new} & : & \attribute\ \\ @{attribute_def simp_break} & : & \attribute\ \\ \end{tabular} \<^medskip> \<^rail>\ @@{attribute simp_trace_new} ('interactive')? \ ('mode' '=' ('full' | 'normal'))? \ ('depth' '=' @{syntax nat})? ; @@{attribute simp_break} (@{syntax term}*) \ These attributes and configurations options control various aspects of Simplifier tracing and debugging. \<^descr> @{attribute simp_trace} makes the Simplifier output internal operations. This includes rewrite steps, but also bookkeeping like modifications of the simpset. \<^descr> @{attribute simp_trace_depth_limit} limits the effect of @{attribute simp_trace} to the given depth of recursive Simplifier invocations (when solving conditions of rewrite rules). \<^descr> @{attribute simp_debug} makes the Simplifier output some extra information about internal operations. This includes any attempted invocation of simplification procedures. \<^descr> @{attribute simp_trace_new} controls Simplifier tracing within Isabelle/PIDE applications, notably Isabelle/jEdit @{cite "isabelle-jedit"}. This provides a hierarchical representation of the rewriting steps performed by the Simplifier. Users can configure the behaviour by specifying breakpoints, verbosity and enabling or disabling the interactive mode. In normal verbosity (the default), only rule applications matching a breakpoint will be shown to the user. In full verbosity, all rule applications will be logged. Interactive mode interrupts the normal flow of the Simplifier and defers the decision how to continue to the user via some GUI dialog. \<^descr> @{attribute simp_break} declares term or theorem breakpoints for @{attribute simp_trace_new} as described above. Term breakpoints are patterns which are checked for matches on the redex of a rule application. Theorem breakpoints trigger when the corresponding theorem is applied in a rewrite step. For example: \ (*<*)experiment begin(*>*) declare conjI [simp_break] declare [[simp_break "?x \ ?y"]] (*<*)end(*>*) subsection \Simplification procedures \label{sec:simproc}\ text \ Simplification procedures are ML functions that produce proven rewrite rules on demand. They are associated with higher-order patterns that approximate the left-hand sides of equations. 
The Simplifier first matches the current redex against one of the LHS patterns; if this succeeds, the corresponding ML function is invoked, passing the Simplifier context and redex term. Thus rules may be specifically fashioned for particular situations, resulting in a more powerful mechanism than term rewriting by a fixed set of rules. Any successful result needs to be a (possibly conditional) rewrite rule \t \ u\ that is applicable to the current redex. The rule will be applied just as any ordinary rewrite rule. It is expected to be already in \<^emph>\internal form\, bypassing the automatic preprocessing of object-level equivalences. \begin{matharray}{rcl} @{command_def "simproc_setup"} & : & \local_theory \ local_theory\ \\ simproc & : & \attribute\ \\ \end{matharray} \<^rail>\ @@{command simproc_setup} @{syntax name} '(' (@{syntax term} + '|') ')' '=' @{syntax text}; @@{attribute simproc} (('add' ':')? | 'del' ':') (@{syntax name}+) \ \<^descr> @{command "simproc_setup"} defines a named simplification procedure that is invoked by the Simplifier whenever any of the given term patterns match the current redex. The implementation, which is provided as ML source text, needs to be of type \<^ML_type>\morphism -> Proof.context -> cterm -> thm option\, where the \<^ML_type>\cterm\ represents the current redex \r\ and the result is supposed to be some proven rewrite rule \r \ r'\ (or a generalized version), or \<^ML>\NONE\ to indicate failure. The \<^ML_type>\Proof.context\ argument holds the full context of the current Simplifier invocation. The \<^ML_type>\morphism\ informs about the difference of the original compilation context wrt.\ the one of the actual application later on. Morphisms are only relevant for simprocs that are defined within a local target context, e.g.\ in a locale. \<^descr> \simproc add: name\ and \simproc del: name\ add or delete named simprocs to the current Simplifier context. The default is to add a simproc. Note that @{command "simproc_setup"} already adds the new simproc to the subsequent context. \ subsubsection \Example\ text \ The following simplification procedure for @{thm [source = false, show_types] unit_eq} in HOL performs fine-grained control over rule application, beyond higher-order pattern matching. Declaring @{thm unit_eq} as @{attribute simp} directly would make the Simplifier loop! Note that a version of this simplification procedure is already active in Isabelle/HOL. \ (*<*)experiment begin(*>*) simproc_setup unit ("x::unit") = \fn _ => fn _ => fn ct => if HOLogic.is_unit (Thm.term_of ct) then NONE else SOME (mk_meta_eq @{thm unit_eq})\ (*<*)end(*>*) text \ Since the Simplifier applies simplification procedures frequently, it is important to make the failure check in ML reasonably fast.\ subsection \Configurable Simplifier strategies \label{sec:simp-strategies}\ text \ The core term-rewriting engine of the Simplifier is normally used in combination with some add-on components that modify the strategy and allow to integrate other non-Simplifier proof tools. These may be reconfigured in ML as explained below. 
Even if the default strategies of object-logics like Isabelle/HOL are used unchanged, it helps to understand how the standard Simplifier strategies work.\
subsubsection \The subgoaler\
text \ \begin{mldecls} @{index_ML Simplifier.set_subgoaler: "(Proof.context -> int -> tactic) -> Proof.context -> Proof.context"} \\ @{index_ML Simplifier.prems_of: "Proof.context -> thm list"} \\ \end{mldecls}
The subgoaler is the tactic used to solve subgoals arising out of conditional rewrite rules or congruence rules. The default should be simplification itself. In rare situations, this strategy may need to be changed. For example, if the premise of a conditional rule is an instance of its conclusion, as in \Suc ?m < ?n \ ?m < ?n\, the default strategy could loop. % FIXME !??
\<^descr> \<^ML>\Simplifier.set_subgoaler\~\tac ctxt\ sets the subgoaler of the context to \tac\. The tactic will be applied to the context of the running Simplifier instance.
\<^descr> \<^ML>\Simplifier.prems_of\~\ctxt\ retrieves the current set of premises from the context. This may be non-empty only if the Simplifier has been told to utilize local assumptions in the first place (cf.\ the options in \secref{sec:simp-meth}).
As an example, consider the following alternative subgoaler: \
ML_val \ fun subgoaler_tac ctxt = assume_tac ctxt ORELSE' resolve_tac ctxt (Simplifier.prems_of ctxt) ORELSE' asm_simp_tac ctxt \
text \ This tactic first tries to solve the subgoal by assumption or by resolving with one of the premises, calling simplification only if that fails.\
subsubsection \The solver\
text \ \begin{mldecls} @{index_ML_type solver} \\ @{index_ML Simplifier.mk_solver: "string -> (Proof.context -> int -> tactic) -> solver"} \\ @{index_ML_op setSolver: "Proof.context * solver -> Proof.context"} \\ @{index_ML_op addSolver: "Proof.context * solver -> Proof.context"} \\ @{index_ML_op setSSolver: "Proof.context * solver -> Proof.context"} \\ @{index_ML_op addSSolver: "Proof.context * solver -> Proof.context"} \\ \end{mldecls}
A solver is a tactic that attempts to solve a subgoal after simplification. Its core functionality is to prove trivial subgoals such as \<^prop>\True\ and \t = t\, but object-logics might be more ambitious. For example, Isabelle/HOL performs a restricted version of linear arithmetic here. Solvers are packaged up in abstract type \<^ML_type>\solver\, with \<^ML>\Simplifier.mk_solver\ as the only operation to create a solver.
\<^medskip> Rewriting does not instantiate unknowns. For example, rewriting alone cannot prove \a \ ?A\ since this requires instantiating \?A\. The solver, however, is an arbitrary tactic and may instantiate unknowns as it pleases. This is the only way the Simplifier can handle a conditional rewrite rule whose condition contains extra variables. When a simplification tactic is to be combined with other provers, especially with the Classical Reasoner, it is important whether it can be considered safe or not. For this reason a simpset contains two solvers: safe and unsafe.
The standard simplification strategy solely uses the unsafe solver, which is appropriate in most cases. For special applications where the simplification process is not allowed to instantiate unknowns within the goal, simplification starts with the safe solver, but may still apply the ordinary unsafe one in nested simplifications for conditional rules or congruences. Note that in this way the overall tactic is not totally safe: it may instantiate unknowns that appear also in other subgoals.
\<^descr> \<^ML>\Simplifier.mk_solver\~\name tac\ turns \tac\ into a solver; the \name\ is only attached as a comment and has no further significance.
\<^descr> \ctxt setSSolver solver\ installs \solver\ as the safe solver of \ctxt\.
\<^descr> \ctxt addSSolver solver\ adds \solver\ as an additional safe solver; it will be tried after the solvers which had already been present in \ctxt\.
\<^descr> \ctxt setSolver solver\ installs \solver\ as the unsafe solver of \ctxt\.
\<^descr> \ctxt addSolver solver\ adds \solver\ as an additional unsafe solver; it will be tried after the solvers which had already been present in \ctxt\.
\<^medskip> The solver tactic is invoked with the context of the running Simplifier. Further operations may be used to retrieve relevant information, such as the list of local Simplifier premises via \<^ML>\Simplifier.prems_of\ --- this list may be non-empty only if the Simplifier runs in a mode that utilizes local assumptions (see also \secref{sec:simp-meth}). The solver is also presented with the full goal including its assumptions in any case. Thus it can use these (e.g.\ by calling \<^ML>\assume_tac\), even if the Simplifier proper happens to ignore local premises at the moment.
\<^medskip> As explained before, the subgoaler is also used to solve the premises of congruence rules. These are usually of the form \s = ?x\, where \s\ needs to be simplified and \?x\ needs to be instantiated with the result. Typically, the subgoaler will invoke the Simplifier at some point, which will eventually call the solver. For this reason, solver tactics must be prepared to solve goals of the form \t = ?x\, usually by reflexivity. In particular, reflexivity should be tried before any of the fancy automated proof tools.
It may even happen that due to simplification the subgoal is no longer an equality. For example, \False \ ?Q\ could be rewritten to \\ ?Q\. To cover this case, the solver could try resolving with the theorem \\ False\ of the object-logic.
\<^medskip> \begin{warn} If a premise of a congruence rule cannot be proved, then the congruence is ignored. This should only happen if the rule is \<^emph>\conditional\ --- that is, contains premises not of the form \t = ?x\. Otherwise it indicates that some congruence rule, or possibly the subgoaler or solver, is faulty. \end{warn} \
subsubsection \The looper\
text \ \begin{mldecls} @{index_ML_op setloop: "Proof.context * (Proof.context -> int -> tactic) -> Proof.context"} \\ @{index_ML_op addloop: "Proof.context * (string * (Proof.context -> int -> tactic)) -> Proof.context"} \\ @{index_ML_op delloop: "Proof.context * string -> Proof.context"} \\ @{index_ML Splitter.add_split: "thm -> Proof.context -> Proof.context"} \\ @{index_ML Splitter.add_split_bang: "thm -> Proof.context -> Proof.context"} \\ @{index_ML Splitter.del_split: "thm -> Proof.context -> Proof.context"} \\ \end{mldecls}
The looper is a list of tactics that are applied after simplification, in case the solver failed to solve the simplified goal. If the looper succeeds, the simplification process is started all over again. Each of the subgoals generated by the looper is attacked in turn, in reverse order.
A typical looper is \<^emph>\case splitting\: the expansion of a conditional. Another possibility is to apply an elimination rule on the assumptions. More adventurous loopers could start an induction.
\<^descr> \ctxt setloop tac\ installs \tac\ as the only looper tactic of \ctxt\.
\<^descr> \ctxt addloop (name, tac)\ adds \tac\ as an additional looper tactic with name \name\, which is significant for managing the collection of loopers. The tactic will be tried after the looper tactics that had already been present in \ctxt\.
\<^descr> \ctxt delloop name\ deletes the looper tactic that was associated with \name\ from \ctxt\.
\<^descr> \<^ML>\Splitter.add_split\~\thm ctxt\ adds a split tactic for \thm\ as an additional looper tactic of \ctxt\ (overwriting any previous split tactic for the same constant).
\<^descr> \<^ML>\Splitter.add_split_bang\~\thm ctxt\ adds an aggressive (see \S\ref{sec:simp-meth}) split tactic for \thm\ as an additional looper tactic of \ctxt\ (overwriting any previous split tactic for the same constant).
\<^descr> \<^ML>\Splitter.del_split\~\thm ctxt\ deletes the split tactic corresponding to \thm\ from the looper tactics of \ctxt\.
The splitter replaces applications of a given function; the right-hand side of the replacement can be anything. For example, here is a splitting rule for conditional expressions: @{text [display] "?P (if ?Q ?x ?y) \ (?Q \ ?P ?x) \ (\ ?Q \ ?P ?y)"} Another example is the elimination operator for Cartesian products (which happens to be called \<^const>\case_prod\ in Isabelle/HOL): @{text [display] "?P (case_prod ?f ?p) \ (\a b. ?p = (a, b) \ ?P (f a b))"}
For technical reasons, there is a distinction between case splitting in the conclusion and in the premises of a subgoal. The former is done by \<^ML>\Splitter.split_tac\ with rules like @{thm [source] if_split} or @{thm [source] option.split}, which do not split the subgoal, while the latter is done by \<^ML>\Splitter.split_asm_tac\ with rules like @{thm [source] if_split_asm} or @{thm [source] option.split_asm}, which split the subgoal. The function \<^ML>\Splitter.add_split\ automatically takes care of which tactic to call, analyzing the form of the rules given as argument; it is the same operation behind the \split\ attribute or method modifier syntax in the Isar source language.
Case splits should be allowed only when necessary; they are expensive and hard to control. Case-splitting on if-expressions in the conclusion is usually beneficial, so it is enabled by default in Isabelle/HOL and Isabelle/FOL/ZF.
\begin{warn} With \<^ML>\Splitter.split_asm_tac\ as looper component, the Simplifier may split subgoals! This might cause unexpected problems in tactic expressions that silently assume 0 or 1 subgoals after simplification. \end{warn} \
subsection \Forward simplification \label{sec:simp-forward}\
text \ \begin{matharray}{rcl} @{attribute_def simplified} & : & \attribute\ \\ \end{matharray}
\<^rail>\ @@{attribute simplified} opt? @{syntax thms}? ; opt: '(' ('no_asm' | 'no_asm_simp' | 'no_asm_use') ')' \
\<^descr> @{attribute simplified}~\a\<^sub>1 \ a\<^sub>n\ causes a theorem to be simplified, either by exactly the specified rules \a\<^sub>1, \, a\<^sub>n\, or the implicit Simplifier context if no arguments are given. The result is fully simplified by default, including assumptions and conclusion; the options \no_asm\ etc.\ tune the Simplifier in the same way as for the \simp\ method.
Note that forward simplification restricts the Simplifier to its most basic operation of term rewriting; solver and looper tactics (\secref{sec:simp-strategies}) are \<^emph>\not\ involved here. The @{attribute simplified} attribute should only rarely be required under normal circumstances.
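\<^medskip> As a minimal illustration (a made-up snippet, not one of the official examples), the attribute can be used to refer to a local fact in normalized form: \
notepad begin fix x :: nat assume a: "x + 0 > 0 + 0" note a [simplified] (*the same fact with the \+ 0\ redexes removed*) end
text \ Restating the simplified fact and proving it by @{method simp} would have the same effect, but requires repeating the statement.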
\ section \The Classical Reasoner \label{sec:classical}\ subsection \Basic concepts\ text \Although Isabelle is generic, many users will be working in some extension of classical first-order logic. Isabelle/ZF is built upon theory FOL, while Isabelle/HOL conceptually contains first-order logic as a fragment. Theorem-proving in predicate logic is undecidable, but many automated strategies have been developed to assist in this task. Isabelle's classical reasoner is a generic package that accepts certain information about a logic and delivers a suite of automatic proof tools, based on rules that are classified and declared in the context. These proof procedures are slow and simplistic compared with high-end automated theorem provers, but they can save considerable time and effort in practice. They can prove theorems such as Pelletier's @{cite pelletier86} problems 40 and 41 in a few milliseconds (including full proof reconstruction):\ lemma "(\y. \x. F x y \ F x x) \ \ (\x. \y. \z. F z y \ \ F z x)" by blast lemma "(\z. \y. \x. f x y \ f x z \ \ f x x) \ \ (\z. \x. f x z)" by blast text \The proof tools are generic. They are not restricted to first-order logic, and have been heavily used in the development of the Isabelle/HOL library and applications. The tactics can be traced, and their components can be called directly; in this manner, any proof can be viewed interactively.\ subsubsection \The sequent calculus\ text \Isabelle supports natural deduction, which is easy to use for interactive proof. But natural deduction does not easily lend itself to automation, and has a bias towards intuitionism. For certain proofs in classical logic, it can not be called natural. The \<^emph>\sequent calculus\, a generalization of natural deduction, is easier to automate. A \<^bold>\sequent\ has the form \\ \ \\, where \\\ and \\\ are sets of formulae.\<^footnote>\For first-order logic, sequents can equivalently be made from lists or multisets of formulae.\ The sequent \P\<^sub>1, \, P\<^sub>m \ Q\<^sub>1, \, Q\<^sub>n\ is \<^bold>\valid\ if \P\<^sub>1 \ \ \ P\<^sub>m\ implies \Q\<^sub>1 \ \ \ Q\<^sub>n\. Thus \P\<^sub>1, \, P\<^sub>m\ represent assumptions, each of which is true, while \Q\<^sub>1, \, Q\<^sub>n\ represent alternative goals. A sequent is \<^bold>\basic\ if its left and right sides have a common formula, as in \P, Q \ Q, R\; basic sequents are trivially valid. Sequent rules are classified as \<^bold>\right\ or \<^bold>\left\, indicating which side of the \\\ symbol they operate on. Rules that operate on the right side are analogous to natural deduction's introduction rules, and left rules are analogous to elimination rules. The sequent calculus analogue of \(\I)\ is the rule \[ \infer[\(\R)\]{\\ \ \, P \ Q\}{\P, \ \ \, Q\} \] Applying the rule backwards, this breaks down some implication on the right side of a sequent; \\\ and \\\ stand for the sets of formulae that are unaffected by the inference. The analogue of the pair \(\I1)\ and \(\I2)\ is the single rule \[ \infer[\(\R)\]{\\ \ \, P \ Q\}{\\ \ \, P, Q\} \] This breaks down some disjunction on the right side, replacing it by both disjuncts. Thus, the sequent calculus is a kind of multiple-conclusion logic. To illustrate the use of multiple formulae on the right, let us prove the classical theorem \(P \ Q) \ (Q \ P)\. 
Working backwards, we reduce this formula to a basic sequent: \[ \infer[\(\R)\]{\\ (P \ Q) \ (Q \ P)\} {\infer[\(\R)\]{\\ (P \ Q), (Q \ P)\} {\infer[\(\R)\]{\P \ Q, (Q \ P)\} {\P, Q \ Q, P\}}} \] This example is typical of the sequent calculus: start with the desired theorem and apply rules backwards in a fairly arbitrary manner. This yields a surprisingly effective proof procedure. Quantifiers add only few complications, since Isabelle handles parameters and schematic variables. See @{cite \Chapter 10\ "paulson-ml2"} for further discussion.\ subsubsection \Simulating sequents by natural deduction\ text \Isabelle can represent sequents directly, as in the object-logic LK. But natural deduction is easier to work with, and most object-logics employ it. Fortunately, we can simulate the sequent \P\<^sub>1, \, P\<^sub>m \ Q\<^sub>1, \, Q\<^sub>n\ by the Isabelle formula \P\<^sub>1 \ \ \ P\<^sub>m \ \ Q\<^sub>2 \ ... \ \ Q\<^sub>n \ Q\<^sub>1\ where the order of the assumptions and the choice of \Q\<^sub>1\ are arbitrary. Elim-resolution plays a key role in simulating sequent proofs. We can easily handle reasoning on the left. Elim-resolution with the rules \(\E)\, \(\E)\ and \(\E)\ achieves a similar effect as the corresponding sequent rules. For the other connectives, we use sequent-style elimination rules instead of destruction rules such as \(\E1, 2)\ and \(\E)\. But note that the rule \(\L)\ has no effect under our representation of sequents! \[ \infer[\(\L)\]{\\ P, \ \ \\}{\\ \ \, P\} \] What about reasoning on the right? Introduction rules can only affect the formula in the conclusion, namely \Q\<^sub>1\. The other right-side formulae are represented as negated assumptions, \\ Q\<^sub>2, \, \ Q\<^sub>n\. In order to operate on one of these, it must first be exchanged with \Q\<^sub>1\. Elim-resolution with the \swap\ rule has this effect: \\ P \ (\ R \ P) \ R\ To ensure that swaps occur only when necessary, each introduction rule is converted into a swapped form: it is resolved with the second premise of \(swap)\. The swapped form of \(\I)\, which might be called \(\\E)\, is @{text [display] "\ (P \ Q) \ (\ R \ P) \ (\ R \ Q) \ R"} Similarly, the swapped form of \(\I)\ is @{text [display] "\ (P \ Q) \ (\ R \ P \ Q) \ R"} Swapped introduction rules are applied using elim-resolution, which deletes the negated formula. Our representation of sequents also requires the use of ordinary introduction rules. If we had no regard for readability of intermediate goal states, we could treat the right side more uniformly by representing sequents as @{text [display] "P\<^sub>1 \ \ \ P\<^sub>m \ \ Q\<^sub>1 \ \ \ \ Q\<^sub>n \ \"} \ subsubsection \Extra rules for the sequent calculus\ text \As mentioned, destruction rules such as \(\E1, 2)\ and \(\E)\ must be replaced by sequent-style elimination rules. In addition, we need rules to embody the classical equivalence between \P \ Q\ and \\ P \ Q\. The introduction rules \(\I1, 2)\ are replaced by a rule that simulates \(\R)\: @{text [display] "(\ Q \ P) \ P \ Q"} The destruction rule \(\E)\ is replaced by @{text [display] "(P \ Q) \ (\ P \ R) \ (Q \ R) \ R"} Quantifier replication also requires special rules. In classical logic, \\x. P x\ is equivalent to \\ (\x. \ P x)\; the rules \(\R)\ and \(\L)\ are dual: \[ \infer[\(\R)\]{\\ \ \, \x. P x\}{\\ \ \, \x. P x, P t\} \qquad \infer[\(\L)\]{\\x. P x, \ \ \\}{\P t, \x. P x, \ \ \\} \] Thus both kinds of quantifier may be replicated. 
Theorems requiring multiple uses of a universal formula are easy to invent; consider @{text [display] "(\x. P x \ P (f x)) \ P a \ P (f\<^sup>n a)"} for any \n > 1\. Natural examples of the multiple use of an existential formula are rare; a standard one is \\x. \y. P x \ P y\. Forgoing quantifier replication loses completeness, but gains decidability, since the search space becomes finite. Many useful theorems can be proved without replication, and the search generally delivers its verdict in a reasonable time. To adopt this approach, represent the sequent rules \(\R)\, \(\L)\ and \(\R)\ by \(\I)\, \(\E)\ and \(\I)\, respectively, and put \(\E)\ into elimination form: @{text [display] "\x. P x \ (P t \ Q) \ Q"} Elim-resolution with this rule will delete the universal formula after a single use. To replicate universal quantifiers, replace the rule by @{text [display] "\x. P x \ (P t \ \x. P x \ Q) \ Q"} To replicate existential quantifiers, replace \(\I)\ by @{text [display] "(\ (\x. P x) \ P t) \ \x. P x"} All introduction rules mentioned above are also useful in swapped form. Replication makes the search space infinite; we must apply the rules with care. The classical reasoner distinguishes between safe and unsafe rules, applying the latter only when there is no alternative. Depth-first search may well go down a blind alley; best-first search is better behaved in an infinite search space. However, quantifier replication is too expensive to prove any but the simplest theorems. \ subsection \Rule declarations\ text \The proof tools of the Classical Reasoner depend on collections of rules declared in the context, which are classified as introduction, elimination or destruction and as \<^emph>\safe\ or \<^emph>\unsafe\. In general, safe rules can be attempted blindly, while unsafe rules must be used with care. A safe rule must never reduce a provable goal to an unprovable set of subgoals. The rule \P \ P \ Q\ is unsafe because it reduces \P \ Q\ to \P\, which might turn out as premature choice of an - unprovable subgoal. Any rule is unsafe whose premises contain new - unknowns. The elimination rule \\x. P x \ (P t \ Q) \ Q\ is + unprovable subgoal. Any rule whose premises contain new unknowns is + unsafe. The elimination rule \\x. P x \ (P t \ Q) \ Q\ is unsafe, since it is applied via elim-resolution, which discards the assumption \\x. P x\ and replaces it by the weaker assumption \P t\. The rule \P t \ \x. P x\ is unsafe for similar reasons. The quantifier duplication rule \\x. P x \ (P t \ \x. P x \ Q) \ Q\ is unsafe in a different sense: since it keeps the assumption \\x. P x\, it is prone to looping. In classical first-order logic, all rules are safe except those mentioned above. The safe~/ unsafe distinction is vague, and may be regarded merely as a way of giving some rules priority over others. One could argue that \(\E)\ is unsafe, because repeated application of it could generate exponentially many subgoals. Induction rules are unsafe because inductive proofs are difficult to set up - automatically. Any inference is unsafe that instantiates an unknown - in the proof state --- thus matching must be used, rather than + automatically. Any inference that instantiates an unknown + in the proof state is unsafe --- thus matching must be used, rather than unification. Even proof by assumption is unsafe if it instantiates unknowns shared with other subgoals. 
\begin{matharray}{rcl} @{command_def "print_claset"}\\<^sup>*\ & : & \context \\ \\ @{attribute_def intro} & : & \attribute\ \\ @{attribute_def elim} & : & \attribute\ \\ @{attribute_def dest} & : & \attribute\ \\ @{attribute_def rule} & : & \attribute\ \\ @{attribute_def iff} & : & \attribute\ \\ @{attribute_def swapped} & : & \attribute\ \\ \end{matharray} \<^rail>\ (@@{attribute intro} | @@{attribute elim} | @@{attribute dest}) ('!' | () | '?') @{syntax nat}? ; @@{attribute rule} 'del' ; @@{attribute iff} (((() | 'add') '?'?) | 'del') \ \<^descr> @{command "print_claset"} prints the collection of rules declared to the Classical Reasoner, i.e.\ the \<^ML_type>\claset\ within the context. \<^descr> @{attribute intro}, @{attribute elim}, and @{attribute dest} declare introduction, elimination, and destruction rules, respectively. By default, rules are considered as \<^emph>\unsafe\ (i.e.\ not applied blindly without backtracking), while ``\!\'' classifies as \<^emph>\safe\. Rule declarations marked by ``\?\'' coincide with those of Isabelle/Pure, cf.\ \secref{sec:pure-meth-att} (i.e.\ are only applied in single steps of the @{method rule} method). The optional natural number specifies an explicit weight argument, which is ignored by the automated reasoning tools, but determines the search order of single rule steps. Introduction rules are those that can be applied using ordinary resolution. Their swapped forms are generated internally, which will be applied using elim-resolution. Elimination rules are applied using elim-resolution. Rules are sorted by the number of new subgoals they will yield; rules that generate the fewest subgoals will be tried first. Otherwise, later declarations take precedence over earlier ones. Rules already present in the context with the same classification are ignored. A warning is printed if the rule has already been added with some other classification, but the rule is added anyway as requested. \<^descr> @{attribute rule}~\del\ deletes all occurrences of a rule from the classical context, regardless of its classification as introduction~/ elimination~/ destruction and safe~/ unsafe. \<^descr> @{attribute iff} declares logical equivalences to the Simplifier and the Classical reasoner at the same time. Non-conditional rules result in a safe introduction and elimination pair; conditional ones are considered unsafe. Rules with negative conclusion are automatically inverted (using \\\-elimination internally). The ``\?\'' version of @{attribute iff} declares rules to the Isabelle/Pure context only, and omits the Simplifier declaration. \<^descr> @{attribute swapped} turns an introduction rule into an elimination, by resolving with the classical swap principle \\ P \ (\ R \ P) \ R\ in the second position. This is mainly for illustrative purposes: the Classical Reasoner already swaps rules internally as explained above. \ subsection \Structured methods\ text \ \begin{matharray}{rcl} @{method_def rule} & : & \method\ \\ @{method_def contradiction} & : & \method\ \\ \end{matharray} \<^rail>\ @@{method rule} @{syntax thms}? \ \<^descr> @{method rule} as offered by the Classical Reasoner is a refinement over the Pure one (see \secref{sec:pure-meth-att}). Both versions work the same, but the classical version observes the classical rule context in addition to that of Isabelle/Pure. Common object logics (HOL, ZF, etc.) 
declare a rich collection of classical rules (even if these would qualify as intuitionistic ones), but only few declarations to the rule context of Isabelle/Pure (\secref{sec:pure-meth-att}). \<^descr> @{method contradiction} solves some goal by contradiction, deriving any result from both \\ A\ and \A\. Chained facts, which are guaranteed to participate, may appear in either order. \ subsection \Fully automated methods\ text \ \begin{matharray}{rcl} @{method_def blast} & : & \method\ \\ @{method_def auto} & : & \method\ \\ @{method_def force} & : & \method\ \\ @{method_def fast} & : & \method\ \\ @{method_def slow} & : & \method\ \\ @{method_def best} & : & \method\ \\ @{method_def fastforce} & : & \method\ \\ @{method_def slowsimp} & : & \method\ \\ @{method_def bestsimp} & : & \method\ \\ @{method_def deepen} & : & \method\ \\ \end{matharray} \<^rail>\ @@{method blast} @{syntax nat}? (@{syntax clamod} * ) ; @@{method auto} (@{syntax nat} @{syntax nat})? (@{syntax clasimpmod} * ) ; @@{method force} (@{syntax clasimpmod} * ) ; (@@{method fast} | @@{method slow} | @@{method best}) (@{syntax clamod} * ) ; (@@{method fastforce} | @@{method slowsimp} | @@{method bestsimp}) (@{syntax clasimpmod} * ) ; @@{method deepen} (@{syntax nat} ?) (@{syntax clamod} * ) ; @{syntax_def clamod}: (('intro' | 'elim' | 'dest') ('!' | () | '?') | 'del') ':' @{syntax thms} ; @{syntax_def clasimpmod}: ('simp' (() | 'add' | 'del' | 'only') | 'cong' (() | 'add' | 'del') | 'split' (() | '!' | 'del') | 'iff' (((() | 'add') '?'?) | 'del') | (('intro' | 'elim' | 'dest') ('!' | () | '?') | 'del')) ':' @{syntax thms} \ \<^descr> @{method blast} is a separate classical tableau prover that uses the same classical rule declarations as explained before. Proof search is coded directly in ML using special data structures. A successful proof is then reconstructed using regular Isabelle inferences. It is faster and more powerful than the other classical reasoning tools, but has major limitations too. \<^item> It does not use the classical wrapper tacticals, such as the integration with the Simplifier of @{method fastforce}. \<^item> It does not perform higher-order unification, as needed by the rule @{thm [source=false] rangeI} in HOL. There are often alternatives to such rules, for example @{thm [source=false] range_eqI}. \<^item> Function variables may only be applied to parameters of the subgoal. (This restriction arises because the prover does not use higher-order unification.) If other function variables are present then the prover will fail with the message @{verbatim [display] \Function unknown's argument not a bound variable\} \<^item> Its proof strategy is more general than @{method fast} but can be slower. If @{method blast} fails or seems to be running forever, try @{method fast} and the other proof tools described below. The optional integer argument specifies a bound for the number of unsafe steps used in a proof. By default, @{method blast} starts with a bound of 0 and increases it successively to 20. In contrast, \(blast lim)\ tries to prove the goal using a search bound of \lim\. Sometimes a slow proof using @{method blast} can be made much faster by supplying the successful search bound to this proof method instead. \<^descr> @{method auto} combines classical reasoning with simplification. It is intended for situations where there are a lot of mostly trivial subgoals; it proves all the easy ones, leaving the ones it cannot prove. Occasionally, attempting to prove the hard ones may take a long time. 
The optional depth arguments in \(auto m n)\ refer to its builtin classical reasoning procedures: \m\ (default 4) is for @{method blast}, which is tried first, and \n\ (default 2) is for a slower but more general alternative that also takes wrappers into account. \<^descr> @{method force} is intended to prove the first subgoal completely, using many fancy proof tools and performing a rather exhaustive search. As a result, proof attempts may take rather long or diverge easily. \<^descr> @{method fast}, @{method best}, @{method slow} attempt to prove the first subgoal using sequent-style reasoning as explained before. Unlike @{method blast}, they construct proofs directly in Isabelle. There is a difference in search strategy and back-tracking: @{method fast} uses depth-first search and @{method best} uses best-first search (guided by a heuristic function: normally the total size of the proof state). Method @{method slow} is like @{method fast}, but conducts a broader search: it may, when backtracking from a failed proof attempt, undo even the step of proving a subgoal by assumption. \<^descr> @{method fastforce}, @{method slowsimp}, @{method bestsimp} are like @{method fast}, @{method slow}, @{method best}, respectively, but use the Simplifier as additional wrapper. The name @{method fastforce}, reflects the behaviour of this popular method better without requiring an understanding of its implementation. \<^descr> @{method deepen} works by exhaustive search up to a certain depth. The start depth is 4 (unless specified explicitly), and the depth is increased iteratively up to 10. Unsafe rules are modified to preserve the formula they act on, so that it be used repeatedly. This method can prove more goals than @{method fast}, but is much slower, for example if the assumptions have many universal quantifiers. Any of the above methods support additional modifiers of the context of classical (and simplifier) rules, but the ones related to the Simplifier are explicitly prefixed by \simp\ here. The semantics of these ad-hoc rule declarations is analogous to the attributes given before. Facts provided by forward chaining are inserted into the goal before commencing proof search. \ subsection \Partially automated methods\label{sec:classical:partial}\ text \These proof methods may help in situations when the fully-automated tools fail. The result is a simpler subgoal that can be tackled by other means, such as by manual instantiation of quantifiers. \begin{matharray}{rcl} @{method_def safe} & : & \method\ \\ @{method_def clarify} & : & \method\ \\ @{method_def clarsimp} & : & \method\ \\ \end{matharray} \<^rail>\ (@@{method safe} | @@{method clarify}) (@{syntax clamod} * ) ; @@{method clarsimp} (@{syntax clasimpmod} * ) \ \<^descr> @{method safe} repeatedly performs safe steps on all subgoals. It is deterministic, with at most one outcome. \<^descr> @{method clarify} performs a series of safe steps without splitting subgoals; see also @{method clarify_step}. \<^descr> @{method clarsimp} acts like @{method clarify}, but also does simplification. Note that if the Simplifier context includes a splitter for the premises, the subgoal may still be split. 
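  \<^medskip>
  The following sketch illustrates some of the fully and partially automated
  methods described above; the lemmas and the ad-hoc rule declarations are
  merely illustrative, not part of any library:
\

(*<*)experiment begin(*>*)
lemma "(\<exists>y. \<forall>x. P x y) \<longrightarrow> (\<forall>x. \<exists>y. P x y)"
  by blast

lemma "\<lbrakk>A \<subseteq> B; B \<subseteq> C\<rbrakk> \<Longrightarrow> A \<subseteq> C"
  by (blast dest: subsetD)

lemma "(\<exists>x. \<forall>y. P x y) \<longrightarrow> (\<forall>y. \<exists>x. P x y)"
  apply clarify  \<comment> \perform the safe steps only\
  apply blast    \<comment> \instantiate the remaining quantifiers\
  done
(*<*)end(*>*)

text \
  The \dest\ modifier above acts like a local @{attribute dest} declaration
  for this single proof; @{method clarify} merely applies the safe
  introduction and elimination steps, leaving the quantifier instantiations
  to the subsequent @{method blast}.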
\ subsection \Single-step tactics\ text \ \begin{matharray}{rcl} @{method_def safe_step} & : & \method\ \\ @{method_def inst_step} & : & \method\ \\ @{method_def step} & : & \method\ \\ @{method_def slow_step} & : & \method\ \\ @{method_def clarify_step} & : & \method\ \\ \end{matharray} These are the primitive tactics behind the automated proof methods of the Classical Reasoner. By calling them yourself, you can execute these procedures one step at a time. \<^descr> @{method safe_step} performs a safe step on the first subgoal. The safe wrapper tacticals are applied to a tactic that may include proof by assumption or Modus Ponens (taking care not to instantiate unknowns), or substitution. \<^descr> @{method inst_step} is like @{method safe_step}, but allows unknowns to be instantiated. \<^descr> @{method step} is the basic step of the proof procedure, it operates on the first subgoal. The unsafe wrapper tacticals are applied to a tactic that tries @{method safe}, @{method inst_step}, or applies an unsafe rule from the context. \<^descr> @{method slow_step} resembles @{method step}, but allows backtracking between using safe rules with instantiation (@{method inst_step}) and using unsafe rules. The resulting search space is larger. \<^descr> @{method clarify_step} performs a safe step on the first subgoal; no splitting step is applied. For example, the subgoal \A \ B\ is left as a conjunction. Proof by assumption, Modus Ponens, etc., may be performed provided they do not instantiate unknowns. Assumptions of the form \x = t\ may be eliminated. The safe wrapper tactical is applied. \ subsection \Modifying the search step\ text \ \begin{mldecls} @{index_ML_type wrapper: "(int -> tactic) -> (int -> tactic)"} \\[0.5ex] @{index_ML_op addSWrapper: "Proof.context * (string * (Proof.context -> wrapper)) -> Proof.context"} \\ @{index_ML_op addSbefore: "Proof.context * (string * (Proof.context -> int -> tactic)) -> Proof.context"} \\ @{index_ML_op addSafter: "Proof.context * (string * (Proof.context -> int -> tactic)) -> Proof.context"} \\ @{index_ML_op delSWrapper: "Proof.context * string -> Proof.context"} \\[0.5ex] @{index_ML_op addWrapper: "Proof.context * (string * (Proof.context -> wrapper)) -> Proof.context"} \\ @{index_ML_op addbefore: "Proof.context * (string * (Proof.context -> int -> tactic)) -> Proof.context"} \\ @{index_ML_op addafter: "Proof.context * (string * (Proof.context -> int -> tactic)) -> Proof.context"} \\ @{index_ML_op delWrapper: "Proof.context * string -> Proof.context"} \\[0.5ex] @{index_ML addSss: "Proof.context -> Proof.context"} \\ @{index_ML addss: "Proof.context -> Proof.context"} \\ \end{mldecls} The proof strategy of the Classical Reasoner is simple. Perform as many safe inferences as possible; or else, apply certain safe rules, allowing instantiation of unknowns; or else, apply an unsafe rule. The tactics also eliminate assumptions of the form \x = t\ by substitution if they have been set up to do so. They may perform a form of Modus Ponens: if there are assumptions \P \ Q\ and \P\, then replace \P \ Q\ by \Q\. The classical reasoning tools --- except @{method blast} --- allow to modify this basic proof strategy by applying two lists of arbitrary \<^emph>\wrapper tacticals\ to it. The first wrapper list, which is considered to contain safe wrappers only, affects @{method safe_step} and all the tactics that call it. The second one, which may contain unsafe wrappers, affects the unsafe parts of @{method step}, @{method slow_step}, and the tactics that call them. 
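  As a preview of the ML interface described below, an additional tactic may
  be installed as a safe wrapper like this; the sketch is ad-hoc, the wrapper
  name is arbitrary, and proof by assumption is of course already part of the
  built-in safe steps anyway:
\

(*<*)experiment begin(*>*)
ML \
  (*try proof by assumption after each safe step; purely illustrative*)
  fun my_classical_setup ctxt =
    ctxt addSafter ("extra_assumption", fn ctxt' => assume_tac ctxt')
\
(*<*)end(*>*)

text \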
A wrapper transforms each step of the search, for example by attempting other tactics before or after the original step tactic. All members of a wrapper list are applied in turn to the respective step tactic. Initially the two wrapper lists are empty, which means no modification of the step tactics. Safe and unsafe wrappers are added to the context with the functions given below, supplying them with wrapper names. These names may be used to selectively delete wrappers. \<^descr> \ctxt addSWrapper (name, wrapper)\ adds a new wrapper, which should yield a safe tactic, to modify the existing safe step tactic. \<^descr> \ctxt addSbefore (name, tac)\ adds the given tactic as a safe wrapper, such that it is tried \<^emph>\before\ each safe step of the search. \<^descr> \ctxt addSafter (name, tac)\ adds the given tactic as a safe wrapper, such that it is tried when a safe step of the search would fail. \<^descr> \ctxt delSWrapper name\ deletes the safe wrapper with the given name. \<^descr> \ctxt addWrapper (name, wrapper)\ adds a new wrapper to modify the existing (unsafe) step tactic. \<^descr> \ctxt addbefore (name, tac)\ adds the given tactic as an unsafe wrapper, such that it its result is concatenated \<^emph>\before\ the result of each unsafe step. \<^descr> \ctxt addafter (name, tac)\ adds the given tactic as an unsafe wrapper, such that it its result is concatenated \<^emph>\after\ the result of each unsafe step. \<^descr> \ctxt delWrapper name\ deletes the unsafe wrapper with the given name. \<^descr> \addSss\ adds the simpset of the context to its classical set. The assumptions and goal will be simplified, in a rather safe way, after each safe step of the search. \<^descr> \addss\ adds the simpset of the context to its classical set. The assumptions and goal will be simplified, before the each unsafe step of the search. \ section \Object-logic setup \label{sec:object-logic}\ text \ \begin{matharray}{rcl} @{command_def "judgment"} & : & \theory \ theory\ \\ @{method_def atomize} & : & \method\ \\ @{attribute_def atomize} & : & \attribute\ \\ @{attribute_def rule_format} & : & \attribute\ \\ @{attribute_def rulify} & : & \attribute\ \\ \end{matharray} The very starting point for any Isabelle object-logic is a ``truth judgment'' that links object-level statements to the meta-logic (with its minimal language of \prop\ that covers universal quantification \\\ and implication \\\). Common object-logics are sufficiently expressive to internalize rule statements over \\\ and \\\ within their own language. This is useful in certain situations where a rule needs to be viewed as an atomic statement from the meta-level perspective, e.g.\ \\x. x \ A \ P x\ versus \\x \ A. P x\. From the following language elements, only the @{method atomize} method and @{attribute rule_format} attribute are occasionally required by end-users, the rest is for those who need to setup their own object-logic. In the latter case existing formulations of Isabelle/FOL or Isabelle/HOL may be taken as realistic examples. Generic tools may refer to the information provided by object-logic declarations internally. \<^rail>\ @@{command judgment} @{syntax name} '::' @{syntax type} @{syntax mixfix}? ; @@{attribute atomize} ('(' 'full' ')')? ; @@{attribute rule_format} ('(' 'noasm' ')')? \ \<^descr> @{command "judgment"}~\c :: \ (mx)\ declares constant \c\ as the truth judgment of the current object-logic. 
Its type \\\ should specify a coercion of the category of object-level propositions to \prop\ of the Pure meta-logic; the mixfix annotation \(mx)\ would typically just link the object language (internally of syntactic category \logic\) with that of \prop\. Only one @{command "judgment"} declaration may be given in any theory development. \<^descr> @{method atomize} (as a method) rewrites any non-atomic premises of a sub-goal, using the meta-level equations declared via @{attribute atomize} (as an attribute) beforehand. As a result, heavily nested goals become amenable to fundamental operations such as resolution (cf.\ the @{method (Pure) rule} method). Giving the ``\(full)\'' option here means to turn the whole subgoal into an object-statement (if possible), including the outermost parameters and assumptions as well. A typical collection of @{attribute atomize} rules for a particular object-logic would provide an internalization for each of the connectives of \\\, \\\, and \\\. Meta-level conjunction should be covered as well (this is particularly important for locales, see \secref{sec:locale}). \<^descr> @{attribute rule_format} rewrites a theorem by the equalities declared as @{attribute rulify} rules in the current object-logic. By default, the result is fully normalized, including assumptions and conclusions at any depth. The \(no_asm)\ option restricts the transformation to the conclusion of a rule. In common object-logics (HOL, FOL, ZF), the effect of @{attribute rule_format} is to replace (bounded) universal quantification (\\\) and implication (\\\) by the corresponding rule statements over \\\ and \\\. \ section \Tracing higher-order unification\ text \ \begin{tabular}{rcll} @{attribute_def unify_trace_simp} & : & \attribute\ & default \false\ \\ @{attribute_def unify_trace_types} & : & \attribute\ & default \false\ \\ @{attribute_def unify_trace_bound} & : & \attribute\ & default \50\ \\ @{attribute_def unify_search_bound} & : & \attribute\ & default \60\ \\ \end{tabular} \<^medskip> Higher-order unification works well in most practical situations, but sometimes needs extra care to identify problems. These tracing options may help. \<^descr> @{attribute unify_trace_simp} controls tracing of the simplification phase of higher-order unification. \<^descr> @{attribute unify_trace_types} controls warnings of incompleteness, when unification is not considering all possible instantiations of schematic type variables. \<^descr> @{attribute unify_trace_bound} determines the depth where unification starts to print tracing information once it reaches depth; 0 for full tracing. At the default value, tracing information is almost never printed in practice. \<^descr> @{attribute unify_search_bound} prevents unification from searching past the given depth. Because of this bound, higher-order unification cannot return an infinite sequence, though it can return an exponentially long one. The search rarely approaches the default value in practice. If the search is cut off, unification prints a warning ``Unification bound exceeded''. \begin{warn} Options for unification cannot be modified in a local context. Only the global theory content is taken into account. 
\end{warn} \ end diff --git a/src/Doc/Isar_Ref/HOL_Specific.thy b/src/Doc/Isar_Ref/HOL_Specific.thy --- a/src/Doc/Isar_Ref/HOL_Specific.thy +++ b/src/Doc/Isar_Ref/HOL_Specific.thy @@ -1,2507 +1,2507 @@ (*:maxLineLen=78:*) theory HOL_Specific imports Main "HOL-Library.Old_Datatype" "HOL-Library.Old_Recdef" "HOL-Library.Adhoc_Overloading" "HOL-Library.Dlist" "HOL-Library.FSet" Base begin chapter \Higher-Order Logic\ text \ Isabelle/HOL is based on Higher-Order Logic, a polymorphic version of Church's Simple Theory of Types. HOL can be best understood as a simply-typed version of classical set theory. The logic was first implemented in Gordon's HOL system @{cite "mgordon-hol"}. It extends Church's original logic @{cite "church40"} by explicit type variables (naive polymorphism) and a sound axiomatization scheme for new types based on subsets of existing types. Andrews's book @{cite andrews86} is a full description of the original Church-style higher-order logic, with proofs of correctness and completeness wrt.\ certain set-theoretic interpretations. The particular extensions of Gordon-style HOL are explained semantically in two chapters of the 1993 HOL book @{cite pitts93}. Experience with HOL over decades has demonstrated that higher-order logic is widely applicable in many areas of mathematics and computer science. In a sense, Higher-Order Logic is simpler than First-Order Logic, because there are fewer restrictions and special cases. Note that HOL is \<^emph>\weaker\ than FOL with axioms for ZF set theory, which is traditionally considered the standard foundation of regular mathematics, but for most applications this does not matter. If you prefer ML to Lisp, you will probably prefer HOL to ZF. \<^medskip> The syntax of HOL follows \\\-calculus and functional programming. Function application is curried. To apply the function \f\ of type \\\<^sub>1 \ \\<^sub>2 \ \\<^sub>3\ to the arguments \a\ and \b\ in HOL, you simply write \f a b\ (as in ML or Haskell). There is no ``apply'' operator; the existing application of the Pure \\\-calculus is re-used. Note that in HOL \f (a, b)\ means ``\f\ applied to the pair \(a, b)\ (which is notation for \Pair a b\). The latter typically introduces extra formal efforts that can be avoided by currying functions by default. Explicit tuples are as infrequent in HOL formalizations as in good ML or Haskell programs. \<^medskip> Isabelle/HOL has a distinct feel, compared to other object-logics like Isabelle/ZF. It identifies object-level types with meta-level types, taking advantage of the default type-inference mechanism of Isabelle/Pure. HOL fully identifies object-level functions with meta-level functions, with native abstraction and application. These identifications allow Isabelle to support HOL particularly nicely, but they also mean that HOL requires some sophistication from the user. In particular, an understanding of Hindley-Milner type-inference with type-classes, which are both used extensively in the standard libraries and applications. 
\ chapter \Derived specification elements\ section \Inductive and coinductive definitions \label{sec:hol-inductive}\ text \ \begin{matharray}{rcl} @{command_def (HOL) "inductive"} & : & \local_theory \ local_theory\ \\ @{command_def (HOL) "inductive_set"} & : & \local_theory \ local_theory\ \\ @{command_def (HOL) "coinductive"} & : & \local_theory \ local_theory\ \\ @{command_def (HOL) "coinductive_set"} & : & \local_theory \ local_theory\ \\ @{command_def "print_inductives"}\\<^sup>*\ & : & \context \\ \\ @{attribute_def (HOL) mono} & : & \attribute\ \\ \end{matharray} An \<^emph>\inductive definition\ specifies the least predicate or set \R\ closed under given rules: applying a rule to elements of \R\ yields a result within \R\. For example, a structural operational semantics is an inductive definition of an evaluation relation. Dually, a \<^emph>\coinductive definition\ specifies the greatest predicate or set \R\ that is consistent with given rules: every element of \R\ can be seen as arising by applying a rule to elements of \R\. An important example is using bisimulation relations to formalise equivalence of processes and infinite data structures. Both inductive and coinductive definitions are based on the Knaster-Tarski fixed-point theorem for complete lattices. The collection of introduction rules given by the user determines a functor on subsets of set-theoretic relations. The required monotonicity of the recursion scheme is proven as a prerequisite to the fixed-point definition and the resulting consequences. This works by pushing inclusion through logical connectives and any other operator that might be wrapped around recursive occurrences of the defined relation: there must be a monotonicity theorem of the form \A \ B \ \ A \ \ B\, for each premise \\ R t\ in an introduction rule. The default rule declarations of Isabelle/HOL already take care of most common situations. \<^rail>\ (@@{command (HOL) inductive} | @@{command (HOL) inductive_set} | @@{command (HOL) coinductive} | @@{command (HOL) coinductive_set}) @{syntax vars} @{syntax for_fixes} \ (@'where' @{syntax multi_specs})? (@'monos' @{syntax thms})? ; @@{command print_inductives} ('!'?) ; @@{attribute (HOL) mono} (() | 'add' | 'del') \ \<^descr> @{command (HOL) "inductive"} and @{command (HOL) "coinductive"} define (co)inductive predicates from the introduction rules. The propositions given as \clauses\ in the @{keyword "where"} part are either rules of the usual \\/\\ format (with arbitrary nesting), or equalities using \\\. The latter specifies extra-logical abbreviations in the sense of @{command_ref abbreviation}. Introducing abstract syntax simultaneously with the actual introduction rules is occasionally useful for complex specifications. The optional @{keyword "for"} part contains a list of parameters of the (co)inductive predicates that remain fixed throughout the definition, in contrast to arguments of the relation that may vary in each occurrence within the given \clauses\. The optional @{keyword "monos"} declaration contains additional \<^emph>\monotonicity theorems\, which are required for each operator applied to a recursive set in the introduction rules. \<^descr> @{command (HOL) "inductive_set"} and @{command (HOL) "coinductive_set"} are wrappers for to the previous commands for native HOL predicates. This allows to define (co)inductive sets, where multiple arguments are simulated via tuples. 
\<^descr> @{command "print_inductives"} prints (co)inductive definitions and monotonicity rules; the ``\!\'' option indicates extra verbosity. \<^descr> @{attribute (HOL) mono} declares monotonicity rules in the context. These rule are involved in the automated monotonicity proof of the above inductive and coinductive definitions. \ subsection \Derived rules\ text \ A (co)inductive definition of \R\ provides the following main theorems: \<^descr> \R.intros\ is the list of introduction rules as proven theorems, for the recursive predicates (or sets). The rules are also available individually, using the names given them in the theory file; \<^descr> \R.cases\ is the case analysis (or elimination) rule; \<^descr> \R.induct\ or \R.coinduct\ is the (co)induction rule; \<^descr> \R.simps\ is the equation unrolling the fixpoint of the predicate one step. When several predicates \R\<^sub>1, \, R\<^sub>n\ are defined simultaneously, the list of introduction rules is called \R\<^sub>1_\_R\<^sub>n.intros\, the case analysis rules are called \R\<^sub>1.cases, \, R\<^sub>n.cases\, and the list of mutual induction rules is called \R\<^sub>1_\_R\<^sub>n.inducts\. \ subsection \Monotonicity theorems\ text \ The context maintains a default set of theorems that are used in monotonicity proofs. New rules can be declared via the @{attribute (HOL) mono} attribute. See the main Isabelle/HOL sources for some examples. The general format of such monotonicity theorems is as follows: \<^item> Theorems of the form \A \ B \ \ A \ \ B\, for proving monotonicity of inductive definitions whose introduction rules have premises involving terms such as \\ R t\. \<^item> Monotonicity theorems for logical operators, which are of the general form \(\ \ \) \ \ (\ \ \) \ \ \ \\. For example, in the case of the operator \\\, the corresponding theorem is \[ \infer{\P\<^sub>1 \ P\<^sub>2 \ Q\<^sub>1 \ Q\<^sub>2\}{\P\<^sub>1 \ Q\<^sub>1\ & \P\<^sub>2 \ Q\<^sub>2\} \] \<^item> De Morgan style equations for reasoning about the ``polarity'' of expressions, e.g. \[ \<^prop>\\ \ P \ P\ \qquad\qquad \<^prop>\\ (P \ Q) \ \ P \ \ Q\ \] \<^item> Equations for reducing complex operators to more primitive ones whose monotonicity can easily be proved, e.g. \[ \<^prop>\(P \ Q) \ \ P \ Q\ \qquad\qquad \<^prop>\Ball A P \ \x. x \ A \ P x\ \] \ subsubsection \Examples\ text \The finite powerset operator can be defined inductively like this:\ (*<*)experiment begin(*>*) inductive_set Fin :: "'a set \ 'a set set" for A :: "'a set" where empty: "{} \ Fin A" | insert: "a \ A \ B \ Fin A \ insert a B \ Fin A" text \The accessible part of a relation is defined as follows:\ inductive acc :: "('a \ 'a \ bool) \ 'a \ bool" for r :: "'a \ 'a \ bool" (infix "\" 50) where acc: "(\y. y \ x \ acc r y) \ acc r x" (*<*)end(*>*) text \ Common logical connectives can be easily characterized as non-recursive inductive definitions with parameters, but without arguments. \ (*<*)experiment begin(*>*) inductive AND for A B :: bool where "A \ B \ AND A B" inductive OR for A B :: bool where "A \ OR A B" | "B \ OR A B" inductive EXISTS for B :: "'a \ bool" where "B a \ EXISTS B" (*<*)end(*>*) text \ Here the \cases\ or \induct\ rules produced by the @{command inductive} package coincide with the expected elimination rules for Natural Deduction. Already in the original article by Gerhard Gentzen @{cite "Gentzen:1935"} there is a hint that each connective can be characterized by its introductions, and the elimination can be constructed systematically. 
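  \<^medskip>
  The derived rules of such definitions work smoothly with the generic proof
  tools. The following sketch defines an ad-hoc predicate \ev\ for even
  natural numbers (the usual textbook example, not part of the library) and
  uses its derived rules:
\

(*<*)experiment begin(*>*)
inductive ev :: "nat \<Rightarrow> bool"
where
  ev0: "ev 0"
| evSS: "ev n \<Longrightarrow> ev (Suc (Suc n))"

lemma "ev (Suc (Suc (Suc (Suc 0))))"
  by (blast intro: ev.intros)

lemma "ev (Suc (Suc n)) \<Longrightarrow> ev n"
  by (auto elim: ev.cases)

lemma "ev n \<Longrightarrow> even n"
  by (induct rule: ev.induct) auto
(*<*)end(*>*)

text \
  Here the introduction rules serve as ad-hoc \intro\ declarations, the case
  analysis rule as an \elim\ declaration, and \ev.induct\ works with the
  generic @{method induct} method (\secref{sec:cases-induct}); the last lemma
  relates the ad-hoc \ev\ to the library predicate \even\.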
\ section \Recursive functions \label{sec:recursion}\ text \ \begin{matharray}{rcl} @{command_def (HOL) "primrec"} & : & \local_theory \ local_theory\ \\ @{command_def (HOL) "fun"} & : & \local_theory \ local_theory\ \\ @{command_def (HOL) "function"} & : & \local_theory \ proof(prove)\ \\ @{command_def (HOL) "termination"} & : & \local_theory \ proof(prove)\ \\ @{command_def (HOL) "fun_cases"} & : & \local_theory \ local_theory\ \\ \end{matharray} \<^rail>\ @@{command (HOL) primrec} @{syntax specification} ; (@@{command (HOL) fun} | @@{command (HOL) function}) opts? @{syntax specification} ; opts: '(' (('sequential' | 'domintros') + ',') ')' ; @@{command (HOL) termination} @{syntax term}? ; @@{command (HOL) fun_cases} (@{syntax thmdecl}? @{syntax prop} + @'and') \ \<^descr> @{command (HOL) "primrec"} defines primitive recursive functions over datatypes (see also @{command_ref (HOL) datatype}). The given \equations\ specify reduction rules that are produced by instantiating the generic combinator for primitive recursion that is available for each datatype. Each equation needs to be of the form: @{text [display] "f x\<^sub>1 \ x\<^sub>m (C y\<^sub>1 \ y\<^sub>k) z\<^sub>1 \ z\<^sub>n = rhs"} such that \C\ is a datatype constructor, \rhs\ contains only the free variables on the left-hand side (or from the context), and all recursive occurrences of \f\ in \rhs\ are of the form \f \ y\<^sub>i \\ for some \i\. At most one reduction rule for each constructor can be given. The order does not matter. For missing constructors, the function is defined to return a default value, but this equation is made difficult to access for users. The reduction rules are declared as @{attribute simp} by default, which enables standard proof methods like @{method simp} and @{method auto} to normalize expressions of \f\ applied to datatype constructions, by simulating symbolic computation via rewriting. \<^descr> @{command (HOL) "function"} defines functions by general wellfounded recursion. A detailed description with examples can be found in @{cite "isabelle-function"}. The function is specified by a set of (possibly conditional) recursive equations with arbitrary pattern matching. The command generates proof obligations for the completeness and the compatibility of patterns. The defined function is considered partial, and the resulting simplification rules (named \f.psimps\) and induction rule (named \f.pinduct\) are guarded by a generated domain predicate \f_dom\. The @{command (HOL) "termination"} command can then be used to establish that the function is total. \<^descr> @{command (HOL) "fun"} is a shorthand notation for ``@{command (HOL) "function"}~\(sequential)\'', followed by automated proof attempts regarding pattern matching and termination. See @{cite "isabelle-function"} for further details. \<^descr> @{command (HOL) "termination"}~\f\ commences a termination proof for the previously defined function \f\. If this is omitted, the command refers to the most recent function definition. After the proof is closed, the recursive equations and the induction principle is established. \<^descr> @{command (HOL) "fun_cases"} generates specialized elimination rules for function equations. It expects one or more function equations and produces rules that eliminate the given equalities, following the cases given in the function definition. 
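  \<^medskip>
  For example, a variant of the \sep\ function from @{cite
  "isabelle-function"} inserts a separator between any two adjacent list
  elements; its patterns overlap and are disambiguated sequentially, and the
  generated rules \sep.simps\ and \sep.induct\ are used as described below:
\

(*<*)experiment begin(*>*)
fun sep :: "'a \<Rightarrow> 'a list \<Rightarrow> 'a list"
where
  "sep a (x # y # zs) = x # a # sep a (y # zs)"
| "sep a xs = xs"

lemma "map f (sep a xs) = sep (f a) (map f xs)"
  by (induct a xs rule: sep.induct) simp_all
(*<*)end(*>*)

text \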
Recursive definitions introduced by the @{command (HOL) "function"} command accommodate reasoning by induction (cf.\ @{method induct}): rule \f.induct\ refers to a specific induction rule, with parameters named according to the user-specified equations. Cases are numbered starting from 1. For @{command (HOL) "primrec"}, the induction principle coincides with structural recursion on the datatype where the recursion is carried out. The equations provided by these packages may be referred later as theorem list \f.simps\, where \f\ is the (collective) name of the functions defined. Individual equations may be named explicitly as well. The @{command (HOL) "function"} command accepts the following options. \<^descr> \sequential\ enables a preprocessor which disambiguates overlapping patterns by making them mutually disjoint. Earlier equations take precedence over later ones. This allows to give the specification in a format very similar to functional programming. Note that the resulting simplification and induction rules correspond to the transformed specification, not the one given originally. This usually means that each equation given by the user may result in several theorems. Also note that this automatic transformation only works for ML-style datatype patterns. \<^descr> \domintros\ enables the automated generation of introduction rules for the domain predicate. While mostly not needed, they can be helpful in some proofs about partial functions. \ subsubsection \Example: evaluation of expressions\ text \ Subsequently, we define mutual datatypes for arithmetic and boolean expressions, and use @{command primrec} for evaluation functions that follow the same recursive structure. \ (*<*)experiment begin(*>*) datatype 'a aexp = IF "'a bexp" "'a aexp" "'a aexp" | Sum "'a aexp" "'a aexp" | Diff "'a aexp" "'a aexp" | Var 'a | Num nat and 'a bexp = Less "'a aexp" "'a aexp" | And "'a bexp" "'a bexp" | Neg "'a bexp" text \\<^medskip> Evaluation of arithmetic and boolean expressions\ primrec evala :: "('a \ nat) \ 'a aexp \ nat" and evalb :: "('a \ nat) \ 'a bexp \ bool" where "evala env (IF b a1 a2) = (if evalb env b then evala env a1 else evala env a2)" | "evala env (Sum a1 a2) = evala env a1 + evala env a2" | "evala env (Diff a1 a2) = evala env a1 - evala env a2" | "evala env (Var v) = env v" | "evala env (Num n) = n" | "evalb env (Less a1 a2) = (evala env a1 < evala env a2)" | "evalb env (And b1 b2) = (evalb env b1 \ evalb env b2)" | "evalb env (Neg b) = (\ evalb env b)" text \ Since the value of an expression depends on the value of its variables, the functions \<^const>\evala\ and \<^const>\evalb\ take an additional parameter, an \<^emph>\environment\ that maps variables to their values. \<^medskip> Substitution on expressions can be defined similarly. The mapping \f\ of type \<^typ>\'a \ 'a aexp\ given as a parameter is lifted canonically on the types \<^typ>\'a aexp\ and \<^typ>\'a bexp\, respectively. 
\ primrec substa :: "('a \ 'b aexp) \ 'a aexp \ 'b aexp" and substb :: "('a \ 'b aexp) \ 'a bexp \ 'b bexp" where "substa f (IF b a1 a2) = IF (substb f b) (substa f a1) (substa f a2)" | "substa f (Sum a1 a2) = Sum (substa f a1) (substa f a2)" | "substa f (Diff a1 a2) = Diff (substa f a1) (substa f a2)" | "substa f (Var v) = f v" | "substa f (Num n) = Num n" | "substb f (Less a1 a2) = Less (substa f a1) (substa f a2)" | "substb f (And b1 b2) = And (substb f b1) (substb f b2)" | "substb f (Neg b) = Neg (substb f b)" text \ In textbooks about semantics one often finds substitution theorems, which express the relationship between substitution and evaluation. For \<^typ>\'a aexp\ and \<^typ>\'a bexp\, we can prove such a theorem by mutual induction, followed by simplification. \ lemma subst_one: "evala env (substa (Var (v := a')) a) = evala (env (v := evala env a')) a" "evalb env (substb (Var (v := a')) b) = evalb (env (v := evala env a')) b" by (induct a and b) simp_all lemma subst_all: "evala env (substa s a) = evala (\x. evala env (s x)) a" "evalb env (substb s b) = evalb (\x. evala env (s x)) b" by (induct a and b) simp_all (*<*)end(*>*) subsubsection \Example: a substitution function for terms\ text \Functions on datatypes with nested recursion are also defined by mutual primitive recursion.\ (*<*)experiment begin(*>*) datatype ('a, 'b) "term" = Var 'a | App 'b "('a, 'b) term list" text \ A substitution function on type \<^typ>\('a, 'b) term\ can be defined as follows, by working simultaneously on \<^typ>\('a, 'b) term list\: \ primrec subst_term :: "('a \ ('a, 'b) term) \ ('a, 'b) term \ ('a, 'b) term" and subst_term_list :: "('a \ ('a, 'b) term) \ ('a, 'b) term list \ ('a, 'b) term list" where "subst_term f (Var a) = f a" | "subst_term f (App b ts) = App b (subst_term_list f ts)" | "subst_term_list f [] = []" | "subst_term_list f (t # ts) = subst_term f t # subst_term_list f ts" text \ The recursion scheme follows the structure of the unfolded definition of type \<^typ>\('a, 'b) term\. To prove properties of this substitution function, mutual induction is needed: \ lemma "subst_term (subst_term f1 \ f2) t = subst_term f1 (subst_term f2 t)" and "subst_term_list (subst_term f1 \ f2) ts = subst_term_list f1 (subst_term_list f2 ts)" by (induct t and ts rule: subst_term.induct subst_term_list.induct) simp_all (*<*)end(*>*) subsubsection \Example: a map function for infinitely branching trees\ text \Defining functions on infinitely branching datatypes by primitive recursion is just as easy.\ (*<*)experiment begin(*>*) datatype 'a tree = Atom 'a | Branch "nat \ 'a tree" primrec map_tree :: "('a \ 'b) \ 'a tree \ 'b tree" where "map_tree f (Atom a) = Atom (f a)" | "map_tree f (Branch ts) = Branch (\x. map_tree f (ts x))" text \ Note that all occurrences of functions such as \ts\ above must be applied to an argument. In particular, \<^term>\map_tree f \ ts\ is not allowed here. 
\<^medskip> Here is a simple composition lemma for \<^term>\map_tree\: \ lemma "map_tree g (map_tree f t) = map_tree (g \ f) t" by (induct t) simp_all (*<*)end(*>*) subsection \Proof methods related to recursive definitions\ text \ \begin{matharray}{rcl} @{method_def (HOL) pat_completeness} & : & \method\ \\ @{method_def (HOL) relation} & : & \method\ \\ @{method_def (HOL) lexicographic_order} & : & \method\ \\ @{method_def (HOL) size_change} & : & \method\ \\ @{attribute_def (HOL) termination_simp} & : & \attribute\ \\ @{method_def (HOL) induction_schema} & : & \method\ \\ \end{matharray} \<^rail>\ @@{method (HOL) relation} @{syntax term} ; @@{method (HOL) lexicographic_order} (@{syntax clasimpmod} * ) ; @@{method (HOL) size_change} ( orders (@{syntax clasimpmod} * ) ) ; @@{method (HOL) induction_schema} ; orders: ( 'max' | 'min' | 'ms' ) * \ \<^descr> @{method (HOL) pat_completeness} is a specialized method to solve goals regarding the completeness of pattern matching, as required by the @{command (HOL) "function"} package (cf.\ @{cite "isabelle-function"}). \<^descr> @{method (HOL) relation}~\R\ introduces a termination proof using the relation \R\. The resulting proof state will contain goals expressing that \R\ is wellfounded, and that the arguments of recursive calls decrease with respect to \R\. Usually, this method is used as the initial proof step of manual termination proofs. \<^descr> @{method (HOL) "lexicographic_order"} attempts a fully automated termination proof by searching for a lexicographic combination of size measures on the arguments of the function. The method accepts the same arguments as the @{method auto} method, which it uses internally to prove local descents. The @{syntax clasimpmod} modifiers are accepted (as for @{method auto}). In case of failure, extensive information is printed, which can help to analyse the situation (cf.\ @{cite "isabelle-function"}). \<^descr> @{method (HOL) "size_change"} also works on termination goals, using a variation of the size-change principle, together with a graph decomposition technique (see @{cite krauss_phd} for details). Three kinds of orders are used internally: \max\, \min\, and \ms\ (multiset), which is only available when the theory \Multiset\ is loaded. When no order kinds are given, they are tried in order. The search for a termination proof uses SAT solving internally. For local descent proofs, the @{syntax clasimpmod} modifiers are accepted (as for @{method auto}). \<^descr> @{attribute (HOL) termination_simp} declares extra rules for the simplifier, when invoked in termination proofs. This can be useful, e.g., for special rules involving size estimations. \<^descr> @{method (HOL) induction_schema} derives user-specified induction rules from well-founded induction and completeness of patterns. This factors out some operations that are done internally by the function package and makes them available separately. See \<^file>\~~/src/HOL/ex/Induction_Schema.thy\ for examples. \ subsection \Functions with explicit partiality\ text \ \begin{matharray}{rcl} @{command_def (HOL) "partial_function"} & : & \local_theory \ local_theory\ \\ @{attribute_def (HOL) "partial_function_mono"} & : & \attribute\ \\ \end{matharray} \<^rail>\ @@{command (HOL) partial_function} '(' @{syntax name} ')' @{syntax specification} \ \<^descr> @{command (HOL) "partial_function"}~\(mode)\ defines recursive functions based on fixpoints in complete partial orders. No termination proof is required from the user or constructed internally. 
Instead, the possibility of non-termination is modelled explicitly in the result type, which contains an explicit bottom element. Pattern matching and mutual recursion are currently not supported. Thus, the specification consists of a single function described by a single recursive equation. There are no fixed syntactic restrictions on the body of the function, but the induced functional must be provably monotonic wrt.\ the underlying order. The monotonicity proof is performed internally, and the definition is rejected when it fails. The proof can be influenced by declaring hints using the @{attribute (HOL) partial_function_mono} attribute. The mandatory \mode\ argument specifies the mode of operation of the command, which directly corresponds to a complete partial order on the result type. By default, the following modes are defined: \<^descr> \option\ defines functions that map into the \<^type>\option\ type. Here, the value \<^term>\None\ is used to model a non-terminating computation. Monotonicity requires that if \<^term>\None\ is returned by a recursive call, then the overall result must also be \<^term>\None\. This is best achieved through the use of the monadic operator \<^const>\Option.bind\. \<^descr> \tailrec\ defines functions with an arbitrary result type and uses the slightly degenerated partial order where \<^term>\undefined\ is the bottom element. Now, monotonicity requires that if \<^term>\undefined\ is returned by a recursive call, then the overall result must also be \<^term>\undefined\. In practice, this is only satisfied when each recursive call is a tail call, whose result is directly returned. Thus, this mode of operation allows the definition of arbitrary tail-recursive functions. Experienced users may define new modes by instantiating the locale \<^const>\partial_function_definitions\ appropriately. \<^descr> @{attribute (HOL) partial_function_mono} declares rules for use in the internal monotonicity proofs of partial function definitions. \ subsection \Old-style recursive function definitions (TFL)\ text \ \begin{matharray}{rcl} @{command_def (HOL) "recdef"} & : & \theory \ theory)\ \\ \end{matharray} The old TFL command @{command (HOL) "recdef"} for defining recursive is mostly obsolete; @{command (HOL) "function"} or @{command (HOL) "fun"} should be used instead. \<^rail>\ @@{command (HOL) recdef} ('(' @'permissive' ')')? \ @{syntax name} @{syntax term} (@{syntax prop} +) hints? ; hints: '(' @'hints' ( recdefmod * ) ')' ; recdefmod: (('recdef_simp' | 'recdef_cong' | 'recdef_wf') (() | 'add' | 'del') ':' @{syntax thms}) | @{syntax clasimpmod} \ \<^descr> @{command (HOL) "recdef"} defines general well-founded recursive functions (using the TFL package). The ``\(permissive)\'' option tells TFL to recover from failed proof attempts, returning unfinished results. The \recdef_simp\, \recdef_cong\, and \recdef_wf\ hints refer to auxiliary rules to be used in the internal automated proof process of TFL. Additional @{syntax clasimpmod} declarations may be given to tune the context of the Simplifier (cf.\ \secref{sec:simplifier}) and Classical reasoner (cf.\ \secref{sec:classical}). \<^medskip> Hints for @{command (HOL) "recdef"} may be also declared globally, using the following attributes. 
\begin{matharray}{rcl} @{attribute_def (HOL) recdef_simp} & : & \attribute\ \\ @{attribute_def (HOL) recdef_cong} & : & \attribute\ \\ @{attribute_def (HOL) recdef_wf} & : & \attribute\ \\ \end{matharray} \<^rail>\ (@@{attribute (HOL) recdef_simp} | @@{attribute (HOL) recdef_cong} | @@{attribute (HOL) recdef_wf}) (() | 'add' | 'del') \ \ section \Adhoc overloading of constants\ text \ \begin{tabular}{rcll} @{command_def "adhoc_overloading"} & : & \local_theory \ local_theory\ \\ @{command_def "no_adhoc_overloading"} & : & \local_theory \ local_theory\ \\ @{attribute_def "show_variants"} & : & \attribute\ & default \false\ \\ \end{tabular} \<^medskip> Adhoc overloading allows to overload a constant depending on its type. Typically this involves the introduction of an uninterpreted constant (used for input and output) and the addition of some variants (used internally). For examples see \<^file>\~~/src/HOL/ex/Adhoc_Overloading_Examples.thy\ and \<^file>\~~/src/HOL/Library/Monad_Syntax.thy\. \<^rail>\ (@@{command adhoc_overloading} | @@{command no_adhoc_overloading}) (@{syntax name} (@{syntax term} + ) + @'and') \ \<^descr> @{command "adhoc_overloading"}~\c v\<^sub>1 ... v\<^sub>n\ associates variants with an existing constant. \<^descr> @{command "no_adhoc_overloading"} is similar to @{command "adhoc_overloading"}, but removes the specified variants from the present context. \<^descr> @{attribute "show_variants"} controls printing of variants of overloaded constants. If enabled, the internally used variants are printed instead of their respective overloaded constants. This is occasionally useful to check whether the system agrees with a user's expectations about derived variants. \ section \Definition by specification \label{sec:hol-specification}\ text \ \begin{matharray}{rcl} @{command_def (HOL) "specification"} & : & \theory \ proof(prove)\ \\ \end{matharray} \<^rail>\ @@{command (HOL) specification} '(' (decl +) ')' \ (@{syntax thmdecl}? @{syntax prop} +) ; decl: (@{syntax name} ':')? @{syntax term} ('(' @'overloaded' ')')? \ \<^descr> @{command (HOL) "specification"}~\decls \\ sets up a goal stating the existence of terms with the properties specified to hold for the constants given in \decls\. After finishing the proof, the theory will be augmented with definitions for the given constants, as well as with theorems stating the properties for these constants. \decl\ declares a constant to be defined by the specification given. The definition for the constant \c\ is bound to the name \c_def\ unless a theorem name is given in the declaration. Overloaded constants should be declared as such. \ section \Old-style datatypes \label{sec:hol-datatype}\ text \ \begin{matharray}{rcl} @{command_def (HOL) "old_rep_datatype"} & : & \theory \ proof(prove)\ \\ \end{matharray} \<^rail>\ @@{command (HOL) old_rep_datatype} ('(' (@{syntax name} +) ')')? (@{syntax term} +) ; spec: @{syntax typespec_sorts} @{syntax mixfix}? '=' (cons + '|') ; cons: @{syntax name} (@{syntax type} * ) @{syntax mixfix}? \ \<^descr> @{command (HOL) "old_rep_datatype"} represents existing types as old-style datatypes. These commands are mostly obsolete; @{command (HOL) "datatype"} should be used instead. See @{cite "isabelle-datatypes"} for more details on datatypes. 
Apart from proper proof methods for case analysis and induction, there are also emulations of ML tactics @{method (HOL) case_tac} and @{method (HOL) induct_tac} available, see \secref{sec:hol-induct-tac}; these admit to refer directly to the internal structure of subgoals (including internally bound parameters). \ subsubsection \Examples\ text \ We define a type of finite sequences, with slightly different names than the existing \<^typ>\'a list\ that is already in \<^theory>\Main\: \ (*<*)experiment begin(*>*) datatype 'a seq = Empty | Seq 'a "'a seq" text \We can now prove some simple lemma by structural induction:\ lemma "Seq x xs \ xs" proof (induct xs arbitrary: x) case Empty txt \This case can be proved using the simplifier: the freeness properties of the datatype are already declared as @{attribute simp} rules.\ show "Seq x Empty \ Empty" by simp next case (Seq y ys) txt \The step case is proved similarly.\ show "Seq x (Seq y ys) \ Seq y ys" using \Seq y ys \ ys\ by simp qed text \Here is a more succinct version of the same proof:\ lemma "Seq x xs \ xs" by (induct xs arbitrary: x) simp_all (*<*)end(*>*) section \Records \label{sec:hol-record}\ text \ In principle, records merely generalize the concept of tuples, where components may be addressed by labels instead of just position. The logical infrastructure of records in Isabelle/HOL is slightly more advanced, though, supporting truly extensible record schemes. This admits operations that are polymorphic with respect to record extension, yielding ``object-oriented'' effects like (single) inheritance. See also @{cite "NaraschewskiW-TPHOLs98"} for more details on object-oriented verification and record subtyping in HOL. \ subsection \Basic concepts\ text \ Isabelle/HOL supports both \<^emph>\fixed\ and \<^emph>\schematic\ records at the level of terms and types. The notation is as follows: \begin{center} \begin{tabular}{l|l|l} & record terms & record types \\ \hline fixed & \\x = a, y = b\\ & \\x :: A, y :: B\\ \\ schematic & \\x = a, y = b, \ = m\\ & \\x :: A, y :: B, \ :: M\\ \\ \end{tabular} \end{center} The ASCII representation of \\x = a\\ is \(| x = a |)\. A fixed record \\x = a, y = b\\ has field \x\ of value \a\ and field \y\ of value \b\. The corresponding type is \\x :: A, y :: B\\, assuming that \a :: A\ and \b :: B\. A record scheme like \\x = a, y = b, \ = m\\ contains fields \x\ and \y\ as before, but also possibly further fields as indicated by the ``\\\'' notation (which is actually part of the syntax). The improper field ``\\\'' of a record scheme is called the \<^emph>\more part\. Logically it is just a free variable, which is occasionally referred to as ``row variable'' in the literature. The more part of a record scheme may be instantiated by zero or more further components. For example, the previous scheme may get instantiated to \\x = a, y = b, z = c, \ = m'\\, where \m'\ refers to a different more part. Fixed records are special instances of record schemes, where ``\\\'' is properly terminated by the \() :: unit\ element. In fact, \\x = a, y = b\\ is just an abbreviation for \\x = a, y = b, \ = ()\\. \<^medskip> Two key observations make extensible records in a simply typed language like HOL work out: \<^enum> the more part is internalized, as a free term or type variable, \<^enum> field names are externalized, they cannot be accessed within the logic as first-class values. 
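  \<^medskip>
  For example (a minimal sketch with an ad-hoc record \point\, using the
  @{command (HOL) record} command described in the following subsection):
\

(*<*)experiment begin(*>*)
record point =
  x :: nat
  y :: nat

lemma "x \<lparr>x = 1, y = 2\<rparr> = 1" and "y \<lparr>x = 1, y = 2\<rparr> = 2"
  by simp_all
(*<*)end(*>*)

text \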
\<^medskip> In Isabelle/HOL record types have to be defined explicitly, fixing their field names and types, and their (optional) parent record. Afterwards, records may be formed using above syntax, while obeying the canonical order of fields as given by their declaration. The record package provides several standard operations like selectors and updates. The common setup for various generic proof tools enable succinct reasoning patterns. See also the Isabelle/HOL tutorial @{cite "isabelle-hol-book"} for further instructions on using records in practice. \ subsection \Record specifications\ text \ \begin{matharray}{rcl} @{command_def (HOL) "record"} & : & \theory \ theory\ \\ @{command_def (HOL) "print_record"} & : & \context \\ \\ \end{matharray} \<^rail>\ @@{command (HOL) record} @{syntax "overloaded"}? @{syntax typespec_sorts} '=' \ (@{syntax type} '+')? (constdecl +) ; constdecl: @{syntax name} '::' @{syntax type} @{syntax mixfix}? ; @@{command (HOL) print_record} modes? @{syntax typespec_sorts} ; modes: '(' (@{syntax name} +) ')' \ \<^descr> @{command (HOL) "record"}~\(\\<^sub>1, \, \\<^sub>m) t = \ + c\<^sub>1 :: \\<^sub>1 \ c\<^sub>n :: \\<^sub>n\ defines extensible record type \(\\<^sub>1, \, \\<^sub>m) t\, derived from the optional parent record \\\ by adding new field components \c\<^sub>i :: \\<^sub>i\ etc. The type variables of \\\ and \\\<^sub>i\ need to be covered by the (distinct) parameters \\\<^sub>1, \, \\<^sub>m\. Type constructor \t\ has to be new, while \\\ needs to specify an instance of an existing record type. At least one new field \c\<^sub>i\ has to be specified. Basically, field names need to belong to a unique record. This is not a real restriction in practice, since fields are qualified by the record name internally. The parent record specification \\\ is optional; if omitted \t\ becomes a root record. The hierarchy of all records declared within a theory context forms a forest structure, i.e.\ a set of trees starting with a root record each. There is no way to merge multiple parent records! For convenience, \(\\<^sub>1, \, \\<^sub>m) t\ is made a type abbreviation for the fixed record type \\c\<^sub>1 :: \\<^sub>1, \, c\<^sub>n :: \\<^sub>n\\, likewise is \(\\<^sub>1, \, \\<^sub>m, \) t_scheme\ made an abbreviation for \\c\<^sub>1 :: \\<^sub>1, \, c\<^sub>n :: \\<^sub>n, \ :: \\\. \<^descr> @{command (HOL) "print_record"}~\(\\<^sub>1, \, \\<^sub>m) t\ prints the definition of record \(\\<^sub>1, \, \\<^sub>m) t\. Optionally \modes\ can be specified, which are appended to the current print mode; see \secref{sec:print-modes}. \ subsection \Record operations\ text \ Any record definition of the form presented above produces certain standard operations. Selectors and updates are provided for any field, including the improper one ``\more\''. There are also cumulative record constructor functions. To simplify the presentation below, we assume for now that \(\\<^sub>1, \, \\<^sub>m) t\ is a root record with fields \c\<^sub>1 :: \\<^sub>1, \, c\<^sub>n :: \\<^sub>n\. \<^medskip> \<^bold>\Selectors\ and \<^bold>\updates\ are available for any field (including ``\more\''): \begin{matharray}{lll} \c\<^sub>i\ & \::\ & \\\<^vec>c :: \<^vec>\, \ :: \\ \ \\<^sub>i\ \\ \c\<^sub>i_update\ & \::\ & \\\<^sub>i \ \\<^vec>c :: \<^vec>\, \ :: \\ \ \\<^vec>c :: \<^vec>\, \ :: \\\ \\ \end{matharray} There is special syntax for application of updates: \r\x := a\\ abbreviates term \x_update a r\. 
Further notation for repeated updates is also available: \r\x := a\\y := b\\z := c\\ may be written \r\x := a, y := b, z := c\\. Note that because of postfix notation the order of fields shown here is reverse than in the actual term. Since repeated updates are just function applications, fields may be freely permuted in \\x := a, y := b, z := c\\, as far as logical equality is concerned. Thus commutativity of independent updates can be proven within the logic for any two fields, but not as a general theorem. \<^medskip> The \<^bold>\make\ operation provides a cumulative record constructor function: \begin{matharray}{lll} \t.make\ & \::\ & \\\<^sub>1 \ \ \\<^sub>n \ \\<^vec>c :: \<^vec>\\\ \\ \end{matharray} \<^medskip> We now reconsider the case of non-root records, which are derived of some parent. In general, the latter may depend on another parent as well, resulting in a list of \<^emph>\ancestor records\. Appending the lists of fields of all ancestors results in a certain field prefix. The record package automatically takes care of this by lifting operations over this context of ancestor fields. Assuming that \(\\<^sub>1, \, \\<^sub>m) t\ has ancestor fields \b\<^sub>1 :: \\<^sub>1, \, b\<^sub>k :: \\<^sub>k\, the above record operations will get the following types: \<^medskip> \begin{tabular}{lll} \c\<^sub>i\ & \::\ & \\\<^vec>b :: \<^vec>\, \<^vec>c :: \<^vec>\, \ :: \\ \ \\<^sub>i\ \\ \c\<^sub>i_update\ & \::\ & \\\<^sub>i \ \\<^vec>b :: \<^vec>\, \<^vec>c :: \<^vec>\, \ :: \\ \ \\<^vec>b :: \<^vec>\, \<^vec>c :: \<^vec>\, \ :: \\\ \\ \t.make\ & \::\ & \\\<^sub>1 \ \ \\<^sub>k \ \\<^sub>1 \ \ \\<^sub>n \ \\<^vec>b :: \<^vec>\, \<^vec>c :: \<^vec>\\\ \\ \end{tabular} \<^medskip> Some further operations address the extension aspect of a derived record scheme specifically: \t.fields\ produces a record fragment consisting of exactly the new fields introduced here (the result may serve as a more part elsewhere); \t.extend\ takes a fixed record and adds a given more part; \t.truncate\ restricts a record scheme to a fixed record. \<^medskip> \begin{tabular}{lll} \t.fields\ & \::\ & \\\<^sub>1 \ \ \\<^sub>n \ \\<^vec>c :: \<^vec>\\\ \\ \t.extend\ & \::\ & \\\<^vec>b :: \<^vec>\, \<^vec>c :: \<^vec>\\ \ \ \ \\<^vec>b :: \<^vec>\, \<^vec>c :: \<^vec>\, \ :: \\\ \\ \t.truncate\ & \::\ & \\\<^vec>b :: \<^vec>\, \<^vec>c :: \<^vec>\, \ :: \\ \ \\<^vec>b :: \<^vec>\, \<^vec>c :: \<^vec>\\\ \\ \end{tabular} \<^medskip> Note that \t.make\ and \t.fields\ coincide for root records. \ subsection \Derived rules and proof tools\ text \ The record package proves several results internally, declaring these facts to appropriate proof tools. This enables users to reason about record structures quite conveniently. Assume that \t\ is a record type as specified above. \<^enum> Standard conversions for selectors or updates applied to record constructor terms are made part of the default Simplifier context; thus proofs by reduction of basic operations merely require the @{method simp} method without further arguments. These rules are available as \t.simps\, too. \<^enum> Selectors applied to updated records are automatically reduced by an internal simplification procedure, which is also part of the standard Simplifier setup. \<^enum> Inject equations of a form analogous to \<^prop>\(x, y) = (x', y') \ x = x' \ y = y'\ are declared to the Simplifier and Classical Reasoner as @{attribute iff} rules. These rules are available as \t.iffs\. 
\<^enum> The introduction rule for record equality analogous to \x r = x r' \ y r = y r' \ \ r = r'\ is declared to the Simplifier, and as the basic rule context as ``@{attribute intro}\?\''. The rule is called \t.equality\. \<^enum> Representations of arbitrary record expressions as canonical constructor terms are provided both in @{method cases} and @{method induct} format (cf.\ the generic proof methods of the same name, \secref{sec:cases-induct}). Several variations are available, for fixed records, record schemes, more parts etc. The generic proof methods are sufficiently smart to pick the most sensible rule according to the type of the indicated record expression: users just need to apply something like ``\(cases r)\'' to a certain proof problem. \<^enum> The derived record operations \t.make\, \t.fields\, \t.extend\, \t.truncate\ are \<^emph>\not\ treated automatically, but usually need to be expanded by hand, using the collective fact \t.defs\. \ subsubsection \Examples\ text \See \<^file>\~~/src/HOL/ex/Records.thy\, for example.\ section \Semantic subtype definitions \label{sec:hol-typedef}\ text \ \begin{matharray}{rcl} @{command_def (HOL) "typedef"} & : & \local_theory \ proof(prove)\ \\ \end{matharray} A type definition identifies a new type with a non-empty subset of an existing type. More precisely, the new type is defined by exhibiting an existing type \\\, a set \A :: \ set\, and proving \<^prop>\\x. x \ A\. Thus \A\ is a non-empty subset of \\\, and the new type denotes this subset. New functions are postulated that establish an isomorphism between the new type and the subset. In general, the type \\\ may involve type variables \\\<^sub>1, \, \\<^sub>n\ which means that the type definition produces a type constructor \(\\<^sub>1, \, \\<^sub>n) t\ depending on those type arguments. \<^rail>\ @@{command (HOL) typedef} @{syntax "overloaded"}? abs_type '=' rep_set ; @{syntax_def "overloaded"}: ('(' @'overloaded' ')') ; abs_type: @{syntax typespec_sorts} @{syntax mixfix}? ; rep_set: @{syntax term} (@'morphisms' @{syntax name} @{syntax name})? \ To understand the concept of type definition better, we need to recount its somewhat complex history. The HOL logic goes back to the ``Simple Theory of Types'' (STT) of A. Church @{cite "church40"}, which is further explained in the book by P. Andrews @{cite "andrews86"}. The overview article by W. Farmer @{cite "Farmer:2008"} points out the ``seven virtues'' of this relatively simple family of logics. STT has only ground types, without polymorphism and without type definitions. \<^medskip> M. Gordon @{cite "Gordon:1985:HOL"} augmented Church's STT by adding schematic polymorphism (type variables and type constructors) and a facility to introduce new types as semantic subtypes from existing types. This genuine extension of the logic was explained semantically by A. Pitts in the book of the original Cambridge HOL88 system @{cite "pitts93"}. Type definitions work in this setting, because the general model-theory of STT is restricted to models that ensure that the universe of type interpretations is closed by forming subsets (via predicates taken from the logic). \<^medskip> Isabelle/HOL goes beyond Gordon-style HOL by admitting overloaded constant definitions @{cite "Wenzel:1997:TPHOL" and "Haftmann-Wenzel:2006:classes"}, which are actually a concept of Isabelle/Pure and do not depend on particular set-theoretic semantics of HOL. 
Over many years, there was no formal checking of semantic type definitions in Isabelle/HOL versus syntactic constant definitions in Isabelle/Pure. So the @{command typedef} command was described as ``axiomatic'' in the sense of \secref{sec:axiomatizations}, only with some local checks of the given type and its representing set. Recent clarification of overloading in the HOL logic proper @{cite "Kuncar-Popescu:2015"} demonstrates how the dissimilar concepts of constant definitions versus type definitions may be understood uniformly. This requires an interpretation of Isabelle/HOL that substantially reforms the set-theoretic model of A. Pitts @{cite "pitts93"}, by taking a schematic view on polymorphism and interpreting only ground types in the set-theoretic sense of HOL88. Moreover, type-constructors may be explicitly overloaded, e.g.\ by making the subset depend on type-class parameters (cf.\ \secref{sec:class}). This is semantically like a dependent type: the meaning relies on the operations provided by different type-class instances. \<^descr> @{command (HOL) "typedef"}~\(\\<^sub>1, \, \\<^sub>n) t = A\ defines a new type \(\\<^sub>1, \, \\<^sub>n) t\ from the set \A\ over an existing type. The set \A\ may contain type variables \\\<^sub>1, \, \\<^sub>n\ as specified on the LHS, but no term variables. Non-emptiness of \A\ needs to be proven on the spot, in order to turn the internal conditional characterization into usable theorems. The ``\(overloaded)\'' option allows the @{command "typedef"} specification to depend on constants that are not (yet) specified and thus left open as parameters, e.g.\ type-class parameters. Within a local theory specification, the newly introduced type constructor cannot depend on parameters or assumptions of the context: this is syntactically impossible in HOL. The non-emptiness proof may formally depend on local assumptions, but this has little practical relevance. For @{command (HOL) "typedef"}~\t = A\ the newly introduced type \t\ is accompanied by a pair of morphisms to relate it to the representing set over the old type. By default, the injection from type to set is called \Rep_t\ and its inverse \Abs_t\: An explicit @{keyword (HOL) "morphisms"} specification allows to provide alternative names. The logical characterization of @{command typedef} uses the predicate of locale \<^const>\type_definition\ that is defined in Isabelle/HOL. Various basic consequences of that are instantiated accordingly, re-using the locale facts with names derived from the new type constructor. Thus the generic theorem @{thm type_definition.Rep} is turned into the specific \Rep_t\, for example. Theorems @{thm type_definition.Rep}, @{thm type_definition.Rep_inverse}, and @{thm type_definition.Abs_inverse} provide the most basic characterization as a corresponding injection/surjection pair (in both directions). The derived rules @{thm type_definition.Rep_inject} and @{thm type_definition.Abs_inject} provide a more convenient version of injectivity, suitable for automated proof tools (e.g.\ in declarations involving @{attribute simp} or @{attribute iff}). Furthermore, the rules @{thm type_definition.Rep_cases}~/ @{thm type_definition.Rep_induct}, and @{thm type_definition.Abs_cases}~/ @{thm type_definition.Abs_induct} provide alternative views on surjectivity. These rules are already declared as set or type rules for the generic @{method cases} and @{method induct} methods, respectively. 
\ subsubsection \Examples\ text \ The following trivial example pulls a three-element type into existence within the formal logical environment of Isabelle/HOL.\ (*<*)experiment begin(*>*) typedef three = "{(True, True), (True, False), (False, True)}" by blast definition "One = Abs_three (True, True)" definition "Two = Abs_three (True, False)" definition "Three = Abs_three (False, True)" lemma three_distinct: "One \ Two" "One \ Three" "Two \ Three" by (simp_all add: One_def Two_def Three_def Abs_three_inject) lemma three_cases: fixes x :: three obtains "x = One" | "x = Two" | "x = Three" by (cases x) (auto simp: One_def Two_def Three_def Abs_three_inject) (*<*)end(*>*) text \Note that such trivial constructions are better done with derived specification mechanisms such as @{command datatype}:\ (*<*)experiment begin(*>*) datatype three = One | Two | Three (*<*)end(*>*) text \This avoids re-doing basic definitions and proofs from the primitive @{command typedef} above.\ section \Functorial structure of types\ text \ \begin{matharray}{rcl} @{command_def (HOL) "functor"} & : & \local_theory \ proof(prove)\ \end{matharray} \<^rail>\ @@{command (HOL) functor} (@{syntax name} ':')? @{syntax term} \ \<^descr> @{command (HOL) "functor"}~\prefix: m\ allows to prove and register properties about the functorial structure of type constructors. These properties then can be used by other packages to deal with those type constructors in certain type constructions. Characteristic theorems are noted in the current local theory. By default, they are prefixed with the base name of the type constructor, an explicit prefix can be given alternatively. The given term \m\ is considered as \<^emph>\mapper\ for the corresponding type constructor and must conform to the following type pattern: \begin{matharray}{lll} \m\ & \::\ & \\\<^sub>1 \ \ \\<^sub>k \ (\<^vec>\\<^sub>n) t \ (\<^vec>\\<^sub>n) t\ \\ \end{matharray} where \t\ is the type constructor, \\<^vec>\\<^sub>n\ and \\<^vec>\\<^sub>n\ are distinct type variables free in the local theory and \\\<^sub>1\, \ldots, \\\<^sub>k\ is a subsequence of \\\<^sub>1 \ \\<^sub>1\, \\\<^sub>1 \ \\<^sub>1\, \ldots, \\\<^sub>n \ \\<^sub>n\, \\\<^sub>n \ \\<^sub>n\. \ section \Quotient types with lifting and transfer\ text \ The quotient package defines a new quotient type given a raw type and a partial equivalence relation (\secref{sec:quotient-type}). The package also historically includes automation for transporting definitions and theorems (\secref{sec:old-quotient}), but most of this automation was superseded by the Lifting (\secref{sec:lifting}) and Transfer (\secref{sec:transfer}) packages. \ subsection \Quotient type definition \label{sec:quotient-type}\ text \ \begin{matharray}{rcl} @{command_def (HOL) "quotient_type"} & : & \local_theory \ proof(prove)\\\ \end{matharray} \<^rail>\ @@{command (HOL) quotient_type} @{syntax "overloaded"}? \ @{syntax typespec} @{syntax mixfix}? '=' quot_type \ quot_morphisms? quot_parametric? ; quot_type: @{syntax type} '/' ('partial' ':')? @{syntax term} ; quot_morphisms: @'morphisms' @{syntax name} @{syntax name} ; quot_parametric: @'parametric' @{syntax thm} \ \<^descr> @{command (HOL) "quotient_type"} defines a new quotient type \\\. The injection from a quotient type to a raw type is called \rep_\\, its inverse \abs_\\ unless explicit @{keyword (HOL) "morphisms"} specification provides alternative names. 
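For example, a copy of the integers could be constructed as a quotient of pairs of natural numbers (a hypothetical sketch, using the fresh name \my_int\ purely for illustration):

@{theory_text [display] \quotient_type my_int = "nat * nat" / "%(x, y) (u, v). x + v = u + y"\}

The proof obligation arising from such a specification is explained next.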
@{command (HOL) "quotient_type"} requires the user to prove that the relation is an equivalence relation (predicate \equivp\), unless the user specifies explicitly \partial\ in which case the obligation is \part_equivp\. A quotient defined with \partial\ is weaker in the sense that less things can be proved automatically. The command internally proves a Quotient theorem and sets up the Lifting package by the command @{command (HOL) setup_lifting}. Thus the Lifting and Transfer packages can be used also with quotient types defined by @{command (HOL) "quotient_type"} without any extra set-up. The parametricity theorem for the equivalence relation R can be provided as an extra argument of the command and is passed to the corresponding internal call of @{command (HOL) setup_lifting}. This theorem allows the Lifting package to generate a stronger transfer rule for equality. \ subsection \Lifting package \label{sec:lifting}\ text \ The Lifting package allows users to lift terms of the raw type to the abstract type, which is a necessary step in building a library for an abstract type. Lifting defines a new constant by combining coercion functions (\<^term>\Abs\ and \<^term>\Rep\) with the raw term. It also proves an appropriate transfer rule for the Transfer (\secref{sec:transfer}) package and, if possible, an equation for the code generator. The Lifting package provides two main commands: @{command (HOL) "setup_lifting"} for initializing the package to work with a new type, and @{command (HOL) "lift_definition"} for lifting constants. The Lifting package works with all four kinds of type abstraction: type copies, subtypes, total quotients and partial quotients. Theoretical background can be found in @{cite "Huffman-Kuncar:2013:lifting_transfer"}. \begin{matharray}{rcl} @{command_def (HOL) "setup_lifting"} & : & \local_theory \ local_theory\\\ @{command_def (HOL) "lift_definition"} & : & \local_theory \ proof(prove)\\\ @{command_def (HOL) "lifting_forget"} & : & \local_theory \ local_theory\\\ @{command_def (HOL) "lifting_update"} & : & \local_theory \ local_theory\\\ @{command_def (HOL) "print_quot_maps"} & : & \context \\\\ @{command_def (HOL) "print_quotients"} & : & \context \\\\ @{attribute_def (HOL) "quot_map"} & : & \attribute\ \\ @{attribute_def (HOL) "relator_eq_onp"} & : & \attribute\ \\ @{attribute_def (HOL) "relator_mono"} & : & \attribute\ \\ @{attribute_def (HOL) "relator_distr"} & : & \attribute\ \\ @{attribute_def (HOL) "quot_del"} & : & \attribute\ \\ @{attribute_def (HOL) "lifting_restore"} & : & \attribute\ \\ \end{matharray} \<^rail>\ @@{command (HOL) setup_lifting} @{syntax thm} @{syntax thm}? \ (@'parametric' @{syntax thm})? ; @@{command (HOL) lift_definition} ('(' 'code_dt' ')')? \ @{syntax name} '::' @{syntax type} @{syntax mixfix}? 'is' @{syntax term} \ (@'parametric' (@{syntax thm}+))? ; @@{command (HOL) lifting_forget} @{syntax name} ; @@{command (HOL) lifting_update} @{syntax name} ; @@{attribute (HOL) lifting_restore} @{syntax thm} (@{syntax thm} @{syntax thm})? \ \<^descr> @{command (HOL) "setup_lifting"} Sets up the Lifting package to work with a user-defined type. The command supports two modes. \<^enum> The first one is a low-level mode when the user must provide as a first argument of @{command (HOL) "setup_lifting"} a quotient theorem \<^term>\Quotient R Abs Rep T\. The package configures a transfer rule for equality, a domain transfer rules and sets up the @{command_def (HOL) "lift_definition"} command to work with the abstract type. 
An optional theorem \<^term>\reflp R\, which certifies that the equivalence relation R is total, can be provided as a second argument. This allows the package to generate stronger transfer rules. And finally, the parametricity theorem for \<^term>\R\ can be provided as a third argument. This allows the package to generate a stronger transfer rule for equality. Users generally will not prove the \Quotient\ theorem manually for new types, as special commands exist to automate the process. \<^enum> When a new subtype is defined by @{command (HOL) typedef}, @{command (HOL) "lift_definition"} can be used in its second mode, where only the \<^term>\type_definition\ theorem \<^term>\type_definition Rep Abs A\ is used as an argument of the command. The command internally proves the corresponding \<^term>\Quotient\ theorem and registers it with @{command (HOL) setup_lifting} using its first mode. For quotients, the command @{command (HOL) quotient_type} can be used. The command defines a new quotient type and similarly to the previous case, the corresponding Quotient theorem is proved and registered by @{command (HOL) setup_lifting}. \<^medskip> The command @{command (HOL) "setup_lifting"} also sets up the code generator for the new type. Later on, when a new constant is defined by @{command (HOL) "lift_definition"}, the Lifting package proves and registers a code equation (if there is one) for the new constant. \<^descr> @{command (HOL) "lift_definition"} \f :: \\ @{keyword (HOL) "is"} \t\ Defines a new function \f\ with an abstract type \\\ in terms of a corresponding operation \t\ on a representation type. More formally, if \t :: \\, then the command builds a term \F\ as a corresponding combination of abstraction and representation functions such that \F :: \ \ \\ and defines \f \ F t\. The term \t\ does not have to be necessarily a constant but it can be any term. The command opens a proof and the user must discharge a respectfulness proof obligation. For a type copy, i.e.\ a typedef with \UNIV\, the obligation is discharged automatically. The proof goal is presented in a user-friendly, readable form. A respectfulness theorem in the standard format \f.rsp\ and a transfer rule \f.transfer\ for the Transfer package are generated by the package. The user can specify a parametricity theorems for \t\ after the keyword @{keyword "parametric"}, which allows the command to generate parametric transfer rules for \f\. For each constant defined through trivial quotients (type copies or subtypes) \f.rep_eq\ is generated. The equation is a code certificate that defines \f\ using the representation function. For each constant \f.abs_eq\ is generated. The equation is unconditional for total quotients. The equation defines \f\ using the abstraction function. \<^medskip> Integration with [@{attribute code} abstract]: For subtypes (e.g.\ corresponding to a datatype invariant, such as \<^typ>\'a dlist\), @{command (HOL) "lift_definition"} uses a code certificate theorem \f.rep_eq\ as a code equation. Because of the limitation of the code generator, \f.rep_eq\ cannot be used as a code equation if the subtype occurs inside the result type rather than at the top level (e.g.\ function returning \<^typ>\'a dlist option\ vs. \<^typ>\'a dlist\). In this case, an extension of @{command (HOL) "lift_definition"} can be invoked by specifying the flag \code_dt\. 
This extension enables code execution through series of internal type and lifting definitions if the return type \\\ meets the following inductive conditions: \<^descr> \\\ is a type variable \<^descr> \\ = \\<^sub>1 \ \\<^sub>n \\, where \\\ is an abstract type constructor and \\\<^sub>1 \ \\<^sub>n\ do not contain abstract types (i.e.\ \<^typ>\int dlist\ is allowed whereas \<^typ>\int dlist dlist\ not) \<^descr> \\ = \\<^sub>1 \ \\<^sub>n \\, \\\ is a type constructor that was defined as a (co)datatype whose constructor argument types do not contain either non-free datatypes or the function type. Integration with [@{attribute code} equation]: For total quotients, @{command (HOL) "lift_definition"} uses \f.abs_eq\ as a code equation. \<^descr> @{command (HOL) lifting_forget} and @{command (HOL) lifting_update} These two commands serve for storing and deleting the set-up of the Lifting package and corresponding transfer rules defined by this package. This is useful for hiding of type construction details of an abstract type when the construction is finished but it still allows additions to this construction when this is later necessary. Whenever the Lifting package is set up with a new abstract type \\\ by @{command_def (HOL) "lift_definition"}, the package defines a new bundle that is called \\.lifting\. This bundle already includes set-up for the Lifting package. The new transfer rules introduced by @{command (HOL) "lift_definition"} can be stored in the bundle by the command @{command (HOL) "lifting_update"} \\.lifting\. The command @{command (HOL) "lifting_forget"} \\.lifting\ deletes set-up of the Lifting package for \\\ and deletes all the transfer rules that were introduced by @{command (HOL) "lift_definition"} using \\\ as an abstract type. The stored set-up in a bundle can be reintroduced by the Isar commands for including a bundle (@{command "include"}, @{keyword "includes"} and @{command "including"}). \<^descr> @{command (HOL) "print_quot_maps"} prints stored quotient map theorems. \<^descr> @{command (HOL) "print_quotients"} prints stored quotient theorems. \<^descr> @{attribute (HOL) quot_map} registers a quotient map theorem, a theorem showing how to ``lift'' quotients over type constructors. E.g.\ \<^term>\Quotient R Abs Rep T \ Quotient (rel_set R) (image Abs) (image Rep) (rel_set T)\. For examples see \<^file>\~~/src/HOL/Lifting_Set.thy\ or \<^file>\~~/src/HOL/Lifting.thy\. This property is proved automatically if the involved type is BNF without dead variables. \<^descr> @{attribute (HOL) relator_eq_onp} registers a theorem that shows that a relator applied to an equality restricted by a predicate \<^term>\P\ (i.e.\ \<^term>\eq_onp P\) is equal to a predicator applied to the \<^term>\P\. The combinator \<^const>\eq_onp\ is used for internal encoding of proper subtypes. Such theorems allows the package to hide \eq_onp\ from a user in a user-readable form of a respectfulness theorem. For examples see \<^file>\~~/src/HOL/Lifting_Set.thy\ or \<^file>\~~/src/HOL/Lifting.thy\. This property is proved automatically if the involved type is BNF without dead variables. \<^descr> @{attribute (HOL) "relator_mono"} registers a property describing a monotonicity of a relator. E.g.\ \<^prop>\A \ B \ rel_set A \ rel_set B\. This property is needed for proving a stronger transfer rule in @{command_def (HOL) "lift_definition"} when a parametricity theorem for the raw term is specified and also for the reflexivity prover. 
For examples see \<^file>\~~/src/HOL/Lifting_Set.thy\ or \<^file>\~~/src/HOL/Lifting.thy\. This property is proved automatically if the involved type is BNF without dead variables.

\<^descr> @{attribute (HOL) "relator_distr"} registers a property describing distributivity of a relator over relation composition. E.g.\ \rel_set R \\ rel_set S = rel_set (R \\ S)\. This property is needed for proving a stronger transfer rule in @{command_def (HOL) "lift_definition"} when a parametricity theorem for the raw term is specified. When this equality does not hold unconditionally (e.g.\ for the function type), the user can specify each direction separately and also register multiple theorems with different sets of assumptions. This attribute can be used only after the monotonicity property has been registered by @{attribute (HOL) "relator_mono"}.

For examples see \<^file>\~~/src/HOL/Lifting_Set.thy\ or \<^file>\~~/src/HOL/Lifting.thy\. This property is proved automatically if the involved type is BNF without dead variables.

\<^descr> @{attribute (HOL) quot_del} deletes a corresponding Quotient theorem from the Lifting infrastructure and thus de-registers the corresponding quotient. This effectively prevents @{command (HOL) lift_definition} from doing any lifting for the corresponding type. This attribute is intended for low-level manipulation of the Lifting package set-up; @{command (HOL) lifting_forget} is preferred for normal usage.

\<^descr> @{attribute (HOL) lifting_restore} \Quotient_thm pcr_def pcr_cr_eq_thm\ registers the Quotient theorem \Quotient_thm\ in the Lifting infrastructure and thus sets up lifting for an abstract type \\\ (that is defined by \Quotient_thm\). Optional theorems \pcr_def\ and \pcr_cr_eq_thm\ can be specified to register the parametrized correspondence relation for \\\. E.g.\ for \<^typ>\'a dlist\, \pcr_def\ is \pcr_dlist A \ list_all2 A \\ cr_dlist\ and \pcr_cr_eq_thm\ is \pcr_dlist (=) = (=)\. This attribute is intended for low-level manipulation of the Lifting package set-up; using the bundle \\.lifting\ together with the commands @{command (HOL) lifting_forget} and @{command (HOL) lifting_update} is preferred for normal usage.

\<^descr> Integration with the BNF package @{cite "isabelle-datatypes"}: As already mentioned, the theorems that are registered by the following attributes are proved and registered automatically if the involved type is BNF without dead variables: @{attribute (HOL) quot_map}, @{attribute (HOL) relator_eq_onp}, @{attribute (HOL) "relator_mono"}, @{attribute (HOL) "relator_distr"}. Also the definitions of the relator and predicator are provided automatically. Moreover, if the BNF represents a datatype, simplification rules for the predicator are again proved automatically.
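\<^medskip> As a small illustration, consider a subtype of positive natural numbers, connected to the Lifting package via its \type_definition\ theorem (a hypothetical sketch: the type \pos\ and the constant \pos_one\ are made up for this purpose and are not part of the library):

@{theory_text [display] \typedef pos = "{n::nat. n > 0}"
  morphisms nat_of_pos Abs_pos
  by (rule exI [of _ 1]) simp

setup_lifting type_definition_pos

lift_definition pos_one :: pos is "1 :: nat"
  by simp\}

Here @{command (HOL) "setup_lifting"} is used in its second mode, with the \type_definition\ theorem produced by @{command (HOL) "typedef"}; the respectfulness obligation of @{command (HOL) "lift_definition"} amounts to \0 < (1 :: nat)\ and is discharged by @{method simp}.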
\ subsection \Transfer package \label{sec:transfer}\ text \ \begin{matharray}{rcl} @{method_def (HOL) "transfer"} & : & \method\ \\ @{method_def (HOL) "transfer'"} & : & \method\ \\ @{method_def (HOL) "transfer_prover"} & : & \method\ \\ @{attribute_def (HOL) "Transfer.transferred"} & : & \attribute\ \\ @{attribute_def (HOL) "untransferred"} & : & \attribute\ \\ @{method_def (HOL) "transfer_start"} & : & \method\ \\ @{method_def (HOL) "transfer_prover_start"} & : & \method\ \\ @{method_def (HOL) "transfer_step"} & : & \method\ \\ @{method_def (HOL) "transfer_end"} & : & \method\ \\ @{method_def (HOL) "transfer_prover_end"} & : & \method\ \\ @{attribute_def (HOL) "transfer_rule"} & : & \attribute\ \\ @{attribute_def (HOL) "transfer_domain_rule"} & : & \attribute\ \\ @{attribute_def (HOL) "relator_eq"} & : & \attribute\ \\ @{attribute_def (HOL) "relator_domain"} & : & \attribute\ \\ \end{matharray} \<^descr> @{method (HOL) "transfer"} method replaces the current subgoal with a logically equivalent one that uses different types and constants. The replacement of types and constants is guided by the database of transfer rules. Goals are generalized over all free variables by default; this is necessary for variables whose types change, but can be overridden for specific variables with e.g. \transfer fixing: x y z\. \<^descr> @{method (HOL) "transfer'"} is a variant of @{method (HOL) transfer} that allows replacing a subgoal with one that is logically stronger (rather than equivalent). For example, a subgoal involving equality on a quotient type could be replaced with a subgoal involving equality (instead of the corresponding equivalence relation) on the underlying raw type. \<^descr> @{method (HOL) "transfer_prover"} method assists with proving a transfer rule for a new constant, provided the constant is defined in terms of other constants that already have transfer rules. It should be applied after unfolding the constant definitions. \<^descr> @{method (HOL) "transfer_start"}, @{method (HOL) "transfer_step"}, @{method (HOL) "transfer_end"}, @{method (HOL) "transfer_prover_start"} and @{method (HOL) "transfer_prover_end"} methods are meant to be used for debugging of @{method (HOL) "transfer"} and @{method (HOL) "transfer_prover"}, which we can decompose as follows: @{method (HOL) "transfer"} = (@{method (HOL) "transfer_start"}, @{method (HOL) "transfer_step"}+, @{method (HOL) "transfer_end"}) and @{method (HOL) "transfer_prover"} = (@{method (HOL) "transfer_prover_start"}, @{method (HOL) "transfer_step"}+, @{method (HOL) "transfer_prover_end"}). For usage examples see \<^file>\~~/src/HOL/ex/Transfer_Debug.thy\. \<^descr> @{attribute (HOL) "untransferred"} proves the same equivalent theorem as @{method (HOL) "transfer"} internally does. \<^descr> @{attribute (HOL) Transfer.transferred} works in the opposite direction than @{method (HOL) "transfer'"}. E.g.\ given the transfer relation \ZN x n \ (x = int n)\, corresponding transfer rules and the theorem \\x::int \ {0..}. x < x + 1\, the attribute would prove \\n::nat. n < n + 1\. The attribute is still in experimental phase of development. \<^descr> @{attribute (HOL) "transfer_rule"} attribute maintains a collection of transfer rules, which relate constants at two different types. Typical transfer rules may relate different type instances of the same polymorphic constant, or they may relate an operation on a raw type to a corresponding operation on an abstract type (quotient or subtype). 
For example: \((A ===> B) ===> list_all2 A ===> list_all2 B) map map\ \\ \(cr_int ===> cr_int ===> cr_int) (\(x,y) (u,v). (x+u, y+v)) plus\ Lemmas involving predicates on relations can also be registered using the same attribute. For example: \bi_unique A \ (list_all2 A ===> (=)) distinct distinct\ \\ \\bi_unique A; bi_unique B\ \ bi_unique (rel_prod A B)\ Preservation of predicates on relations (\bi_unique, bi_total, right_unique, right_total, left_unique, left_total\) with the respect to a relator is proved automatically if the involved type is BNF @{cite "isabelle-datatypes"} without dead variables. \<^descr> @{attribute (HOL) "transfer_domain_rule"} attribute maintains a collection of rules, which specify a domain of a transfer relation by a predicate. E.g.\ given the transfer relation \ZN x n \ (x = int n)\, one can register the following transfer domain rule: \Domainp ZN = (\x. x \ 0)\. The rules allow the package to produce more readable transferred goals, e.g.\ when quantifiers are transferred. \<^descr> @{attribute (HOL) relator_eq} attribute collects identity laws for relators of various type constructors, e.g. \<^term>\rel_set (=) = (=)\. The @{method (HOL) transfer} method uses these lemmas to infer transfer rules for non-polymorphic constants on the fly. For examples see \<^file>\~~/src/HOL/Lifting_Set.thy\ or \<^file>\~~/src/HOL/Lifting.thy\. This property is proved automatically if the involved type is BNF without dead variables. \<^descr> @{attribute_def (HOL) "relator_domain"} attribute collects rules describing domains of relators by predicators. E.g.\ \<^term>\Domainp (rel_set T) = (\A. Ball A (Domainp T))\. This allows the package to lift transfer domain rules through type constructors. For examples see \<^file>\~~/src/HOL/Lifting_Set.thy\ or \<^file>\~~/src/HOL/Lifting.thy\. This property is proved automatically if the involved type is BNF without dead variables. Theoretical background can be found in @{cite "Huffman-Kuncar:2013:lifting_transfer"}. \ subsection \Old-style definitions for quotient types \label{sec:old-quotient}\ text \ \begin{matharray}{rcl} @{command_def (HOL) "quotient_definition"} & : & \local_theory \ proof(prove)\\\ @{command_def (HOL) "print_quotmapsQ3"} & : & \context \\\\ @{command_def (HOL) "print_quotientsQ3"} & : & \context \\\\ @{command_def (HOL) "print_quotconsts"} & : & \context \\\\ @{method_def (HOL) "lifting"} & : & \method\ \\ @{method_def (HOL) "lifting_setup"} & : & \method\ \\ @{method_def (HOL) "descending"} & : & \method\ \\ @{method_def (HOL) "descending_setup"} & : & \method\ \\ @{method_def (HOL) "partiality_descending"} & : & \method\ \\ @{method_def (HOL) "partiality_descending_setup"} & : & \method\ \\ @{method_def (HOL) "regularize"} & : & \method\ \\ @{method_def (HOL) "injection"} & : & \method\ \\ @{method_def (HOL) "cleaning"} & : & \method\ \\ @{attribute_def (HOL) "quot_thm"} & : & \attribute\ \\ @{attribute_def (HOL) "quot_lifted"} & : & \attribute\ \\ @{attribute_def (HOL) "quot_respect"} & : & \attribute\ \\ @{attribute_def (HOL) "quot_preserve"} & : & \attribute\ \\ \end{matharray} \<^rail>\ @@{command (HOL) quotient_definition} constdecl? @{syntax thmdecl}? \ @{syntax term} 'is' @{syntax term} ; constdecl: @{syntax name} ('::' @{syntax type})? @{syntax mixfix}? ; @@{method (HOL) lifting} @{syntax thms}? ; @@{method (HOL) lifting_setup} @{syntax thms}? \ \<^descr> @{command (HOL) "quotient_definition"} defines a constant on the quotient type. 
\<^descr> @{command (HOL) "print_quotmapsQ3"} prints quotient map functions. \<^descr> @{command (HOL) "print_quotientsQ3"} prints quotients. \<^descr> @{command (HOL) "print_quotconsts"} prints quotient constants. \<^descr> @{method (HOL) "lifting"} and @{method (HOL) "lifting_setup"} methods match the current goal with the given raw theorem to be lifted producing three new subgoals: regularization, injection and cleaning subgoals. @{method (HOL) "lifting"} tries to apply the heuristics for automatically solving these three subgoals and leaves only the subgoals unsolved by the heuristics to the user as opposed to @{method (HOL) "lifting_setup"} which leaves the three subgoals unsolved. \<^descr> @{method (HOL) "descending"} and @{method (HOL) "descending_setup"} try to guess a raw statement that would lift to the current subgoal. Such statement is assumed as a new subgoal and @{method (HOL) "descending"} continues in the same way as @{method (HOL) "lifting"} does. @{method (HOL) "descending"} tries to solve the arising regularization, injection and cleaning subgoals with the analogous method @{method (HOL) "descending_setup"} which leaves the four unsolved subgoals. \<^descr> @{method (HOL) "partiality_descending"} finds the regularized theorem that would lift to the current subgoal, lifts it and leaves as a subgoal. This method can be used with partial equivalence quotients where the non regularized statements would not be true. @{method (HOL) "partiality_descending_setup"} leaves the injection and cleaning subgoals unchanged. \<^descr> @{method (HOL) "regularize"} applies the regularization heuristics to the current subgoal. \<^descr> @{method (HOL) "injection"} applies the injection heuristics to the current goal using the stored quotient respectfulness theorems. \<^descr> @{method (HOL) "cleaning"} applies the injection cleaning heuristics to the current subgoal using the stored quotient preservation theorems. \<^descr> @{attribute (HOL) quot_lifted} attribute tries to automatically transport the theorem to the quotient type. The attribute uses all the defined quotients types and quotient constants often producing undesired results or theorems that cannot be lifted. \<^descr> @{attribute (HOL) quot_respect} and @{attribute (HOL) quot_preserve} attributes declare a theorem as a respectfulness and preservation theorem respectively. These are stored in the local theory store and used by the @{method (HOL) "injection"} and @{method (HOL) "cleaning"} methods respectively. \<^descr> @{attribute (HOL) quot_thm} declares that a certain theorem is a quotient extension theorem. Quotient extension theorems allow for quotienting inside container types. Given a polymorphic type that serves as a container, a map function defined for this container using @{command (HOL) "functor"} and a relation map defined for for the container type, the quotient extension theorem should be \<^term>\Quotient3 R Abs Rep \ Quotient3 (rel_map R) (map Abs) (map Rep)\. Quotient extension theorems are stored in a database and are used all the steps of lifting theorems. \ chapter \Proof tools\ section \Proving propositions\ text \ In addition to the standard proof methods, a number of diagnosis tools search for proofs and provide an Isar proof snippet on success. These tools are available via the following commands. 
\begin{matharray}{rcl} @{command_def (HOL) "solve_direct"}\\<^sup>*\ & : & \proof \\ \\ @{command_def (HOL) "try"}\\<^sup>*\ & : & \proof \\ \\ @{command_def (HOL) "try0"}\\<^sup>*\ & : & \proof \\ \\ @{command_def (HOL) "sledgehammer"}\\<^sup>*\ & : & \proof \\ \\ @{command_def (HOL) "sledgehammer_params"} & : & \theory \ theory\ \end{matharray} \<^rail>\ @@{command (HOL) try} ; @@{command (HOL) try0} ( ( ( 'simp' | 'intro' | 'elim' | 'dest' ) ':' @{syntax thms} ) + ) ? @{syntax nat}? ; @@{command (HOL) sledgehammer} ( '[' args ']' )? facts? @{syntax nat}? ; @@{command (HOL) sledgehammer_params} ( ( '[' args ']' ) ? ) ; args: ( @{syntax name} '=' value + ',' ) ; facts: '(' ( ( ( ( 'add' | 'del' ) ':' ) ? @{syntax thms} ) + ) ? ')' \ % FIXME check args "value" \<^descr> @{command (HOL) "solve_direct"} checks whether the current subgoals can be solved directly by an existing theorem. Duplicate lemmas can be detected in this way. \<^descr> @{command (HOL) "try0"} attempts to prove a subgoal using a combination of standard proof methods (@{method auto}, @{method simp}, @{method blast}, etc.). Additional facts supplied via \simp:\, \intro:\, \elim:\, and \dest:\ are passed to the appropriate proof methods. \<^descr> @{command (HOL) "try"} attempts to prove or disprove a subgoal using a combination of provers and disprovers (@{command (HOL) "solve_direct"}, @{command (HOL) "quickcheck"}, @{command (HOL) "try0"}, @{command (HOL) "sledgehammer"}, @{command (HOL) "nitpick"}). \<^descr> @{command (HOL) "sledgehammer"} attempts to prove a subgoal using external automatic provers (resolution provers and SMT solvers). See the Sledgehammer manual @{cite "isabelle-sledgehammer"} for details. \<^descr> @{command (HOL) "sledgehammer_params"} changes @{command (HOL) "sledgehammer"} configuration options persistently. \ section \Checking and refuting propositions\ text \ Identifying incorrect propositions usually involves evaluation of particular assignments and systematic counterexample search. This is supported by the following commands. \begin{matharray}{rcl} @{command_def (HOL) "value"}\\<^sup>*\ & : & \context \\ \\ @{command_def (HOL) "values"}\\<^sup>*\ & : & \context \\ \\ @{command_def (HOL) "quickcheck"}\\<^sup>*\ & : & \proof \\ \\ @{command_def (HOL) "nitpick"}\\<^sup>*\ & : & \proof \\ \\ @{command_def (HOL) "quickcheck_params"} & : & \theory \ theory\ \\ @{command_def (HOL) "nitpick_params"} & : & \theory \ theory\ \\ @{command_def (HOL) "quickcheck_generator"} & : & \theory \ theory\ \\ @{command_def (HOL) "find_unused_assms"} & : & \context \\ \end{matharray} \<^rail>\ @@{command (HOL) value} ( '[' @{syntax name} ']' )? modes? @{syntax term} ; @@{command (HOL) values} modes? @{syntax nat}? @{syntax term} ; (@@{command (HOL) quickcheck} | @@{command (HOL) nitpick}) ( '[' args ']' )? @{syntax nat}? ; (@@{command (HOL) quickcheck_params} | @@{command (HOL) nitpick_params}) ( '[' args ']' )? ; @@{command (HOL) quickcheck_generator} @{syntax name} \ 'operations:' ( @{syntax term} +) ; @@{command (HOL) find_unused_assms} @{syntax name}? ; modes: '(' (@{syntax name} +) ')' ; args: ( @{syntax name} '=' value + ',' ) \ % FIXME check "value" \<^descr> @{command (HOL) "value"}~\t\ evaluates and prints a term; optionally \modes\ can be specified, which are appended to the current print mode; see \secref{sec:print-modes}. Evaluation is tried first using ML, falling back to normalization by evaluation if this fails. 
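For example, \<^theory_text>\value "rev [1, 2, 3 :: nat]"\ is expected to print the result \[3, 2, 1]\ together with its type.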
Alternatively a specific evaluator can be selected using square brackets; typical evaluators use the current set of code equations to normalize and include \simp\ for fully symbolic evaluation using the simplifier, \nbe\ for \<^emph>\normalization by evaluation\ and \<^emph>\code\ for code generation in SML.

\<^descr> @{command (HOL) "values"}~\t\ enumerates a set comprehension by evaluation and prints its values up to the given number of solutions; optionally \modes\ can be specified, which are appended to the current print mode; see \secref{sec:print-modes}.

\<^descr> @{command (HOL) "quickcheck"} tests the current goal for counterexamples using a series of assignments for its free variables; by default the first subgoal is tested, another one can be selected explicitly using an optional goal index. Assignments can be chosen by exhausting the search space up to a given size, by using a fixed number of random assignments in the search space, or by exploring the search space symbolically using narrowing. By default, quickcheck uses exhaustive testing. A number of configuration options are supported for @{command (HOL) "quickcheck"}, notably:

\<^descr>[\tester\] specifies which testing approach to apply. There are three testers, \exhaustive\, \random\, and \narrowing\. An unknown configuration option is treated as an argument to tester, making \tester =\ optional. When multiple testers are given, these are applied in parallel. If no tester is specified, quickcheck uses the testers that are set active, i.e.\ configurations @{attribute quickcheck_exhaustive_active}, @{attribute quickcheck_random_active}, @{attribute quickcheck_narrowing_active} are set to true.

\<^descr>[\size\] specifies the maximum size of the search space for assignment values.

\<^descr>[\genuine_only\] sets quickcheck to return only genuine counterexamples, but not potentially spurious counterexamples due to underspecified functions.

\<^descr>[\abort_potential\] sets quickcheck to abort once it has found a potentially spurious counterexample and not to continue searching for a genuine one. For this option to be effective, the \genuine_only\ option must be set to false.

\<^descr>[\eval\] takes a term or a list of terms and evaluates these terms under the variable assignment found by quickcheck. This option is currently only supported by the default (exhaustive) tester.

\<^descr>[\iterations\] sets how many sets of assignments are generated for each particular size.

\<^descr>[\no_assms\] specifies whether assumptions in structured proofs should be ignored.

\<^descr>[\locale\] specifies how to process conjectures in a locale context, i.e.\ they can be interpreted or expanded. The option is a whitespace-separated list of the two words \interpret\ and \expand\. The list determines the order in which they are employed. The default setting is to first use interpretations and then test the expanded conjecture. The option is only provided as an attribute declaration, but not as a parameter to the command.

\<^descr>[\timeout\] sets the time limit in seconds.

\<^descr>[\default_type\] sets the type(s) generally used to instantiate type variables.

\<^descr>[\report\] if set quickcheck reports how many tests fulfilled the preconditions.

\<^descr>[\use_subtype\] if set quickcheck automatically lifts conjectures to registered subtypes if possible, and tests the lifted conjecture.

\<^descr>[\quiet\] if set quickcheck does not output anything while testing.
\<^descr>[\verbose\] if set quickcheck informs about the current size and cardinality while testing. \<^descr>[\expect\] can be used to check if the user's expectation was met (\no_expectation\, \no_counterexample\, or \counterexample\). These option can be given within square brackets. Using the following type classes, the testers generate values and convert them back into Isabelle terms for displaying counterexamples. \<^descr>[\exhaustive\] The parameters of the type classes \<^class>\exhaustive\ and \<^class>\full_exhaustive\ implement the testing. They take a testing function as a parameter, which takes a value of type \<^typ>\'a\ and optionally produces a counterexample, and a size parameter for the test values. In \<^class>\full_exhaustive\, the testing function parameter additionally expects a lazy term reconstruction in the type \<^typ>\Code_Evaluation.term\ of the tested value. The canonical implementation for \exhaustive\ testers calls the given testing function on all values up to the given size and stops as soon as a counterexample is found. \<^descr>[\random\] The operation \<^const>\Quickcheck_Random.random\ of the type class \<^class>\random\ generates a pseudo-random value of the given size and a lazy term reconstruction of the value in the type \<^typ>\Code_Evaluation.term\. A pseudo-randomness generator is defined in theory \<^theory>\HOL.Random\. \<^descr>[\narrowing\] implements Haskell's Lazy Smallcheck @{cite "runciman-naylor-lindblad"} using the type classes \<^class>\narrowing\ and \<^class>\partial_term_of\. Variables in the current goal are initially represented as symbolic variables. If the execution of the goal tries to evaluate one of them, the test engine replaces it with refinements provided by \<^const>\narrowing\. Narrowing views every value as a sum-of-products which is expressed using the operations \<^const>\Quickcheck_Narrowing.cons\ (embedding a value), \<^const>\Quickcheck_Narrowing.apply\ (product) and \<^const>\Quickcheck_Narrowing.sum\ (sum). The refinement should enable further evaluation of the goal. For example, \<^const>\narrowing\ for the list type \<^typ>\'a :: narrowing list\ can be recursively defined as \<^term>\Quickcheck_Narrowing.sum (Quickcheck_Narrowing.cons []) (Quickcheck_Narrowing.apply (Quickcheck_Narrowing.apply (Quickcheck_Narrowing.cons (#)) narrowing) narrowing)\. If a symbolic variable of type \<^typ>\_ list\ is evaluated, it is replaced by (i)~the empty list \<^term>\[]\ and (ii)~by a non-empty list whose head and tail can then be recursively refined if needed. To reconstruct counterexamples, the operation \<^const>\partial_term_of\ transforms \narrowing\'s deep representation of terms to the type \<^typ>\Code_Evaluation.term\. The deep representation models symbolic variables as \<^const>\Quickcheck_Narrowing.Narrowing_variable\, which are normally converted to \<^const>\Code_Evaluation.Free\, and refined values as \<^term>\Quickcheck_Narrowing.Narrowing_constructor i args\, where \<^term>\i :: integer\ denotes the index in the sum of refinements. In the above example for lists, \<^term>\0\ corresponds to \<^term>\[]\ and \<^term>\1\ to \<^term>\(#)\. The command @{command (HOL) "code_datatype"} sets up \<^const>\partial_term_of\ such that the \<^term>\i\-th refinement is interpreted as the \<^term>\i\-th constructor, but it does not ensures consistency with \<^const>\narrowing\. \<^descr> @{command (HOL) "quickcheck_params"} changes @{command (HOL) "quickcheck"} configuration options persistently. 
\<^descr> @{command (HOL) "quickcheck_generator"} creates random and exhaustive value generators for a given type and operations. It generates values by using the operations as if they were constructors of that type. \<^descr> @{command (HOL) "nitpick"} tests the current goal for counterexamples using a reduction to first-order relational logic. See the Nitpick manual @{cite "isabelle-nitpick"} for details. \<^descr> @{command (HOL) "nitpick_params"} changes @{command (HOL) "nitpick"} configuration options persistently. \<^descr> @{command (HOL) "find_unused_assms"} finds potentially superfluous assumptions in theorems using quickcheck. It takes the theory name to be checked for superfluous assumptions as optional argument. If not provided, it checks the current theory. Options to the internal quickcheck invocations can be changed with common configuration declarations. \ section \Coercive subtyping\ text \ \begin{matharray}{rcl} @{attribute_def (HOL) coercion} & : & \attribute\ \\ @{attribute_def (HOL) coercion_delete} & : & \attribute\ \\ @{attribute_def (HOL) coercion_enabled} & : & \attribute\ \\ @{attribute_def (HOL) coercion_map} & : & \attribute\ \\ @{attribute_def (HOL) coercion_args} & : & \attribute\ \\ \end{matharray} Coercive subtyping allows the user to omit explicit type conversions, also called \<^emph>\coercions\. Type inference will add them as necessary when parsing a term. See @{cite "traytel-berghofer-nipkow-2011"} for details. \<^rail>\ @@{attribute (HOL) coercion} (@{syntax term}) ; @@{attribute (HOL) coercion_delete} (@{syntax term}) ; @@{attribute (HOL) coercion_map} (@{syntax term}) ; @@{attribute (HOL) coercion_args} (@{syntax const}) (('+' | '0' | '-')+) \ \<^descr> @{attribute (HOL) "coercion"}~\f\ registers a new coercion function \f :: \\<^sub>1 \ \\<^sub>2\ where \\\<^sub>1\ and \\\<^sub>2\ are type constructors without arguments. Coercions are composed by the inference algorithm if needed. Note that the type inference algorithm is complete only if the registered coercions form a lattice. \<^descr> @{attribute (HOL) "coercion_delete"}~\f\ deletes a preceding declaration (using @{attribute (HOL) "coercion"}) of the function \f :: \\<^sub>1 \ \\<^sub>2\ as a coercion. \<^descr> @{attribute (HOL) "coercion_map"}~\map\ registers a new map function to lift coercions through type constructors. The function \map\ must conform to the following type pattern \begin{matharray}{lll} \map\ & \::\ & \f\<^sub>1 \ \ \ f\<^sub>n \ (\\<^sub>1, \, \\<^sub>n) t \ (\\<^sub>1, \, \\<^sub>n) t\ \\ \end{matharray} where \t\ is a type constructor and \f\<^sub>i\ is of type \\\<^sub>i \ \\<^sub>i\ or \\\<^sub>i \ \\<^sub>i\. Registering a map function overwrites any existing map function for this particular type constructor. \<^descr> @{attribute (HOL) "coercion_args"} can be used to disallow coercions to be inserted in certain positions in a term. For example, given the constant \c :: \\<^sub>1 \ \\<^sub>2 \ \\<^sub>3 \ \\<^sub>4\ and the list of policies \- + 0\ as arguments, coercions will not be inserted in the first argument of \c\ (policy \-\); they may be inserted in the second argument (policy \+\) even if the constant \c\ itself is in a position where coercions are disallowed; the third argument inherits the allowance of coercsion insertion from the position of the constant \c\ (policy \0\). 
The standard usage of policies is the definition of syntactic constructs (usually extralogical, i.e., processed and stripped during type inference) that should not be destroyed by the insertion of coercions (see, for example, the setup for the case syntax in \<^theory>\HOL.Ctr_Sugar\).

\<^descr> @{attribute (HOL) "coercion_enabled"} enables the coercion inference algorithm.

\ section \Arithmetic proof support\ text \ \begin{matharray}{rcl} @{method_def (HOL) arith} & : & \method\ \\ @{attribute_def (HOL) arith} & : & \attribute\ \\ @{attribute_def (HOL) arith_split} & : & \attribute\ \\ \end{matharray}

\<^descr> @{method (HOL) arith} decides linear arithmetic problems (on types \nat\, \int\, \real\). Any current facts are inserted into the goal before running the procedure.

\<^descr> @{attribute (HOL) arith} declares facts that are supplied to the arithmetic provers implicitly.

\<^descr> @{attribute (HOL) arith_split} attribute declares case split rules to be expanded before @{method (HOL) arith} is invoked.

Note that a simpler (but faster) arithmetic prover is already invoked by the Simplifier.

\ section \Intuitionistic proof search\ text \ \begin{matharray}{rcl} @{method_def (HOL) iprover} & : & \method\ \\ \end{matharray} \<^rail>\ @@{method (HOL) iprover} (@{syntax rulemod} *) \

\<^descr> @{method (HOL) iprover} performs intuitionistic proof search, depending on specifically declared rules from the context, or given as explicit arguments. Chained facts are inserted into the goal before commencing proof search. Rules need to be classified as @{attribute (Pure) intro}, @{attribute (Pure) elim}, or @{attribute (Pure) dest}; here the ``\!\'' indicator refers to ``safe'' rules, which may be applied aggressively (without considering back-tracking later). Rules declared with ``\?\'' are ignored in proof search (the single-step @{method (Pure) rule} method still observes these). An explicit weight annotation may be given as well; otherwise the number of rule premises will be taken into account here.

\ section \Model Elimination and Resolution\ text \ \begin{matharray}{rcl} @{method_def (HOL) "meson"} & : & \method\ \\ @{method_def (HOL) "metis"} & : & \method\ \\ \end{matharray} \<^rail>\ @@{method (HOL) meson} @{syntax thms}? ; @@{method (HOL) metis} ('(' ('partial_types' | 'full_types' | 'no_types' | @{syntax name}) ')')? @{syntax thms}? \

\<^descr> @{method (HOL) meson} implements Loveland's model elimination procedure @{cite "loveland-78"}. See \<^file>\~~/src/HOL/ex/Meson_Test.thy\ for examples.

\<^descr> @{method (HOL) metis} combines ordered resolution and ordered paramodulation to find first-order (or mildly higher-order) proofs. The first optional argument specifies a type encoding; see the Sledgehammer manual @{cite "isabelle-sledgehammer"} for details. The directory \<^dir>\~~/src/HOL/Metis_Examples\ contains several small theories developed to a large extent using @{method (HOL) metis}.

\ section \Algebraic reasoning via Gr\"obner bases\ text \ \begin{matharray}{rcl} @{method_def (HOL) "algebra"} & : & \method\ \\ @{attribute_def (HOL) algebra} & : & \attribute\ \\ \end{matharray} \<^rail>\ @@{method (HOL) algebra} ('add' ':' @{syntax thms})? ('del' ':' @{syntax thms})? ; @@{attribute (HOL) algebra} (() | 'add' | 'del') \

\<^descr> @{method (HOL) algebra} performs algebraic reasoning via Gr\"obner bases, see also @{cite "Chaieb-Wenzel:2007"} and @{cite \\S3.2\ "Chaieb-thesis"}.
The method deals with two main classes of problems:

\<^enum> Universal problems over multivariate polynomials in a (semi)-ring/field/idom; the capabilities of the method are augmented according to properties of these structures. For this problem class the method is only complete for algebraically closed fields, since the underlying method is based on Hilbert's Nullstellensatz, where the equivalence only holds for algebraically closed fields. The problems can contain equations \p = 0\ or inequations \q \ 0\ anywhere within a universal problem statement.

\<^enum> All-exists problems of the following restricted (but useful) form: @{text [display] "\x\<^sub>1 \ x\<^sub>n. e\<^sub>1(x\<^sub>1, \, x\<^sub>n) = 0 \ \ \ e\<^sub>m(x\<^sub>1, \, x\<^sub>n) = 0 \ (\y\<^sub>1 \ y\<^sub>k. p\<^sub>1\<^sub>1(x\<^sub>1, \ ,x\<^sub>n) * y\<^sub>1 + \ + p\<^sub>1\<^sub>k(x\<^sub>1, \, x\<^sub>n) * y\<^sub>k = 0 \ \ \ p\<^sub>t\<^sub>1(x\<^sub>1, \, x\<^sub>n) * y\<^sub>1 + \ + p\<^sub>t\<^sub>k(x\<^sub>1, \, x\<^sub>n) * y\<^sub>k = 0)"} Here \e\<^sub>1, \, e\<^sub>n\ and the \p\<^sub>i\<^sub>j\ are multivariate polynomials only in the variables mentioned as arguments.

The proof method is preceded by a simplification step, which may be modified by using the form \(algebra add: ths\<^sub>1 del: ths\<^sub>2)\. This acts like declarations for the Simplifier (\secref{sec:simplifier}) on a private simpset for this tool.

\<^descr> @{attribute algebra} (as attribute) manages the default collection of pre-simplification rules of the above proof method.

\ subsubsection \Example\ text \ The subsequent example is from geometry: collinearity is invariant under rotation. \ (*<*)experiment begin(*>*) type_synonym point = "int \ int" fun collinear :: "point \ point \ point \ bool" where "collinear (Ax, Ay) (Bx, By) (Cx, Cy) \ (Ax - Bx) * (By - Cy) = (Ay - By) * (Bx - Cx)" lemma collinear_inv_rotation: assumes "collinear (Ax, Ay) (Bx, By) (Cx, Cy)" and "c\<^sup>2 + s\<^sup>2 = 1" shows "collinear (Ax * c - Ay * s, Ay * c + Ax * s) (Bx * c - By * s, By * c + Bx * s) (Cx * c - Cy * s, Cy * c + Cx * s)" using assms by (algebra add: collinear.simps) (*<*)end(*>*) text \ See also \<^file>\~~/src/HOL/ex/Groebner_Examples.thy\. \

section \Coherent Logic\ text \ \begin{matharray}{rcl} @{method_def (HOL) "coherent"} & : & \method\ \\ \end{matharray} \<^rail>\ @@{method (HOL) coherent} @{syntax thms}? \

\<^descr> @{method (HOL) coherent} solves problems of \<^emph>\Coherent Logic\ @{cite "Bezem-Coquand:2005"}, which covers applications in confluence theory, lattice theory and projective geometry. See \<^file>\~~/src/HOL/ex/Coherent.thy\ for some examples. \

section \Unstructured case analysis and induction \label{sec:hol-induct-tac}\ text \ The following tools of Isabelle/HOL support case analysis and induction in unstructured tactic scripts; see also \secref{sec:cases-induct} for proper Isar versions of similar ideas. \begin{matharray}{rcl} @{method_def (HOL) case_tac}\\<^sup>*\ & : & \method\ \\ @{method_def (HOL) induct_tac}\\<^sup>*\ & : & \method\ \\ @{method_def (HOL) ind_cases}\\<^sup>*\ & : & \method\ \\ @{command_def (HOL) "inductive_cases"}\\<^sup>*\ & : & \local_theory \ local_theory\ \\ \end{matharray} \<^rail>\ @@{method (HOL) case_tac} @{syntax goal_spec}? @{syntax term} rule? ; @@{method (HOL) induct_tac} @{syntax goal_spec}? (@{syntax insts} * @'and') rule? ; @@{method (HOL) ind_cases} (@{syntax prop}+) @{syntax for_fixes} ; @@{command (HOL) inductive_cases} (@{syntax thmdecl}?
(@{syntax prop}+) + @'and') ; rule: 'rule' ':' @{syntax thm} \

\<^descr> @{method (HOL) case_tac} and @{method (HOL) induct_tac} allow reasoning about inductive types. Rules are selected according to the declarations by the @{attribute cases} and @{attribute induct} attributes, cf.\ \secref{sec:cases-induct}. The @{command (HOL) datatype} package already takes care of this. These unstructured tactics feature both goal addressing and dynamic instantiation. Note that named rule cases are \<^emph>\not\ provided as would be by the proper @{method cases} and @{method induct} proof methods (see \secref{sec:cases-induct}). Unlike the @{method induct} method, @{method induct_tac} does not handle structured rule statements, only the compact object-logic conclusion of the subgoal being addressed.

\<^descr> @{method (HOL) ind_cases} and @{command (HOL) "inductive_cases"} provide an interface to the internal \<^ML_text>\mk_cases\ operation. Rules are simplified in an unrestricted forward manner. While @{method (HOL) ind_cases} is a proof method to apply the result immediately as elimination rules, @{command (HOL) "inductive_cases"} provides case split theorems at the theory level for later use. The @{keyword "for"} argument of the @{method (HOL) ind_cases} method allows specifying a list of variables that should be generalized before applying the resulting rule.

\ section \Adhoc tuples\ text \ \begin{matharray}{rcl} @{attribute_def (HOL) split_format}\\<^sup>*\ & : & \attribute\ \\ \end{matharray} \<^rail>\ @@{attribute (HOL) split_format} ('(' 'complete' ')')? \

\<^descr> @{attribute (HOL) split_format}\ \(complete)\ causes arguments in function applications to be represented canonically according to their tuple type structure. Note that this operation tends to invent funny names for new local parameters introduced.

\ chapter \Executable code \label{ch:export-code}\ text \ For validation purposes, it is often useful to \<^emph>\execute\ specifications. In principle, execution could be simulated by Isabelle's inference kernel, i.e. by a combination of resolution and simplification. Unfortunately, this approach is rather inefficient. A more efficient way of executing specifications is to translate them into a functional programming language such as ML.

Isabelle provides a generic framework to support code generation from executable specifications. Isabelle/HOL instantiates these mechanisms in a way that is amenable to end-user applications. Code can be generated for functional programs (including overloading using type classes) targeting SML @{cite SML}, OCaml @{cite OCaml}, Haskell @{cite "haskell-revised-report"} and Scala @{cite "scala-overview-tech-report"}. Conceptually, code generation is split up into three steps: \<^emph>\selection\ of code theorems, \<^emph>\translation\ into an abstract executable view and \<^emph>\serialization\ to a specific \<^emph>\target language\. Inductive specifications can be executed using the predicate compiler which operates within HOL. See @{cite "isabelle-codegen"} for an introduction.
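For instance, code for the ubiquitous list reversal function may be exported to SML like this (a sketch of a typical invocation; the module and file names are arbitrary):

@{theory_text [display] \export_code rev in SML module_name Example file_prefix example\}

The remainder of this chapter describes the involved commands and attributes in detail.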
\begin{matharray}{rcl} @{command_def (HOL) "export_code"}\\<^sup>*\ & : & \local_theory \ local_theory\ \\ @{attribute_def (HOL) code} & : & \attribute\ \\ @{command_def (HOL) "code_datatype"} & : & \theory \ theory\ \\ @{command_def (HOL) "print_codesetup"}\\<^sup>*\ & : & \context \\ \\ @{attribute_def (HOL) code_unfold} & : & \attribute\ \\ @{attribute_def (HOL) code_post} & : & \attribute\ \\ @{attribute_def (HOL) code_abbrev} & : & \attribute\ \\ @{command_def (HOL) "print_codeproc"}\\<^sup>*\ & : & \context \\ \\ @{command_def (HOL) "code_thms"}\\<^sup>*\ & : & \context \\ \\ @{command_def (HOL) "code_deps"}\\<^sup>*\ & : & \context \\ \\ @{command_def (HOL) "code_reserved"} & : & \theory \ theory\ \\ @{command_def (HOL) "code_printing"} & : & \theory \ theory\ \\ @{command_def (HOL) "code_identifier"} & : & \theory \ theory\ \\ @{command_def (HOL) "code_monad"} & : & \theory \ theory\ \\ @{command_def (HOL) "code_reflect"} & : & \theory \ theory\ \\ @{command_def (HOL) "code_pred"} & : & \theory \ proof(prove)\ \end{matharray} \<^rail>\ @@{command (HOL) export_code} @'open'? \ (const_expr+) (export_target*) ; export_target: @'in' target (@'module_name' @{syntax name})? \ (@'file_prefix' @{syntax path})? ('(' args ')')? ; target: 'SML' | 'OCaml' | 'Haskell' | 'Scala' | 'Eval' ; const_expr: (const | 'name._' | '_') ; const: @{syntax term} ; type_constructor: @{syntax name} ; class: @{syntax name} ; path: @{syntax embedded} ; @@{attribute (HOL) code} ('equation' | 'nbe' | 'abstype' | 'abstract' | 'del' | 'drop:' (const+) | 'abort:' (const+))? ; @@{command (HOL) code_datatype} (const+) ; @@{attribute (HOL) code_unfold} 'del'? ; @@{attribute (HOL) code_post} 'del'? ; @@{attribute (HOL) code_abbrev} 'del'? ; @@{command (HOL) code_thms} (const_expr+) ; @@{command (HOL) code_deps} (const_expr+) ; @@{command (HOL) code_reserved} target (@{syntax string}+) ; symbol_const: @'constant' const ; symbol_type_constructor: @'type_constructor' type_constructor ; symbol_class: @'type_class' class ; symbol_class_relation: @'class_relation' class ('<' | '\') class ; symbol_class_instance: @'class_instance' type_constructor @'::' class ; symbol_module: @'code_module' name ; syntax: @{syntax string} | (@'infix' | @'infixl' | @'infixr') @{syntax nat} @{syntax string} ; printing_const: symbol_const ('\' | '=>') \ ('(' target ')' syntax ? + @'and') ; printing_type_constructor: symbol_type_constructor ('\' | '=>') \ ('(' target ')' syntax ? + @'and') ; printing_class: symbol_class ('\' | '=>') \ ('(' target ')' @{syntax string} ? + @'and') ; printing_class_relation: symbol_class_relation ('\' | '=>') \ ('(' target ')' @{syntax string} ? + @'and') ; printing_class_instance: symbol_class_instance ('\'| '=>') \ ('(' target ')' '-' ? + @'and') ; printing_module: symbol_module ('\' | '=>') \ ('(' target ')' (@{syntax string} for_symbol?)? + @'and') ; for_symbol: @'for' ((symbol_const | symbol_typeconstructor | symbol_class | symbol_class_relation | symbol_class_instance)+) ; @@{command (HOL) code_printing} ((printing_const | printing_type_constructor | printing_class | printing_class_relation | printing_class_instance | printing_module) + '|') ; @@{command (HOL) code_identifier} ((symbol_const | symbol_type_constructor | symbol_class | symbol_class_relation | symbol_class_instance | symbol_module ) ('\' | '=>') \ ('(' target ')' @{syntax string} ? 
+ @'and') + '|') ; @@{command (HOL) code_monad} const const target ; @@{command (HOL) code_reflect} @{syntax string} \ (@'datatypes' (@{syntax string} '=' ('_' | (@{syntax string} + '|') + @'and')))? \ (@'functions' (@{syntax string} +))? (@'file_prefix' @{syntax path})? ; @@{command (HOL) code_pred} \ ('(' @'modes' ':' modedecl ')')? \ const ; modedecl: (modes | ((const ':' modes) \ (@'and' ((const ':' modes @'and')+))?)) ; modes: mode @'as' const \ \<^descr> @{command (HOL) "export_code"} generates code for a given list of constants in the specified target language(s). If no serialization instruction is given, only abstract code is generated internally. Constants may be specified by giving them literally, referring to all executable constants within a certain theory by giving \name._\, or referring to \<^emph>\all\ executable constants currently available by giving \_\. By default, exported identifiers are minimized per module. This can be suppressed by prepending @{keyword "open"} before the list of constants. By default, for each involved theory one corresponding name space module is generated. Alternatively, a module name may be specified after the @{keyword "module_name"} keyword; then \<^emph>\all\ code is placed in this module. Generated code is output as logical files within the theory context, as well as session exports that can be retrieved using @{tool_ref export}, or @{tool build} with option \<^verbatim>\-e\ and suitable \isakeyword{export\_files} specifications in the session \<^verbatim>\ROOT\ entry. All files have a common directory prefix: the long theory name plus ``\<^verbatim>\code\''. The actual file name is determined by the target language together with an optional \<^theory_text>\file_prefix\ (the default is ``\<^verbatim>\export\'' with a consecutive number within the current theory). For \SML\, \OCaml\ and \Scala\, the file prefix becomes a plain file with extension (e.g.\ ``\<^verbatim>\.ML\'' for SML). For \Haskell\ the file prefix becomes a directory that is populated with a separate file for each module (with extension ``\<^verbatim>\.hs\''). Serializers take an optional list of arguments in parentheses. \<^item> For \<^emph>\Haskell\ a module name prefix may be given using the ``\root:\'' argument; ``\string_classes\'' adds a ``\<^verbatim>\deriving (Read, Show)\'' clause to each appropriate datatype declaration. \<^item> For \<^emph>\Scala\, ``\case_insensitive\'' avoids name clashes on case-insensitive file systems. \<^descr> @{attribute (HOL) code} declares code equations for code generation. Variant \code equation\ declares a conventional equation as code equation. Variants \code abstype\ and \code abstract\ declare abstract datatype certificates or code equations on abstract datatype representations respectively. Vanilla \code\ falls back to \code equation\ or \code abstract\ depending on the syntactic shape of the underlying equation. Variant \code del\ deselects a code equation for code generation. Variant \code nbe\ also accepts non-left-linear equations for \<^emph>\normalization by evaluation\ only. - Variants \code drop:\ and \code abort:\ take a list of constant as arguments + Variants \code drop:\ and \code abort:\ take a list of constants as arguments and drop all code equations declared for them. In the case of \abort\, these constants then do not require a specification by means of code equations; if needed, they are implemented by a program abort (exception) instead. 
Packages declaring code equations usually provide a reasonable default setup. \<^descr> @{command (HOL) "code_datatype"} specifies a constructor set for a logical type. \<^descr> @{command (HOL) "print_codesetup"} gives an overview of selected code equations and code generator datatypes. \<^descr> @{attribute (HOL) code_unfold} declares (or with option ``\del\'' removes) theorems which during preprocessing are applied as rewrite rules to any code equation or evaluation input. \<^descr> @{attribute (HOL) code_post} declares (or with option ``\del\'' removes) theorems which are applied as rewrite rules to any result of an evaluation. \<^descr> @{attribute (HOL) code_abbrev} declares (or with option ``\del\'' removes) equations which are applied as rewrite rules to any result of an evaluation and symmetrically during preprocessing to any code equation or evaluation input. \<^descr> @{command (HOL) "print_codeproc"} prints the setup of the code generator preprocessor. \<^descr> @{command (HOL) "code_thms"} prints a list of theorems representing the corresponding program containing all given constants after preprocessing. \<^descr> @{command (HOL) "code_deps"} visualizes dependencies of theorems representing the corresponding program containing all given constants after preprocessing. \<^descr> @{command (HOL) "code_reserved"} declares a list of names as reserved for a given target, preventing them from being shadowed by any generated code. \<^descr> @{command (HOL) "code_printing"} associates a series of symbols (constants, type constructors, classes, class relations, instances, module names) with target-specific serializations; omitting a serialization deletes an existing serialization. \<^descr> @{command (HOL) "code_monad"} provides an auxiliary mechanism to generate monadic code for Haskell. \<^descr> @{command (HOL) "code_identifier"} associates a series of symbols (constants, type constructors, classes, class relations, instances, module names) with target-specific hints on how these symbols shall be named. These hints gain precedence over names for symbols with no hints at all. Conflicting hints are subject to name disambiguation. \<^emph>\Warning:\ It is at the discretion of the user to ensure that name prefixes of identifiers in compound statements like type classes or datatypes are still the same. \<^descr> @{command (HOL) "code_reflect"} without a ``\<^theory_text>\file_prefix\'' argument compiles code into the system runtime environment and modifies the code generator setup such that future invocations of system runtime code generation referring to one of the ``\datatypes\'' or ``\functions\'' entities use these precompiled entities. With a ``\<^theory_text>\file_prefix\'' argument, the corresponding code is generated/exported to the specified file (as for \<^theory_text>\export_code\) without modifying the code generator setup. \<^descr> @{command (HOL) "code_pred"} creates code equations for a predicate given a set of introduction rules. Optional mode annotations determine which arguments are supposed to be input or output. If alternative introduction rules are declared, one must prove a corresponding elimination rule. 
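For illustration, a small hypothetical sketch of some of these declarations: after \<^theory_text>\definition double :: "nat => nat" where "double n = 2 * n"\, an alternative code equation may be installed via \<^theory_text>\lemma [code]: "double n = n + n" by (simp add: double_def mult_2)\; the resulting setup can then be inspected with \<^theory_text>\code_thms double\ and exported with \<^theory_text>\export_code double in Haskell\.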
\ end diff --git a/src/Doc/Isar_Ref/Proof.thy b/src/Doc/Isar_Ref/Proof.thy --- a/src/Doc/Isar_Ref/Proof.thy +++ b/src/Doc/Isar_Ref/Proof.thy @@ -1,1441 +1,1441 @@ (*:maxLineLen=78:*) theory Proof imports Main Base begin chapter \Proofs \label{ch:proofs}\ text \ Proof commands perform transitions of Isar/VM machine configurations, which are block-structured, consisting of a stack of nodes with three main components: logical proof context, current facts, and open goals. Isar/VM transitions are typed according to the following three different modes of operation: \<^descr> \proof(prove)\ means that a new goal has just been stated that is now to be \<^emph>\proven\; the next command may refine it by some proof method, and enter a sub-proof to establish the actual result. \<^descr> \proof(state)\ is like a nested theory mode: the context may be augmented by \<^emph>\stating\ additional assumptions, intermediate results etc. \<^descr> \proof(chain)\ is intermediate between \proof(state)\ and \proof(prove)\: existing facts (i.e.\ the contents of the special @{fact_ref this} register) have been just picked up in order to be used when refining the goal claimed next. The proof mode indicator may be understood as an instruction to the writer, telling what kind of operation may be performed next. The corresponding typings of proof commands restricts the shape of well-formed proof texts to particular command sequences. So dynamic arrangements of commands eventually turn out as static texts of a certain structure. \Appref{ap:refcard} gives a simplified grammar of the (extensible) language emerging that way from the different types of proof commands. The main ideas of the overall Isar framework are explained in \chref{ch:isar-framework}. \ section \Proof structure\ subsection \Formal notepad\ text \ \begin{matharray}{rcl} @{command_def "notepad"} & : & \local_theory \ proof(state)\ \\ \end{matharray} \<^rail>\ @@{command notepad} @'begin' ; @@{command end} \ \<^descr> @{command "notepad"}~@{keyword "begin"} opens a proof state without any goal statement. This allows to experiment with Isar, without producing any persistent result. The notepad is closed by @{command "end"}. \ subsection \Blocks\ text \ \begin{matharray}{rcl} @{command_def "next"} & : & \proof(state) \ proof(state)\ \\ @{command_def "{"} & : & \proof(state) \ proof(state)\ \\ @{command_def "}"} & : & \proof(state) \ proof(state)\ \\ \end{matharray} While Isar is inherently block-structured, opening and closing blocks is mostly handled rather casually, with little explicit user-intervention. Any local goal statement automatically opens \<^emph>\two\ internal blocks, which are closed again when concluding the sub-proof (by @{command "qed"} etc.). Sections of different context within a sub-proof may be switched via @{command "next"}, which is just a single block-close followed by block-open again. The effect of @{command "next"} is to reset the local proof context; there is no goal focus involved here! For slightly more advanced applications, there are explicit block parentheses as well. These typically achieve a stronger forward style of reasoning. \<^descr> @{command "next"} switches to a fresh block within a sub-proof, resetting the local context to the initial one. \<^descr> @{command "{"} and @{command "}"} explicitly open and close blocks. Any current facts pass through ``@{command "{"}'' unchanged, while ``@{command "}"}'' causes any result to be \<^emph>\exported\ into the enclosing context. 
Thus fixed variables are generalized, assumptions discharged, and local definitions unfolded (cf.\ \secref{sec:proof-context}). There is no difference of @{command "assume"} and @{command "presume"} in this mode of forward reasoning --- in contrast to plain backward reasoning with the result exported at @{command "show"} time. \ subsection \Omitting proofs\ text \ \begin{matharray}{rcl} @{command_def "oops"} & : & \proof \ local_theory | theory\ \\ \end{matharray} The @{command "oops"} command discontinues the current proof attempt, while considering the partial proof text as properly processed. This is conceptually quite different from ``faking'' actual proofs via @{command_ref "sorry"} (see \secref{sec:proof-steps}): @{command "oops"} does not observe the proof structure at all, but goes back right to the theory level. Furthermore, @{command "oops"} does not produce any result theorem --- there is no intended claim to be able to complete the proof in any way. A typical application of @{command "oops"} is to explain Isar proofs \<^emph>\within\ the system itself, in conjunction with the document preparation tools of Isabelle described in \chref{ch:document-prep}. Thus partial or even wrong proof attempts can be discussed in a logically sound manner. Note that the Isabelle {\LaTeX} macros can be easily adapted to print something like ``\\\'' instead of the keyword ``@{command "oops"}''. \ section \Statements\ subsection \Context elements \label{sec:proof-context}\ text \ \begin{matharray}{rcl} @{command_def "fix"} & : & \proof(state) \ proof(state)\ \\ @{command_def "assume"} & : & \proof(state) \ proof(state)\ \\ @{command_def "presume"} & : & \proof(state) \ proof(state)\ \\ @{command_def "define"} & : & \proof(state) \ proof(state)\ \\ \end{matharray} The logical proof context consists of fixed variables and assumptions. The former closely correspond to Skolem constants, or meta-level universal quantification as provided by the Isabelle/Pure logical framework. Introducing some \<^emph>\arbitrary, but fixed\ variable via ``@{command "fix"}~\x\'' results in a local value that may be used in the subsequent proof as any other variable or constant. Furthermore, any result \\ \[x]\ exported from the context will be universally closed wrt.\ \x\ at the outermost level: \\ \x. \[x]\ (this is expressed in normal form using Isabelle's meta-variables). Similarly, introducing some assumption \\\ has two effects. On the one hand, a local theorem is created that may be used as a fact in subsequent proof steps. On the other hand, any result \\ \ \\ exported from the context becomes conditional wrt.\ the assumption: \\ \ \ \\. Thus, solving an enclosing goal using such a result would basically introduce a new subgoal stemming from the assumption. How this situation is handled depends on the version of assumption command used: while @{command "assume"} insists on solving the subgoal by unification with some premise of the goal, @{command "presume"} leaves the subgoal unchanged in order to be proved later by the user. Local definitions, introduced by ``\<^theory_text>\define x where x = t\'', are achieved by combining ``@{command "fix"}~\x\'' with another version of assumption that causes any hypothetical equation \x \ t\ to be eliminated by the reflexivity rule. Thus, exporting some result \x \ t \ \[x]\ yields \\ \[t]\. 
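For illustration, a small hypothetical notepad fragment combining these context elements: \<^theory_text>\notepad begin fix n :: nat assume a: "n = 2" define m where "m = n + n" have "m = 4" by (simp add: m_def a) end\.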
\<^rail>\ @@{command fix} @{syntax vars} ; (@@{command assume} | @@{command presume}) concl prems @{syntax for_fixes} ; concl: (@{syntax props} + @'and') ; prems: (@'if' (@{syntax props'} + @'and'))? ; @@{command define} @{syntax vars} @'where' (@{syntax props} + @'and') @{syntax for_fixes} \ \<^descr> @{command "fix"}~\x\ introduces a local variable \x\ that is \<^emph>\arbitrary, but fixed\. \<^descr> @{command "assume"}~\a: \\ and @{command "presume"}~\a: \\ introduce a local fact \\ \ \\ by assumption. Subsequent results applied to an enclosing goal (e.g.\ by @{command_ref "show"}) are handled as follows: @{command "assume"} expects to be able to unify with existing premises in the goal, while @{command "presume"} leaves \\\ as new subgoals. Several lists of assumptions may be given (separated by @{keyword_ref "and"}); the resulting list of current facts consists of all of these concatenated. A structured assumption like \<^theory_text>\assume "B x" if "A x" for x\ is equivalent to \<^theory_text>\assume "\x. A x \ B x"\, but vacuous quantification is avoided: a for-context only affects propositions according to actual use of variables. \<^descr> \<^theory_text>\define x where "x = t"\ introduces a local (non-polymorphic) definition. In results that are exported from the context, \x\ is replaced by \t\. Internally, equational assumptions are added to the context in Pure form, using \x \ t\ instead of \x = t\ or \x \ t\ from the object-logic. When exporting results from the context, \x\ is generalized and the assumption discharged by reflexivity, causing the replacement by \t\. The default name for the definitional fact is \x_def\. Several simultaneous definitions may be given as well, with a collective default name. \<^medskip> It is also possible to abstract over local parameters as follows: \<^theory_text>\define f :: "'a \ 'b" where "f x = t" for x :: 'a\. \ subsection \Term abbreviations \label{sec:term-abbrev}\ text \ \begin{matharray}{rcl} @{command_def "let"} & : & \proof(state) \ proof(state)\ \\ @{keyword_def "is"} & : & syntax \\ \end{matharray} Abbreviations may be either bound by explicit @{command "let"}~\p \ t\ statements, or by annotating assumptions or goal statements with a list of patterns ``\<^theory_text>\(is p\<^sub>1 \ p\<^sub>n)\''. In both cases, higher-order matching is invoked to bind extra-logical term variables, which may be either named schematic variables of the form \?x\, or nameless dummies ``@{variable _}'' (underscore). Note that in the @{command "let"} form the patterns occur on the left-hand side, while the @{keyword "is"} patterns are in postfix position. Polymorphism of term bindings is handled in Hindley-Milner style, similar to ML. Type variables referring to local assumptions or open goal statements are \<^emph>\fixed\, while those of finished results or bound by @{command "let"} may occur in \<^emph>\arbitrary\ instances later. Even though actual polymorphism should be rarely used in practice, this mechanism is essential to achieve proper incremental type-inference, as the user proceeds to build up the Isar proof text from left to right. \<^medskip> Term abbreviations are quite different from local definitions as introduced via @{command "define"} (see \secref{sec:proof-context}). The latter are visible within the logic as actual equations, while abbreviations disappear during the input process just after type checking. Also note that @{command "define"} does not support polymorphism. 
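For illustration, a small hypothetical example of both forms of term abbreviation: \<^theory_text>\notepad begin fix a b :: nat let ?s = "a + b" have "?s = ?s" (is "?l = _") by (rule refl) end\; here \?s\ is bound by @{command "let"}, while the @{keyword "is"} pattern binds \?l\ to the left-hand side of the stated proposition.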
\<^rail>\ @@{command let} ((@{syntax term} + @'and') '=' @{syntax term} + @'and') \ The syntax of @{keyword "is"} patterns follows @{syntax term_pat} or @{syntax prop_pat} (see \secref{sec:term-decls}). \<^descr> \<^theory_text>\let p\<^sub>1 = t\<^sub>1 and \ p\<^sub>n = t\<^sub>n\ binds any text variables in patterns \p\<^sub>1, \, p\<^sub>n\ by simultaneous higher-order matching against terms \t\<^sub>1, \, t\<^sub>n\. \<^descr> \<^theory_text>\(is p\<^sub>1 \ p\<^sub>n)\ resembles @{command "let"}, but matches \p\<^sub>1, \, p\<^sub>n\ against the preceding statement. Also note that @{keyword "is"} is not a separate command, but part of others (such as @{command "assume"}, @{command "have"} etc.). Some \<^emph>\implicit\ term abbreviations\index{term abbreviations} for goals and facts are available as well. For any open goal, @{variable_ref thesis} refers to its object-level statement, abstracted over any meta-level parameters (if present). Likewise, @{variable_ref this} is bound for fact statements resulting from assumptions or finished goals. In case @{variable this} refers to an object-logic statement that is an application \f t\, then \t\ is bound to the special text variable ``@{variable "\"}'' (three dots). The canonical application of this convenience is calculational proofs (see \secref{sec:calculation}). \ subsection \Facts and forward chaining \label{sec:proof-facts}\ text \ \begin{matharray}{rcl} @{command_def "note"} & : & \proof(state) \ proof(state)\ \\ @{command_def "then"} & : & \proof(state) \ proof(chain)\ \\ @{command_def "from"} & : & \proof(state) \ proof(chain)\ \\ @{command_def "with"} & : & \proof(state) \ proof(chain)\ \\ @{command_def "using"} & : & \proof(prove) \ proof(prove)\ \\ @{command_def "unfolding"} & : & \proof(prove) \ proof(prove)\ \\ @{method_def "use"} & : & \method\ \\ @{fact_def "method_facts"} & : & \fact\ \\ \end{matharray} New facts are established either by assumption or proof of local statements. Any fact will usually be involved in further proofs, either as explicit arguments of proof methods, or when forward chaining towards the next goal via @{command "then"} (and variants); @{command "from"} and @{command "with"} are composite forms involving @{command "note"}. The @{command "using"} element augments the collection of used facts \<^emph>\after\ a goal has been stated. Note that the special theorem name @{fact_ref this} refers to the most recently established facts, but only \<^emph>\before\ issuing a follow-up claim. \<^rail>\ @@{command note} (@{syntax thmdef}? @{syntax thms} + @'and') ; (@@{command from} | @@{command with} | @@{command using} | @@{command unfolding}) (@{syntax thms} + @'and') ; @{method use} @{syntax thms} @'in' @{syntax method} \ \<^descr> @{command "note"}~\a = b\<^sub>1 \ b\<^sub>n\ recalls existing facts \b\<^sub>1, \, b\<^sub>n\, binding the result as \a\. Note that attributes may be involved as well, both on the left and right hand sides. \<^descr> @{command "then"} indicates forward chaining by the current facts in order to establish the goal to be claimed next. The initial proof method invoked to refine that will be offered the facts to do ``anything appropriate'' (see also \secref{sec:proof-steps}). For example, method @{method (Pure) rule} (see \secref{sec:pure-meth-att}) would typically do an elimination rather than an introduction. Automatic methods usually insert the facts into the goal state before operation. This provides a simple scheme to control relevance of facts in automated proof search. 
\<^descr> @{command "from"}~\b\ abbreviates ``@{command "note"}~\b\~@{command "then"}''; thus @{command "then"} is equivalent to ``@{command "from"}~\this\''. \<^descr> @{command "with"}~\b\<^sub>1 \ b\<^sub>n\ abbreviates ``@{command "from"}~\b\<^sub>1 \ b\<^sub>n \ this\''; thus the forward chaining is from earlier facts together with the current ones. \<^descr> @{command "using"}~\b\<^sub>1 \ b\<^sub>n\ augments the facts to be used by a subsequent refinement step (such as @{command_ref "apply"} or @{command_ref "proof"}). \<^descr> @{command "unfolding"}~\b\<^sub>1 \ b\<^sub>n\ is structurally similar to @{command "using"}, but unfolds definitional equations \b\<^sub>1 \ b\<^sub>n\ throughout the goal state and facts. See also the proof method @{method_ref unfold}. \<^descr> \<^theory_text>\(use b\<^sub>1 \ b\<^sub>n in method)\ uses the facts in the given method expression. The facts provided by the proof state (via @{command "using"} etc.) are ignored, but it is possible to refer to @{fact method_facts} explicitly. \<^descr> @{fact method_facts} is a dynamic fact that refers to the currently used facts of the goal state. Forward chaining with an empty list of theorems is the same as not chaining at all. Thus ``@{command "from"}~\nothing\'' has no effect apart from entering \prove(chain)\ mode, since @{fact_ref nothing} is bound to the empty list of theorems. Basic proof methods (such as @{method_ref (Pure) rule}) expect multiple facts to be given in their proper order, corresponding to a prefix of the premises of the rule involved. Note that positions may be easily skipped using something like @{command "from"}~\_ \ a \ b\, for example. This involves the trivial rule \PROP \ \ PROP \\, which is bound in Isabelle/Pure as ``@{fact_ref "_"}'' (underscore). Automated methods (such as @{method simp} or @{method auto}) just insert any given facts before their usual operation. Depending on the kind of procedure involved, the order of facts is less significant here. \ subsection \Goals \label{sec:goals}\ text \ \begin{matharray}{rcl} @{command_def "lemma"} & : & \local_theory \ proof(prove)\ \\ @{command_def "theorem"} & : & \local_theory \ proof(prove)\ \\ @{command_def "corollary"} & : & \local_theory \ proof(prove)\ \\ @{command_def "proposition"} & : & \local_theory \ proof(prove)\ \\ @{command_def "schematic_goal"} & : & \local_theory \ proof(prove)\ \\ @{command_def "have"} & : & \proof(state) | proof(chain) \ proof(prove)\ \\ @{command_def "show"} & : & \proof(state) | proof(chain) \ proof(prove)\ \\ @{command_def "hence"} & : & \proof(state) \ proof(prove)\ \\ @{command_def "thus"} & : & \proof(state) \ proof(prove)\ \\ @{command_def "print_statement"}\\<^sup>*\ & : & \context \\ \\ \end{matharray} From a theory context, proof mode is entered by an initial goal command such as @{command "lemma"}. Within a proof context, new claims may be introduced locally; there are variants to interact with the overall proof structure specifically, such as @{command have} or @{command show}. Goals may consist of multiple statements, resulting in a list of facts eventually. A pending multi-goal is internally represented as a meta-level conjunction (\&&&\), which is usually split into the corresponding number of sub-goals prior to an initial method application, via @{command_ref "proof"} (\secref{sec:proof-steps}) or @{command_ref "apply"} (\secref{sec:tactic-commands}). The @{method_ref induct} method covered in \secref{sec:cases-induct} acts on multiple claims simultaneously. 
Claims at the theory level may be either in short or long form. A short goal merely consists of several simultaneous propositions (often just one). A long goal includes an explicit context specification for the subsequent conclusion, involving local parameters and assumptions. Here the role of each part of the statement is explicitly marked by separate keywords (see also \secref{sec:locale}); the local assumptions being introduced here are available as @{fact_ref assms} in the proof. Moreover, there are two kinds of conclusions: @{element_def "shows"} states several simultaneous propositions (essentially a big conjunction), while @{element_def "obtains"} claims several simultaneous contexts of eliminated parameters and assumptions (essentially a big disjunction, cf.\ \secref{sec:obtain}). \<^rail>\ (@@{command lemma} | @@{command theorem} | @@{command corollary} | @@{command proposition} | @@{command schematic_goal}) (long_statement | short_statement) ; (@@{command have} | @@{command show} | @@{command hence} | @@{command thus}) stmt cond_stmt @{syntax for_fixes} ; @@{command print_statement} @{syntax modes}? @{syntax thms} ; stmt: (@{syntax props} + @'and') ; cond_stmt: ((@'if' | @'when') stmt)? ; short_statement: stmt (@'if' stmt)? @{syntax for_fixes} ; long_statement: @{syntax thmdecl}? context conclusion ; context: (@{syntax_ref "includes"}?) (@{syntax context_elem} *) ; conclusion: @'shows' stmt | @'obtains' @{syntax obtain_clauses} ; @{syntax_def obtain_clauses}: (@{syntax par_name}? obtain_case + '|') ; @{syntax_def obtain_case}: @{syntax vars} @'where' (@{syntax thmdecl}? (@{syntax prop}+) + @'and') \ \<^descr> @{command "lemma"}~\a: \\ enters proof mode with \\\ as main goal, eventually resulting in some fact \\ \\ to be put back into the target context. A @{syntax long_statement} may build up an initial proof context for the subsequent claim, potentially including local definitions and syntax; see also @{syntax "includes"} in \secref{sec:bundle} and @{syntax context_elem} in \secref{sec:locale}. A @{syntax short_statement} consists of propositions as conclusion, with an optional context of premises and parameters, via \<^verbatim>\if\/\<^verbatim>\for\ in postfix notation, corresponding to \<^verbatim>\assumes\/\<^verbatim>\fixes\ in the long prefix notation. Local premises (if present) are called ``\assms\'' for @{syntax long_statement}, and ``\that\'' for @{syntax short_statement}. \<^descr> @{command "theorem"}, @{command "corollary"}, and @{command "proposition"} are the same as @{command "lemma"}. The different command names merely serve as a formal comment in the theory source. \<^descr> @{command "schematic_goal"} is similar to @{command "theorem"}, but allows the statement to contain unbound schematic variables. Under normal circumstances, an Isar proof text needs to specify claims explicitly. Schematic goals are more like goals in Prolog, where certain results are synthesized in the course of reasoning. With schematic statements, the inherent compositionality of Isar proofs is lost, which also impacts performance, because proof checking is forced into sequential mode. \<^descr> @{command "have"}~\a: \\ claims a local goal, eventually resulting in a fact within the current logical context. This operation is completely independent of any pending sub-goals of an enclosing goal statement, so @{command "have"} may be freely used for experimental exploration of potential results within a proof body. 
\<^descr> @{command "show"}~\a: \\ is like @{command "have"}~\a: \\ plus a second stage to refine some pending sub-goal for each one of the finished result, after having been exported into the corresponding context (at the head of the sub-proof of this @{command "show"} command). To accommodate interactive debugging, resulting rules are printed before being applied internally. Even more, interactive execution of @{command "show"} predicts potential failure and displays the resulting error as a warning beforehand. Watch out for the following message: @{verbatim [display] \Local statement fails to refine any pending goal\} \<^descr> @{command "hence"} expands to ``@{command "then"}~@{command "have"}'' and @{command "thus"} expands to ``@{command "then"}~@{command "show"}''. These conflations are left-over from early history of Isar. The expanded syntax is more orthogonal and improves readability and maintainability of proofs. \<^descr> @{command "print_statement"}~\a\ prints facts from the current theory or proof context in long statement form, according to the syntax for @{command "lemma"} given above. Any goal statement causes some term abbreviations (such as @{variable_ref "?thesis"}) to be bound automatically, see also \secref{sec:term-abbrev}. Structured goal statements involving @{keyword_ref "if"} or @{keyword_ref "when"} define the special fact @{fact_ref that} to refer to these assumptions in the proof body. The user may provide separate names according to the syntax of the statement. \ section \Calculational reasoning \label{sec:calculation}\ text \ \begin{matharray}{rcl} @{command_def "also"} & : & \proof(state) \ proof(state)\ \\ @{command_def "finally"} & : & \proof(state) \ proof(chain)\ \\ @{command_def "moreover"} & : & \proof(state) \ proof(state)\ \\ @{command_def "ultimately"} & : & \proof(state) \ proof(chain)\ \\ @{command_def "print_trans_rules"}\\<^sup>*\ & : & \context \\ \\ @{attribute trans} & : & \attribute\ \\ @{attribute sym} & : & \attribute\ \\ @{attribute symmetric} & : & \attribute\ \\ \end{matharray} Calculational proof is forward reasoning with implicit application of transitivity rules (such those of \=\, \\\, \<\). Isabelle/Isar maintains an auxiliary fact register @{fact_ref calculation} for accumulating results obtained by transitivity composed with the current result. Command @{command "also"} updates @{fact calculation} involving @{fact this}, while @{command "finally"} exhibits the final @{fact calculation} by forward chaining towards the next goal statement. Both commands require valid current facts, i.e.\ may occur only after commands that produce theorems such as @{command "assume"}, @{command "note"}, or some finished proof of @{command "have"}, @{command "show"} etc. The @{command "moreover"} and @{command "ultimately"} commands are similar to @{command "also"} and @{command "finally"}, but only collect further results in @{fact calculation} without applying any rules yet. Also note that the implicit term abbreviation ``\\\'' has its canonical application with calculational proofs. It refers to the argument of the preceding statement. (The argument of a curried infix expression happens to be its right-hand side.) Isabelle/Isar calculations are implicitly subject to block structure in the sense that new threads of calculational reasoning are commenced for any new block (as opened by a local goal, for example). This means that, apart from being able to nest calculations, there is no separate \<^emph>\begin-calculation\ command required. 
\<^medskip> The Isar calculation proof commands may be defined as follows:\<^footnote>\We suppress internal bookkeeping such as proper handling of block-structure.\ \begin{matharray}{rcl} @{command "also"}\\<^sub>0\ & \equiv & @{command "note"}~\calculation = this\ \\ @{command "also"}\\<^sub>n\<^sub>+\<^sub>1\ & \equiv & @{command "note"}~\calculation = trans [OF calculation this]\ \\[0.5ex] @{command "finally"} & \equiv & @{command "also"}~@{command "from"}~\calculation\ \\[0.5ex] @{command "moreover"} & \equiv & @{command "note"}~\calculation = calculation this\ \\ @{command "ultimately"} & \equiv & @{command "moreover"}~@{command "from"}~\calculation\ \\ \end{matharray} \<^rail>\ (@@{command also} | @@{command finally}) ('(' @{syntax thms} ')')? ; @@{attribute trans} (() | 'add' | 'del') \ \<^descr> @{command "also"}~\(a\<^sub>1 \ a\<^sub>n)\ maintains the auxiliary @{fact calculation} register as follows. The first occurrence of @{command "also"} in some calculational thread initializes @{fact calculation} by @{fact this}. Any subsequent @{command "also"} on the same level of block-structure updates @{fact calculation} by some transitivity rule applied to @{fact calculation} and @{fact this} (in that order). Transitivity rules are picked from the current context, unless alternative rules are given as explicit arguments. \<^descr> @{command "finally"}~\(a\<^sub>1 \ a\<^sub>n)\ maintains @{fact calculation} in the same way as @{command "also"} and then concludes the current calculational thread. The final result is exhibited as fact for forward chaining towards the next goal. Basically, @{command "finally"} abbreviates @{command "also"}~@{command "from"}~@{fact calculation}. Typical idioms for concluding calculational proofs are ``@{command "finally"}~@{command "show"}~\?thesis\~@{command "."}'' and ``@{command "finally"}~@{command "have"}~\\\~@{command "."}''. \<^descr> @{command "moreover"} and @{command "ultimately"} are analogous to @{command "also"} and @{command "finally"}, but collect results only, without applying rules. \<^descr> @{command "print_trans_rules"} prints the list of transitivity rules (for calculational commands @{command "also"} and @{command "finally"}) and symmetry rules (for the @{attribute symmetric} operation and single-step elimination patterns) of the current context. \<^descr> @{attribute trans} declares theorems as transitivity rules. \<^descr> @{attribute sym} declares symmetry rules, as well as @{attribute "Pure.elim"}\?\ rules. \<^descr> @{attribute symmetric} resolves a theorem with some rule declared as @{attribute sym} in the current context. For example, ``@{command "assume"}~\[symmetric]: x = y\'' produces a swapped fact derived from that assumption. In structured proof texts it is often more appropriate to use an explicit single-step elimination proof, such as ``@{command "assume"}~\x = y\~@{command "then"}~@{command "have"}~\y = x\~@{command ".."}''. \ section \Refinement steps\ subsection \Proof method expressions \label{sec:proof-meth}\ text \ Proof methods are either basic ones, or expressions composed of methods via ``\<^verbatim>\,\'' (sequential composition), ``\<^verbatim>\;\'' (structural composition), ``\<^verbatim>\|\'' (alternative choices), ``\<^verbatim>\?\'' (try), ``\<^verbatim>\+\'' (repeat at least once), ``\<^verbatim>\[\\n\\<^verbatim>\]\'' (restriction to first \n\ subgoals). In practice, proof methods are usually just a comma-separated list of @{syntax name}~@{syntax args} specifications. 
Note that parentheses may be dropped for single method specifications (with no arguments). The syntactic precedence of method combinators is \<^verbatim>\|\ \<^verbatim>\;\ \<^verbatim>\,\ \<^verbatim>\[]\ \<^verbatim>\+\ \<^verbatim>\?\ (from low to high). \<^rail>\ @{syntax_def method}: (@{syntax name} | '(' methods ')') (() | '?' | '+' | '[' @{syntax nat}? ']') ; methods: (@{syntax name} @{syntax args} | @{syntax method}) + (',' | ';' | '|') \ Regular Isar proof methods do \<^emph>\not\ admit direct goal addressing, but refer to the first subgoal or to all subgoals uniformly. Nonetheless, the subsequent mechanisms allow to imitate the effect of subgoal addressing that is known from ML tactics. \<^medskip> Goal \<^emph>\restriction\ means the proof state is wrapped-up in a way that certain subgoals are exposed, and other subgoals are ``parked'' elsewhere. Thus a proof method has no other chance than to operate on the subgoals that are presently exposed. Structural composition ``\m\<^sub>1\\<^verbatim>\;\~\m\<^sub>2\'' means that method \m\<^sub>1\ is applied with restriction to the first subgoal, then \m\<^sub>2\ is applied consecutively with restriction to each subgoal that has newly emerged due to \m\<^sub>1\. This is analogous to the tactic combinator \<^ML_op>\THEN_ALL_NEW\ in Isabelle/ML, see also @{cite "isabelle-implementation"}. For example, \(rule r; blast)\ applies rule \r\ and then solves all new subgoals by \blast\. Moreover, the explicit goal restriction operator ``\[n]\'' exposes only the first \n\ subgoals (which need to exist), with default \n = 1\. For example, the method expression ``\simp_all[3]\'' simplifies the first three subgoals, while ``\(rule r, simp_all)[]\'' simplifies all new goals that emerge from applying rule \r\ to the originally first one. \<^medskip> Improper methods, notably tactic emulations, offer low-level goal addressing as explicit argument to the individual tactic being involved. Here ``\[!]\'' refers to all goals, and ``\[n-]\'' to all goals starting from \n\. \<^rail>\ @{syntax_def goal_spec}: '[' (@{syntax nat} '-' @{syntax nat} | @{syntax nat} '-' | @{syntax nat} | '!' ) ']' \ \ subsection \Initial and terminal proof steps \label{sec:proof-steps}\ text \ \begin{matharray}{rcl} @{command_def "proof"} & : & \proof(prove) \ proof(state)\ \\ @{command_def "qed"} & : & \proof(state) \ proof(state) | local_theory | theory\ \\ @{command_def "by"} & : & \proof(prove) \ proof(state) | local_theory | theory\ \\ @{command_def ".."} & : & \proof(prove) \ proof(state) | local_theory | theory\ \\ @{command_def "."} & : & \proof(prove) \ proof(state) | local_theory | theory\ \\ @{command_def "sorry"} & : & \proof(prove) \ proof(state) | local_theory | theory\ \\ @{method_def standard} & : & \method\ \\ \end{matharray} Arbitrary goal refinement via tactics is considered harmful. Structured proof composition in Isar admits proof methods to be invoked in two places only. \<^enum> An \<^emph>\initial\ refinement step @{command_ref "proof"}~\m\<^sub>1\ reduces a newly stated goal to a number of sub-goals that are to be solved later. Facts are passed to \m\<^sub>1\ for forward chaining, if so indicated by \proof(chain)\ mode. \<^enum> A \<^emph>\terminal\ conclusion step @{command_ref "qed"}~\m\<^sub>2\ is intended to solve remaining goals. No facts are passed to \m\<^sub>2\. The only other (proper) way to affect pending goals in a proof body is by @{command_ref "show"}, which involves an explicit statement of what is to be solved eventually. 
Thus we avoid the fundamental problem of unstructured tactic scripts that consist of numerous consecutive goal transformations, with invisible effects. \<^medskip> As a general rule of thumb for good proof style, initial proof methods should either solve the goal completely, or constitute some well-understood reduction to new sub-goals. Arbitrary automatic proof tools that are prone to leave a large number of badly structured sub-goals are no help in continuing the proof document in an intelligible manner. Unless given explicitly by the user, the default initial method is @{method standard}, which subsumes at least @{method_ref (Pure) rule} or its classical variant @{method_ref (HOL) rule}. These methods apply a single standard elimination or introduction rule according to the topmost logical connective involved. There is no separate default terminal method. Any remaining goals are always solved by assumption in the very last step. \<^rail>\ @@{command proof} method? ; @@{command qed} method? ; @@{command "by"} method method? ; (@@{command "."} | @@{command ".."} | @@{command sorry}) \ \<^descr> @{command "proof"}~\m\<^sub>1\ refines the goal by proof method \m\<^sub>1\; facts for forward chaining are passed if so indicated by \proof(chain)\ mode. \<^descr> @{command "qed"}~\m\<^sub>2\ refines any remaining goals by proof method \m\<^sub>2\ and concludes the sub-proof by assumption. If the goal had been \show\, some pending sub-goal is solved as well by the rule resulting from the result \<^emph>\exported\ into the enclosing goal context. Thus \qed\ may fail for two reasons: either \m\<^sub>2\ fails, or the resulting rule does not fit any pending goal\<^footnote>\This includes any additional ``strong'' assumptions as introduced by @{command "assume"}.\ of the enclosing context. Debugging such a situation might involve temporarily changing @{command "show"} into @{command "have"}, or weakening the local context by replacing occurrences of @{command "assume"} by @{command "presume"}. \<^descr> @{command "by"}~\m\<^sub>1 m\<^sub>2\ is a \<^emph>\terminal proof\\index{proof!terminal}; it abbreviates @{command "proof"}~\m\<^sub>1\~@{command "qed"}~\m\<^sub>2\, but with backtracking across both methods. Debugging an unsuccessful @{command "by"}~\m\<^sub>1 m\<^sub>2\ command can be done by expanding its definition; in many cases @{command "proof"}~\m\<^sub>1\ (or even \apply\~\m\<^sub>1\) is already sufficient to see the problem. \<^descr> ``@{command ".."}'' is a \<^emph>\standard proof\\index{proof!standard}; it abbreviates @{command "by"}~\standard\. \<^descr> ``@{command "."}'' is a \<^emph>\trivial proof\\index{proof!trivial}; it abbreviates @{command "by"}~\this\. \<^descr> @{command "sorry"} is a \<^emph>\fake proof\\index{proof!fake} pretending to solve the pending claim without further ado. This only works in interactive development, or if the @{attribute quick_and_dirty} option is enabled. Facts emerging from fake proofs are not the real thing. Internally, the derivation object is tainted by an oracle invocation, which may be inspected via the command @{command "thm_oracles"} (\secref{sec:oracles}). The most important application of @{command "sorry"} is to support experimentation and top-down proof development. \<^descr> @{method standard} refers to the default refinement step of some Isar language elements (notably @{command proof} and ``@{command ".."}''). It is \<^emph>\dynamically scoped\, so the behaviour depends on the application environment. 
In Isabelle/Pure, @{method standard} performs elementary introduction~/ elimination steps (@{method_ref (Pure) rule}), introduction of type classes (@{method_ref intro_classes}) and locales (@{method_ref intro_locales}). In Isabelle/HOL, @{method standard} also takes classical rules into account (cf.\ \secref{sec:classical}). \ subsection \Fundamental methods and attributes \label{sec:pure-meth-att}\ text \ The following proof methods and attributes refer to basic logical operations of Isar. Further methods and attributes are provided by several generic and object-logic specific tools and packages (see \chref{ch:gen-tools} and \partref{part:hol}). \begin{matharray}{rcl} @{command_def "print_rules"}\\<^sup>*\ & : & \context \\ \\[0.5ex] @{method_def "-"} & : & \method\ \\ @{method_def "goal_cases"} & : & \method\ \\ @{method_def "subproofs"} & : & \method\ \\ @{method_def "fact"} & : & \method\ \\ @{method_def "assumption"} & : & \method\ \\ @{method_def "this"} & : & \method\ \\ @{method_def (Pure) "rule"} & : & \method\ \\ @{attribute_def (Pure) "intro"} & : & \attribute\ \\ @{attribute_def (Pure) "elim"} & : & \attribute\ \\ @{attribute_def (Pure) "dest"} & : & \attribute\ \\ @{attribute_def (Pure) "rule"} & : & \attribute\ \\[0.5ex] @{attribute_def "OF"} & : & \attribute\ \\ @{attribute_def "of"} & : & \attribute\ \\ @{attribute_def "where"} & : & \attribute\ \\ \end{matharray} \<^rail>\ @@{method goal_cases} (@{syntax name}*) ; @@{method subproofs} @{syntax method} ; @@{method fact} @{syntax thms}? ; @@{method (Pure) rule} @{syntax thms}? ; rulemod: ('intro' | 'elim' | 'dest') ((('!' | () | '?') @{syntax nat}?) | 'del') ':' @{syntax thms} ; (@@{attribute intro} | @@{attribute elim} | @@{attribute dest}) ('!' | () | '?') @{syntax nat}? ; @@{attribute (Pure) rule} 'del' ; @@{attribute OF} @{syntax thms} ; @@{attribute of} @{syntax insts} ('concl' ':' @{syntax insts})? @{syntax for_fixes} ; @@{attribute "where"} @{syntax named_insts} @{syntax for_fixes} \ \<^descr> @{command "print_rules"} prints rules declared via attributes @{attribute (Pure) intro}, @{attribute (Pure) elim}, @{attribute (Pure) dest} of Isabelle/Pure. See also the analogous @{command "print_claset"} command for similar rule declarations of the classical reasoner (\secref{sec:classical}). \<^descr> ``@{method "-"}'' (minus) inserts the forward chaining facts as premises into the goal, and nothing else. Note that command @{command_ref "proof"} without any method actually performs a single reduction step using the @{method_ref (Pure) rule} method; thus a plain \<^emph>\do-nothing\ proof step would be ``@{command "proof"}~\-\'' rather than @{command "proof"} alone. \<^descr> @{method "goal_cases"}~\a\<^sub>1 \ a\<^sub>n\ turns the current subgoals into cases within the context (see also \secref{sec:cases-induct}). The specified case names are used if present; otherwise cases are numbered starting from 1. Invoking cases in the subsequent proof body via the @{command_ref case} command will @{command fix} goal parameters, @{command assume} goal premises, and @{command let} variable @{variable_ref ?case} refer to the conclusion. \<^descr> @{method "subproofs"}~\m\ applies the method expression \m\ consecutively to each subgoal, constructing individual subproofs internally (analogous to ``\<^theory_text>\show goal by m\'' for each subgoal of the proof state). Search alternatives of \m\ are truncated: the method is forced to be deterministic. 
This method combinator impacts the internal construction of proof terms: it makes a cascade of let-expressions within the derivation tree and may thus improve scalability. \<^descr> @{method "fact"}~\a\<^sub>1 \ a\<^sub>n\ composes some fact from \a\<^sub>1, \, a\<^sub>n\ (or implicitly from the current proof context) modulo unification of schematic type and term variables. The rule structure is not taken into account, i.e.\ meta-level implication is considered atomic. This is the same principle underlying literal facts (cf.\ \secref{sec:syn-att}): ``@{command "have"}~\\\~@{command "by"}~\fact\'' is equivalent to ``@{command "note"}~\<^verbatim>\`\\\\\<^verbatim>\`\'' provided that \\ \\ is an instance of some known \\ \\ in the proof context. \<^descr> @{method assumption} solves some goal by a single assumption step. All given facts are guaranteed to participate in the refinement; this means there may be only 0 or 1 in the first place. Recall that @{command "qed"} (\secref{sec:proof-steps}) already concludes any remaining sub-goals by assumption, so structured proofs usually need not quote the @{method assumption} method at all. \<^descr> @{method this} applies all of the current facts directly as rules. Recall that ``@{command "."}'' (dot) abbreviates ``@{command "by"}~\this\''. \<^descr> @{method (Pure) rule}~\a\<^sub>1 \ a\<^sub>n\ applies some rule given as argument in backward manner; facts are used to reduce the rule before applying it to the goal. Thus @{method (Pure) rule} without facts is plain introduction, while with facts it becomes elimination. When no arguments are given, the @{method (Pure) rule} method tries to pick appropriate rules automatically, as declared in the current context using the @{attribute (Pure) intro}, @{attribute (Pure) elim}, @{attribute (Pure) dest} attributes (see below). This is included in the standard behaviour of @{command "proof"} and ``@{command ".."}'' (double-dot) steps (see \secref{sec:proof-steps}). \<^descr> @{attribute (Pure) intro}, @{attribute (Pure) elim}, and @{attribute (Pure) dest} declare introduction, elimination, and destruct rules, to be used with method @{method (Pure) rule}, and similar tools. Note that the latter will ignore rules declared with ``\?\'', while ``\!\'' are used most aggressively. The classical reasoner (see \secref{sec:classical}) introduces its own variants of these attributes; use qualified names to access the present versions of Isabelle/Pure, i.e.\ @{attribute (Pure) "Pure.intro"}. \<^descr> @{attribute (Pure) rule}~\del\ undeclares introduction, elimination, or destruct rules. \<^descr> @{attribute OF}~\a\<^sub>1 \ a\<^sub>n\ applies some theorem to all of the given rules \a\<^sub>1, \, a\<^sub>n\ in canonical right-to-left order, which means that premises stemming from the \a\<^sub>i\ emerge in parallel in the result, without interfering with each other. In many practical situations, the \a\<^sub>i\ do not have premises themselves, so \rule [OF a\<^sub>1 \ a\<^sub>n]\ can be actually read as functional application (modulo unification). Argument positions may be effectively skipped by using ``\_\'' (underscore), which refers to the propositional identity rule in the Pure theory. \<^descr> @{attribute of}~\t\<^sub>1 \ t\<^sub>n\ performs positional instantiation of term variables. The terms \t\<^sub>1, \, t\<^sub>n\ are substituted for any schematic variables occurring in a theorem from left to right; ``\_\'' (underscore) indicates to skip a position. 
Arguments following a ``\concl:\'' specification refer to positions of the conclusion of a rule. An optional context of local variables \\ x\<^sub>1 \ x\<^sub>m\ may be specified: the instantiated theorem is exported, and these variables become schematic (usually with some shifting of indices). \<^descr> @{attribute "where"}~\x\<^sub>1 = t\<^sub>1 \ \ x\<^sub>n = t\<^sub>n\ performs named instantiation of schematic type and term variables occurring in a theorem. Schematic variables have to be specified on the left-hand side (e.g.\ \?x1.3\). The question mark may be omitted if the variable name is a plain identifier without index. As type instantiations are inferred from term instantiations, explicit type instantiations are seldom necessary. An optional context of local variables \\ x\<^sub>1 \ x\<^sub>m\ may be specified as for @{attribute "of"} above. \ subsection \Defining proof methods\ text \ \begin{matharray}{rcl} @{command_def "method_setup"} & : & \local_theory \ local_theory\ \\ \end{matharray} \<^rail>\ @@{command method_setup} @{syntax name} '=' @{syntax text} @{syntax text}? \ \<^descr> @{command "method_setup"}~\name = text description\ defines a proof method in the current context. The given \text\ has to be an ML expression of type \<^ML_type>\(Proof.context -> Proof.method) context_parser\, cf.\ basic parsers defined in structure \<^ML_structure>\Args\ and \<^ML_structure>\Attrib\. There are also combinators like \<^ML>\METHOD\ and \<^ML>\SIMPLE_METHOD\ to turn certain tactic forms into official proof methods; the primed versions refer to tactics with explicit goal addressing. Here are some example method definitions: \ (*<*)experiment begin(*>*) method_setup my_method1 = \Scan.succeed (K (SIMPLE_METHOD' (fn i: int => no_tac)))\ "my first method (without any arguments)" method_setup my_method2 = \Scan.succeed (fn ctxt: Proof.context => SIMPLE_METHOD' (fn i: int => no_tac))\ "my second method (with context)" method_setup my_method3 = \Attrib.thms >> (fn thms: thm list => fn ctxt: Proof.context => SIMPLE_METHOD' (fn i: int => no_tac))\ "my third method (with theorem arguments and context)" (*<*)end(*>*) section \Proof by cases and induction \label{sec:cases-induct}\ subsection \Rule contexts\ text \ \begin{matharray}{rcl} @{command_def "case"} & : & \proof(state) \ proof(state)\ \\ @{command_def "print_cases"}\\<^sup>*\ & : & \context \\ \\ @{attribute_def case_names} & : & \attribute\ \\ @{attribute_def case_conclusion} & : & \attribute\ \\ @{attribute_def params} & : & \attribute\ \\ @{attribute_def consumes} & : & \attribute\ \\ \end{matharray} The puristic way to build up Isar proof contexts is by explicit language elements like @{command "fix"}, @{command "assume"}, @{command "let"} (see \secref{sec:proof-context}). This is adequate for plain natural deduction, but easily becomes unwieldy in concrete verification tasks, which typically involve big induction rules with several cases. The @{command "case"} command provides a shorthand to refer to a local context symbolically: certain proof methods provide an environment of named ``cases'' of the form \c: x\<^sub>1, \, x\<^sub>m, \\<^sub>1, \, \\<^sub>n\; the effect of ``@{command "case"}~\c\'' is then equivalent to ``@{command "fix"}~\x\<^sub>1 \ x\<^sub>m\~@{command "assume"}~\c: \\<^sub>1 \ \\<^sub>n\''. Term bindings may be covered as well, notably @{variable ?case} for the main conclusion. 
By default, the ``terminology'' \x\<^sub>1, \, x\<^sub>m\ of a case value is marked as hidden, i.e.\ there is no way to refer to such parameters in the subsequent proof text. After all, original rule parameters stem from somewhere outside of the current proof text. By using the explicit form ``@{command "case"}~\(c y\<^sub>1 \ y\<^sub>m)\'' instead, the proof author is able to choose local names that fit nicely into the current context. \<^medskip> It is important to note that proper use of @{command "case"} does not provide means to peek at the current goal state, which is not directly observable in Isar! Nonetheless, goal refinement commands do provide named cases \goal\<^sub>i\ for each subgoal \i = 1, \, n\ of the resulting goal state. Using this extra feature requires great care, because some bits of the internal tactical machinery intrude into the proof text. In particular, parameter names stemming from the left-over of automated reasoning tools are usually quite unpredictable. Under normal circumstances, the text of cases emerges from standard elimination or induction rules, which in turn are derived from previous theory specifications in a canonical way (say from @{command "inductive"} definitions). \<^medskip> Proper cases are only available if both the proof method and the rules involved support this. By using appropriate attributes, case names, conclusions, and parameters may also be declared by hand. Thus variant versions of rules that have been derived manually become ready to use in advanced case analysis later. \<^rail>\ @@{command case} @{syntax thmdecl}? (name | '(' name (('_' | @{syntax name}) *) ')') ; @@{attribute case_names} ((@{syntax name} ( '[' (('_' | @{syntax name}) *) ']' ) ? ) +) ; @@{attribute case_conclusion} @{syntax name} (@{syntax name} * ) ; @@{attribute params} ((@{syntax name} * ) + @'and') ; @@{attribute consumes} @{syntax int}? \ \<^descr> @{command "case"}~\a: (c x\<^sub>1 \ x\<^sub>m)\ invokes a named local context \c: x\<^sub>1, \, x\<^sub>m, \\<^sub>1, \, \\<^sub>m\, as provided by an appropriate proof method (such as @{method_ref cases} and @{method_ref induct}). The command ``@{command "case"}~\a: (c x\<^sub>1 \ x\<^sub>m)\'' abbreviates ``@{command "fix"}~\x\<^sub>1 \ x\<^sub>m\~@{command "assume"}~\a.c: \\<^sub>1 \ \\<^sub>n\''. Each local fact is qualified by the prefix \a\, and all such facts are collectively bound to the name \a\. The fact name specification \a\ is optional; the default is to re-use \c\. So @{command "case"}~\(c x\<^sub>1 \ x\<^sub>m)\ is the same as @{command "case"}~\c: (c x\<^sub>1 \ x\<^sub>m)\. \<^descr> @{command "print_cases"} prints all local contexts of the current state, using Isar proof language notation. \<^descr> @{attribute case_names}~\c\<^sub>1 \ c\<^sub>k\ declares names for the local contexts of premises of a theorem; \c\<^sub>1, \, c\<^sub>k\ refers to the \<^emph>\prefix\ of the list of premises. Each of the cases \c\<^sub>i\ can be of the form \c[h\<^sub>1 \ h\<^sub>n]\ where the \h\<^sub>1 \ h\<^sub>n\ are the names of the hypotheses in case \c\<^sub>i\ from left to right. \<^descr> @{attribute case_conclusion}~\c d\<^sub>1 \ d\<^sub>k\ declares names for the conclusions of a named premise \c\; here \d\<^sub>1, \, d\<^sub>k\ refers to the prefix of arguments of a logical formula built by nesting a binary connective (e.g.\ \\\). Note that proof methods such as @{method induct} and @{method coinduct} already provide a default name for the conclusion as a whole. 
The need to name subformulas only arises with cases that split into several sub-cases, as in common co-induction rules. \<^descr> @{attribute params}~\p\<^sub>1 \ p\<^sub>m \ \ q\<^sub>1 \ q\<^sub>n\ renames the innermost parameters of premises \1, \, n\ of some theorem. An empty list of names may be given to skip positions, leaving the present parameters unchanged. Note that the default usage of case rules does \<^emph>\not\ directly expose parameters to the proof context. \<^descr> @{attribute consumes}~\n\ declares the number of ``major premises'' of a rule, i.e.\ the number of facts to be consumed when it is applied by an appropriate proof method. The default value of @{attribute consumes} is \n = 1\, which is appropriate for the usual kind of cases and induction rules for inductive sets (cf.\ \secref{sec:hol-inductive}). Rules without any @{attribute consumes} declaration given are treated as if @{attribute consumes}~\0\ had been specified. A negative \n\ is interpreted relatively to the total number of premises of the rule in the target context. Thus its absolute value specifies the remaining number of premises, after subtracting the prefix of major premises as indicated above. This form of declaration has the technical advantage of being stable under more morphisms, notably those that export the result from a nested @{command_ref context} with additional assumptions. Note that explicit @{attribute consumes} declarations are only rarely needed; this is already taken care of automatically by the higher-level @{attribute cases}, @{attribute induct}, and @{attribute coinduct} declarations. \ subsection \Proof methods\ text \ \begin{matharray}{rcl} @{method_def cases} & : & \method\ \\ @{method_def induct} & : & \method\ \\ @{method_def induction} & : & \method\ \\ @{method_def coinduct} & : & \method\ \\ \end{matharray} The @{method cases}, @{method induct}, @{method induction}, and @{method coinduct} methods provide a uniform interface to common proof techniques over datatypes, inductive predicates (or sets), recursive functions etc. The corresponding rules may be specified and instantiated in a casual manner. Furthermore, these methods provide named local contexts that may be invoked via the @{command "case"} proof command within the subsequent proof text. This accommodates compact proof texts even when reasoning about large specifications. The @{method induct} method also provides some additional infrastructure in order to be applicable to structure statements (either using explicit meta-level connectives, or including facts and parameters separately). This avoids cumbersome encoding of ``strengthened'' inductive statements within the object-logic. Method @{method induction} differs from @{method induct} only in the names of the facts in the local context invoked by the @{command "case"} command. \<^rail>\ @@{method cases} ('(' 'no_simp' ')')? \ (@{syntax insts} * @'and') rule? ; (@@{method induct} | @@{method induction}) ('(' 'no_simp' ')')? (definsts * @'and') \ arbitrary? taking? rule? ; @@{method coinduct} @{syntax insts} taking rule? 
; rule: ('type' | 'pred' | 'set') ':' (@{syntax name} +) | 'rule' ':' (@{syntax thm} +) ; definst: @{syntax name} ('==' | '\') @{syntax term} | '(' @{syntax term} ')' | @{syntax inst} ; definsts: ( definst * ) ; arbitrary: 'arbitrary' ':' ((@{syntax term} * ) @'and' +) ; taking: 'taking' ':' @{syntax insts} \ \<^descr> @{method cases}~\insts R\ applies method @{method rule} with an appropriate case distinction theorem, instantiated to the subjects \insts\. Symbolic case names are bound according to the rule's local contexts. The rule is determined as follows, according to the facts and arguments passed to the @{method cases} method: \<^medskip> \begin{tabular}{llll} facts & & arguments & rule \\\hline \\ R\ & @{method cases} & & implicit rule \R\ \\ & @{method cases} & & classical case split \\ & @{method cases} & \t\ & datatype exhaustion (type of \t\) \\ \\ A t\ & @{method cases} & \\\ & inductive predicate/set elimination (of \A\) \\ \\\ & @{method cases} & \\ rule: R\ & explicit rule \R\ \\ \end{tabular} \<^medskip> Several instantiations may be given, referring to the \<^emph>\suffix\ of premises of the case rule; within each premise, the \<^emph>\prefix\ of variables is instantiated. In most situations, only a single term needs to be specified; this refers to the first variable of the last premise (it is usually the same for all cases). The \(no_simp)\ option can be used to disable pre-simplification of cases (see the description of @{method induct} below for details). \<^descr> @{method induct}~\insts R\ and @{method induction}~\insts R\ are analogous to the @{method cases} method, but refer to induction rules, which are determined as follows: \<^medskip> \begin{tabular}{llll} facts & & arguments & rule \\\hline & @{method induct} & \P x\ & datatype induction (type of \x\) \\ \\ A x\ & @{method induct} & \\\ & predicate/set induction (of \A\) \\ \\\ & @{method induct} & \\ rule: R\ & explicit rule \R\ \\ \end{tabular} \<^medskip> Several instantiations may be given, each referring to some part of a mutual inductive definition or datatype --- only related partial induction rules may be used together, though. Any of the lists of terms \P, x, \\ refers to the \<^emph>\suffix\ of variables present in the induction rule. This enables the writer to specify only induction variables, or both predicates and variables, for example. Instantiations may be definitional: equations \x \ t\ introduce local definitions, which are inserted into the claim and discharged after applying the induction rule. Equalities reappear in the inductive cases, but have been transformed according to the induction principle being involved here. In order to achieve practically useful induction hypotheses, some variables occurring in \t\ need to be fixed (see below). Instantiations of the form \t\, where \t\ is not a variable, are taken as a shorthand for \x \ t\, where \x\ is a fresh variable. If this is not intended, \t\ has to be enclosed in parentheses. By default, the equalities generated by definitional instantiations are pre-simplified using a specific set of rules, usually consisting of distinctness and injectivity theorems for datatypes. This pre-simplification may cause some of the parameters of an inductive case to disappear, or may even completely delete some of the inductive cases, if one of the equalities occurring in their premises can be simplified to \False\. The \(no_simp)\ option can be used to disable pre-simplification. 
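For example (a purely illustrative sketch), ``\(induct k \ "n + m")\'' introduces a local definition \k \ n + m\ and performs induction on the fresh variable \k\; the corresponding equalities reappear in the inductive cases and are pre-simplified as described above.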
Additional rules to be used in pre-simplification can be declared using the @{attribute_def induct_simp} attribute. The optional ``\arbitrary: x\<^sub>1 \ x\<^sub>m\'' specification generalizes variables - \x\<^sub>1, \, x\<^sub>m\ of the original goal before applying induction. One can - separate variables by ``\and\'' to generalize them in other goals then the - first. Thus induction hypotheses may become sufficiently general to get the - proof through. Together with definitional instantiations, one may + \x\<^sub>1, \, x\<^sub>m\ of the original goal before applying induction. It is possible + to separate variables by ``\and\'' to generalize in goals other than + the first. Thus induction hypotheses may become sufficiently general to get + the proof through. Together with definitional instantiations, one may effectively perform induction over expressions of a certain structure. The optional ``\taking: t\<^sub>1 \ t\<^sub>n\'' specification provides additional instantiations of a prefix of pending variables in the rule. Such schematic induction rules rarely occur in practice, though. \<^descr> @{method coinduct}~\inst R\ is analogous to the @{method induct} method, but refers to coinduction rules, which are determined as follows: \<^medskip> \begin{tabular}{llll} goal & & arguments & rule \\\hline & @{method coinduct} & \x\ & type coinduction (type of \x\) \\ \A x\ & @{method coinduct} & \\\ & predicate/set coinduction (of \A\) \\ \\\ & @{method coinduct} & \\ rule: R\ & explicit rule \R\ \\ \end{tabular} \<^medskip> Coinduction is the dual of induction. Induction essentially eliminates \A x\ towards a generic result \P x\, while coinduction introduces \A x\ starting with \B x\, for a suitable ``bisimulation'' \B\. The cases of a coinduct rule are typically named after the predicates or sets being covered, while the conclusions consist of several alternatives being named after the individual destructor patterns. The given instantiation refers to the \<^emph>\suffix\ of variables occurring in the rule's major premise, or conclusion if unavailable. An additional ``\taking: t\<^sub>1 \ t\<^sub>n\'' specification may be required in order to specify the bisimulation to be used in the coinduction step. The above methods produce named local contexts, as determined by the instantiated rule as given in the text. Beyond that, the @{method induct} and @{method coinduct} methods guess further instantiations from the goal specification itself. Any persisting unresolved schematic variables of the resulting rule will render the corresponding case invalid. The term binding @{variable ?case} for the conclusion will be provided with each case, provided that term is fully specified. The @{command "print_cases"} command prints all named cases present in the current proof state. \<^medskip> Despite the additional infrastructure, both @{method cases} and @{method coinduct} merely apply a certain rule, after instantiation, while conforming to the usual way of monotonic natural deduction: the context of a structured statement \\x\<^sub>1 \ x\<^sub>m. \\<^sub>1 \ \ \\<^sub>n \ \\ reappears unchanged after the case split. The @{method induct} method is fundamentally different in this respect: the meta-level structure is passed through the ``recursive'' course involved in the induction. Thus the original statement is basically replaced by separate copies, corresponding to the induction hypotheses and conclusion; the original goal context is no longer available.
Thus local assumptions, fixed parameters and definitions effectively participate in the inductive rephrasing of the original statement. In @{method induct} proofs, local assumptions introduced by cases are split into two different kinds: \hyps\ stemming from the rule and \prems\ from the goal statement. This is reflected in the extracted cases accordingly, so invoking ``@{command "case"}~\c\'' will provide separate facts \c.hyps\ and \c.prems\, as well as fact \c\ to hold the all-inclusive list. In @{method induction} proofs, local assumptions introduced by cases are split into three different kinds: \IH\, the induction hypotheses, \hyps\, the remaining hypotheses stemming from the rule, and \prems\, the assumptions from the goal statement. The names are \c.IH\, \c.hyps\ and \c.prems\, as above. \<^medskip> Facts presented to either method are consumed according to the number of ``major premises'' of the rule involved, which is usually 0 for plain cases and induction rules of datatypes etc.\ and 1 for rules of inductive predicates or sets and the like. The remaining facts are inserted into the goal verbatim before the actual \cases\, \induct\, or \coinduct\ rule is applied. \ subsection \Declaring rules\ text \ \begin{matharray}{rcl} @{command_def "print_induct_rules"}\\<^sup>*\ & : & \context \\ \\ @{attribute_def cases} & : & \attribute\ \\ @{attribute_def induct} & : & \attribute\ \\ @{attribute_def coinduct} & : & \attribute\ \\ \end{matharray} \<^rail>\ @@{attribute cases} spec ; @@{attribute induct} spec ; @@{attribute coinduct} spec ; spec: (('type' | 'pred' | 'set') ':' @{syntax name}) | 'del' \ \<^descr> @{command "print_induct_rules"} prints cases and induct rules for predicates (or sets) and types of the current context. \<^descr> @{attribute cases}, @{attribute induct}, and @{attribute coinduct} (as attributes) declare rules for reasoning about (co)inductive predicates (or sets) and types, using the corresponding methods of the same name. Certain definitional packages of object-logics usually declare emerging cases and induction rules as expected, so users rarely need to intervene. Rules may be deleted via the \del\ specification, which covers all of the \type\/\pred\/\set\ sub-categories simultaneously. For example, @{attribute cases}~\del\ removes any @{attribute cases} rules declared for some type, predicate, or set. Manual rule declarations usually refer to the @{attribute case_names} and @{attribute params} attributes to adjust names of cases and parameters of a rule; the @{attribute consumes} declaration is taken care of automatically: @{attribute consumes}~\0\ is specified for ``type'' rules and @{attribute consumes}~\1\ for ``predicate'' / ``set'' rules. \ section \Generalized elimination and case splitting \label{sec:obtain}\ text \ \begin{matharray}{rcl} @{command_def "consider"} & : & \proof(state) | proof(chain) \ proof(prove)\ \\ @{command_def "obtain"} & : & \proof(state) | proof(chain) \ proof(prove)\ \\ @{command_def "guess"}\\<^sup>*\ & : & \proof(state) | proof(chain) \ proof(prove)\ \\ \end{matharray} Generalized elimination means that hypothetical parameters and premises may be introduced in the current context, potentially with a split into cases. This works by virtue of a locally proven rule that establishes the soundness of this temporary context extension. As representative examples, one may think of standard rules from Isabelle/HOL like this: \<^medskip> \begin{tabular}{ll} \\x. B x \ (\x. 
B x \ thesis) \ thesis\ \\ \A \ B \ (A \ B \ thesis) \ thesis\ \\ \A \ B \ (A \ thesis) \ (B \ thesis) \ thesis\ \\ \end{tabular} \<^medskip> In general, these particular rules and connectives need not get involved at all: this concept works directly in Isabelle/Pure via Isar commands defined below. In particular, the logic of elimination and case splitting is delegated to an Isar proof, which often involves automated tools. \<^rail>\ @@{command consider} @{syntax obtain_clauses} ; @@{command obtain} @{syntax par_name}? @{syntax vars} \ @'where' concl prems @{syntax for_fixes} ; concl: (@{syntax props} + @'and') ; prems: (@'if' (@{syntax props'} + @'and'))? ; @@{command guess} @{syntax vars} \ \<^descr> @{command consider}~\(a) \<^vec>x \ \<^vec>A \<^vec>x | (b) \<^vec>y \ \<^vec>B \<^vec>y | \\ states a rule for case splitting into separate subgoals, such that each case involves new parameters and premises. After the proof is finished, the resulting rule may be used directly with the @{method cases} proof method (\secref{sec:cases-induct}), in order to perform actual case-splitting of the proof text via @{command case} and @{command next} as usual. Optional names in round parentheses refer to case names: in the proof of the rule this is a fact name, in the resulting rule it is used as annotation with the @{attribute_ref case_names} attribute. \<^medskip> Formally, the command @{command consider} is defined as a derived Isar language element as follows: \begin{matharray}{l} @{command "consider"}~\(a) \<^vec>x \ \<^vec>A \<^vec>x | (b) \<^vec>y \ \<^vec>B \<^vec>y | \ \\ \\[1ex] \quad @{command "have"}~\[case_names a b \]: thesis\ \\ \qquad \\ a [Pure.intro?]: \\<^vec>x. \<^vec>A \<^vec>x \ thesis\ \\ \qquad \\ b [Pure.intro?]: \\<^vec>y. \<^vec>B \<^vec>y \ thesis\ \\ \qquad \\ \\ \\ \qquad \\ thesis\ \\ \qquad @{command "apply"}~\(insert a b \)\ \\ \end{matharray} See also \secref{sec:goals} for @{keyword "obtains"} in toplevel goal statements, as well as @{command print_statement} to print existing rules in a similar format. \<^descr> @{command obtain}~\\<^vec>x \ \<^vec>A \<^vec>x\ states a generalized elimination rule with exactly one case. After the proof is finished, it is activated for the subsequent proof text: the context is augmented via @{command fix}~\\<^vec>x\ @{command assume}~\\<^vec>A \<^vec>x\, with special provisions to export later results by discharging these assumptions again. Note that according to the parameter scopes within the elimination rule, results \<^emph>\must not\ refer to hypothetical parameters; otherwise the export will fail! This restriction conforms to the usual manner of existential reasoning in Natural Deduction. \<^medskip> Formally, the command @{command obtain} is defined as a derived Isar language element as follows, using an instrumented variant of @{command assume}: \begin{matharray}{l} @{command "obtain"}~\\<^vec>x \ a: \<^vec>A \<^vec>x \proof\ \\ \\[1ex] \quad @{command "have"}~\thesis\ \\ \qquad \\ that [Pure.intro?]: \\<^vec>x. \<^vec>A \<^vec>x \ thesis\ \\ \qquad \\ thesis\ \\ \qquad @{command "apply"}~\(insert that)\ \\ \qquad \\proof\\ \\ \quad @{command "fix"}~\\<^vec>x\~@{command "assume"}\\<^sup>* a: \<^vec>A \<^vec>x\ \\ \end{matharray} \<^descr> @{command guess} is similar to @{command obtain}, but it derives the obtained context elements from the course of tactical reasoning in the proof. Thus it can considerably obscure the proof: it is classified as \<^emph>\improper\. A proof with @{command guess} starts with a fixed goal \thesis\.
The subsequent refinement steps may turn this to anything of the form \\\<^vec>x. \<^vec>A \<^vec>x \ thesis\, but without splitting into new subgoals. The final goal state is then used as reduction rule for the obtain pattern described above. Obtained parameters \\<^vec>x\ are marked as internal by default, and thus inaccessible in the proof text. The variable names and type constraints given as arguments for @{command "guess"} specify a prefix of accessible parameters. In the proof of @{command consider} and @{command obtain} the local premises are always bound to the fact name @{fact_ref that}, according to structured Isar statements involving @{keyword_ref "if"} (\secref{sec:goals}). Facts that are established by @{command "obtain"} and @{command "guess"} may not be polymorphic: any type-variables occurring here are fixed in the present context. This is a natural consequence of the role of @{command fix} and @{command assume} in these constructs. \ end diff --git a/src/Doc/Isar_Ref/Spec.thy b/src/Doc/Isar_Ref/Spec.thy --- a/src/Doc/Isar_Ref/Spec.thy +++ b/src/Doc/Isar_Ref/Spec.thy @@ -1,1505 +1,1505 @@ (*:maxLineLen=78:*) theory Spec imports Main Base begin chapter \Specifications\ text \ The Isabelle/Isar theory format integrates specifications and proofs, with support for interactive development by continuous document editing. There is a separate document preparation system (see \chref{ch:document-prep}), for typesetting formal developments together with informal text. The resulting hyper-linked PDF documents can be used both for WWW presentation and printed copies. The Isar proof language (see \chref{ch:proofs}) is embedded into the theory language as a proper sub-language. Proof mode is entered by stating some \<^theory_text>\theorem\ or \<^theory_text>\lemma\ at the theory level, and left again with the final conclusion (e.g.\ via \<^theory_text>\qed\). \ section \Defining theories \label{sec:begin-thy}\ text \ \begin{matharray}{rcl} @{command_def "theory"} & : & \toplevel \ theory\ \\ @{command_def (global) "end"} & : & \theory \ toplevel\ \\ @{command_def "thy_deps"}\\<^sup>*\ & : & \theory \\ \\ \end{matharray} Isabelle/Isar theories are defined via theory files, which consist of an outermost sequence of definition--statement--proof elements. Some definitions are self-sufficient (e.g.\ \<^theory_text>\fun\ in Isabelle/HOL), with foundational proofs performed internally. Other definitions require an explicit proof as justification (e.g.\ \<^theory_text>\function\ and \<^theory_text>\termination\ in Isabelle/HOL). Plain statements like \<^theory_text>\theorem\ or \<^theory_text>\lemma\ are merely a special case of that, defining a theorem from a given proposition and its proof. The theory body may be sub-structured by means of \<^emph>\local theory targets\, such as \<^theory_text>\locale\ and \<^theory_text>\class\. It is also possible to use \<^theory_text>\context begin \ end\ blocks to delimit a local theory context: a \<^emph>\named context\ to augment a locale or class specification, or an \<^emph>\unnamed context\ to refer to local parameters and assumptions that are discharged later. See \secref{sec:target} for more details. \<^medskip> A theory is commenced by the \<^theory_text>\theory\ command, which indicates imports of previous theories, according to an acyclic foundational order. Before the initial \<^theory_text>\theory\ command, there may be optional document header material (like \<^theory_text>\section\ or \<^theory_text>\text\, see \secref{sec:markup}).
The document header is outside of the formal theory context, though. A theory is concluded by a final @{command (global) "end"} command, one that does not belong to a local theory target. No further commands may follow such a global @{command (global) "end"}. \<^rail>\ @@{command theory} @{syntax system_name} @'imports' (@{syntax system_name} +) \ keywords? abbrevs? @'begin' ; keywords: @'keywords' (keyword_decls + @'and') ; keyword_decls: (@{syntax string} +) ('::' @{syntax name} @{syntax tags})? ; abbrevs: @'abbrevs' (((text+) '=' (text+)) + @'and') ; @@{command thy_deps} (thy_bounds thy_bounds?)? ; thy_bounds: @{syntax name} | '(' (@{syntax name} + @'|') ')' \ \<^descr> \<^theory_text>\theory A imports B\<^sub>1 \ B\<^sub>n begin\ starts a new theory \A\ based on the merge of existing theories \B\<^sub>1 \ B\<^sub>n\. Due to the possibility to import more than one ancestor, the resulting theory structure of an Isabelle session forms a directed acyclic graph (DAG). Isabelle takes care that sources contributing to the development graph are always up-to-date: changed files are automatically rechecked whenever a theory header specification is processed. Empty imports are only allowed in the bootstrap process of the special theory \<^theory>\Pure\, which is the start of any other formal development based on Isabelle. Regular user theories usually refer to some more complex entry point, such as theory \<^theory>\Main\ in Isabelle/HOL. The @{keyword_def "keywords"} specification declares outer syntax (\chref{ch:outer-syntax}) that is introduced in this theory later on (rare in end-user applications). Both minor keywords and major keywords of the Isar command language need to be specified, in order to make parsing of proof documents work properly. Command keywords need to be classified according to their structural role in the formal text. Examples may be seen in Isabelle/HOL sources itself, such as @{keyword "keywords"}~\<^verbatim>\"typedef"\ \:: thy_goal_defn\ or @{keyword "keywords"}~\<^verbatim>\"datatype"\ \:: thy_defn\ for theory-level definitions with and without proof, respectively. Additional @{syntax tags} provide defaults for document preparation (\secref{sec:document-markers}). The @{keyword_def "abbrevs"} specification declares additional abbreviations for syntactic completion. The default for a new keyword is just its name, but completion may be avoided by defining @{keyword "abbrevs"} with empty text. \<^descr> @{command (global) "end"} concludes the current theory definition. Note that some other commands, e.g.\ local theory targets \<^theory_text>\locale\ or \<^theory_text>\class\ may involve a \<^theory_text>\begin\ that needs to be matched by @{command (local) "end"}, according to the usual rules for nested blocks. \<^descr> \<^theory_text>\thy_deps\ visualizes the theory hierarchy as a directed acyclic graph. By default, all imported theories are shown. This may be restricted by specifying bounds wrt. the theory inclusion relation. \ section \Local theory targets \label{sec:target}\ text \ \begin{matharray}{rcll} @{command_def "context"} & : & \theory \ local_theory\ \\ @{command_def (local) "end"} & : & \local_theory \ theory\ \\ @{keyword_def "private"} \\ @{keyword_def "qualified"} \\ \end{matharray} A local theory target is a specification context that is managed separately within the enclosing theory. Contexts may introduce parameters (fixed variables) and assumptions (hypotheses). Definitions and theorems depending on the context may be added incrementally later on. 
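For example (a purely illustrative sketch), the block ``\<^theory_text>\context fixes n :: nat assumes pos: "0 < n" begin \ end\'' opens and later closes an unnamed context; a theorem proved inside it that mentions \n\ is exported into the enclosing target with \0 < n\ as an additional premise and with \n\ generalized (see the description of unnamed contexts below).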
\<^emph>\Named contexts\ refer to locales (cf.\ \secref{sec:locale}) or type classes (cf.\ \secref{sec:class}); the name ``\-\'' signifies the global theory context. \<^emph>\Unnamed contexts\ may introduce additional parameters and assumptions, and results produced in the context are generalized accordingly. Such auxiliary contexts may be nested within other targets, like \<^theory_text>\locale\, \<^theory_text>\class\, \<^theory_text>\instantiation\, \<^theory_text>\overloading\. \<^rail>\ @@{command context} @{syntax name} @'begin' ; @@{command context} @{syntax_ref "includes"}? (@{syntax context_elem} * ) @'begin' ; @{syntax_def target}: '(' @'in' @{syntax name} ')' \ \<^descr> \<^theory_text>\context c begin\ opens a named context, by recommencing an existing locale or class \c\. Note that locale and class definitions allow to include the \<^theory_text>\begin\ keyword as well, in order to continue the local theory immediately after the initial specification. \<^descr> \<^theory_text>\context bundles elements begin\ opens an unnamed context, by extending the enclosing global or local theory target by the given declaration bundles (\secref{sec:bundle}) and context elements (\<^theory_text>\fixes\, \<^theory_text>\assumes\ etc.). This means any results stemming from definitions and proofs in the extended context will be exported into the enclosing target by lifting over extra parameters and premises. \<^descr> @{command (local) "end"} concludes the current local theory, according to the nesting of contexts. Note that a global @{command (global) "end"} has a different meaning: it concludes the theory itself (\secref{sec:begin-thy}). \<^descr> \<^theory_text>\private\ or \<^theory_text>\qualified\ may be given as modifiers before any local theory command. This restricts name space accesses to the local scope, as determined by the enclosing \<^theory_text>\context begin \ end\ block. Outside its scope, a \<^theory_text>\private\ name is inaccessible, and a \<^theory_text>\qualified\ name is only accessible with some qualification. Neither a global \<^theory_text>\theory\ nor a \<^theory_text>\locale\ target provides a local scope by itself: an extra unnamed context is required to use \<^theory_text>\private\ or \<^theory_text>\qualified\ here. \<^descr> \(\@{keyword_def "in"}~\c)\ given after any local theory command specifies an immediate target, e.g.\ ``\<^theory_text>\definition (in c)\'' or ``\<^theory_text>\theorem (in c)\''. This works both in a local or global theory context; the current target context will be suspended for this command only. Note that ``\<^theory_text>\(in -)\'' will always produce a global result independently of the current target context. Any specification element that operates on \local_theory\ according to this manual implicitly allows the above target syntax \<^theory_text>\(in c)\, but individual syntax diagrams omit that aspect for clarity. \<^medskip> The exact meaning of results produced within a local theory context depends on the underlying target infrastructure (locale, type class etc.). The general idea is as follows, considering a context named \c\ with parameter \x\ and assumption \A[x]\. Definitions are exported by introducing a global version with additional arguments; a syntactic abbreviation links the long form with the abstract version of the target context. 
For example, \a \ t[x]\ becomes \c.a ?x \ t[?x]\ at the theory level (for arbitrary \?x\), together with a local - abbreviation \c \ c.a x\ in the target context (for the fixed parameter + abbreviation \a \ c.a x\ in the target context (for the fixed parameter \x\). Theorems are exported by discharging the assumptions and generalizing the parameters of the context. For example, \a: B[x]\ becomes \c.a: A[?x] \ B[?x]\, again for arbitrary \?x\. \ section \Bundled declarations \label{sec:bundle}\ text \ \begin{matharray}{rcl} @{command_def "bundle"} & : & \local_theory \ local_theory\ \\ @{command "bundle"} & : & \theory \ local_theory\ \\ @{command_def "print_bundles"}\\<^sup>*\ & : & \context \\ \\ @{command_def "include"} & : & \proof(state) \ proof(state)\ \\ @{command_def "including"} & : & \proof(prove) \ proof(prove)\ \\ @{keyword_def "includes"} & : & syntax \\ \end{matharray} The outer syntax of fact expressions (\secref{sec:syn-att}) involves theorems and attributes, which are evaluated in the context and applied to it. Attributes may declare theorems to the context, as in \this_rule [intro] that_rule [elim]\ for example. Configuration options (\secref{sec:config}) are special declaration attributes that operate on the context without a theorem, as in \[[show_types = false]]\ for example. Expressions of this form may be defined as \<^emph>\bundled declarations\ in the context, and included in other situations later on. Including declaration bundles augments a local context casually without logical dependencies, which is in contrast to locales and locale interpretation (\secref{sec:locale}). \<^rail>\ @@{command bundle} @{syntax name} ( '=' @{syntax thms} @{syntax for_fixes} | @'begin') ; @@{command print_bundles} ('!'?) ; (@@{command include} | @@{command including}) (@{syntax name}+) ; @{syntax_def "includes"}: @'includes' (@{syntax name}+) \ \<^descr> \<^theory_text>\bundle b = decls\ defines a bundle of declarations in the current context. The RHS is similar to the one of the \<^theory_text>\declare\ command. Bundles defined in local theory targets are subject to transformations via morphisms, when moved into different application contexts; this works analogously to any other local theory specification. \<^descr> \<^theory_text>\bundle b begin body end\ defines a bundle of declarations from the \body\ of local theory specifications. It may consist of commands that are technically equivalent to \<^theory_text>\declare\ or \<^theory_text>\declaration\, which also includes \<^theory_text>\notation\, for example. Named fact declarations like ``\<^theory_text>\lemmas a [simp] = b\'' or ``\<^theory_text>\lemma a [simp]: B \\'' are also admitted, but the name bindings are not recorded in the bundle. \<^descr> \<^theory_text>\print_bundles\ prints the named bundles that are available in the current context; the ``\!\'' option indicates extra verbosity. \<^descr> \<^theory_text>\unbundle b\<^sub>1 \ b\<^sub>n\ activates the declarations from the given bundles in the current local theory context. This is analogous to \<^theory_text>\lemmas\ (\secref{sec:theorems}) with the expanded bundles. \<^descr> \<^theory_text>\include\ is similar to \<^theory_text>\unbundle\, but works in a proof body (forward mode). This is analogous to \<^theory_text>\note\ (\secref{sec:proof-facts}) with the expanded bundles. \<^descr> \<^theory_text>\including\ is similar to \<^theory_text>\include\, but works in proof refinement (backward mode). 
This is analogous to \<^theory_text>\using\ (\secref{sec:proof-facts}) with the expanded bundles. \<^descr> \<^theory_text>\includes b\<^sub>1 \ b\<^sub>n\ is similar to \<^theory_text>\include\, but works in situations where a specification context is constructed, notably for \<^theory_text>\context\ and long statements of \<^theory_text>\theorem\ etc. Here is an artificial example of bundling various configuration options: \ (*<*)experiment begin(*>*) bundle trace = [[simp_trace, linarith_trace, metis_trace, smt_trace]] lemma "x = x" including trace by metis (*<*)end(*>*) section \Term definitions \label{sec:term-definitions}\ text \ \begin{matharray}{rcll} @{command_def "definition"} & : & \local_theory \ local_theory\ \\ @{attribute_def "defn"} & : & \attribute\ \\ @{command_def "print_defn_rules"}\\<^sup>*\ & : & \context \\ \\ @{command_def "abbreviation"} & : & \local_theory \ local_theory\ \\ @{command_def "print_abbrevs"}\\<^sup>*\ & : & \context \\ \\ \end{matharray} Term definitions may either happen within the logic (as equational axioms of a certain form; see also \secref{sec:overloading}), or outside of it as a rewrite system on abstract syntax. The second form is called ``abbreviation''. \<^rail>\ @@{command definition} decl? definition ; @@{command abbreviation} @{syntax mode}? decl? abbreviation ; @@{command print_abbrevs} ('!'?) ; decl: @{syntax name} ('::' @{syntax type})? @{syntax mixfix}? @'where' ; definition: @{syntax thmdecl}? @{syntax prop} @{syntax spec_prems} @{syntax for_fixes} ; abbreviation: @{syntax prop} @{syntax for_fixes} \ \<^descr> \<^theory_text>\definition c where eq\ produces an internal definition \c \ t\ according to the specification given as \eq\, which is then turned into a proven fact. The given proposition may deviate from internal meta-level equality according to the rewrite rules declared as @{attribute defn} by the object-logic. This usually covers object-level equality \x = y\ and equivalence \A \ B\. End-users normally need not change the @{attribute defn} setup. Definitions may be presented with explicit arguments on the LHS, as well as additional conditions, e.g.\ \f x y = t\ instead of \f \ \x y. t\ and \y \ 0 \ g x y = u\ instead of an unrestricted \g \ \x y. u\. \<^descr> \<^theory_text>\print_defn_rules\ prints the definitional rewrite rules declared via @{attribute defn} in the current context. \<^descr> \<^theory_text>\abbreviation c where eq\ introduces a syntactic constant which is associated with a certain term according to the meta-level equality \eq\. Abbreviations participate in the usual type-inference process, but are expanded before the logic ever sees them. Pretty printing of terms involves higher-order rewriting with rules stemming from reverted abbreviations. This needs some care to avoid overlapping or looping syntactic replacements! The optional \mode\ specification restricts output to a particular print mode; using ``\input\'' here achieves the effect of one-way abbreviations. The mode may also include an ``\<^theory_text>\output\'' qualifier that affects the concrete syntax declared for abbreviations, cf.\ \<^theory_text>\syntax\ in \secref{sec:syn-trans}. \<^descr> \<^theory_text>\print_abbrevs\ prints all constant abbreviations of the current context; the ``\!\'' option indicates extra verbosity. \ section \Axiomatizations \label{sec:axiomatizations}\ text \ \begin{matharray}{rcll} @{command_def "axiomatization"} & : & \theory \ theory\ & (axiomatic!)
\\ \end{matharray} \<^rail>\ @@{command axiomatization} @{syntax vars}? (@'where' axiomatization)? ; axiomatization: (@{syntax thmdecl} @{syntax prop} + @'and') @{syntax spec_prems} @{syntax for_fixes} \ \<^descr> \<^theory_text>\axiomatization c\<^sub>1 \ c\<^sub>m where \\<^sub>1 \ \\<^sub>n\ introduces several constants simultaneously and states axiomatic properties for these. The constants are marked as being specified once and for all, which prevents additional - specifications for the same constants later on, but it is always possible do + specifications for the same constants later on, but it is always possible to emit axiomatizations without referring to particular constants. Note that lack of precise dependency tracking of axiomatizations may disrupt the well-formedness of an otherwise definitional theory. Axiomatization is restricted to a global theory context: support for local theory targets (\secref{sec:target}) would introduce an extra dimension of uncertainty about what the written specifications really are, and make it infeasible to argue why they are correct. Axiomatic specifications are required when declaring a new logical system within Isabelle/Pure, but in an application environment like Isabelle/HOL the user normally stays within definitional mechanisms provided by the logic and its libraries. \ section \Generic declarations\ text \ \begin{matharray}{rcl} @{command_def "declaration"} & : & \local_theory \ local_theory\ \\ @{command_def "syntax_declaration"} & : & \local_theory \ local_theory\ \\ @{command_def "declare"} & : & \local_theory \ local_theory\ \\ \end{matharray} Arbitrary operations on the background context may be wrapped up as generic declaration elements. Since the underlying concept of local theories may be subject to later re-interpretation, there is an additional dependency on a morphism that tells the difference of the original declaration context wrt.\ the application context encountered later on. A fact declaration is an important special case: it consists of a theorem which is applied to the context by means of an attribute. \<^rail>\ (@@{command declaration} | @@{command syntax_declaration}) ('(' @'pervasive' ')')? \ @{syntax text} ; @@{command declare} (@{syntax thms} + @'and') \ \<^descr> \<^theory_text>\declaration d\ adds the declaration function \d\ of ML type \<^ML_type>\declaration\ to the current local theory under construction. In later application contexts, the function is transformed according to the morphisms being involved in the interpretation hierarchy. If the \<^theory_text>\(pervasive)\ option is given, the corresponding declaration is applied to all possible contexts involved, including the global background theory. \<^descr> \<^theory_text>\syntax_declaration\ is similar to \<^theory_text>\declaration\, but is meant to affect only ``syntactic'' tools by convention (such as notation and type-checking information). \<^descr> \<^theory_text>\declare thms\ declares theorems to the current local theory context. No theorem binding is involved here, unlike \<^theory_text>\lemmas\ (cf.\ \secref{sec:theorems}), so \<^theory_text>\declare\ only has the effect of applying attributes as included in the theorem specification. \ section \Locales \label{sec:locale}\ text \ A locale is a functor that maps parameters (including implicit type parameters) and a specification to a list of declarations. The syntax of locales is modeled after the Isar proof context commands (cf.\ \secref{sec:proof-context}).
Locale hierarchies are supported by maintaining a graph of dependencies between locale instances in the global theory. Dependencies may be introduced through import (where a locale is defined as sublocale of the imported instances) or by proving that an existing locale is a sublocale of one or several locale instances. A locale may be opened with the purpose of appending to its list of declarations (cf.\ \secref{sec:target}). When opening a locale, declarations from all dependencies are collected and are presented as a local theory. In this process, which is called \<^emph>\roundup\, redundant locale instances are omitted. A locale instance is redundant if it is subsumed by an instance encountered earlier. A more detailed description of this process is available elsewhere @{cite Ballarin2014}. \ subsection \Locale expressions \label{sec:locale-expr}\ text \ A \<^emph>\locale expression\ denotes a context composed of instances of existing locales. The context consists of the declaration elements from the locale instances. Redundant locale instances are omitted according to roundup. \<^rail>\ @{syntax_def locale_expr}: (instance + '+') @{syntax for_fixes} ; instance: (qualifier ':')? @{syntax name} (pos_insts | named_insts) \ rewrites? ; qualifier: @{syntax name} ('?')? ; pos_insts: ('_' | @{syntax term})* ; named_insts: @'where' (@{syntax name} '=' @{syntax term} + @'and') ; rewrites: @'rewrites' (@{syntax thmdecl}? @{syntax prop} + @'and') \ A locale instance consists of a reference to a locale and either positional or named parameter instantiations, optionally followed by rewrites clauses. Identical instantiations (that is, those that instantiate a parameter by itself) may be omitted. The notation ``\_\'' makes it possible to omit the instantiation for a parameter inside a positional instantiation. Terms in instantiations are from the context the locale expression is declared in. Local names may be added to this context with the optional \<^theory_text>\for\ clause. This is useful for shadowing names bound in outer contexts, and for declaring syntax. In addition, syntax declarations from one instance are effective when parsing subsequent instances of the same expression. Instances have an optional qualifier which applies to names in declarations. Names include local definitions and theorem names. If present, the qualifier itself is either mandatory (default) or non-mandatory (when followed by ``\<^verbatim>\?\''). Non-mandatory means that the qualifier may be omitted on input. Qualifiers only affect name spaces; they play no role in determining whether one locale instance subsumes another. Rewrite clauses amend instances with equations that act as rewrite rules. This is particularly useful for changing concepts introduced through definitions. Rewrite clauses are available only in interpretation commands (see \secref{sec:locale-interpretation} below) and must be proved by the user.
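\<^medskip> As a purely illustrative sketch, assuming a hypothetical locale \semi\ with a single parameter \f\, the expression \<^theory_text>\left: semi f + right: semi g for f g\ denotes a context consisting of two instances of \semi\; the \<^theory_text>\for\ clause turns \f\ and \g\ into parameters of the combined context, and the qualifiers \left\ and \right\ distinguish the names stemming from the two instances.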
\ subsection \Locale declarations\ text \ \begin{tabular}{rcll} @{command_def "locale"} & : & \theory \ local_theory\ \\ @{command_def "experiment"} & : & \theory \ local_theory\ \\ @{command_def "print_locale"}\\<^sup>*\ & : & \context \\ \\ @{command_def "print_locales"}\\<^sup>*\ & : & \context \\ \\ @{command_def "locale_deps"}\\<^sup>*\ & : & \context \\ \\ @{method_def intro_locales} & : & \method\ \\ @{method_def unfold_locales} & : & \method\ \\ @{attribute_def trace_locales} & : & \attribute\ & default \false\ \\ \end{tabular} \indexisarelem{fixes}\indexisarelem{constrains}\indexisarelem{assumes} \indexisarelem{defines}\indexisarelem{notes} \<^rail>\ @@{command locale} @{syntax name} ('=' @{syntax locale})? @'begin'? ; @@{command experiment} (@{syntax context_elem}*) @'begin' ; @@{command print_locale} '!'? @{syntax name} ; @@{command print_locales} ('!'?) ; @{syntax_def locale}: @{syntax context_elem}+ | @{syntax locale_expr} ('+' (@{syntax context_elem}+))? ; @{syntax_def context_elem}: @'fixes' @{syntax vars} | @'constrains' (@{syntax name} '::' @{syntax type} + @'and') | @'assumes' (@{syntax props} + @'and') | @'defines' (@{syntax thmdecl}? @{syntax prop} @{syntax prop_pat}? + @'and') | @'notes' (@{syntax thmdef}? @{syntax thms} + @'and') \ \<^descr> \<^theory_text>\locale loc = import + body\ defines a new locale \loc\ as a context consisting of a certain view of existing locales (\import\) plus some additional elements (\body\). Both \import\ and \body\ are optional; the degenerate form \<^theory_text>\locale loc\ defines an empty locale, which may still be useful to collect declarations of facts later on. Type-inference on locale expressions automatically takes care of the most general typing that the combined context elements may acquire. - The \import\ consists of a locale expression; see \secref{sec:proof-context} + The \import\ consists of a locale expression; see \secref{sec:locale-expr} above. Its \<^theory_text>\for\ clause defines the parameters of \import\. These are parameters of the defined locale. Locale parameters whose instantiation is omitted automatically extend the (possibly empty) \<^theory_text>\for\ clause: they are inserted at its beginning. This means that these parameters may be referred to from within the expression and also in the subsequent context elements and provides a notational convenience for the inheritance of parameters in locale declarations. The \body\ consists of context elements. \<^descr> @{element "fixes"}~\x :: \ (mx)\ declares a local parameter of type \\\ and mixfix annotation \mx\ (both are optional). The special syntax declaration ``\(\@{keyword_ref "structure"}\)\'' means that \x\ may be referenced implicitly in this context. \<^descr> @{element "constrains"}~\x :: \\ introduces a type constraint \\\ on the local parameter \x\. This element is deprecated. The type constraint should be introduced in the \<^theory_text>\for\ clause or the relevant @{element "fixes"} element. \<^descr> @{element "assumes"}~\a: \\<^sub>1 \ \\<^sub>n\ introduces local premises, similar to \<^theory_text>\assume\ within a proof (cf.\ \secref{sec:proof-context}). \<^descr> @{element "defines"}~\a: x \ t\ defines a previously declared parameter. This is similar to \<^theory_text>\define\ within a proof (cf.\ \secref{sec:proof-context}), but @{element "defines"} is restricted to Pure equalities and the defined variable needs to be declared beforehand via @{element "fixes"}. 
The left-hand side of the equation may have additional arguments, e.g.\ ``@{element "defines"}~\f x\<^sub>1 \ x\<^sub>n \ t\'', which need to be free in the context. \<^descr> @{element "notes"}~\a = b\<^sub>1 \ b\<^sub>n\ reconsiders facts within a local context. Most notably, this may include arbitrary declarations in any attribute specifications included here, e.g.\ a local @{attribute simp} rule. Both @{element "assumes"} and @{element "defines"} elements contribute to the locale specification. When defining an operation derived from the parameters, \<^theory_text>\definition\ (\secref{sec:term-definitions}) is usually more appropriate. Note that ``\<^theory_text>\(is p\<^sub>1 \ p\<^sub>n)\'' patterns given in the syntax of @{element "assumes"} and @{element "defines"} above are illegal in locale definitions. In the long goal format of \secref{sec:goals}, term bindings may be included as expected, though. \<^medskip> Locale specifications are ``closed up'' by turning the given text into a predicate definition \loc_axioms\ and deriving the original assumptions as local lemmas (modulo local definitions). The predicate statement covers only the newly specified assumptions, omitting the content of included locale expressions. The full cumulative view is only provided on export, involving another predicate \loc\ that refers to the complete specification text. In any case, the predicate arguments are those locale parameters that actually occur in the respective piece of text. Also, these predicates operate at the meta-level in theory, but the locale package attempts to internalize statements according to the object-logic setup (e.g.\ replacing \\\ by \\\, and \\\ by \\\ in HOL; see also \secref{sec:object-logic}). Separate introduction rules \loc_axioms.intro\ and \loc.intro\ are provided as well. \<^descr> \<^theory_text>\experiment exprs begin\ opens an anonymous locale context with private naming policy. Specifications in its body are inaccessible from outside. This is useful to perform experiments without polluting the name space. \<^descr> \<^theory_text>\print_locale "locale"\ prints the contents of the named locale. The command omits @{element "notes"} elements by default. Use \<^theory_text>\print_locale!\ to have them included. \<^descr> \<^theory_text>\print_locales\ prints the names of all locales of the current theory; the ``\!\'' option indicates extra verbosity. \<^descr> \<^theory_text>\locale_deps\ visualizes all locales and their relations as a Hasse diagram. This includes locales defined as type classes (\secref{sec:class}). \<^descr> @{method intro_locales} and @{method unfold_locales} repeatedly expand all introduction rules of locale predicates of the theory. While @{method intro_locales} only applies the \loc.intro\ introduction rules and therefore does not descend to assumptions, @{method unfold_locales} is more aggressive and applies \loc_axioms.intro\ as well. Both methods are aware of locale specifications entailed by the context, both from target statements, and from interpretations (see below). New goals that are entailed by the current context are discharged automatically. \<^descr> @{attribute trace_locales}, when set to \true\, prints the locale instances activated during roundup. Useful for understanding local theories created by complex locale hierarchies.
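\<^medskip> As a purely illustrative sketch, the hypothetical locale \semi\ used in \secref{sec:locale-expr} could be declared as \<^theory_text>\locale semi = fixes f :: "'a \ 'a \ 'a" assumes assoc: "f (f x y) z = f x (f y z)"\. Afterwards \<^theory_text>\print_locale! semi\ shows its full content, including @{element "notes"} elements.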
\ subsection \Locale interpretation \label{sec:locale-interpretation}\ text \ \begin{matharray}{rcl} @{command "interpretation"} & : & \local_theory \ proof(prove)\ \\ @{command_def "interpret"} & : & \proof(state) | proof(chain) \ proof(prove)\ \\ @{command_def "global_interpretation"} & : & \theory | local_theory \ proof(prove)\ \\ @{command_def "sublocale"} & : & \theory | local_theory \ proof(prove)\ \\ @{command_def "print_interps"}\\<^sup>*\ & : & \context \\ \\ \end{matharray} Locales may be instantiated, and the resulting instantiated declarations added to the current context. This requires proof (of the instantiated specification) and is called \<^emph>\locale interpretation\. Interpretation is possible within arbitrary local theories (\<^theory_text>\interpretation\), within proof bodies (\<^theory_text>\interpret\), into global theories (\<^theory_text>\global_interpretation\) and into locales (\<^theory_text>\sublocale\). \<^rail>\ @@{command interpretation} @{syntax locale_expr} ; @@{command interpret} @{syntax locale_expr} ; @@{command global_interpretation} @{syntax locale_expr} definitions? ; @@{command sublocale} (@{syntax name} ('<' | '\'))? @{syntax locale_expr} \ definitions? ; @@{command print_interps} @{syntax name} ; definitions: @'defines' (@{syntax thmdecl}? @{syntax name} \ @{syntax mixfix}? '=' @{syntax term} + @'and'); \ The core of each interpretation command is a locale expression \expr\; the command generates proof obligations for the instantiated specifications. Once these are discharged by the user, instantiated declarations (in particular, facts) are added to the context in a post-processing phase, in a manner specific to each command. Interpretation commands are aware of interpretations that are already active: post-processing is achieved through a variant of roundup that takes interpretations of the current global or local theory into account. In order to simplify the proof obligations according to existing interpretations, use methods @{method intro_locales} or @{method unfold_locales}. Rewrites clauses \<^theory_text>\rewrites eqns\ occur within expressions. They amend the morphism through which a locale instance is interpreted with rewrite rules, also called rewrite morphisms. This is particularly useful for interpreting concepts introduced through definitions. The equations must be proved by the user. To enable syntax of the instantiated locale within the equation, while reading a locale expression, equations of a locale instance are read in a temporary context where the instance is already activated. If activation fails, typically due to duplicate constant declarations, processing falls back to reading the equation first. Given definitions \defs\ produce corresponding definitions in the local theory's underlying target \<^emph>\and\ amend the morphism with rewrite rules stemming from the symmetric of those definitions. Hence these need not be proved explicitly by the user. Such rewrite definitions are an even more useful device for interpreting concepts introduced through definitions, but they are only supported for interpretation commands operating in a local theory whose implementing target actually supports this. Note that despite the suggestive \<^theory_text>\and\ connective, \defs\ are processed sequentially without mutual recursion. \<^descr> \<^theory_text>\interpretation expr\ interprets \expr\ into a local theory such that its lifetime is limited to the current context block (e.g. a locale or unnamed context).
At the closing @{command end} of the block the interpretation and its declarations disappear. Hence facts based on interpretation can be established without creating permanent links to the interpreted locale instances, as would be the case with @{command sublocale}. When used on the level of a global theory, there is no end of a current context block, hence \<^theory_text>\interpretation\ behaves identically to \<^theory_text>\global_interpretation\ then. \<^descr> \<^theory_text>\interpret expr\ interprets \expr\ into a proof context: the interpretation and its declarations disappear when closing the current proof block. Note that for \<^theory_text>\interpret\ the \eqns\ should be explicitly universally quantified. \<^descr> \<^theory_text>\global_interpretation defines defs\ interprets \expr\ into a global theory. When adding declarations to locales, interpreted versions of these declarations are added to the global theory for all interpretations in the global theory as well. That is, interpretations into global theories dynamically participate in any declarations added to locales. Free variables in the interpreted expression are allowed. They are turned into schematic variables in the generated declarations. In order to use a free variable whose name is already bound in the context --- for example, because a constant of that name exists --- add it to the \<^theory_text>\for\ clause. \<^descr> \<^theory_text>\sublocale name \ expr defines defs\ interprets \expr\ into the locale \name\. A proof that the specification of \name\ implies the specification of \expr\ is required. As in the localized version of the theorem command, the proof is in the context of \name\. After the proof obligation has been discharged, the locale hierarchy is changed as if \name\ imported \expr\ (hence the name \<^theory_text>\sublocale\). When the context of \name\ is subsequently entered, traversing the locale hierarchy will involve the locale instances of \expr\, and their declarations will be added to the context. This makes \<^theory_text>\sublocale\ dynamic: extensions of a locale that is instantiated in \expr\ may take place after the \<^theory_text>\sublocale\ declaration and still become available in the context. Circular \<^theory_text>\sublocale\ declarations are allowed as long as they do not lead to infinite chains. If interpretations of \name\ exist in the current global theory, the command adds interpretations for \expr\ as well, with the same qualifier, although only for fragments of \expr\ that are not interpreted in the theory already. Rewrites clauses in the expression or rewrite definitions \defs\ can help break infinite chains induced by circular \<^theory_text>\sublocale\ declarations. In a named context block the \<^theory_text>\sublocale\ command may also be used, but the locale argument must be omitted. The command then refers to the locale (or class) target of the context block. \<^descr> \<^theory_text>\print_interps name\ lists all interpretations of locale \name\ in the current theory or proof context, including those due to a combination of an \<^theory_text>\interpretation\ or \<^theory_text>\interpret\ and one or several \<^theory_text>\sublocale\ declarations. \begin{warn} If a global theory inherits declarations (body elements) for a locale from one parent and an interpretation of that locale from another parent, the interpretation will not be applied to the declarations. 
\end{warn} \begin{warn} Since attributes are applied to interpreted theorems, interpretation may modify the context of common proof tools, e.g.\ the Simplifier or Classical Reasoner. As the behaviour of such tools is \<^emph>\not\ stable under interpretation morphisms, manual declarations might have to be added to the target context of the interpretation to revert such declarations. \end{warn} \begin{warn} An interpretation in a local theory or proof context may subsume previous interpretations. This happens if the same specification fragment is interpreted twice and the instantiation of the second interpretation is more general than the interpretation of the first. The locale package does not attempt to remove subsumed interpretations. \end{warn} \begin{warn} While \<^theory_text>\interpretation (in c) \\ is admissible, it is not useful since its result is discarded immediately. \end{warn} \ section \Classes \label{sec:class}\ text \ \begin{matharray}{rcl} @{command_def "class"} & : & \theory \ local_theory\ \\ @{command_def "instantiation"} & : & \theory \ local_theory\ \\ @{command_def "instance"} & : & \local_theory \ local_theory\ \\ @{command "instance"} & : & \theory \ proof(prove)\ \\ @{command_def "subclass"} & : & \local_theory \ local_theory\ \\ @{command_def "print_classes"}\\<^sup>*\ & : & \context \\ \\ @{command_def "class_deps"}\\<^sup>*\ & : & \context \\ \\ @{method_def intro_classes} & : & \method\ \\ \end{matharray} A class is a particular locale with \<^emph>\exactly one\ type variable \\\. Beyond the underlying locale, a corresponding type class is established which is interpreted logically as axiomatic type class @{cite "Wenzel:1997:TPHOL"} whose logical content are the assumptions of the locale. Thus, classes provide the full generality of locales combined with the commodity of type classes (notably type-inference). See @{cite "isabelle-classes"} for a short tutorial. \<^rail>\ @@{command class} class_spec @'begin'? ; class_spec: @{syntax name} '=' ((@{syntax name} '+' (@{syntax context_elem}+)) | @{syntax name} | (@{syntax context_elem}+)) ; @@{command instantiation} (@{syntax name} + @'and') '::' @{syntax arity} @'begin' ; @@{command instance} (() | (@{syntax name} + @'and') '::' @{syntax arity} | @{syntax name} ('<' | '\') @{syntax name} ) ; @@{command subclass} @{syntax name} ; @@{command class_deps} (class_bounds class_bounds?)? ; class_bounds: @{syntax sort} | '(' (@{syntax sort} + @'|') ')' \ \<^descr> \<^theory_text>\class c = superclasses + body\ defines a new class \c\, inheriting from \superclasses\. This introduces a locale \c\ with import of all locales \superclasses\. Any @{element "fixes"} in \body\ are lifted to the global theory level (\<^emph>\class operations\ \f\<^sub>1, \, f\<^sub>n\ of class \c\), mapping the local type parameter \\\ to a schematic type variable \?\ :: c\. Likewise, @{element "assumes"} in \body\ are also lifted, mapping each local parameter \f :: \[\]\ to its corresponding global constant \f :: \[?\ :: c]\. The corresponding introduction rule is provided as \c_class_axioms.intro\. This rule should be rarely needed directly --- the @{method intro_classes} method takes care of the details of class membership proofs. \<^descr> \<^theory_text>\instantiation t :: (s\<^sub>1, \, s\<^sub>n)s begin\ opens a target (cf.\ \secref{sec:target}) which allows to specify class operations \f\<^sub>1, \, f\<^sub>n\ corresponding to sort \s\ at the particular type instance \(\\<^sub>1 :: s\<^sub>1, \, \\<^sub>n :: s\<^sub>n) t\. 
A plain \<^theory_text>\instance\ command in the target body poses a goal stating these type arities. The target is concluded by an @{command_ref (local) "end"} command. Note that a list of simultaneous type constructors may be given; this corresponds nicely to mutually recursive type definitions, e.g.\ in Isabelle/HOL. \<^descr> \<^theory_text>\instance\ in an instantiation target body sets up a goal stating the type arities claimed at the opening \<^theory_text>\instantiation\. The proof would usually proceed by @{method intro_classes}, and then establish the characteristic theorems of the type classes involved. After finishing the proof, the background theory will be augmented by the proven type arities. On the theory level, \<^theory_text>\instance t :: (s\<^sub>1, \, s\<^sub>n)s\ provides a convenient way to instantiate a type class with no need to specify operations: one can continue with the instantiation proof immediately. \<^descr> \<^theory_text>\subclass c\ in a class context for class \d\ sets up a goal stating that class \c\ is logically contained in class \d\. After finishing the proof, class \d\ is proven to be a subclass of \c\ and the locale \c\ is interpreted into \d\ simultaneously. A weakened form of this is available through a further variant of @{command instance}: \<^theory_text>\instance c\<^sub>1 \ c\<^sub>2\ opens a proof that class \c\<^sub>2\ implies \c\<^sub>1\ without reference to the underlying locales; this is useful if the properties to prove the logical connection are not sufficient on the locale level but only on the theory level. \<^descr> \<^theory_text>\print_classes\ prints all classes in the current theory. \<^descr> \<^theory_text>\class_deps\ visualizes classes and their subclass relations as a directed acyclic graph. By default, all classes from the current theory context are shown. This may be restricted by optional bounds as follows: \<^theory_text>\class_deps upper\ or \<^theory_text>\class_deps upper lower\. A class is visualized iff it is a subclass of some sort from \upper\ and a superclass of some sort from \lower\. \<^descr> @{method intro_classes} repeatedly expands all class introduction rules of this theory. Note that this method usually need not be named explicitly, as it is already included in the default proof step (e.g.\ of \<^theory_text>\proof\). In particular, instantiation of trivial (syntactic) classes may be performed by a single ``\<^theory_text>\..\'' proof step. \ subsection \The class target\ text \ %FIXME check A named context may refer to a locale (cf.\ \secref{sec:target}). If this locale is also a class \c\, apart from the common locale target behaviour, the following happens. \<^item> Local constant declarations \g[\]\ referring to the local type parameter \\\ and local parameters \f[\]\ are accompanied by theory-level constants \g[?\ :: c]\ referring to theory-level class operations \f[?\ :: c]\. \<^item> Local theorem bindings are lifted as are assumptions. \<^item> Local syntax refers to local operations \g[\]\ and global operations \g[?\ :: c]\ uniformly. Type inference resolves ambiguities. In rare cases, manual type annotations are needed. \ subsection \Co-regularity of type classes and arities\ text \ The class relation together with the collection of type-constructor arities must obey the principle of \<^emph>\co-regularity\ as defined below. \<^medskip> For the subsequent formulation of co-regularity we assume that the class relation is closed by transitivity and reflexivity.
Moreover the collection of arities \t :: (\<^vec>s)c\ is completed such that \t :: (\<^vec>s)c\ and \c \ c'\ implies \t :: (\<^vec>s)c'\ for all such declarations. Treating sorts as finite sets of classes (meaning the intersection), the class relation \c\<^sub>1 \ c\<^sub>2\ is extended to sorts as follows: \[ \s\<^sub>1 \ s\<^sub>2 \ \c\<^sub>2 \ s\<^sub>2. \c\<^sub>1 \ s\<^sub>1. c\<^sub>1 \ c\<^sub>2\ \] This relation on sorts is further extended to tuples of sorts (of the same length) in the component-wise way. \<^medskip> Co-regularity of the class relation together with the arities relation means: \[ \t :: (\<^vec>s\<^sub>1)c\<^sub>1 \ t :: (\<^vec>s\<^sub>2)c\<^sub>2 \ c\<^sub>1 \ c\<^sub>2 \ \<^vec>s\<^sub>1 \ \<^vec>s\<^sub>2\ \] for all such arities. In other words, whenever the result classes of some type-constructor arities are related, then the argument sorts need to be related in the same way. \<^medskip> Co-regularity is a very fundamental property of the order-sorted algebra of - types. For example, it entails principle types and most general unifiers, + types. For example, it entails principal types and most general unifiers, e.g.\ see @{cite "nipkow-prehofer"}. \ section \Overloaded constant definitions \label{sec:overloading}\ text \ Definitions essentially express abbreviations within the logic. The simplest form of a definition is \c :: \ \ t\, where \c\ is a new constant and \t\ is a closed term that does not mention \c\. Moreover, so-called \<^emph>\hidden polymorphism\ is excluded: all type variables in \t\ need to occur in its type \\\. \<^emph>\Overloading\ means that a constant being declared as \c :: \ decl\ may be defined separately on type instances \c :: (\\<^sub>1, \, \\<^sub>n)\ decl\ for each type constructor \\\. At most occasions overloading will be used in a Haskell-like fashion together with type classes by means of \<^theory_text>\instantiation\ (see \secref{sec:class}). Sometimes low-level overloading is desirable; this is supported by \<^theory_text>\consts\ and \<^theory_text>\overloading\ explained below. The right-hand side of overloaded definitions may mention overloaded constants recursively at type instances corresponding to the immediate argument types \\\<^sub>1, \, \\<^sub>n\. Incomplete specification patterns impose global constraints on all occurrences. E.g.\ \d :: \ \ \\ on the left-hand side means that all corresponding occurrences on some right-hand side need to be an instance of this, and general \d :: \ \ \\ will be disallowed. Full details are given by Kun\v{c}ar @{cite "Kuncar:2015"}. \<^medskip> The \<^theory_text>\consts\ command and the \<^theory_text>\overloading\ target provide a convenient interface for end-users. Regular specification elements such as @{command definition}, @{command inductive}, @{command function} may be used in the body. It is also possible to use \<^theory_text>\consts c :: \\ with later \<^theory_text>\overloading c \ c :: \\ to keep the declaration and definition of a constant separate. \begin{matharray}{rcl} @{command_def "consts"} & : & \theory \ theory\ \\ @{command_def "overloading"} & : & \theory \ local_theory\ \\ \end{matharray} \<^rail>\ @@{command consts} ((@{syntax name} '::' @{syntax type} @{syntax mixfix}?) +) ; @@{command overloading} ( spec + ) @'begin' ; spec: @{syntax name} ( '\' | '==' ) @{syntax term} ( '(' @'unchecked' ')' )? \ \<^descr> \<^theory_text>\consts c :: \\ declares constant \c\ to have any instance of type scheme \\\. 
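  For example, \<^theory_text>\consts app :: "('a => 'b) => 'a => 'b"\ merely declares a polymorphic constant without giving it any meaning; the name \app\ is an ad-hoc choice for illustration. Definitions for particular type instances may be supplied later, e.g.\ within an \<^theory_text>\overloading\ target as described below.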
The optional mixfix annotations may attach concrete syntax to the constants declared. \<^descr> \<^theory_text>\overloading x\<^sub>1 \ c\<^sub>1 :: \\<^sub>1 \ x\<^sub>n \ c\<^sub>n :: \\<^sub>n begin \ end\ defines a theory target (cf.\ \secref{sec:target}) which allows to specify already declared constants via definitions in the body. These are identified by an explicitly given mapping from variable names \x\<^sub>i\ to constants \c\<^sub>i\ at particular type instances. The definitions themselves are established using common specification tools, using the names \x\<^sub>i\ as reference to the corresponding constants. Option \<^theory_text>\(unchecked)\ disables global dependency checks for the corresponding definition, which is occasionally useful for exotic overloading; this is a form of axiomatic specification. It is at the discretion of the user to avoid malformed theory specifications! \ subsubsection \Example\ consts Length :: "'a \ nat" overloading Length\<^sub>0 \ "Length :: unit \ nat" Length\<^sub>1 \ "Length :: 'a \ unit \ nat" Length\<^sub>2 \ "Length :: 'a \ 'b \ unit \ nat" Length\<^sub>3 \ "Length :: 'a \ 'b \ 'c \ unit \ nat" begin fun Length\<^sub>0 :: "unit \ nat" where "Length\<^sub>0 () = 0" fun Length\<^sub>1 :: "'a \ unit \ nat" where "Length\<^sub>1 (a, ()) = 1" fun Length\<^sub>2 :: "'a \ 'b \ unit \ nat" where "Length\<^sub>2 (a, b, ()) = 2" fun Length\<^sub>3 :: "'a \ 'b \ 'c \ unit \ nat" where "Length\<^sub>3 (a, b, c, ()) = 3" end lemma "Length (a, b, c, ()) = 3" by simp lemma "Length ((a, b), (c, d), ()) = 2" by simp lemma "Length ((a, b, c, d, e), ()) = 1" by simp section \Incorporating ML code \label{sec:ML}\ text \ \begin{matharray}{rcl} @{command_def "SML_file"} & : & \local_theory \ local_theory\ \\ @{command_def "SML_file_debug"} & : & \local_theory \ local_theory\ \\ @{command_def "SML_file_no_debug"} & : & \local_theory \ local_theory\ \\ @{command_def "ML_file"} & : & \local_theory \ local_theory\ \\ @{command_def "ML_file_debug"} & : & \local_theory \ local_theory\ \\ @{command_def "ML_file_no_debug"} & : & \local_theory \ local_theory\ \\ @{command_def "ML"} & : & \local_theory \ local_theory\ \\ @{command_def "ML_export"} & : & \local_theory \ local_theory\ \\ @{command_def "ML_prf"} & : & \proof \ proof\ \\ @{command_def "ML_val"} & : & \any \\ \\ @{command_def "ML_command"} & : & \any \\ \\ @{command_def "setup"} & : & \theory \ theory\ \\ @{command_def "local_setup"} & : & \local_theory \ local_theory\ \\ @{command_def "attribute_setup"} & : & \local_theory \ local_theory\ \\ \end{matharray} \begin{tabular}{rcll} @{attribute_def ML_print_depth} & : & \attribute\ & default 10 \\ @{attribute_def ML_source_trace} & : & \attribute\ & default \false\ \\ @{attribute_def ML_debugger} & : & \attribute\ & default \false\ \\ @{attribute_def ML_exception_trace} & : & \attribute\ & default \false\ \\ @{attribute_def ML_exception_debugger} & : & \attribute\ & default \false\ \\ @{attribute_def ML_environment} & : & \attribute\ & default \Isabelle\ \\ \end{tabular} \<^rail>\ (@@{command SML_file} | @@{command SML_file_debug} | @@{command SML_file_no_debug} | @@{command ML_file} | @@{command ML_file_debug} | @@{command ML_file_no_debug}) @{syntax name} ';'? ; (@@{command ML} | @@{command ML_export} | @@{command ML_prf} | @@{command ML_val} | @@{command ML_command} | @@{command setup} | @@{command local_setup}) @{syntax text} ; @@{command attribute_setup} @{syntax name} '=' @{syntax text} @{syntax text}? 
\ \<^descr> \<^theory_text>\SML_file name\ reads and evaluates the given Standard ML file. Top-level SML bindings are stored within the (global or local) theory context; the initial environment is restricted to the Standard ML implementation of Poly/ML, without the many add-ons of Isabelle/ML. Multiple \<^theory_text>\SML_file\ commands may be used to build larger Standard ML projects, independently of the regular Isabelle/ML environment. \<^descr> \<^theory_text>\ML_file name\ reads and evaluates the given ML file. The current theory context is passed down to the ML toplevel and may be modified, using \<^ML>\Context.>>\ or derived ML commands. Top-level ML bindings are stored within the (global or local) theory context. \<^descr> \<^theory_text>\SML_file_debug\, \<^theory_text>\SML_file_no_debug\, \<^theory_text>\ML_file_debug\, and \<^theory_text>\ML_file_no_debug\ change the @{attribute ML_debugger} option locally while the given file is compiled. \<^descr> \<^theory_text>\ML\ is similar to \<^theory_text>\ML_file\, but evaluates directly the given \text\. Top-level ML bindings are stored within the (global or local) theory context. \<^descr> \<^theory_text>\ML_export\ is similar to \<^theory_text>\ML\, but the resulting toplevel bindings are exported to the global bootstrap environment of the ML process --- it has a lasting effect that cannot be retracted. This allows ML evaluation without a formal theory context, e.g. for command-line tools via @{tool process} @{cite "isabelle-system"}. \<^descr> \<^theory_text>\ML_prf\ is analogous to \<^theory_text>\ML\ but works within a proof context. Top-level ML bindings are stored within the proof context in a purely sequential fashion, disregarding the nested proof structure. ML bindings introduced by \<^theory_text>\ML_prf\ are discarded at the end of the proof. \<^descr> \<^theory_text>\ML_val\ and \<^theory_text>\ML_command\ are diagnostic versions of \<^theory_text>\ML\, which means that the context may not be updated. \<^theory_text>\ML_val\ echos the bindings produced at the ML toplevel, but \<^theory_text>\ML_command\ is silent. \<^descr> \<^theory_text>\setup "text"\ changes the current theory context by applying \text\, which refers to an ML expression of type \<^ML_type>\theory -> theory\. This enables to initialize any object-logic specific tools and packages written in ML, for example. \<^descr> \<^theory_text>\local_setup\ is similar to \<^theory_text>\setup\ for a local theory context, and an ML expression of type \<^ML_type>\local_theory -> local_theory\. This allows to invoke local theory specification packages without going through concrete outer syntax, for example. \<^descr> \<^theory_text>\attribute_setup name = "text" description\ defines an attribute in the current context. The given \text\ has to be an ML expression of type \<^ML_type>\attribute context_parser\, cf.\ basic parsers defined in structure \<^ML_structure>\Args\ and \<^ML_structure>\Attrib\. In principle, attributes can operate both on a given theorem and the implicit context, although in practice only one is modified and the other serves as parameter. 
Here are examples for these two cases: \ (*<*)experiment begin(*>*) attribute_setup my_rule = \Attrib.thms >> (fn ths => Thm.rule_attribute ths (fn context: Context.generic => fn th: thm => let val th' = th OF ths in th' end))\ attribute_setup my_declaration = \Attrib.thms >> (fn ths => Thm.declaration_attribute (fn th: thm => fn context: Context.generic => let val context' = context in context' end))\ (*<*)end(*>*) text \ \<^descr> @{attribute ML_print_depth} controls the printing depth of the ML toplevel pretty printer. Typically the limit should be less than 10. Bigger values such as 100--1000 are occasionally useful for debugging. \<^descr> @{attribute ML_source_trace} indicates whether the source text that is given to the ML compiler should be output: it shows the raw Standard ML after expansion of Isabelle/ML antiquotations. \<^descr> @{attribute ML_debugger} controls compilation of sources with or without debugging information. The global system option @{system_option_ref ML_debugger} does the same when building a session image. It is also possible use commands like \<^theory_text>\ML_file_debug\ etc. The ML debugger is explained further in @{cite "isabelle-jedit"}. \<^descr> @{attribute ML_exception_trace} indicates whether the ML run-time system should print a detailed stack trace on exceptions. The result is dependent on various ML compiler optimizations. The boundary for the exception trace is the current Isar command transactions: it is occasionally better to insert the combinator \<^ML>\Runtime.exn_trace\ into ML code for debugging @{cite "isabelle-implementation"}, closer to the point where it actually happens. \<^descr> @{attribute ML_exception_debugger} controls detailed exception trace via the Poly/ML debugger, at the cost of extra compile-time and run-time overhead. Relevant ML modules need to be compiled beforehand with debugging enabled, see @{attribute ML_debugger} above. \<^descr> @{attribute ML_environment} determines the named ML environment for toplevel declarations, e.g.\ in command \<^theory_text>\ML\ or \<^theory_text>\ML_file\. The following ML environments are predefined in Isabelle/Pure: \<^item> \Isabelle\ for Isabelle/ML. It contains all modules of Isabelle/Pure and further add-ons, e.g. material from Isabelle/HOL. \<^item> \SML\ for official Standard ML. It contains only the initial basis according to \<^url>\http://sml-family.org/Basis/overview.html\. The Isabelle/ML function \<^ML>\ML_Env.setup\ defines a new ML environment. This is useful to incorporate big SML projects in an isolated name space, possibly with variations on ML syntax; the existing setup of \<^ML>\ML_Env.SML_operations\ follows the official standard. It is also possible to move toplevel bindings between ML environments, using a notation with ``\>\'' as separator. For example: \ (*<*)experiment begin(*>*) declare [[ML_environment = "Isabelle>SML"]] ML \val println = writeln\ declare [[ML_environment = "SML"]] ML \println "test"\ declare [[ML_environment = "Isabelle"]] ML \ML \println\ (*bad*) handle ERROR msg => warning msg\ (*<*)end(*>*) section \Generated files and exported files\ text \ Write access to the physical file-system is incompatible with the stateless model of processing Isabelle documents. To avoid bad effects, the following concepts for abstract file-management are provided by Isabelle: \<^descr>[Generated files] are stored within the theory context in Isabelle/ML. This allows to operate on the content in Isabelle/ML, e.g. via the command @{command compile_generated_files}. 
\<^descr>[Exported files] are stored within the session database in Isabelle/Scala. This allows to deliver artefacts to external tools, see also @{cite "isabelle-system"} for session \<^verbatim>\ROOT\ declaration \<^theory_text>\export_files\, and @{tool build} option \<^verbatim>\-e\. A notable example is the command @{command_ref export_code} (\chref{ch:export-code}): it uses both concepts simultaneously. File names are hierarchically structured, using a slash as separator. The (long) theory name is used as a prefix: the resulting name needs to be globally unique. \begin{matharray}{rcll} @{command_def "generate_file"} & : & \local_theory \ local_theory\ \\ @{command_def "export_generated_files"} & : & \context \\ \\ @{command_def "compile_generated_files"} & : & \context \\ \\ @{command_def "external_file"} & : & \any \ any\ \\ \end{matharray} \<^rail>\ @@{command generate_file} path '=' content ; path: @{syntax embedded} ; content: @{syntax embedded} ; @@{command export_generated_files} (files_in_theory + @'and') ; files_in_theory: (@'_' | (path+)) (('(' @'in' @{syntax name} ')')?) ; @@{command compile_generated_files} (files_in_theory + @'and') \ (@'external_files' (external_files + @'and'))? \ (@'export_files' (export_files + @'and'))? \ (@'export_prefix' path)? ; external_files: (path+) (('(' @'in' path ')')?) ; export_files: (path+) (executable?) ; executable: '(' ('exe' | 'executable') ')' ; @@{command external_file} @{syntax name} ';'? \ \<^descr> \<^theory_text>\generate_file path = content\ augments the table of generated files within the current theory by a new entry: duplicates are not allowed. The name extension determines a pre-existent file-type; the \content\ is a string that is preprocessed according to rules of this file-type. For example, Isabelle/Pure supports \<^verbatim>\.hs\ as file-type for Haskell: embedded cartouches are evaluated as Isabelle/ML expressions of type \<^ML_type>\string\, the result is inlined in Haskell string syntax. \<^descr> \<^theory_text>\export_generated_files paths (in thy)\ retrieves named generated files from the given theory (that needs be reachable via imports of the current one). By default, the current theory node is used. Using ``\<^verbatim>\_\'' (underscore) instead of explicit path names refers to \emph{all} files of a theory node. The overall list of files is prefixed with the respective (long) theory name and exported to the session database. In Isabelle/jEdit the result can be browsed via the virtual file-system with prefix ``\<^verbatim>\isabelle-export:\'' (using the regular file-browser). \<^descr> \<^theory_text>\compile_generated_files paths (in thy) where compile_body\ retrieves named generated files as for \<^theory_text>\export_generated_files\ and writes them into a temporary directory, such that the \compile_body\ may operate on them as an ML function of type \<^ML_type>\Path.T -> unit\. This may create further files, e.g.\ executables produced by a compiler that is invoked as external process (e.g.\ via \<^ML>\Isabelle_System.bash\), or any other files. The option ``\<^theory_text>\external_files paths (in base_dir)\'' copies files from the physical file-system into the temporary directory, \emph{before} invoking \compile_body\. The \base_dir\ prefix is removed from each of the \paths\, but the remaining sub-directory structure is reconstructed in the target directory. 
The option ``\<^theory_text>\export_files paths\'' exports the specified files from the temporary directory to the session database, \emph{after} invoking \compile_body\. Entries may be decorated with ``\<^theory_text>\(exe)\'' to say that it is a platform-specific executable program: the executable file-attribute will be set, and on Windows the \<^verbatim>\.exe\ file-extension will be included; ``\<^theory_text>\(executable)\'' only refers to the file-attribute, without special treatment of the \<^verbatim>\.exe\ extension. The option ``\<^theory_text>\export_prefix path\'' specifies an extra path prefix for all exports of \<^theory_text>\export_files\ above. \<^descr> \<^theory_text>\external_file name\ declares the formal dependency on the given file name, such that the Isabelle build process knows about it (see also @{cite "isabelle-system"}). This is required for any files mentioned in \<^theory_text>\compile_generated_files / external_files\ above, in order to document source dependencies properly. It is also possible to use \<^theory_text>\external_file\ alone, e.g.\ when other Isabelle/ML tools use \<^ML>\File.read\, without specific management of content by the Prover IDE. \ section \Primitive specification elements\ subsection \Sorts\ text \ \begin{matharray}{rcll} @{command_def "default_sort"} & : & \local_theory \ local_theory\ \end{matharray} \<^rail>\ @@{command default_sort} @{syntax sort} \ \<^descr> \<^theory_text>\default_sort s\ makes sort \s\ the new default sort for any type variable that is given explicitly in the text, but lacks a sort constraint (wrt.\ the current context). Type variables generated by type inference are not affected. Usually the default sort is only changed when defining a new object-logic. For example, the default sort in Isabelle/HOL is \<^class>\type\, the class of all HOL types. When merging theories, the default sorts of the parents are logically intersected, i.e.\ the representations as lists of classes are joined. \ subsection \Types \label{sec:types-pure}\ text \ \begin{matharray}{rcll} @{command_def "type_synonym"} & : & \local_theory \ local_theory\ \\ @{command_def "typedecl"} & : & \local_theory \ local_theory\ \\ \end{matharray} \<^rail>\ @@{command type_synonym} (@{syntax typespec} '=' @{syntax type} @{syntax mixfix}?) ; @@{command typedecl} @{syntax typespec} @{syntax mixfix}? \ \<^descr> \<^theory_text>\type_synonym (\\<^sub>1, \, \\<^sub>n) t = \\ introduces a \<^emph>\type synonym\ \(\\<^sub>1, \, \\<^sub>n) t\ for the existing type \\\. Unlike the semantic type definitions in Isabelle/HOL, type synonyms are merely syntactic abbreviations without any logical significance. Internally, type synonyms are fully expanded. \<^descr> \<^theory_text>\typedecl (\\<^sub>1, \, \\<^sub>n) t\ declares a new type constructor \t\. If the object-logic defines a base sort \s\, then the constructor is declared to operate on that, via the axiomatic type-class instance \t :: (s, \, s)s\. \begin{warn} If you introduce a new type axiomatically, i.e.\ via @{command_ref typedecl} and @{command_ref axiomatization} (\secref{sec:axiomatizations}), the minimum requirement is that it has a non-empty model, to avoid immediate collapse of the logical environment. Moreover, one needs to demonstrate that the interpretation of such free-form axiomatizations can coexist with other axiomatization schemes for types, notably @{command_def typedef} in Isabelle/HOL (\secref{sec:hol-typedef}), or any other extension that people might have introduced elsewhere. 
\end{warn} \ section \Naming existing theorems \label{sec:theorems}\ text \ \begin{matharray}{rcll} @{command_def "lemmas"} & : & \local_theory \ local_theory\ \\ @{command_def "named_theorems"} & : & \local_theory \ local_theory\ \\ \end{matharray} \<^rail>\ @@{command lemmas} (@{syntax thmdef}? @{syntax thms} + @'and') @{syntax for_fixes} ; @@{command named_theorems} (@{syntax name} @{syntax text}? + @'and') \ \<^descr> \<^theory_text>\lemmas a = b\<^sub>1 \ b\<^sub>n\~@{keyword_def "for"}~\x\<^sub>1 \ x\<^sub>m\ evaluates given facts (with attributes) in the current context, which may be augmented by local variables. Results are standardized before being stored, i.e.\ schematic variables are renamed to enforce index \0\ uniformly. \<^descr> \<^theory_text>\named_theorems name description\ declares a dynamic fact within the context. The same \name\ is used to define an attribute with the usual \add\/\del\ syntax (e.g.\ see \secref{sec:simp-rules}) to maintain the content incrementally, in canonical declaration order of the text structure. \ section \Oracles \label{sec:oracles}\ text \ \begin{matharray}{rcll} @{command_def "oracle"} & : & \theory \ theory\ & (axiomatic!) \\ @{command_def "thm_oracles"}\\<^sup>*\ & : & \context \\ \\ \end{matharray} Oracles allow Isabelle to take advantage of external reasoners such as arithmetic decision procedures, model checkers, fast tautology checkers or computer algebra systems. Invoked as an oracle, an external reasoner can create arbitrary Isabelle theorems. It is the responsibility of the user to ensure that the external reasoner is as trustworthy as the application requires. Another typical source of errors is the linkup between Isabelle and the external tool, not just its concrete implementation, but also the required translation between two different logical environments. Isabelle merely guarantees well-formedness of the propositions being asserted, and records within the internal derivation object how presumed theorems depend on unproven suppositions. This also includes implicit type-class reasoning via the order-sorted algebra of class relations and type arities (see also @{command_ref instantiation} and @{command_ref instance}). \<^rail>\ @@{command oracle} @{syntax name} '=' @{syntax text} ; @@{command thm_oracles} @{syntax thms} \ \<^descr> \<^theory_text>\oracle name = "text"\ turns the given ML expression \text\ of type \<^ML_text>\'a -> cterm\ into an ML function of type \<^ML_text>\'a -> thm\, which is bound to the global identifier \<^ML_text>\name\. This acts like an infinitary specification of axioms! Invoking the oracle only works within the scope of the resulting theory. See \<^file>\~~/src/HOL/ex/Iff_Oracle.thy\ for a worked example of defining a new primitive rule as oracle, and turning it into a proof method. \<^descr> \<^theory_text>\thm_oracles thms\ displays all oracles used in the internal derivation of the given theorems; this covers the full graph of transitive dependencies. 
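  \<^medskip>
  As a minimal sketch (not a model for realistic applications), the following
  declares an oracle that asserts any given proposition outright; the name
  \assert_oracle\ is an ad-hoc choice for this illustration:
\

(* an oracle that turns any proposition into a "theorem" without proof *)
oracle assert_oracle = \fn ct => ct\

(* diagnostic invocation: yields the theorem "x = x" via the oracle *)
ML_val \assert_oracle (Thm.cterm_of \<^context> \<^prop>\x = (x::nat)\)\

text \
  Theorems stemming from such an invocation carry the oracle dependency in
  their internal derivation; for named facts, @{command thm_oracles} reports
  this accordingly.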
\ section \Name spaces\ text \ \begin{matharray}{rcl} @{command_def "alias"} & : & \local_theory \ local_theory\ \\ @{command_def "type_alias"} & : & \local_theory \ local_theory\ \\ @{command_def "hide_class"} & : & \theory \ theory\ \\ @{command_def "hide_type"} & : & \theory \ theory\ \\ @{command_def "hide_const"} & : & \theory \ theory\ \\ @{command_def "hide_fact"} & : & \theory \ theory\ \\ \end{matharray} \<^rail>\ (@{command alias} | @{command type_alias}) @{syntax name} '=' @{syntax name} ; (@{command hide_class} | @{command hide_type} | @{command hide_const} | @{command hide_fact}) ('(' @'open' ')')? (@{syntax name} + ) \ Isabelle organizes any kind of name declarations (of types, constants, theorems etc.) by separate hierarchically structured name spaces. Normally the user does not have to control the behaviour of name spaces by hand, yet the following commands provide some way to do so. \<^descr> \<^theory_text>\alias\ and \<^theory_text>\type_alias\ introduce aliases for constants and type constructors, respectively. This allows adhoc changes to name-space accesses. \<^descr> \<^theory_text>\type_alias b = c\ introduces an alias for an existing type constructor. \<^descr> \<^theory_text>\hide_class names\ fully removes class declarations from a given name space; with the \(open)\ option, only the unqualified base name is hidden. Note that hiding name space accesses has no impact on logical declarations --- they remain valid internally. Entities that are no longer accessible to the user are printed with the special qualifier ``\??\'' prefixed to the full internal name. \<^descr> \<^theory_text>\hide_type\, \<^theory_text>\hide_const\, and \<^theory_text>\hide_fact\ are similar to \<^theory_text>\hide_class\, but hide types, constants, and facts, respectively. \ end