diff --git a/NEWS b/NEWS
--- a/NEWS
+++ b/NEWS
@@ -1,16368 +1,16422 @@
Isabelle NEWS -- history of user-relevant changes
=================================================

(Note: Isabelle/jEdit shows a tree-view of the NEWS file in Sidekick.)


New in this Isabelle version
----------------------------

*** General ***

* Timeouts for Isabelle/ML tools are subject to system option "timeout_scale" --- this was already used for the overall session build process before, and allows to adapt to slow machines. The underlying Timeout.apply in Isabelle/ML treats an original timeout specification 0 as no timeout; before it meant immediate timeout. Rare INCOMPATIBILITY in boundary cases.

* Remote provers from SystemOnTPTP (notably for Sledgehammer) are now managed via Isabelle/Scala instead of perl; the dependency on libwww-perl has been eliminated (notably on Linux). Rare INCOMPATIBILITY: HTTP proxy configuration now works via JVM properties https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/net/doc-files/net-properties.html

* More symbol definitions for the Z Notation (Isabelle fonts and LaTeX). See also the group "Z Notation" in the Symbols dockable of Isabelle/jEdit.


*** Isabelle/jEdit Prover IDE ***

* More robust 'proof' outline for method "induct": support nested cases.


*** Document preparation ***

+* Document antiquotations for ML text have been refined: "def" and "ref" variants support index entries, e.g. @{ML} (no entry) vs. @{ML_def} (bold entry) vs. @{ML_ref} (regular entry); @{ML_type} supports explicit type arguments for constructors (only relevant for index), e.g. @{ML_type \'a list\} vs. @{ML_type 'a \list\}; @{ML_op} has been renamed to @{ML_infix}. Minor INCOMPATIBILITY concerning name and syntax.
+
+* Option "document_logo" determines if an instance of the Isabelle logo should be created in the document output directory. The given string specifies the name of the logo variant, while "_" (underscore) refers to the unnamed variant. The output file name is always "isabelle_logo.pdf".
+
+* Option "document_preprocessor" specifies the name of an executable that is run within the document output directory, after preparing the document sources and before the actual build process. This allows to apply ad hoc patches, without requiring a separate "build" script.
+
+* Option "document_build" determines the document build engine, as defined in Isabelle/Scala (as system service). The following engines are provided by the Isabelle distribution:
+
+  - "lualatex" (default): use ISABELLE_LUALATEX for a standard LaTeX build with optional ISABELLE_BIBTEX and ISABELLE_MAKEINDEX
+
+  - "pdflatex": as above, but use ISABELLE_PDFLATEX (legacy mode for special LaTeX styles)
+
+  - "build": delegate to the executable "./build pdf"
+
+The presence of a "build" command within the document output directory explicitly requires document_build=build. Minor INCOMPATIBILITY, need to adjust session ROOT options.
+
+* The command-line tool "isabelle latex" has been discontinued, INCOMPATIBILITY for old document build scripts.
+
+  - Former "isabelle latex -o sty" has become obsolete: Isabelle .sty files are automatically generated within the document output directory.
+
+  - Former "isabelle latex -o pdf" should be replaced by "$ISABELLE_LUALATEX root" or "$ISABELLE_PDFLATEX root" (without quotes), according to the intended LaTeX engine.
+
+  - Former "isabelle latex -o bbl" should be replaced by "$ISABELLE_BIBTEX root" (without quotes).
+
+  - Former "isabelle latex -o idx" should be replaced by "$ISABELLE_MAKEINDEX root" (without quotes).
+
+* Option "document_bibliography" explicitly enables the use of bibtex; the default is to check the presence of root.bib, but the bibliography file could have a different name.
+

* Improved LaTeX typesetting of \...\ using \guilsinglleft ... \guilsinglright. INCOMPATIBILITY, need to use \usepackage[T1]{fontenc} (which is now also the default in "isabelle mkroot").

* Simplified typesetting of \...\ using \guillemotleft ... \guillemotright from \usepackage[T1]{fontenc} --- \usepackage{babel} is no longer required.


*** HOL ***

* Theory Multiset: dedicated predicate "multiset" is gone, use explicit expression instead. Minor INCOMPATIBILITY.

* Theory Multiset: consolidated abbreviations Mempty, Melem, not_Melem to empty_mset, member_mset, not_member_mset respectively. Minor INCOMPATIBILITY.

* Theory Multiset: consolidated operation and fact names:

    inf_subset_mset ~> inter_mset
    sup_subset_mset ~> union_mset
    multiset_inter_def ~> inter_mset_def
    sup_subset_mset_def ~> union_mset_def
    multiset_inter_count ~> count_inter_mset
    sup_subset_mset_count ~> count_union_mset

* Theory Multiset: syntax precedence for membership operations has been adjusted to match the corresponding precedences on sets. Rare INCOMPATIBILITY.

* HOL-Analysis/HOL-Probability: indexed products of discrete distributions, negative binomial distribution, Hoeffding's inequality, Chernoff bounds, Cauchy–Schwarz inequality for nn_integral, and some more small lemmas. Some theorems that were stated awkwardly before were corrected. Minor INCOMPATIBILITY.

* Theorems "antisym" and "eq_iff" in class "order" have been renamed to "order.antisym" and "order.eq_iff", to coexist locally with "antisym" and "eq_iff" from locale "ordering". INCOMPATIBILITY: significant potential for change can be avoided if interpretations of type class "order" are replaced or augmented by interpretations of locale "ordering".

* Theorem "swap_def" is now always qualified as "Fun.swap_def". Minor INCOMPATIBILITY; note that for most applications less elementary lemmas exist.

* Dedicated session HOL-Combinatorics. INCOMPATIBILITY: theories "Permutations", "List_Permutation" (formerly "Permutation"), "Stirling", "Multiset_Permutations", "Perm" have been moved there from session HOL-Library.

* Theory "Permutation" in HOL-Library has been renamed to the more specific "List_Permutation". Note that most notions from that theory are already present in theory "Permutations". INCOMPATIBILITY.

* Lemma "permutes_induct" has been given stronger hypotheses and named premises. INCOMPATIBILITY.

* Theory "Transposition" in HOL-Combinatorics provides elementary swap operation "transpose".

* Combinator "Fun.swap" resolved into a mere input abbreviation in separate theory "Transposition" in HOL-Combinatorics. INCOMPATIBILITY.

* Bit operations set_bit, unset_bit and flip_bit are now class operations. INCOMPATIBILITY.


*** ML ***

* ML antiquotations \<^try>\expr\ and \<^can>\expr\ operate directly on the given ML expression, in contrast to functions "try" and "can" that modify application of a function.

* ML antiquotations for conditional ML text:

    \<^if_linux>\...\
    \<^if_macos>\...\
    \<^if_windows>\...\
    \<^if_unix>\...\

* External bash processes are always managed by Isabelle/Scala, in contrast to Isabelle2021 where this was only done for macOS on Apple Silicon.
The main Isabelle/ML interface is Isabelle_System.bash_process with result type Process_Result.T (resembling class Process_Result in Scala); derived operations Isabelle_System.bash and Isabelle_System.bash_output provide similar functionality as before. Rare INCOMPATIBILITY due to subtle semantic differences:

  - Processes invoked from Isabelle/ML actually run in the context of the Java VM of Isabelle/Scala. The settings environment and current working directory are usually the same on both sides, but there can be subtle corner cases (e.g. unexpected uses of "cd" or "putenv" in ML).

  - Output via stdout and stderr is line-oriented: Unix vs. Windows line-endings are normalized towards Unix; presence or absence of a final newline is irrelevant. The original lines are available as Process_Result.out_lines/err_lines; the concatenated versions Process_Result.out/err *omit* a trailing newline (using Library.trim_line, which was occasionally seen in applications before, but is no longer necessary).

  - Output needs to be plain text encoded in UTF-8: Isabelle/Scala recodes it temporarily as UTF-16. This works for well-formed Unicode text, but not for arbitrary byte strings. In such cases, the bash script should write temporary files, managed by Isabelle/ML operations like Isabelle_System.with_tmp_file to create a file name and File.read to retrieve its content.

  - Just like any other Scala function invoked from ML, Isabelle_System.bash_process requires a proper PIDE session context. This could be a regular batch session (e.g. "isabelle build"), a PIDE editor session (e.g. "isabelle jedit"), or headless PIDE (e.g. "isabelle dump" or "isabelle server"). Note that old "isabelle console" or raw "isabelle process" don't have that.

New Process_Result.timing works as in Isabelle/Scala, based on direct measurements of the bash_process wrapper in C: elapsed time is always available, CPU time is only available on Linux and macOS, GC time is unavailable.

* Likewise, the following Isabelle/ML system operations are run in the context of Isabelle/Scala:

  - Isabelle_System.make_directory
  - Isabelle_System.copy_dir
  - Isabelle_System.copy_file
  - Isabelle_System.copy_base_file
  - Isabelle_System.rm_tree
  - Isabelle_System.download


*** System ***

* Command-line tool "isabelle version" supports repository archives (without full .hg directory). More options.

* Obsolete settings variable ISABELLE_PLATFORM32 has been discontinued. Note that only Windows supports old 32 bit executables, via settings variable ISABELLE_WINDOWS_PLATFORM32. Everything else should be ISABELLE_PLATFORM64 (generic Posix) or ISABELLE_WINDOWS_PLATFORM64 (native Windows) or ISABELLE_APPLE_PLATFORM64 (Apple Silicon).



New in Isabelle2021 (February 2021)
-----------------------------------

*** General ***

* On macOS, the IsabelleXYZ.app directory layout now follows the other platforms, without indirection via Contents/Resources/. INCOMPATIBILITY, use e.g. IsabelleXYZ.app/bin/isabelle instead of former IsabelleXYZ.app/Isabelle/bin/isabelle or IsabelleXYZ.app/Isabelle/Contents/Resources/IsabelleXYZ/bin/isabelle.

* HTML presentation uses rich markup produced by Isabelle/PIDE, resulting in more colors and links.

* HTML presentation includes auxiliary files (e.g. ML) for each theory.

* Proof method "subst" is confined to the original subgoal range: its included distinct_subgoals_tac no longer affects unrelated subgoals. Rare INCOMPATIBILITY.
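
For example (a minimal sketch; "add.commute" and "mult.commute" are standard lemmas from Main), each "subst" application below only affects the subgoal it operates on:

  lemma "a + b = b + (a::nat)" and "c * d = d * (c::nat)"
    apply (subst add.commute)   (* first subgoal only *)
    apply (rule refl)
    apply (subst mult.commute)
    apply (rule refl)
    done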

* Theory_Data extend operation is obsolete and needs to be the identity function; merge should be conservative and not reset to the empty value. Subtle INCOMPATIBILITY and change of semantics (due to Theory.join_theory from Isabelle2020). Special extend/merge behaviour at the beginning of a new theory can be achieved via Theory.at_begin.


*** Isabelle/jEdit Prover IDE ***

* Improved GUI look-and-feel: the portable and scalable "FlatLaf Light" is used by default on all platforms (appearance similar to IntelliJ IDEA).

* Improved markup for theory header imports: hyperlinks for theory files work without formal checking of content.

* The prover process can download auxiliary files (e.g. 'ML_file') for theories with remote URL. This requires the external "curl" program.

* Action "isabelle.goto-entity" (shortcut CS+d) jumps to the definition of the formal entity at the caret position.

* The visual feedback on caret entity focus is normally restricted to definitions within the visible text area. The keyboard modifier "CS" overrides this: then all defining and referencing positions are shown. See also option "jedit_focus_modifier".

* The jEdit status line includes widgets both for JVM and ML heap usage. Ongoing ML garbage collection is shown as "ML cleanup".

* The Monitor dockable provides buttons to request a full garbage collection and sharing of live data on the ML heap. It also includes information about the Java Runtime system.

* PIDE support for session ROOTS: markup for directories.

* Update to jedit-5.6.0, the latest release. This version works properly on macOS by default, without the special MacOSX plugin.

* Action "full-screen-mode" (shortcut F11 or S+F11) has been modified for better approximate window size on macOS and Linux/X11.

* Improved GUI support for macOS 11.1 Big Sur: native fullscreen mode, but non-native look-and-feel (FlatLaf).

* Hyperlinks to various file-formats (.pdf, .png, etc.) open an external viewer, instead of re-using the jEdit text editor.

* IDE support for Naproche-SAD: Proof Checking of Natural Mathematical Documents. See also $NAPROCHE_HOME/examples for files with .ftl or .ftl.tex extension. The corresponding Naproche-SAD server process can be disabled by setting the system option naproche_server=false and restarting the Isabelle application.


*** Document preparation ***

* Keyword 'document_theories' within ROOT specifies theories from other sessions that should be included in the generated document source directory. This does not affect the generated session.tex: \input{...} needs to be used separately.

* The standard LaTeX engine is now lualatex, according to settings variable ISABELLE_PDFLATEX. This is mostly upwards compatible with old pdflatex, but text encoding needs to conform strictly to utf8. Rare INCOMPATIBILITY.

* Discontinued obsolete DVI format and ISABELLE_LATEX settings variable: document output is always PDF.

* Antiquotation @{tool} refers to Isabelle command-line tools, with completion and formal reference to the source (external script or internal Scala function).

* Antiquotation @{bash_function} refers to GNU bash functions that are checked within the Isabelle settings environment.

* Antiquotations @{scala}, @{scala_object}, @{scala_type}, @{scala_method} refer to checked Isabelle/Scala entities.


*** Pure ***

* Session Pure-Examples contains notable examples for Isabelle/Pure (former entries of HOL-Isar_Examples).

* Named contexts (locale and class specifications, locale and class context blocks) allow bundle mixins for the surface context. This allows syntax notations to be organized within bundles conveniently. See theory "HOL-ex.Specifications_with_bundle_mixins" for examples and the isar-ref manual for syntax descriptions.

* Definitions in locales produce a rule which can be added as a congruence rule to protect foundational terms during simplification.

* Consolidated terminology and function signatures for nested targets:

  - Local_Theory.begin_nested replaces Local_Theory.open_target
  - Local_Theory.end_nested replaces Local_Theory.close_target
  - Combination of Local_Theory.begin_nested and Local_Theory.end_nested(_result) replaces Local_Theory.subtarget(_result)

INCOMPATIBILITY.

* Local_Theory.init replaces Generic_Target.init. Minor INCOMPATIBILITY.


*** HOL ***

* Session HOL-Examples contains notable examples for Isabelle/HOL (former entries of HOL-Isar_Examples, HOL-ex etc.).

* An updated version of the veriT solver is now included as Isabelle component. It can be used in the "smt" proof method via "smt (verit)" or via "declare [[smt_solver = verit]]" in the context; see also session HOL-Word-SMT_Examples.

* Zipperposition 2.0 is now included as Isabelle component for experimentation, e.g. in "sledgehammer [prover = zipperposition]".

* Sledgehammer:
  - support veriT in proof preplay
  - take advantage of more cores in proof preplay

* Updated the Metis prover underlying the "metis" proof method to version 2.4 (release 20180810). The new version fixes one soundness defect and two incompleteness defects. Very slight INCOMPATIBILITY.

* Nitpick/Kodkod may be invoked directly within the running Isabelle/Scala session (instead of an external Java process): this improves reactivity and saves resources. This experimental feature is guarded by system option "kodkod_scala" (default: true in PIDE interaction, false in batch builds).

* Simproc "defined_all" and rewrite rule "subst_all" perform more aggressive substitution with variables from assumptions. INCOMPATIBILITY, consider repairing proofs locally like this:

  supply subst_all [simp del] [[simproc del: defined_all]]

* Simproc "datatype_no_proper_subterm" rewrites equalities "lhs = rhs" on datatypes to "False" if either side is a proper subexpression of the other (for any datatype with a reasonable size function).

* Syntax for state monad combinators fcomp and scomp is organized in bundle state_combinator_syntax. Minor INCOMPATIBILITY.

* Syntax for reflected term syntax is organized in bundle term_syntax, discontinuing previous locale term_syntax. Minor INCOMPATIBILITY.

* New constant "power_int" for exponentiation with integer exponent, written as "x powi n".

* Added the "at most 1" quantifier, Uniq.

* For the natural numbers, "Sup {} = 0".

* New constant semiring_char gives the characteristic of any type of class semiring_1, with the convenient notation CHAR('a). For example, CHAR(nat) = CHAR(int) = CHAR(real) = 0, CHAR(17) = 17.

* HOL-Computational_Algebra.Polynomial: Definition and basic properties of algebraic integers.

* Library theory "Bit_Operations" with generic bit operations.

* Library theory "Signed_Division" provides operations for signed division, instantiated for type int.

* Theory "Multiset": removed misleading notation \# for sum_mset; replaced with \\<^sub>#. Analogous notation for prod_mset also exists now.

* New theory "HOL-Library.Word" takes over material from former session "HOL-Word". INCOMPATIBILITY: need to adjust imports.
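
For example, a theory that formerly relied on session HOL-Word may now import the library theory directly (a minimal sketch with a hypothetical theory name):

  (* hypothetical theory name; the import is the point *)
  theory Word_Usage
    imports "HOL-Library.Word"
  begin

  type_synonym word32 = "32 word"

  end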
* Theory "HOL-Library.Word": Type word is restricted to bit strings consisting of at least one bit. INCOMPATIBILITY. * Theory "HOL-Library.Word": Bit operations NOT, AND, OR, XOR are based on generic algebraic bit operations from theory "HOL-Library.Bit_Operations". INCOMPATIBILITY. * Theory "HOL-Library.Word": Most operations on type word are set up for transfer and lifting. INCOMPATIBILITY. * Theory "HOL-Library.Word": Generic type conversions. INCOMPATIBILITY, sometimes additional rewrite rules must be added to applications to get a confluent system again. * Theory "HOL-Library.Word": Uniform polymorphic "mask" operation for both types int and word. INCOMPATIBILITY. * Theory "HOL-Library.Word": Syntax for signed compare operators has been consolidated with syntax of regular compare operators. Minor INCOMPATIBILITY. * Former session "HOL-Word": Various operations dealing with bit values represented as reversed lists of bools are separated into theory Reversed_Bit_Lists in session Word_Lib in the AFP. INCOMPATIBILITY. * Former session "HOL-Word": Theory "Word_Bitwise" has been moved to AFP entry Word_Lib as theory "Bitwise". INCOMPATIBILITY. * Former session "HOL-Word": Compound operation "bin_split" simplifies by default into its components "drop_bit" and "take_bit". INCOMPATIBILITY. * Former session "HOL-Word": Operations lsb, msb and set_bit are separated into theories Least_significant_bit, Most_significant_bit and Generic_set_bit respectively in session Word_Lib in the AFP. INCOMPATIBILITY. * Former session "HOL-Word": Ancient int numeral representation has been factored out in separate theory "Ancient_Numeral" in session Word_Lib in the AFP. INCOMPATIBILITY. * Former session "HOL-Word": Operations "bin_last", "bin_rest", "bin_nth", "bintrunc", "sbintrunc", "norm_sint", "bin_cat" and "max_word" are now mere input abbreviations. Minor INCOMPATIBILITY. * Former session "HOL-Word": Misc ancient material has been factored out into separate theories and moved to session Word_Lib in the AFP. See theory "Guide" there for further information. INCOMPATIBILITY. * Session HOL-TPTP: The "tptp_isabelle" and "tptp_sledgehammer" commands are in working order again, as opposed to outputting "GaveUp" on nearly all problems. * Session "HOL-Hoare": concrete syntax only for Hoare triples, not abstract language constructors. * Session "HOL-Hoare": now provides a total correctness logic as well. *** FOL *** * Added the "at most 1" quantifier, Uniq, as in HOL. * Simproc "defined_all" and rewrite rule "subst_all" have been changed as in HOL. *** ML *** * Antiquotations @{scala_function}, @{scala}, @{scala_thread} refer to registered Isabelle/Scala functions (of type String => String): invocation works via the PIDE protocol. * Path.append is available as overloaded "+" operator, similar to corresponding Isabelle/Scala operation. * ML statistics via an external Poly/ML process: this allows monitoring the runtime system while the ML program sleeps. *** System *** * Isabelle server allows user-defined commands via isabelle_scala_service. * Update/rebuild external provers on currently supported OS platforms, notably CVC4 1.8, E prover 2.5, SPASS 3.8ds, CSDP 6.1.1. * The command-line tool "isabelle log" prints prover messages from the build database of the given session, following the the order of theory sources, instead of erratic parallel evaluation. Consequently, the session log file is restricted to system messages of the overall build process, and thus becomes more informative. 

* Discontinued obsolete isabelle display tool, and DVI_VIEWER settings variable.

* The command-line tool "isabelle logo" only outputs PDF; obsolete EPS (for DVI documents) has been discontinued. Former option -n has been turned into -o with explicit file name. Minor INCOMPATIBILITY.

* The command-line tool "isabelle components" supports new options -u and -x to manage $ISABELLE_HOME_USER/etc/components without manual editing of Isabelle configuration files.

* The shell function "isabelle_directory" (within etc/settings of components) augments the list of special directories for persistent symbolic path names. This improves portability of heap images and session databases. It used to be hard-wired for Isabelle + AFP, but other projects may now participate on equal terms.

* The command-line tool "isabelle process" now prints output to stdout/stderr separately and incrementally, instead of just one bulk to stdout after termination. Potential INCOMPATIBILITY for external tools.

* The command-line tool "isabelle console" now supports interrupts properly (on Linux and macOS).

* Batch-builds via "isabelle build" use a PIDE session with special protocol: this allows to invoke Isabelle/Scala operations from Isabelle/ML. Big build jobs (e.g. AFP) require extra heap space for the java process, e.g. like this in $ISABELLE_HOME_USER/etc/settings:

  ISABELLE_TOOL_JAVA_OPTIONS="$ISABELLE_TOOL_JAVA_OPTIONS -Xmx8g"

This includes full PIDE markup, if option "build_pide_reports" is enabled.

* The command-line tool "isabelle build" provides option -P DIR to produce PDF/HTML presentation in the specified directory; -P: refers to the standard directory according to ISABELLE_BROWSER_INFO / ISABELLE_BROWSER_INFO_SYSTEM settings. Generated PDF documents are taken from the build database -- from this or earlier builds with option document=pdf.

* The command-line tool "isabelle document" generates theory documents on the spot, using the underlying session build database (exported LaTeX sources or existing PDF files). INCOMPATIBILITY, the former "isabelle document" tool was rather different and has been discontinued.

* The command-line tool "isabelle sessions" explores the structure of Isabelle sessions and prints result names in topological order (on stdout).

* The Isabelle/Scala "Progress" interface changed slightly and "No_Progress" has been discontinued. INCOMPATIBILITY, use "new Progress" instead.

* General support for Isabelle/Scala system services, configured via the shell function "isabelle_scala_service" in etc/settings (e.g. of an Isabelle component); see implementations of class Isabelle_System.Service in Isabelle/Scala. This supersedes former "isabelle_scala_tools" and "isabelle_file_format": minor INCOMPATIBILITY.

* The syntax of theory load commands (for auxiliary files) is now specified in Isabelle/Scala, as instance of class isabelle.Command_Span.Load_Command registered via isabelle_scala_service in etc/settings. This allows more flexible schemes than just a list of file extensions. Minor INCOMPATIBILITY, e.g. see theory HOL-SPARK.SPARK_Setup to emulate the old behaviour.

* JVM system property "isabelle.laf" has been discontinued; the default Swing look-and-feel is "FlatLaf Light".

* Isabelle/Phabricator supports Ubuntu 20.04 LTS.

* Isabelle/Phabricator setup has been updated to follow ongoing development: libphutil has been discontinued.
Minor INCOMPATIBILITY: existing server installations should remove libphutil from /usr/local/bin/isabelle-phabricator-upgrade and each installation root directory (e.g. /var/www/phabricator-vcs/libphutil).

* Experimental support for arm64-linux platform. The reference platform is Raspberry Pi 4 with 8 GB RAM running Pi OS (64 bit).

* Support for Apple Silicon, using mostly x86_64-darwin runtime translation via Rosetta 2 (e.g. Poly/ML and external provers), but also some native arm64-darwin executables (e.g. Java).



New in Isabelle2020 (April 2020)
--------------------------------

*** General ***

* Session ROOT files need to specify explicit 'directories' for import of theory files. Directories cannot be shared by different sessions. (Recall that import of theories from other sessions works via session-qualified theory names, together with suitable 'sessions' declarations in the ROOT.)

* Internal derivations record dependencies on oracles and other theorems accurately, including the implicit type-class reasoning wrt. proven class relations and type arities. In particular, the formal tagging with "Pure.skip_proofs" of results stemming from "instance ... sorry" is now propagated properly to theorems depending on such type instances.

* Command 'sorry' (oracle "Pure.skip_proofs") is more precise about the actual proposition that is assumed in the goal and proof context. This requires at least Proofterm.proofs = 1 to show up in theorem dependencies.

* Command 'thm_oracles' prints all oracles used in given theorems, covering the full graph of transitive dependencies.

* Command 'thm_deps' prints immediate theorem dependencies of the given facts. The former graph visualization has been discontinued, because it was hardly usable.

* Refined treatment of proof terms, including type-class proofs for minor object-logics (FOL, FOLP, Sequents).

* The inference kernel is now confined to one main module: structure Thm, without the former circular dependency on structure Axclass.

* Mixfix annotations may use "' " (single quote followed by space) to separate delimiters (as documented in the isar-ref manual), without requiring an auxiliary empty block. A literal single quote needs to be escaped properly. Minor INCOMPATIBILITY.


*** Isar ***

* The proof method combinator (subproofs m) applies the method expression m consecutively to each subgoal, constructing individual subproofs internally. This impacts the internal construction of proof terms: it makes a cascade of let-expressions within the derivation tree and may thus improve scalability.

* Attribute "trace_locales" activates tracing of locale instances during roundup. It replaces the diagnostic command 'print_dependencies', which has been discontinued.


*** Isabelle/jEdit Prover IDE ***

* Prover IDE startup is now much faster, because theory dependencies are no longer explored in advance. The overall session structure with its declarations of 'directories' is sufficient to locate theory files. Thus the "session focus" of option "isabelle jedit -S" has become obsolete (likewise for "isabelle vscode_server -S"). Existing option "-R" is both sufficient and more convenient to start editing a particular session.

* Actions isabelle.tooltip (CS+b) and isabelle.message (CS+m) display tooltip message popups, corresponding to mouse hovering with/without the CONTROL/COMMAND key pressed.

* The following actions allow to navigate errors within the current document snapshot:

    isabelle.first-error (CS+a)
    isabelle.last-error (CS+z)
    isabelle.next-error (CS+n)
    isabelle.prev-error (CS+p)

* Support more brackets: \ \ (intended for implicit argument syntax).

* Action isabelle.jconsole (menu item Plugins / Isabelle / Java/VM Monitor) applies the jconsole tool on the running Isabelle/jEdit process. This allows to monitor resource usage etc.

* More adequate default font sizes for Linux on HD / UHD displays: automatic font scaling is usually absent on Linux, in contrast to Windows and macOS.

* The default value for the jEdit property "view.antiAlias" (menu item Utilities / Global Options / Text Area / Anti Aliased smooth text) is now "subpixel HRGB", instead of former "standard". Especially on Linux this often leads to faster text rendering, but can also cause problems with odd color shades. An alternative is to switch back to "standard" here, and set the following Java system property:

  isabelle jedit -Dsun.java2d.opengl=true

This can be made persistent via JEDIT_JAVA_OPTIONS in $ISABELLE_HOME_USER/etc/settings. For the "Isabelle2020" desktop application there is a corresponding options file in the same directory.


*** Isabelle/VSCode Prover IDE ***

* Update of State and Preview panels to use new WebviewPanel API of VSCode.


*** HOL ***

* Improvements of the 'lift_bnf' command:

  - Add support for quotient types.
  - Generate transfer rules for the lifted map/set/rel/pred constants (theorems "._transfer_raw").

* Term_XML.Encode/Decode.term uses compact representation of Const "typargs" from the given declaration environment. This also makes more sense for translations to lambda-calculi with explicit polymorphism. INCOMPATIBILITY, use Term_XML.Encode/Decode.term_raw in special applications.

* ASCII membership syntax concerning big operators for infimum and supremum has been discontinued. INCOMPATIBILITY.

* Removed multiplicativity assumption from class "normalization_semidom". Introduced various new intermediate classes with the multiplicativity assumption; many theorem statements (especially involving GCD/LCM) had to be adapted. This allows for a more natural instantiation of the algebraic typeclasses for e.g. Gaussian integers. INCOMPATIBILITY.

* Clear distinction between types for bits (False / True) and Z2 (0 / 1): theory HOL-Library.Bit has been renamed accordingly. INCOMPATIBILITY.

* Dynamic facts "algebra_split_simps" and "field_split_simps" correspond to algebra_simps and field_simps but contain more aggressive rules potentially splitting goals; algebra_split_simps roughly replaces sign_simps and field_split_simps can be used instead of divide_simps. INCOMPATIBILITY.

* Theory HOL.Complete_Lattices: renamed Inf_Sup -> Inf_eq_Sup and Sup_Inf -> Sup_eq_Inf.

* Theory HOL-Library.Monad_Syntax: infix operation "bind" (\) now associates to the left, as is customary.

* Theory HOL-Library.Ramsey: full finite Ramsey's theorem with multiple colours and arbitrary exponents.

* Session HOL-Proofs: build faster thanks to better treatment of proof terms in Isabelle/Pure.

* Session HOL-Word: bitwise NOT-operator has proper prefix syntax. Minor INCOMPATIBILITY.

* Session HOL-Analysis: proof method "metric" implements a decision procedure for simple linear statements in metric spaces.

* Session HOL-Complex_Analysis has been split off from HOL-Analysis.


*** ML ***

* Theory construction may be forked internally, the operation Theory.join_theory recovers a single result theory.
See also the example in theory "HOL-ex.Join_Theory".

* Antiquotation @{oracle_name} inlines a formally checked oracle name.

* Minimal support for a soft-type system within the Isabelle logical framework (module Soft_Type_System).

* Former Variable.auto_fixes has been replaced by slightly more general Proof_Context.augment: it is subject to an optional soft-type system of the underlying object-logic. Minor INCOMPATIBILITY.

* More scalable Export.export using XML.tree to avoid premature string allocations, with convenient shortcut XML.blob. Minor INCOMPATIBILITY.

* Prover IDE support for the underlying Poly/ML compiler (not the basis library). Open $ML_SOURCES/ROOT.ML in Isabelle/jEdit to browse the implementation with full markup.


*** System ***

* Standard rendering for more Isabelle symbols: \ \ \ \

* The command-line tool "isabelle scala_project" creates a Gradle project configuration for Isabelle/Scala/jEdit, to support Scala IDEs such as IntelliJ IDEA.

* The command-line tool "isabelle phabricator_setup" facilitates self-hosting of the Phabricator software-development platform, with support for Git, Mercurial, Subversion repositories. This helps to avoid monoculture and to escape the gravity of centralized version control by Github and/or Bitbucket. For further documentation, see chapter "Phabricator server administration" in the "system" manual. A notable example installation is https://isabelle-dev.sketis.net/.

* The command-line tool "isabelle hg_setup" simplifies the setup of Mercurial repositories, with hosting via Phabricator or SSH file server access.

* The command-line tool "isabelle imports" has been discontinued: strict checking of session directories enforces session-qualified theory names in applications -- users are responsible to specify session ROOT entries properly.

* The command-line tool "isabelle dump" and its underlying Isabelle/Scala module isabelle.Dump has become more scalable, by splitting sessions and supporting a base logic image. Minor INCOMPATIBILITY in options and parameters.

* The command-line tool "isabelle build_docker" has been slightly improved: it is now properly documented in the "system" manual.

* Isabelle/Scala support for the Linux platform (Ubuntu): packages, users, system services.

* Isabelle/Scala support for proof terms (with full type/term information) in module isabelle.Term.

* Isabelle/Scala: more scalable output of YXML files, e.g. relevant for "isabelle dump".

* Theory export via Isabelle/Scala has been reworked. The former "fact" name space is now split into individual "thm" items: names are potentially indexed, such as "foo" for singleton facts, or "bar(1)", "bar(2)", "bar(3)" for multi-facts. Theorem dependencies are now exported as well: this spans an overall dependency graph of internal inferences; it might help to reconstruct the formal structure of theory libraries. See also the module isabelle.Export_Theory in Isabelle/Scala.

* Theory export of structured specifications, based on internal declarations of Spec_Rules by packages like 'definition', 'inductive', 'primrec', 'function'.

* Old settings variables ISABELLE_PLATFORM and ISABELLE_WINDOWS_PLATFORM have been discontinued -- deprecated since Isabelle2018.

* More complete x86_64 platform support on macOS, notably Catalina where old x86 has been discontinued.

* Update to GHC stack 2.1.3 with stackage lts-13.19/ghc-8.6.4.

* Update to OCaml Opam 2.0.6 (using ocaml 4.05.0 as before).


New in Isabelle2019 (June 2019)
-------------------------------

*** General ***

* The font collection "Isabelle DejaVu" is systematically derived from the existing "DejaVu" fonts, with variants "Sans Mono", "Sans", "Serif" and styles "Normal", "Bold", "Italic/Oblique", "Bold-Italic/Oblique". The DejaVu base fonts are restricted to well-defined Unicode ranges and augmented by special Isabelle symbols, taken from the former "IsabelleText" font (which is no longer provided separately). The line metrics and overall rendering quality are closer to original DejaVu. INCOMPATIBILITY with display configuration expecting the old "IsabelleText" font: use e.g. "Isabelle DejaVu Sans Mono" instead.

* The Isabelle fonts render "\" properly as superscript "-1".

* Old-style inner comments (* ... *) within the term language are no longer supported (legacy feature in Isabelle2018).

* Old-style {* verbatim *} tokens are explicitly marked as legacy feature and will be removed soon. Use \cartouche\ syntax instead, e.g. via "isabelle update_cartouches -t" (available since Isabelle2015).

* Infix operators that begin or end with a "*" are now parenthesized without additional spaces, e.g. "(*)" instead of "( * )". Minor INCOMPATIBILITY.

* Mixfix annotations may use cartouches instead of old-style double quotes, e.g. (infixl \+\ 60). The command-line tool "isabelle update -u mixfix_cartouches" allows to update existing theory sources automatically.

* ML setup commands (e.g. 'setup', 'method_setup', 'parse_translation') need to provide a closed expression -- without trailing semicolon. Minor INCOMPATIBILITY.

* Commands 'generate_file', 'export_generated_files', and 'compile_generated_files' support a stateless (PIDE-conformant) model for generated sources and compiled binaries of other languages. The compilation process is managed in Isabelle/ML, and results exported to the session database for further use (e.g. with "isabelle export" or "isabelle build -e").


*** Isabelle/jEdit Prover IDE ***

* Fonts for the text area, gutter, GUI elements etc. use the "Isabelle DejaVu" collection by default, which provides uniform rendering quality with the usual Isabelle symbols. Line spacing no longer needs to be adjusted: properties for the old IsabelleText font had "Global Options / Text Area / Extra vertical line spacing (in pixels): -2", it now defaults to 1, but 0 works as well.

* The jEdit File Browser is more prominent in the default GUI layout of Isabelle/jEdit: various virtual file-systems provide access to Isabelle resources, notably via "favorites:" (or "Edit Favorites").

* Further markup and rendering for "plain text" (e.g. informal prose) and "raw text" (e.g. verbatim sources). This improves the visual appearance of formal comments inside the term language, or in general for repeated alternation of formal and informal text.

* Action "isabelle-export-browser" points the File Browser to the theory exports of the current buffer, based on the "isabelle-export:" virtual file-system. The directory view needs to be reloaded manually to follow ongoing document processing.

* Action "isabelle-session-browser" points the File Browser to session information, based on the "isabelle-session:" virtual file-system. Its entries are structured according to chapter / session names, the open operation is redirected to the session ROOT file.

* Support for user-defined file-formats via class isabelle.File_Format in Isabelle/Scala (e.g.
see isabelle.Bibtex.File_Format), configured via the shell function "isabelle_file_format" in etc/settings (e.g. of an Isabelle component).

* System option "jedit_text_overview" allows to disable the text overview column.

* Command-line options "-s" and "-u" of "isabelle jedit" override the default for system option "system_heaps" that determines the heap storage directory for "isabelle build". Option "-n" is now clearly separated from option "-s".

* The Isabelle/jEdit desktop application uses the same options as "isabelle jedit" for its internal "isabelle build" process: the implicit option "-o system_heaps" (or "-s") has been discontinued. This reduces the potential for surprise wrt. command-line tools.

* The official download of the Isabelle/jEdit application already contains heap images for Isabelle/HOL within its main directory: thus the first encounter becomes faster and more robust (e.g. when run from a read-only directory).

* Isabelle DejaVu fonts are available with hinting by default, which is relevant for low-resolution displays. This may be disabled via system option "isabelle_fonts_hinted = false" in $ISABELLE_HOME_USER/etc/preferences -- it occasionally yields better results.

* OpenJDK 11 has quite different font rendering, with better glyph shapes and improved sub-pixel anti-aliasing. In some situations results might be *worse* than Oracle Java 8, though -- a proper HiDPI / UHD display is recommended.

* OpenJDK 11 supports GTK version 2.2 and 3 (according to system property jdk.gtk.version). The factory default is version 3, but ISABELLE_JAVA_SYSTEM_OPTIONS includes "-Djdk.gtk.version=2.2" to make this more conservative (as in Java 8). Depending on the GTK theme configuration, "-Djdk.gtk.version=3" might work better or worse.


*** Document preparation ***

* More predefined symbols: \ \ (package "stmaryrd"), \ \ (package "pifont").

* High-quality blackboard-bold symbols from font "txmia" (package "pxfonts"): \\\\\\\\\\\\\\\\\\\\\\\\\\.

* Document markers are formal comments of the form \<^marker>\marker_body\ that are stripped from document output: the effect is to modify the semantic presentation context or to emit markup to the PIDE document. Some predefined markers are taken from the Dublin Core Metadata Initiative, e.g. \<^marker>\contributor arg\ or \<^marker>\license arg\ and produce PIDE markup that can be retrieved from the document database.

* Old-style command tags %name are re-interpreted as markers with proof-scope \<^marker>\tag (proof) name\ and produce LaTeX environments as before. Potential INCOMPATIBILITY: multiple markers are composed in canonical order, resulting in a reversed list of tags in the presentation context.

* Marker \<^marker>\tag name\ does not apply to the proof of a top-level goal statement by default (e.g. 'theorem', 'lemma'). This is a subtle change of semantics wrt. old-style %name.

* In Isabelle/jEdit, the string "\tag" may be completed to a "\<^marker>\tag \" template.

* Document antiquotation option "cartouche" indicates if the output should be delimited as cartouche; this takes precedence over the analogous option "quotes".

* Many document antiquotations are internally categorized as "embedded" and expect one cartouche argument, which is typically used with the \<^control>\cartouche\ notation (e.g. \<^term>\\x y. x\). The cartouche delimiters are stripped in output of the source (antiquotation option "source"), but it is possible to enforce delimiters via option "source_cartouche", e.g. @{term [source_cartouche] \\x y. x\}.
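
For instance, within an ordinary document text block (a small sketch; the term itself is arbitrary):

  text \<open>
    compare @{term [source] \<open>\<lambda>x y. x\<close>}
    with @{term [source_cartouche] \<open>\<lambda>x y. x\<close>}
  \<close>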

*** Isar ***

* Implicit cases goal1, goal2, goal3, etc. have been discontinued (legacy feature since Isabelle2016).

* More robust treatment of structural errors: begin/end blocks take precedence over goal/proof. This is particularly relevant for the headless PIDE session and server.

* Command keywords of kind thy_decl / thy_goal may be more specifically fit into the traditional document model of "definition-statement-proof" via thy_defn / thy_stmt / thy_goal_defn / thy_goal_stmt.


*** HOL ***

* Command 'export_code' produces output as logical files within the theory context, as well as formal session exports that can be materialized via command-line tools "isabelle export" or "isabelle build -e" (with 'export_files' in the session ROOT). Isabelle/jEdit also provides a virtual file-system "isabelle-export:" that can be explored in the regular file-browser. A 'file_prefix' argument allows to specify an explicit name prefix for the target file (SML, OCaml, Scala) or directory (Haskell); the default is "export" with a consecutive number within each theory.

* Command 'export_code': the 'file' argument is now legacy and will be removed soon: writing to the physical file-system is not well-defined in a reactive/parallel application like Isabelle. The empty 'file' argument has been discontinued already: it is superseded by the file-browser in Isabelle/jEdit on "isabelle-export:". Minor INCOMPATIBILITY.

* Command 'code_reflect' no longer supports the 'file' argument: it has been superseded by 'file_prefix' for stateless file management as in 'export_code'. Minor INCOMPATIBILITY.

* Code generation for OCaml: proper strings are used for literals. Minor INCOMPATIBILITY.

* Code generation for OCaml: Zarith supersedes Nums as library for proper integer arithmetic. The library is located via standard invocations of "ocamlfind" (via ISABELLE_OCAMLFIND settings variable). The environment provided by "isabelle ocaml_setup" already contains this tool and the required packages. Minor INCOMPATIBILITY.

* Code generation for Haskell: code includes for Haskell must contain proper module frame, nothing is added magically any longer. INCOMPATIBILITY.

* Code generation: slightly more conventional syntax for 'code_stmts' antiquotation. Minor INCOMPATIBILITY.

* Theory List: the precedence of the list_update operator has changed: "f a [n := x]" now needs to be written "(f a)[n := x]".

* The functions \, \, \, \ (not the corresponding binding operators) now have the same precedence as any other prefix function symbol. Minor INCOMPATIBILITY.

* Simplified syntax setup for big operators under image. In rare situations, type conversions are not inserted implicitly any longer and need to be given explicitly. Auxiliary abbreviations INFIMUM, SUPREMUM, UNION, INTER should now rarely occur in output and are just retained as migration auxiliaries. Abbreviations MINIMUM and MAXIMUM are gone. INCOMPATIBILITY.

* The simplifier uses image_cong_simp as a congruence rule. The historic and not really well-formed congruence rules INF_cong*, SUP_cong* are no longer used by default. INCOMPATIBILITY; consider using declare image_cong_simp [cong del] in extreme situations.

* INF_image and SUP_image are no default simp rules any longer. INCOMPATIBILITY, prefer image_comp as simp rule if needed.

* Strong congruence rules (with =simp=> in the premises) for constant f are now uniformly called f_cong_simp, in accordance with congruence rules produced for mappers by the datatype package. INCOMPATIBILITY.

* Retired lemma card_Union_image; use the simpler card_UN_disjoint instead. INCOMPATIBILITY.

* Facts sum_mset.commute and prod_mset.commute have been renamed to sum_mset.swap and prod_mset.swap, similarly to sum.swap and prod.swap. INCOMPATIBILITY.

* ML structure Inductive: slightly more conventional naming schema. Minor INCOMPATIBILITY.

* ML: Various _global variants of specification tools have been removed. Minor INCOMPATIBILITY, prefer combinators Named_Target.theory_map[_result] to lift specifications to the global theory level.

* Theory HOL-Library.Simps_Case_Conv: 'case_of_simps' now supports overlapping and non-exhaustive patterns and handles arbitrarily nested patterns. It uses the same algorithm as HOL-Library.Code_Lazy, which assumes sequential left-to-right pattern matching. The generated equation no longer tuples the arguments on the right-hand side. INCOMPATIBILITY.

* Theory HOL-Library.Multiset: the \# operator now has the same precedence as any other prefix function symbol.

* Theory HOL-Library.Cardinal_Notations has been discontinued in favor of the bundle cardinal_syntax (available in theory Main). Minor INCOMPATIBILITY.

* Session HOL-Library and HOL-Number_Theory: Exponentiation by squaring, used for computing powers in class "monoid_mult" and modular exponentiation.

* Session HOL-Computational_Algebra: Formal Laurent series and overhaul of Formal power series.

* Session HOL-Number_Theory: More material on residue rings: Carmichael's function, primitive roots, more properties for "ord".

* Session HOL-Analysis: Better organization and much more material at the level of abstract topological spaces.

* Session HOL-Algebra: Free abelian groups, etc., ported from HOL Light; algebraic closure of a field by de Vilhena and Baillon.

* Session HOL-Homology has been added. It is a port of HOL Light's homology library, with new proofs of "invariance of domain" and related results.

* Session HOL-SPARK: .prv files are no longer written to the file-system, but exported to the session database. Results may be retrieved via "isabelle build -e HOL-SPARK-Examples" on the command-line.

* Sledgehammer:

  - The URL for SystemOnTPTP, which is used by remote provers, has been updated.
  - The machine-learning-based filter MaSh has been optimized to take less time (in most cases).

* SMT: reconstruction is now possible using the SMT solver veriT.

* Session HOL-Word:

  - New theory More_Word as comprehensive entrance point.
  - Merged type class bitss into type class bits. INCOMPATIBILITY.


*** ML ***

* Command 'generate_file' allows to produce sources for other languages, with antiquotations in the Isabelle context (only the control-cartouche form). The default "cartouche" antiquotation evaluates an ML expression of type string and inlines the result as a string literal of the target language. For example, this works for Haskell as follows:

  generate_file "Pure.hs" = \
    module Isabelle.Pure where
    allConst, impConst, eqConst :: String
    allConst = \\<^const_name>\Pure.all\\
    impConst = \\<^const_name>\Pure.imp\\
    eqConst = \\<^const_name>\Pure.eq\\
  \

See also commands 'export_generated_files' and 'compile_generated_files' to use the results.

* ML evaluation (notably via command 'ML' or 'ML_file') is subject to option ML_environment to select a named environment, such as "Isabelle" for Isabelle/ML, or "SML" for official Standard ML.

* ML antiquotation @{master_dir} refers to the master directory of the underlying theory, i.e. the directory of the theory file.
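
For example (a small sketch, assuming the antiquotation inlines a value of type Path.T):

  ML \<open>
    val dir = @{master_dir};  (* assumed: value of type Path.T *)
    val _ = writeln ("theory directory: " ^ Path.implode dir)
  \<close>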

* ML antiquotation @{verbatim} inlines its argument as string literal, preserving newlines literally. The short form \<^verbatim>\abc\ is particularly useful.

* Local_Theory.reset is no longer available in user space. Regular definitional packages should use balanced blocks of Local_Theory.open_target versus Local_Theory.close_target instead, or the Local_Theory.subtarget(_result) combinator. Rare INCOMPATIBILITY.

* Original PolyML.pointerEq is retained as a convenience for tools that don't use Isabelle/ML (where this is called "pointer_eq").


*** System ***

* Update to OpenJDK 11: the current long-term support version of Java.

* Update to Poly/ML 5.8 allows to use the native x86_64 platform without the full overhead of 64-bit values everywhere. This special x86_64_32 mode provides up to 16GB ML heap, while program code and stacks are allocated elsewhere. Thus approx. 5 times more memory is available for applications compared to old x86 mode (which is no longer used by Isabelle). The switch to the x86_64 CPU architecture also avoids compatibility problems with Linux and macOS, where 32-bit applications are gradually phased out.

* System option "checkpoint" has been discontinued: obsolete thanks to improved memory management in Poly/ML.

* System option "system_heaps" determines where to store the session image of "isabelle build" (and other tools using that internally). Former option "-s" is superseded by option "-o system_heaps". INCOMPATIBILITY in command-line syntax.

* Session directory $ISABELLE_HOME/src/Tools/Haskell provides some source modules for Isabelle tools implemented in Haskell, notably for Isabelle/PIDE.

* The command-line tool "isabelle build -e" retrieves theory exports from the session build database, using 'export_files' in session ROOT entries.

* The command-line tool "isabelle update" uses Isabelle/PIDE in batch-mode to update theory sources based on semantic markup produced in Isabelle/ML. Actual updates depend on system options that may be enabled via "-u OPT" (for "update_OPT"), see also $ISABELLE_HOME/etc/options section "Theory update". Theory sessions are specified as in "isabelle dump".

* The command-line tool "isabelle update -u control_cartouches" changes antiquotations into control-symbol format (where possible): @{NAME} becomes \<^NAME> and @{NAME ARG} becomes \<^NAME>\ARG\.

* Support for Isabelle command-line tools defined in Isabelle/Scala. Instances of class Isabelle_Scala_Tools may be configured via the shell function "isabelle_scala_tools" in etc/settings (e.g. of an Isabelle component).

* Isabelle Server command "use_theories" supports "nodes_status_delay" for continuous output of node status information. The time interval is specified in seconds; a negative value means it is disabled (default).

* Isabelle Server command "use_theories" terminates more robustly in the presence of structurally broken sources: full consolidation of theories is no longer required.

* OCaml tools and libraries are now accessed via ISABELLE_OCAMLFIND, which needs to point to a suitable version of "ocamlfind" (e.g. via OPAM, see below). INCOMPATIBILITY: settings variables ISABELLE_OCAML and ISABELLE_OCAMLC are no longer supported.
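
For example, a manual setting in $ISABELLE_HOME_USER/etc/settings could look like this (hypothetical OPAM installation path):

  # hypothetical OPAM installation path
  ISABELLE_OCAMLFIND="$HOME/.opam/default/bin/ocamlfind"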

* Support for managed installations of Glasgow Haskell Compiler and OCaml via the following command-line tools:

  isabelle ghc_setup
  isabelle ghc_stack

  isabelle ocaml_setup
  isabelle ocaml_opam

The global installation state is determined by the following settings (and corresponding directory contents):

  ISABELLE_STACK_ROOT
  ISABELLE_STACK_RESOLVER
  ISABELLE_GHC_VERSION

  ISABELLE_OPAM_ROOT
  ISABELLE_OCAML_VERSION

After setup, the following Isabelle settings are automatically redirected (overriding existing user settings):

  ISABELLE_GHC
  ISABELLE_OCAMLFIND

The old meaning of these settings as locally installed executables may be recovered by purging the directories ISABELLE_STACK_ROOT / ISABELLE_OPAM_ROOT, or by resetting these variables in $ISABELLE_HOME_USER/etc/settings.



New in Isabelle2018 (August 2018)
---------------------------------

*** General ***

* Session-qualified theory names are mandatory: it is no longer possible to refer to unqualified theories from the parent session. INCOMPATIBILITY for old developments that have not been updated to Isabelle2017 yet (using the "isabelle imports" tool).

* Only the most fundamental theory names are global, usually the entry points to major logic sessions: Pure, Main, Complex_Main, HOLCF, IFOL, FOL, ZF, ZFC etc. INCOMPATIBILITY, need to use qualified names for formerly global "HOL-Probability.Probability" and "HOL-SPARK.SPARK".

* Global facts need to be closed: no free variables and no hypotheses. Rare INCOMPATIBILITY.

* Facts stemming from locale interpretation are subject to lazy evaluation for improved performance. Rare INCOMPATIBILITY: errors stemming from interpretation morphisms might be deferred and thus difficult to locate; enable system option "strict_facts" temporarily to avoid this.

* Marginal comments need to be written exclusively in the new-style form "\ \text\", old ASCII variants like "-- {* ... *}" are no longer supported. INCOMPATIBILITY, use the command-line tool "isabelle update_comments" to update existing theory files.

* Old-style inner comments (* ... *) within the term language are legacy and will be discontinued soon: use formal comments "\ \...\" or "\<^cancel>\...\" instead.

* The "op " syntax for infix operators has been replaced by "()". If the operator begins or ends with a "*", there needs to be a space between the "*" and the corresponding parenthesis. INCOMPATIBILITY, use the command-line tool "isabelle update_op" to convert theory and ML files to the new syntax. Because it is based on regular expression matching, the result may need a bit of manual postprocessing. Invoking "isabelle update_op" converts all files in the current directory (recursively). In case you want to exclude conversion of ML files (because the tool frequently also converts ML's "op" syntax), use option "-m".

* Theory header 'abbrevs' specifications need to be separated by 'and'. INCOMPATIBILITY.

* Command 'external_file' declares the formal dependency on the given file name, such that the Isabelle build process knows about it, but without specific Prover IDE management.

* Session ROOT entries no longer allow specification of 'files'. Rare INCOMPATIBILITY, use command 'external_file' within a proper theory context.

* Session root directories may be specified multiple times: each accessible ROOT file is processed only once. This facilitates specification of $ISABELLE_HOME_USER/ROOTS or command-line options like -d or -D for "isabelle build" and "isabelle jedit". Example:

  isabelle build -D '~~/src/ZF'

* The command 'display_drafts' has been discontinued.
INCOMPATIBILITY, use action "isabelle.draft" (or "print") in Isabelle/jEdit instead.

* In HTML output, the Isabelle symbol "\" is rendered as explicit Unicode hyphen U+2010, to avoid unclear meaning of the old "soft hyphen" U+00AD. Rare INCOMPATIBILITY, e.g. copy-paste of historic Isabelle HTML output.


*** Isabelle/jEdit Prover IDE ***

* The command-line tool "isabelle jedit" provides more flexible options for session management:

  - option -R builds an auxiliary logic image with all theories from other sessions that are not already present in its parent
  - option -S is like -R, with a focus on the selected session and its descendants (this reduces startup time for big projects like AFP)
  - option -A specifies an alternative ancestor session for options -R and -S
  - option -i includes additional sessions into the name-space of theories

Examples:

  isabelle jedit -R HOL-Number_Theory
  isabelle jedit -R HOL-Number_Theory -A HOL
  isabelle jedit -d '$AFP' -S Formal_SSA -A HOL
  isabelle jedit -d '$AFP' -S Formal_SSA -A HOL-Analysis
  isabelle jedit -d '$AFP' -S Formal_SSA -A HOL-Analysis -i CryptHOL

* PIDE markup for session ROOT files: allows to complete session names, follow links to theories and document files etc.

* Completion supports theory header imports, using theory base name. E.g. "Prob" may be completed to "HOL-Probability.Probability".

* Named control symbols (without special Unicode rendering) are shown as bold-italic keywords. This is particularly useful for the short form of antiquotations with control symbol: \<^name>\argument\. The action "isabelle.antiquoted_cartouche" turns an antiquotation with 0 or 1 arguments into this format.

* Completion provides templates for named symbols with arguments, e.g. "\ \ARGUMENT\" or "\<^emph>\ARGUMENT\".

* Slightly more parallel checking, notably for high priority print functions (e.g. State output).

* The view title is set dynamically, according to the Isabelle distribution and the logic session name. The user can override this via set-view-title (stored persistently in $JEDIT_SETTINGS/perspective.xml).

* System options "spell_checker_include" and "spell_checker_exclude" supersede former "spell_checker_elements" to determine regions of text that are subject to spell-checking. Minor INCOMPATIBILITY.

* Action "isabelle.preview" is able to present more file formats, notably bibtex database files and ML files.

* Action "isabelle.draft" is similar to "isabelle.preview", but shows a plain-text document draft. Both are available via the menu "Plugins / Isabelle".

* When loading text files, the Isabelle symbols encoding UTF-8-Isabelle is only used if there is no conflict with existing Unicode sequences in the file. Otherwise, the fallback encoding is plain UTF-8 and Isabelle symbols remain in literal \ form. This avoids accidental loss of Unicode content when saving the file.

* Bibtex database files (.bib) are semantically checked.

* Update to jedit-5.5.0, the latest release.


*** Isabelle/VSCode Prover IDE ***

* HTML preview of theories and other file-formats similar to Isabelle/jEdit.

* Command-line tool "isabelle vscode_server" accepts the same options -A, -R, -S, -i for session selection as "isabelle jedit". This is relevant for isabelle.args configuration settings in VSCode. The former option -A (explore all known session files) has been discontinued: it is enabled by default, unless option -S is used to focus on a particular spot in the session structure. INCOMPATIBILITY.
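
For example, session selection works as for "isabelle jedit" (session names as in the examples above):

  isabelle vscode_server -R HOL-Number_Theory
  isabelle vscode_server -d '$AFP' -S Formal_SSA -A HOL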
*** Document preparation ***

* Formal comments work uniformly in outer syntax, inner syntax (term
language), Isabelle/ML and some other embedded languages of Isabelle.
See also "Document comments" in the isar-ref manual. The following
forms are supported:

  - marginal text comment: \<comment> \<open>...\<close>
  - canceled source: \<^cancel>\<open>...\<close>
  - raw LaTeX: \<^latex>\<open>...\<close>

* Outside of the inner theory body, the default presentation context is
theory Pure. Thus elementary antiquotations may be used in markup
commands (e.g. 'chapter', 'section', 'text') and formal comments.

* System option "document_tags" specifies alternative command tags.
This is occasionally useful to control the global visibility of
commands via session options (e.g. in ROOT).

* Document markup commands ('section', 'text' etc.) are implicitly
tagged as "document" and visible by default. This avoids the
application of option "document_tags" to these commands.

* Isabelle names are mangled into LaTeX macro names to allow the full
identifier syntax with underscore, prime, digits. This is relevant for
antiquotations in control symbol notation, e.g. \<^const_name> becomes
\isactrlconstUNDERSCOREname.

* Document preparation with skip_proofs option now preserves the
content more accurately: only terminal proof steps ('by' etc.) are
skipped.

* Document antiquotation @{theory name} requires the long
session-qualified theory name: this is what users reading the text
normally need to import.

* Document antiquotation @{session name} checks and prints the given
session name verbatim.

* Document antiquotation @{cite} now checks the given Bibtex entries
against the Bibtex database files -- only in batch-mode session builds.

* Command-line tool "isabelle document" has been re-implemented in
Isabelle/Scala, with simplified arguments and explicit errors from the
latex and bibtex process. Minor INCOMPATIBILITY.

* Session ROOT entry: empty 'document_files' means there is no document
for this session. There is no need to specify options
[document = false] anymore.


*** Isar ***

* Command 'interpret' no longer exposes resulting theorems as literal
facts, notably for the \<open>prop\<close> notation or the "fact" proof method.
This improves modularity of proofs and scalability of locale
interpretation. Rare INCOMPATIBILITY, need to refer to explicitly named
facts instead (e.g. use 'find_theorems' or 'try' to figure this out).

* The old 'def' command has been discontinued (legacy since
Isabelle2016-1). INCOMPATIBILITY, use 'define' instead -- usually with
object-logic equality or equivalence.


*** Pure ***

* The inner syntax category "sort" now includes notation "_" for the
dummy sort: it is effectively ignored in type-inference.

* Rewrites clauses (keyword 'rewrites') were moved into the locale
expression syntax, where they are part of locale instances. In
interpretation commands rewrites clauses now need to occur before 'for'
and 'defines'. Rare INCOMPATIBILITY; definitions immediately subject to
rewriting may need to be pulled up into the surrounding theory.

* For 'rewrites' clauses, if activating a locale instance fails, fall
back to reading the clause first. This helps avoid qualification of
locale instances where the qualifier's sole purpose is avoiding
duplicate constant declarations.

* Proof method "simp" now supports a new modifier "flip:" followed by a
list of theorems. Each of these theorems is removed from the simpset
(without warning if it is not there) and the symmetric version of the
theorem (i.e. lhs and rhs exchanged) is added to the simpset. For
"auto" and friends the modifier is "simp flip:".
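For example, a small sketch using the standard fact power2_eq_square
("a\<^sup>2 = a * a"), which "flip:" turns around to rewrite products
into squares:

  lemma "x * x + y * y = y ^ 2 + (x::nat) ^ 2"
    by (simp add: add.commute flip: power2_eq_square)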
For "auto" and friends the modifier is "simp flip:". *** HOL *** * Sledgehammer: bundled version of "vampire" (for non-commercial users) helps to avoid fragility of "remote_vampire" service. * Clarified relationship of characters, strings and code generation: - Type "char" is now a proper datatype of 8-bit values. - Conversions "nat_of_char" and "char_of_nat" are gone; use more general conversions "of_char" and "char_of" with suitable type constraints instead. - The zero character is just written "CHR 0x00", not "0" any longer. - Type "String.literal" (for code generation) is now isomorphic to lists of 7-bit (ASCII) values; concrete values can be written as "STR ''...''" for sequences of printable characters and "STR 0x..." for one single ASCII code point given as hexadecimal numeral. - Type "String.literal" supports concatenation "... + ..." for all standard target languages. - Theory HOL-Library.Code_Char is gone; study the explanations concerning "String.literal" in the tutorial on code generation to get an idea how target-language string literals can be converted to HOL string values and vice versa. - Session Imperative-HOL: operation "raise" directly takes a value of type "String.literal" as argument, not type "string". INCOMPATIBILITY. * Code generation: Code generation takes an explicit option "case_insensitive" to accomodate case-insensitive file systems. * Abstract bit operations as part of Main: push_bit, take_bit, drop_bit. * New, more general, axiomatization of complete_distrib_lattice. The former axioms: "sup x (Inf X) = Inf (sup x ` X)" and "inf x (Sup X) = Sup (inf x ` X)" are replaced by: "Inf (Sup ` A) <= Sup (Inf ` {f ` A | f . (! Y \ A . f Y \ Y)})" The instantiations of sets and functions as complete_distrib_lattice are moved to Hilbert_Choice.thy because their proofs need the Hilbert choice operator. The dual of this property is also proved in theory HOL.Hilbert_Choice. * New syntax for the minimum/maximum of a function over a finite set: MIN x\A. B and even MIN x. B (only useful for finite types), also MAX. * Clarifed theorem names: Min.antimono ~> Min.subset_imp Max.antimono ~> Max.subset_imp Minor INCOMPATIBILITY. * SMT module: - The 'smt_oracle' option is now necessary when using the 'smt' method with a solver other than Z3. INCOMPATIBILITY. - The encoding to first-order logic is now more complete in the presence of higher-order quantifiers. An 'smt_explicit_application' option has been added to control this. INCOMPATIBILITY. * Facts sum.commute(_restrict) and prod.commute(_restrict) renamed to sum.swap(_restrict) and prod.swap(_restrict), to avoid name clashes on interpretation of abstract locales. INCOMPATIBILITY. * Predicate coprime is now a real definition, not a mere abbreviation. INCOMPATIBILITY. * Predicate pairwise_coprime abolished, use "pairwise coprime" instead. INCOMPATIBILITY. * The relator rel_filter on filters has been strengthened to its canonical categorical definition with better properties. INCOMPATIBILITY. * Generalized linear algebra involving linear, span, dependent, dim from type class real_vector to locales module and vector_space. Renamed: span_inc ~> span_superset span_superset ~> span_base span_eq ~> span_eq_iff INCOMPATIBILITY. * Class linordered_semiring_1 covers zero_less_one also, ruling out pathologic instances. Minor INCOMPATIBILITY. * Theory HOL.List: functions "sorted_wrt" and "sorted" now compare every element in a list to all following elements, not just the next one. * Theory HOL.List syntax: - filter-syntax "[x <- xs. 
P]" is no longer output syntax, but only input syntax - list comprehension syntax now supports tuple patterns in "pat <- xs" * Theory Map: "empty" must now be qualified as "Map.empty". * Removed nat-int transfer machinery. Rare INCOMPATIBILITY. * Fact mod_mult_self4 (on nat) renamed to Suc_mod_mult_self3, to avoid clash with fact mod_mult_self4 (on more generic semirings). INCOMPATIBILITY. * Eliminated some theorem aliasses: even_times_iff ~> even_mult_iff mod_2_not_eq_zero_eq_one_nat ~> not_mod_2_eq_0_eq_1 even_of_nat ~> even_int_iff INCOMPATIBILITY. * Eliminated some theorem duplicate variations: - dvd_eq_mod_eq_0_numeral can be replaced by dvd_eq_mod_eq_0 - mod_Suc_eq_Suc_mod can be replaced by mod_Suc - mod_Suc_eq_Suc_mod [symmetrict] can be replaced by mod_simps - mod_eq_0_iff can be replaced by mod_eq_0_iff_dvd and dvd_def - the witness of mod_eqD can be given directly as "_ div _" INCOMPATIBILITY. * Classical setup: Assumption "m mod d = 0" (for m d :: nat) is no longer aggresively destroyed to "\q. m = d * q". INCOMPATIBILITY, adding "elim!: dvd" to classical proof methods in most situations restores broken proofs. * Theory HOL-Library.Conditional_Parametricity provides command 'parametric_constant' for proving parametricity of non-recursive definitions. For constants that are not fully parametric the command will infer conditions on relations (e.g., bi_unique, bi_total, or type class conditions such as "respects 0") sufficient for parametricity. See theory HOL-ex.Conditional_Parametricity_Examples for some examples. * Theory HOL-Library.Code_Lazy provides a new preprocessor for the code generator to generate code for algebraic types with lazy evaluation semantics even in call-by-value target languages. See the theories HOL-ex.Code_Lazy_Demo and HOL-Codegenerator_Test.Code_Lazy_Test for some examples. * Theory HOL-Library.Landau_Symbols has been moved here from AFP. * Theory HOL-Library.Old_Datatype no longer provides the legacy command 'old_datatype'. INCOMPATIBILITY. * Theory HOL-Computational_Algebra.Polynomial_Factorial does not provide instances of rat, real, complex as factorial rings etc. Import HOL-Computational_Algebra.Field_as_Ring explicitly in case of need. INCOMPATIBILITY. * Session HOL-Algebra: renamed (^) to [^] to avoid conflict with new infix/prefix notation. * Session HOL-Algebra: revamped with much new material. The set of isomorphisms between two groups is now denoted iso rather than iso_set. INCOMPATIBILITY. * Session HOL-Analysis: the Arg function now respects the same interval as Ln, namely (-pi,pi]; the old Arg function has been renamed Arg2pi. INCOMPATIBILITY. * Session HOL-Analysis: the functions zorder, zer_poly, porder and pol_poly have been redefined. All related lemmas have been reworked. INCOMPATIBILITY. * Session HOL-Analysis: infinite products, Moebius functions, the Riemann mapping theorem, the Vitali covering theorem, change-of-variables results for integration and measures. * Session HOL-Real_Asymp: proof method "real_asymp" proves asymptotics or real-valued functions (limits, "Big-O", etc.) automatically. See also ~~/src/HOL/Real_Asymp/Manual for some documentation. * Session HOL-Types_To_Sets: more tool support (unoverload_type combines internalize_sorts and unoverload) and larger experimental application (type based linear algebra transferred to linear algebra on subspaces). *** ML *** * Operation Export.export emits theory exports (arbitrary blobs), which are stored persistently in the session build database. 
* Command 'ML_export' exports ML toplevel bindings to the global
bootstrap environment of the ML process. This allows ML evaluation
without a formal theory context, e.g. in command-line tools like
"isabelle process".


*** System ***

* Mac OS X 10.10 Yosemite is now the baseline version; Mavericks is no
longer supported.

* Linux and Windows/Cygwin is for x86_64 only, old 32bit platform
support has been discontinued.

* Java runtime is for x86_64 only. Corresponding Isabelle settings have
been renamed to ISABELLE_TOOL_JAVA_OPTIONS and JEDIT_JAVA_OPTIONS,
instead of former 32/64 variants. INCOMPATIBILITY.

* Old settings ISABELLE_PLATFORM and ISABELLE_WINDOWS_PLATFORM should
be phased out due to unclear preference of 32bit vs. 64bit
architecture. Explicit GNU bash expressions are now preferred, for
example (with quotes):

  #Posix executables (Unix or Cygwin), with preference for 64bit
  "${ISABELLE_PLATFORM64:-$ISABELLE_PLATFORM32}"

  #native Windows or Unix executables, with preference for 64bit
  "${ISABELLE_WINDOWS_PLATFORM64:-${ISABELLE_WINDOWS_PLATFORM32:-${ISABELLE_PLATFORM64:-$ISABELLE_PLATFORM32}}}"

  #native Windows (32bit) or Unix executables (preference for 64bit)
  "${ISABELLE_WINDOWS_PLATFORM32:-${ISABELLE_PLATFORM64:-$ISABELLE_PLATFORM32}}"

* Command-line tool "isabelle build" supports new options:

  - option -B NAME: include session NAME and all descendants
  - option -S: only observe changes of sources, not heap images
  - option -f: forces a fresh build

* Command-line tool "isabelle build" options -c -x -B refer to
descendants wrt. the session parent or import graph. Subtle
INCOMPATIBILITY: options -c -x used to refer to the session parent
graph only.

* Command-line tool "isabelle build" takes "condition" options with the
corresponding environment values into account, when determining the
up-to-date status of a session.

* The command-line tool "dump" dumps information from the cumulative
PIDE session database: many sessions may be loaded into a given logic
image, results from all loaded theories are written to the output
directory.

* Command-line tool "isabelle imports -I" also reports actual session
imports. This helps to minimize the session dependency graph.

* The command-line tool "export" and 'export_files' in session ROOT
entries retrieve theory exports from the session build database.

* The command-line tools "isabelle server" and "isabelle client"
provide access to the Isabelle Server: it supports responsive session
management and concurrent use of theories, based on Isabelle/PIDE
infrastructure. See also the "system" manual.

* The command-line tool "isabelle update_comments" normalizes formal
comments in outer syntax as follows: \<comment> \<open>text\<close> (with a
single space to approximate the appearance in document output). This is
more specific than former "isabelle update_cartouches -c": the latter
tool option has been discontinued.

* The command-line tool "isabelle mkroot" now always produces a
document outline: its options have been adapted accordingly.
INCOMPATIBILITY.

* The command-line tool "isabelle mkroot -I" initializes a Mercurial
repository for the generated session files.

* Settings ISABELLE_HEAPS + ISABELLE_BROWSER_INFO (or
ISABELLE_HEAPS_SYSTEM + ISABELLE_BROWSER_INFO_SYSTEM in "system build
mode") determine the directory locations of the main build artefacts --
instead of hard-wired directories in ISABELLE_HOME_USER (or
ISABELLE_HOME).
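For example, a user could redirect these artefacts via
$ISABELLE_HOME_USER/etc/settings (the directory names below are merely
illustrative):

  ISABELLE_HEAPS="$USER_HOME/isabelle_heaps"
  ISABELLE_BROWSER_INFO="$USER_HOME/isabelle_browser_info"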
* Settings ISABELLE_PATH and ISABELLE_OUTPUT have been discontinued:
heap images and session databases are always stored in
$ISABELLE_HEAPS/$ML_IDENTIFIER (command-line default) or
$ISABELLE_HEAPS_SYSTEM/$ML_IDENTIFIER (main Isabelle application or
"isabelle jedit -s" or "isabelle build -s").

* ISABELLE_LATEX and ISABELLE_PDFLATEX now include platform-specific
options for improved error reporting. Potential INCOMPATIBILITY with
unusual LaTeX installations, may have to adapt these settings.

* Update to Poly/ML 5.7.1 with slightly improved performance and PIDE
markup for identifier bindings. It now uses the GNU Multiple Precision
Arithmetic Library (libgmp) on all platforms, notably Mac OS X with
32/64 bit.


New in Isabelle2017 (October 2017)
----------------------------------

*** General ***

* Experimental support for Visual Studio Code (VSCode) as alternative
Isabelle/PIDE front-end, see also
https://marketplace.visualstudio.com/items?itemName=makarius.Isabelle2017

VSCode is a new type of application that continues the concepts of
"programmer's editor" and "integrated development environment" towards
fully semantic editing and debugging -- in a relatively light-weight
manner. Thus it fits nicely on top of the Isabelle/PIDE infrastructure.
Technically, VSCode is based on the Electron application framework
(Node.js + Chromium browser + V8), which is implemented in JavaScript
and TypeScript, while Isabelle/VSCode mainly consists of Isabelle/Scala
modules around a Language Server implementation.

* Theory names are qualified by the session name that they belong to.
This affects imports, but not the theory name space prefix (which is
just the theory base name as before). In order to import theories from
other sessions, the ROOT file format provides a new 'sessions' keyword.
In contrast, a theory that is imported in the old-fashioned manner via
an explicit file-system path belongs to the current session, and might
cause theory name conflicts later on. Theories that are imported from
other sessions are excluded from the current session document. The
command-line tool "isabelle imports" helps to update theory imports.

* The main theory entry points for some non-HOL sessions have changed,
to avoid confusion with the global name "Main" of the session HOL. This
leads to the following renamings:

  CTT/Main.thy ~> CTT/CTT.thy
  ZF/Main.thy ~> ZF/ZF.thy
  ZF/Main_ZF.thy ~> ZF/ZF.thy
  ZF/Main_ZFC.thy ~> ZF/ZFC.thy
  ZF/ZF.thy ~> ZF/ZF_Base.thy

INCOMPATIBILITY.

* Commands 'alias' and 'type_alias' introduce aliases for constants and
type constructors, respectively. This allows adhoc changes to
name-space accesses within global or local theory contexts, e.g. within
a 'bundle'.

* Document antiquotations @{prf} and @{full_prf} output proof terms
(again) in the same way as commands 'prf' and 'full_prf'.

* Computations generated by the code generator can be embedded directly
into ML, alongside with @{code} antiquotations, using the following
antiquotations:

  @{computation ... terms: ... datatypes: ...} :
    ((term -> term) -> 'ml option -> 'a) -> Proof.context -> term -> 'a
  @{computation_conv ... terms: ... datatypes: ...} :
    (Proof.context -> 'ml -> conv) -> Proof.context -> conv
  @{computation_check terms: ... datatypes: ...} : Proof.context -> conv

See src/HOL/ex/Computations.thy,
src/HOL/Decision_Procs/Commutative_Ring.thy and
src/HOL/Decision_Procs/Reflective_Field.thy for examples and the
tutorial on code generation.
*** Prover IDE -- Isabelle/Scala/jEdit ***

* Session-qualified theory imports allow the Prover IDE to process
arbitrary theory hierarchies independently of the underlying logic
session image (e.g. option "isabelle jedit -l"), but the directory
structure needs to be known in advance (e.g. option "isabelle jedit -d"
or a line in the file $ISABELLE_HOME_USER/ROOTS).

* The PIDE document model maintains file content independently of the
status of jEdit editor buffers. Reloading jEdit buffers no longer
causes changes of formal document content. Theory dependencies are
always resolved internally, without the need for corresponding editor
buffers. The system option "jedit_auto_load" has been discontinued: it
is effectively always enabled.

* The Theories dockable provides a "Purge" button, in order to restrict
the document model to theories that are required for open editor
buffers.

* The Theories dockable indicates the overall status of checking of
each entry. When all forked tasks of a theory are finished, the border
is painted with thick lines; remaining errors in this situation are
represented by a different border color.

* Automatic indentation is more careful to avoid redundant spaces in
intermediate situations. Keywords are indented after input (via typed
characters or completion); see also option "jedit_indent_input".

* Action "isabelle.preview" opens an HTML preview of the current theory
document in the default web browser.

* Command-line invocation "isabelle jedit -R -l LOGIC" opens the ROOT
entry of the specified logic session in the editor, while its parent is
used for formal checking.

* The main Isabelle/jEdit plugin may be restarted manually (using the
jEdit Plugin Manager), as long as the "Isabelle Base" plugin remains
enabled at all times.

* Update to current jedit-5.4.0.


*** Pure ***

* Deleting the last code equations for a particular function using
[code del] results in function with no equations (runtime abort) rather
than an unimplemented function (generation time abort). Use explicit
[[code drop:]] to enforce the latter. Minor INCOMPATIBILITY.

* Proper concept of code declarations in code.ML:

  - Regular code declarations act only on the global theory level,
    being ignored with warnings if syntactically malformed.

  - Explicitly global code declarations yield errors if syntactically
    malformed.

  - Default code declarations are silently ignored if syntactically
    malformed.

Minor INCOMPATIBILITY.

* Clarified and standardized internal data bookkeeping of code
declarations: history of serials allows to track potentially
non-monotonous declarations appropriately. Minor INCOMPATIBILITY.


*** HOL ***

* The Nunchaku model finder is now part of "Main".

* SMT module:

  - A new option, 'smt_nat_as_int', has been added to translate 'nat'
    to 'int' and benefit from the SMT solver's theory reasoning. It is
    disabled by default.

  - The legacy module "src/HOL/Library/Old_SMT.thy" has been removed.

  - Several small issues have been rectified in the 'smt' command.

* (Co)datatype package: The 'size_gen_o_map' lemma is no longer
generated for datatypes with type class annotations. As a result, the
tactic that derives it no longer fails on nested datatypes. Slight
INCOMPATIBILITY.

* Command and antiquotation "value" with modified default strategy:
terms without free variables are always evaluated using plain
evaluation only, with no fallback on normalization by evaluation. Minor
INCOMPATIBILITY.

* Theories "GCD" and "Binomial" are already included in "Main" (instead
of "Complex_Main").
* Constant "surj" is a full input/output abbreviation (again). Minor INCOMPATIBILITY. * Dropped aliasses RangeP, DomainP for Rangep, Domainp respectively. INCOMPATIBILITY. * Renamed ii to imaginary_unit in order to free up ii as a variable name. The syntax \ remains available. INCOMPATIBILITY. * Dropped abbreviations transP, antisymP, single_valuedP; use constants transp, antisymp, single_valuedp instead. INCOMPATIBILITY. * Constant "subseq" in Topological_Spaces has been removed -- it is subsumed by "strict_mono". Some basic lemmas specific to "subseq" have been renamed accordingly, e.g. "subseq_o" -> "strict_mono_o" etc. * Theory List: "sublist" renamed to "nths" in analogy with "nth", and "sublisteq" renamed to "subseq". Minor INCOMPATIBILITY. * Theory List: new generic function "sorted_wrt". * Named theorems mod_simps covers various congruence rules concerning mod, replacing former zmod_simps. INCOMPATIBILITY. * Swapped orientation of congruence rules mod_add_left_eq, mod_add_right_eq, mod_add_eq, mod_mult_left_eq, mod_mult_right_eq, mod_mult_eq, mod_minus_eq, mod_diff_left_eq, mod_diff_right_eq, mod_diff_eq. INCOMPATIBILITY. * Generalized some facts: measure_induct_rule measure_induct zminus_zmod ~> mod_minus_eq zdiff_zmod_left ~> mod_diff_left_eq zdiff_zmod_right ~> mod_diff_right_eq zmod_eq_dvd_iff ~> mod_eq_dvd_iff INCOMPATIBILITY. * Algebraic type class hierarchy of euclidean (semi)rings in HOL: euclidean_(semi)ring, euclidean_(semi)ring_cancel, unique_euclidean_(semi)ring; instantiation requires provision of a euclidean size. * Theory "HOL-Number_Theory.Euclidean_Algorithm" has been reworked: - Euclidean induction is available as rule eucl_induct. - Constants Euclidean_Algorithm.gcd, Euclidean_Algorithm.lcm, Euclidean_Algorithm.Gcd and Euclidean_Algorithm.Lcm allow easy instantiation of euclidean (semi)rings as GCD (semi)rings. - Coefficients obtained by extended euclidean algorithm are available as "bezout_coefficients". INCOMPATIBILITY. * Theory "Number_Theory.Totient" introduces basic notions about Euler's totient function previously hidden as solitary example in theory Residues. Definition changed so that "totient 1 = 1" in agreement with the literature. Minor INCOMPATIBILITY. * New styles in theory "HOL-Library.LaTeXsugar": - "dummy_pats" for printing equations with "_" on the lhs; - "eta_expand" for printing eta-expanded terms. * Theory "HOL-Library.Permutations": theorem bij_swap_ompose_bij has been renamed to bij_swap_compose_bij. INCOMPATIBILITY. * New theory "HOL-Library.Going_To_Filter" providing the "f going_to F" filter for describing points x such that f(x) is in the filter F. * Theory "HOL-Library.Formal_Power_Series": constants X/E/L/F have been renamed to fps_X/fps_exp/fps_ln/fps_hypergeo to avoid polluting the name space. INCOMPATIBILITY. * Theory "HOL-Library.FinFun" has been moved to AFP (again). INCOMPATIBILITY. * Theory "HOL-Library.FuncSet": some old and rarely used ASCII replacement syntax has been removed. INCOMPATIBILITY, standard syntax with symbols should be used instead. The subsequent commands help to reproduce the old forms, e.g. 
* Theory "HOL-Library.Multiset": the simprocs on subset operators of
multisets have been renamed:

  msetless_cancel_numerals ~> msetsubset_cancel
  msetle_cancel_numerals ~> msetsubset_eq_cancel

INCOMPATIBILITY.

* Theory "HOL-Library.Pattern_Aliases" provides input and output syntax
for pattern aliases as known from Haskell, Scala and ML.

* Theory "HOL-Library.Uprod" formalizes the type of unordered pairs.

* Session HOL-Analysis: more material involving arcs, paths, covering
spaces, innessential maps, retracts, infinite products, simplicial
complexes. Baire Category theorem. Major results include the Jordan
Curve Theorem and the Great Picard Theorem.

* Session HOL-Algebra has been extended by additional lattice theory:
the Knaster-Tarski fixed point theorem and Galois Connections.

* Sessions HOL-Computational_Algebra and HOL-Number_Theory: new notions
of squarefreeness, n-th powers, and prime powers.

* Session "HOL-Computational_Algebra" covers many previously scattered
theories, notably Euclidean_Algorithm, Factorial_Ring,
Formal_Power_Series, Fraction_Field, Fundamental_Theorem_Algebra,
Normalized_Fraction, Polynomial_FPS, Polynomial, Primes. Minor
INCOMPATIBILITY.


*** System ***

* Isabelle/Scala: the SQL module supports access to relational
databases, either as plain file (SQLite) or full-scale server
(PostgreSQL via local port or remote ssh connection).

* Results of "isabelle build" are recorded as SQLite database (i.e.
"Application File Format" in the sense of
https://www.sqlite.org/appfileformat.html). This allows systematic
access via operations from module Sessions.Store in Isabelle/Scala.

* System option "parallel_proofs" is 1 by default (instead of more
aggressive 2). This requires less heap space and avoids burning
parallel CPU cycles, while full subproof parallelization is enabled for
repeated builds (according to parallel_subproofs_threshold).

* System option "record_proofs" allows to change the global
Proofterm.proofs variable for a session. Regular values are 0, 1, 2; a
negative value means the current state in the ML heap image remains
unchanged.

* Isabelle settings variable ISABELLE_SCALA_BUILD_OPTIONS has been
renamed to ISABELLE_SCALAC_OPTIONS. Rare INCOMPATIBILITY.

* Isabelle settings variables ISABELLE_WINDOWS_PLATFORM,
ISABELLE_WINDOWS_PLATFORM32, ISABELLE_WINDOWS_PLATFORM64 indicate the
native Windows platform (independently of the Cygwin installation).
This is analogous to ISABELLE_PLATFORM, ISABELLE_PLATFORM32,
ISABELLE_PLATFORM64.

* Command-line tool "isabelle build_docker" builds a Docker image from
the Isabelle application bundle for Linux. See also
https://hub.docker.com/r/makarius/isabelle

* Command-line tool "isabelle vscode_server" provides a Language Server
Protocol implementation, e.g. for the Visual Studio Code editor. It
serves as example for alternative PIDE front-ends.

* Command-line tool "isabelle imports" helps to maintain theory imports
wrt. session structure. Examples for the main Isabelle distribution:

  isabelle imports -I -a
  isabelle imports -U -a
  isabelle imports -U -i -a
  isabelle imports -M -a -d '~~/src/Benchmarks'
New in Isabelle2016-1 (December 2016)
-------------------------------------

*** General ***

* Splitter in proof methods "simp", "auto" and friends:

  - The syntax "split add" has been discontinued, use plain "split",
    INCOMPATIBILITY.

  - For situations with many conditional or case expressions, there is
    an alternative splitting strategy that can be much faster. It is
    selected by writing "split!" instead of "split". It applies safe
    introduction and elimination rules after each split rule. As a
    result the subgoal may be split into several subgoals.

* Command 'bundle' provides a local theory target to define a bundle
from the body of specification commands (such as 'declare',
'declaration', 'notation', 'lemmas', 'lemma'). For example:

  bundle foo
  begin
    declare a [simp]
    declare b [intro]
  end

* Command 'unbundle' is like 'include', but works within a local theory
context. Unlike "context includes ... begin", the effect of 'unbundle'
on the target context persists, until different declarations are given.

* Simplified outer syntax: uniform category "name" includes long
identifiers. Former "xname" / "nameref" / "name reference" has been
discontinued.

* Embedded content (e.g. the inner syntax of types, terms, props) may
be delimited uniformly via cartouches. This works better than
old-fashioned quotes when sub-languages are nested.

* Mixfix annotations support general block properties, with syntax
"(\<open>x=a y=b z ...\<close>". Notable property names are "indent",
"consistent", "unbreakable", "markup". The existing notation "(DIGITS"
is equivalent to "(\<open>indent=DIGITS\<close>". The former notation
"(00" for unbreakable blocks is superseded by
"(\<open>unbreakable\<close>" --- rare INCOMPATIBILITY.

* Proof method "blast" is more robust wrt. corner cases of Pure
statements without object-logic judgment.

* Commands 'prf' and 'full_prf' are somewhat more informative (again):
proof terms are reconstructed and cleaned from administrative thm
nodes.

* Code generator: config option "code_timing" triggers measurements of
different phases of code generation. See src/HOL/ex/Code_Timing.thy for
examples.

* Code generator: implicits in Scala (stemming from type class
instances) are generated into companion object of corresponding type
class, to resolve some situations where ambiguities may occur.

* Solve direct: option "solve_direct_strict_warnings" gives explicit
warnings for lemma statements with trivial proofs.


*** Prover IDE -- Isabelle/Scala/jEdit ***

* More aggressive flushing of machine-generated input, according to
system option editor_generated_input_delay (in addition to existing
editor_input_delay for regular user edits). This may affect overall
PIDE reactivity and CPU usage.

* Syntactic indentation according to Isabelle outer syntax. Action
"indent-lines" (shortcut C+i) indents the current line according to
command keywords and some command substructure. Action
"isabelle.newline" (shortcut ENTER) indents the old and the new line
according to command keywords only; see also option
"jedit_indent_newline".

* Semantic indentation for unstructured proof scripts ('apply' etc.)
via number of subgoals. This requires information of ongoing document
processing and may thus lag behind, when the user is editing too
quickly; see also option "jedit_script_indent" and
"jedit_script_indent_limit".
* Refined folding mode "isabelle" based on Isar syntax: 'next' and
'qed' are treated as delimiters for fold structure; 'begin' and 'end'
structure of theory specifications is treated as well.

* Command 'proof' provides information about proof outline with cases,
e.g. for proof methods "cases", "induct", "goal_cases".

* Completion templates for commands involving "begin ... end" blocks,
e.g. 'context', 'notepad'.

* Sidekick parser "isabelle-context" shows nesting of context blocks
according to 'begin' and 'end' structure.

* Highlighting of entity def/ref positions wrt. cursor.

* Action "isabelle.select-entity" (shortcut CS+ENTER) selects all
occurrences of the formal entity at the caret position. This
facilitates systematic renaming.

* PIDE document markup works across multiple Isar commands, e.g. the
results established at the end of a proof are properly identified in
the theorem statement.

* Cartouche abbreviations work both for " and ` to accommodate typical
situations where old ASCII notation may be updated.

* Dockable window "Symbols" also provides access to 'abbrevs' from the
outer syntax of the current theory buffer. This provides clickable
syntax templates, including entries with empty abbrevs name (which are
inaccessible via keyboard completion).

* IDE support for the Isabelle/Pure bootstrap process, with the
following independent stages:

  src/Pure/ROOT0.ML
  src/Pure/ROOT.ML
  src/Pure/Pure.thy
  src/Pure/ML_Bootstrap.thy

The ML ROOT files act like quasi-theories in the context of theory
ML_Bootstrap: this allows continuous checking of all loaded ML files.
The theory files are presented with a modified header to import Pure
from the running Isabelle instance. Results from changed versions of
each stage are *not* propagated to the next stage, and isolated from
the actual Isabelle/Pure that runs the IDE itself. The sequential
dependencies of the above files are only observed for batch build.

* Isabelle/ML and Standard ML files are presented in Sidekick with the
tree structure of section headings: this special comment format is
described in "implementation" chapter 0, e.g. (*** section ***).

* Additional abbreviations for syntactic completion may be specified
within the theory header as 'abbrevs'. The theory syntax for 'keywords'
has been simplified accordingly: optional abbrevs need to go into the
new 'abbrevs' section.

* Global abbreviations via $ISABELLE_HOME/etc/abbrevs and
$ISABELLE_HOME_USER/etc/abbrevs are no longer supported. Minor
INCOMPATIBILITY, use 'abbrevs' within theory header instead.

* Action "isabelle.keymap-merge" asks the user to resolve pending
Isabelle keymap changes that are in conflict with the current jEdit
keymap; non-conflicting changes are always applied implicitly. This
action is automatically invoked on Isabelle/jEdit startup and thus
increases chances that users see new keyboard shortcuts when re-using
old keymaps.

* ML and document antiquotations for file-systems paths are more
uniform and diverse:

  @{path NAME}  -- no file-system check
  @{file NAME}  -- check for plain file
  @{dir NAME}   -- check for directory

Minor INCOMPATIBILITY, former uses of @{file} and @{file_unchecked} may
have to be changed.


*** Document preparation ***

* New symbol \<circle>, e.g. for temporal operator.

* New document and ML antiquotation @{locale} for locales, similar to
existing antiquotation @{class}; see the sketch below.

* Mixfix annotations support delimiters like \<^control>\<open>cartouche\<close>
-- this allows special forms of document output.
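A minimal sketch of the @{locale} antiquotation mentioned above,
referring to the existing HOL locale "semigroup":

  text \<open>The locale @{locale semigroup} fixes an associative operation.\<close>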
* Raw LaTeX output now works via \<^latex>\<open>...\<close> instead of raw
control symbol \<^raw:...>. INCOMPATIBILITY, notably for LaTeXsugar.thy
and its derivatives.

* \<^raw:...> symbols are no longer supported.

* Old 'header' command is no longer supported (legacy since
Isabelle2015).


*** Isar ***

* Many specification elements support structured statements with 'if' /
'for' eigen-context, e.g. 'axiomatization', 'abbreviation',
'definition', 'inductive', 'function'.

* Toplevel theorem statements support eigen-context notation with 'if'
/ 'for' (in postfix), which corresponds to 'assumes' / 'fixes' in the
traditional long statement form (in prefix). Local premises are called
"that" or "assms", respectively. Empty premises are *not* bound in the
context: INCOMPATIBILITY.

* Command 'define' introduces a local (non-polymorphic) definition,
with optional abstraction over local parameters. The syntax resembles
'definition' and 'obtain'. It fits better into the Isar language than
old 'def', which is now a legacy feature.

* Command 'obtain' supports structured statements with 'if' / 'for'
context.

* Command '\<proof>' is an alias for 'sorry', with different
typesetting. E.g. to produce proof holes in examples and documentation.

* The defining position of a literal fact \<open>prop\<close> is maintained
more carefully, and made accessible as hyperlink in the Prover IDE.

* Commands 'finally' and 'ultimately' used to expose the result as
literal fact: this accidental behaviour has been discontinued. Rare
INCOMPATIBILITY, use more explicit means to refer to facts in Isar.

* Command 'axiomatization' has become more restrictive to correspond
better to internal axioms as singleton facts with mandatory name. Minor
INCOMPATIBILITY.

* Proof methods may refer to the main facts via the dynamic fact
"method_facts". This is particularly useful for Eisbach method
definitions.

* Proof method "use" allows to modify the main facts of a given method
expression, e.g.

  (use facts in simp)
  (use facts in \<open>simp add: ...\<close>)

* The old proof method "default" has been removed (legacy since
Isabelle2016). INCOMPATIBILITY, use "standard" instead.


*** Pure ***

* Pure provides basic versions of proof methods "simp" and "simp_all"
that only know about meta-equality (==). Potential INCOMPATIBILITY in
theory imports that merge Pure with e.g. Main of Isabelle/HOL: the
order is relevant to avoid confusion of Pure.simp vs. HOL.simp.

* The command 'unfolding' and proof method "unfold" include a second
stage where given equations are passed through the attribute "abs_def"
before rewriting. This ensures that definitions are fully expanded,
regardless of the actual parameters that are provided. Rare
INCOMPATIBILITY in some corner cases: use proof method (simp only:)
instead, or declare [[unfold_abs_def = false]] in the proof context.

* Type-inference improves sorts of newly introduced type variables for
the object-logic, using its base sort (i.e. HOL.type for Isabelle/HOL).
Thus terms like "f x" or "\<lambda>x. P x" without any further syntactic
context produce x::'a::type in HOL instead of x::'a::{} in Pure. Rare
INCOMPATIBILITY, need to provide explicit type constraints for Pure
types where this is really intended.


*** HOL ***

* New proof method "argo" using the built-in Argo solver based on SMT
technology. The method can be used to prove goals of quantifier-free
propositional logic, goals based on a combination of quantifier-free
propositional logic with equality, and goals based on a combination of
quantifier-free propositional logic with linear real arithmetic
including min/max/abs. See HOL/ex/Argo_Examples.thy for examples.
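For example, small sketches in the propositional and the linear real
arithmetic fragments:

  lemma "(P \<longrightarrow> Q) \<longrightarrow> \<not> Q \<longrightarrow> \<not> P"
    by argo

  lemma "min x (y::real) \<le> max x y"
    by argo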
* The new "nunchaku" command integrates the Nunchaku model finder. The
tool is experimental. See ~~/src/HOL/Nunchaku/Nunchaku.thy for details.

* Metis: The problem encoding has changed very slightly. This might
break existing proofs. INCOMPATIBILITY.

* Sledgehammer:

  - The MaSh relevance filter is now faster than before.

  - Produce syntactically correct Vampire 4.0 problem files.

* (Co)datatype package:

  - New commands for defining corecursive functions and reasoning about
    them in "~~/src/HOL/Library/BNF_Corec.thy": 'corec', 'corecursive',
    'friend_of_corec', and 'corecursion_upto'; and 'corec_unique' proof
    method. See 'isabelle doc corec'.

  - The predicator :: ('a \<Rightarrow> bool) \<Rightarrow> 'a F \<Rightarrow> bool
    is now a first-class citizen in bounded natural functors.

  - 'primrec' now allows nested calls through the predicator in
    addition to the map function.

  - 'bnf' automatically discharges reflexive proof obligations.

  - 'bnf' outputs a slightly modified proof obligation expressing rel
    in terms of map and set (not giving a specification for rel makes
    this one reflexive).

  - 'bnf' outputs a new proof obligation expressing pred in terms of
    set (not giving a specification for pred makes this one reflexive).

  INCOMPATIBILITY: manual 'bnf' declarations may need adjustment.

  - Renamed lemmas:

      rel_prod_apply ~> rel_prod_inject
      pred_prod_apply ~> pred_prod_inject

    INCOMPATIBILITY.

  - The "size" plugin has been made compatible again with locales.

  - The theorems about "rel" and "set" may have a slightly different
    (but equivalent) form. INCOMPATIBILITY.

* The 'coinductive' command produces a proper coinduction rule for
mutual coinductive predicates. This new rule replaces the old rule,
which exposed details of the internal fixpoint construction and was
hard to use. INCOMPATIBILITY.

* New abbreviations for negated existence (but not bounded existence):

  \<nexists>x. P x \<equiv> \<not> (\<exists>x. P x)
  \<nexists>!x. P x \<equiv> \<not> (\<exists>!x. P x)

* The print mode "HOL" for ASCII syntax of binders "!", "?", "?!", "@"
has been removed for output. It is retained for input only, until it is
eliminated altogether.

* The unique existence quantifier no longer provides 'binder' syntax,
but uses syntax translations (as for bounded unique existence). Thus
iterated quantification \<exists>!x y. P x y with its slightly confusing
sequential meaning \<exists>!x. \<exists>!y. P x y is no longer possible.
Instead, pattern abstraction admits simultaneous unique existence
\<exists>!(x, y). P x y (analogous to existing notation
\<exists>!(x, y)\<in>A. P x y). Potential INCOMPATIBILITY in rare
situations.

* Conventional syntax "%(). t" for unit abstractions. Slight syntactic
INCOMPATIBILITY.

* Renamed constants and corresponding theorems:

  setsum ~> sum
  setprod ~> prod
  listsum ~> sum_list
  listprod ~> prod_list

INCOMPATIBILITY.
* Slightly more standardized theorem names:

  sgn_times ~> sgn_mult
  sgn_mult' ~> Real_Vector_Spaces.sgn_mult
  divide_zero_left ~> div_0
  zero_mod_left ~> mod_0
  divide_zero ~> div_by_0
  divide_1 ~> div_by_1
  nonzero_mult_divide_cancel_left ~> nonzero_mult_div_cancel_left
  div_mult_self1_is_id ~> nonzero_mult_div_cancel_left
  nonzero_mult_divide_cancel_right ~> nonzero_mult_div_cancel_right
  div_mult_self2_is_id ~> nonzero_mult_div_cancel_right
  is_unit_divide_mult_cancel_left ~> is_unit_div_mult_cancel_left
  is_unit_divide_mult_cancel_right ~> is_unit_div_mult_cancel_right
  mod_div_equality ~> div_mult_mod_eq
  mod_div_equality2 ~> mult_div_mod_eq
  mod_div_equality3 ~> mod_div_mult_eq
  mod_div_equality4 ~> mod_mult_div_eq
  minus_div_eq_mod ~> minus_div_mult_eq_mod
  minus_div_eq_mod2 ~> minus_mult_div_eq_mod
  minus_mod_eq_div ~> minus_mod_eq_div_mult
  minus_mod_eq_div2 ~> minus_mod_eq_mult_div
  div_mod_equality' ~> minus_mod_eq_div_mult [symmetric]
  mod_div_equality' ~> minus_div_mult_eq_mod [symmetric]
  zmod_zdiv_equality ~> mult_div_mod_eq [symmetric]
  zmod_zdiv_equality' ~> minus_div_mult_eq_mod [symmetric]
  Divides.mult_div_cancel ~> minus_mod_eq_mult_div [symmetric]
  mult_div_cancel ~> minus_mod_eq_mult_div [symmetric]
  zmult_div_cancel ~> minus_mod_eq_mult_div [symmetric]
  div_1 ~> div_by_Suc_0
  mod_1 ~> mod_by_Suc_0

INCOMPATIBILITY.

* New type class "idom_abs_sgn" specifies algebraic properties of sign
and absolute value functions. Type class "sgn_if" has disappeared.
Slight INCOMPATIBILITY.

* Dedicated syntax LENGTH('a) for length of types.

* Characters (type char) are modelled as finite algebraic type
corresponding to {0..255}.

  - Logical representation:

    * 0 is instantiated to the ASCII zero character.

    * All other characters are represented as "Char n" with n being a
      raw numeral expression less than 256.

    * Expressions of the form "Char n" with n greater than 255 are
      non-canonical.

  - Printing and parsing:

    * Printable characters are printed and parsed as "CHR ''\''" (as
      before).

    * The ASCII zero character is printed and parsed as "0".

    * All other canonical characters are printed as "CHR 0xXX" with XX
      being the hexadecimal character code. "CHR n" is parsable for
      every numeral expression n.

    * Non-canonical characters have no special syntax and are printed
      as their logical representation.

  - Explicit conversions from and to the natural numbers are provided
    as char_of_nat, nat_of_char (as before).

  - The auxiliary nibble type has been discontinued.

INCOMPATIBILITY.

* Type class "div" with operation "mod" renamed to type class "modulo"
with operation "modulo", analogously to type class "divide". This
eliminates the need to qualify any of those names in the presence of
infix "mod" syntax. INCOMPATIBILITY.

* Statements and proofs of Knaster-Tarski fixpoint combinators lfp/gfp
have been clarified. The fixpoint properties are lfp_fixpoint, its
symmetric lfp_unfold (as before), and the duals for gfp. Auxiliary
items for the proof (lfp_lemma2 etc.) are no longer exported, but can
be easily recovered by composition with eq_refl. Minor INCOMPATIBILITY.

* Constant "surj" is a mere input abbreviation, to avoid hiding an
equation in term output. Minor INCOMPATIBILITY.

* Command 'code_reflect' accepts empty constructor lists for datatypes,
which renders those abstract effectively.

* Command 'export_code' checks given constants for abstraction
violations: a small guarantee that given constants specify a safe
interface for the generated code.

* Code generation for Scala: ambiguous implicits in class diagrams are
spelt out explicitly.
* Static evaluators (Code_Evaluation.static_* in Isabelle/ML) rely on
explicitly provided auxiliary definitions for required type class
dictionaries rather than half-working magic. INCOMPATIBILITY, see the
tutorial on code generation for details.

* Theory Set_Interval: substantial new theorems on indexed sums and
products.

* Locale bijection establishes convenient default simp rules such as
"inv f (f a) = a" for total bijections.

* Abstract locales semigroup, abel_semigroup, semilattice,
semilattice_neutr, ordering, ordering_top, semilattice_order,
semilattice_neutr_order, comm_monoid_set, semilattice_set,
semilattice_neutr_set, semilattice_order_set,
semilattice_order_neutr_set, monoid_list, comm_monoid_list,
comm_monoid_list_set, comm_monoid_mset, comm_monoid_fun use boldified
syntax uniformly that does not clash with corresponding global syntax.
INCOMPATIBILITY.

* Former locale lifting_syntax is now a bundle, which is easier to
include in a local context or theorem statement, e.g. "context includes
lifting_syntax begin ... end". Minor INCOMPATIBILITY.

* Some old / obsolete theorems have been renamed / removed, potential
INCOMPATIBILITY.

  nat_less_cases -- removed, use linorder_cases instead
  inv_image_comp -- removed, use image_inv_f_f instead
  image_surj_f_inv_f ~> image_f_inv_f

* Some theorems about groups and orders have been generalised from
groups to semi-groups that are also monoids:

  le_add_same_cancel1
  le_add_same_cancel2
  less_add_same_cancel1
  less_add_same_cancel2
  add_le_same_cancel1
  add_le_same_cancel2
  add_less_same_cancel1
  add_less_same_cancel2

* Some simplification theorems about rings have been removed, since
superseded by a more general version:

  less_add_cancel_left_greater_zero ~> less_add_same_cancel1
  less_add_cancel_right_greater_zero ~> less_add_same_cancel2
  less_eq_add_cancel_left_greater_eq_zero ~> le_add_same_cancel1
  less_eq_add_cancel_right_greater_eq_zero ~> le_add_same_cancel2
  less_eq_add_cancel_left_less_eq_zero ~> add_le_same_cancel1
  less_eq_add_cancel_right_less_eq_zero ~> add_le_same_cancel2
  less_add_cancel_left_less_zero ~> add_less_same_cancel1
  less_add_cancel_right_less_zero ~> add_less_same_cancel2

INCOMPATIBILITY.

* Renamed split_if -> if_split and split_if_asm -> if_split_asm to
resemble the f.split naming convention, INCOMPATIBILITY.

* Added class topological_monoid.

* The following theorems have been renamed:

  setsum_left_distrib ~> sum_distrib_right
  setsum_right_distrib ~> sum_distrib_left

INCOMPATIBILITY.

* Compound constants INFIMUM and SUPREMUM are mere abbreviations now.
INCOMPATIBILITY.

* "Gcd (f ` A)" and "Lcm (f ` A)" are printed with optional
comprehension-like syntax analogously to "Inf (f ` A)" and
"Sup (f ` A)".

* Class semiring_Lcd merged into semiring_Gcd. INCOMPATIBILITY.

* The type class ordered_comm_monoid_add is now called
ordered_cancel_comm_monoid_add. A new type class
ordered_comm_monoid_add is introduced as the combination of
ordered_ab_semigroup_add + comm_monoid_add. INCOMPATIBILITY.

* Introduced the type classes canonically_ordered_comm_monoid_add and
dioid.

* Introduced the type class ordered_ab_semigroup_monoid_add_imp_le.
When instantiating linordered_semiring_strict and ordered_ab_group_add,
an explicit instantiation of ordered_ab_semigroup_monoid_add_imp_le
might be required. INCOMPATIBILITY.
* Dropped various legacy fact bindings, whose replacements are often of
a more general type also:

  lcm_left_commute_nat ~> lcm.left_commute
  lcm_left_commute_int ~> lcm.left_commute
  gcd_left_commute_nat ~> gcd.left_commute
  gcd_left_commute_int ~> gcd.left_commute
  gcd_greatest_iff_nat ~> gcd_greatest_iff
  gcd_greatest_iff_int ~> gcd_greatest_iff
  coprime_dvd_mult_nat ~> coprime_dvd_mult
  coprime_dvd_mult_int ~> coprime_dvd_mult
  zpower_numeral_even ~> power_numeral_even
  gcd_mult_cancel_nat ~> gcd_mult_cancel
  gcd_mult_cancel_int ~> gcd_mult_cancel
  div_gcd_coprime_nat ~> div_gcd_coprime
  div_gcd_coprime_int ~> div_gcd_coprime
  zpower_numeral_odd ~> power_numeral_odd
  zero_less_int_conv ~> of_nat_0_less_iff
  gcd_greatest_nat ~> gcd_greatest
  gcd_greatest_int ~> gcd_greatest
  coprime_mult_nat ~> coprime_mult
  coprime_mult_int ~> coprime_mult
  lcm_commute_nat ~> lcm.commute
  lcm_commute_int ~> lcm.commute
  int_less_0_conv ~> of_nat_less_0_iff
  gcd_commute_nat ~> gcd.commute
  gcd_commute_int ~> gcd.commute
  Gcd_insert_nat ~> Gcd_insert
  Gcd_insert_int ~> Gcd_insert
  of_int_int_eq ~> of_int_of_nat_eq
  lcm_least_nat ~> lcm_least
  lcm_least_int ~> lcm_least
  lcm_assoc_nat ~> lcm.assoc
  lcm_assoc_int ~> lcm.assoc
  int_le_0_conv ~> of_nat_le_0_iff
  int_eq_0_conv ~> of_nat_eq_0_iff
  Gcd_empty_nat ~> Gcd_empty
  Gcd_empty_int ~> Gcd_empty
  gcd_assoc_nat ~> gcd.assoc
  gcd_assoc_int ~> gcd.assoc
  zero_zle_int ~> of_nat_0_le_iff
  lcm_dvd2_nat ~> dvd_lcm2
  lcm_dvd2_int ~> dvd_lcm2
  lcm_dvd1_nat ~> dvd_lcm1
  lcm_dvd1_int ~> dvd_lcm1
  gcd_zero_nat ~> gcd_eq_0_iff
  gcd_zero_int ~> gcd_eq_0_iff
  gcd_dvd2_nat ~> gcd_dvd2
  gcd_dvd2_int ~> gcd_dvd2
  gcd_dvd1_nat ~> gcd_dvd1
  gcd_dvd1_int ~> gcd_dvd1
  int_numeral ~> of_nat_numeral
  lcm_ac_nat ~> ac_simps
  lcm_ac_int ~> ac_simps
  gcd_ac_nat ~> ac_simps
  gcd_ac_int ~> ac_simps
  abs_int_eq ~> abs_of_nat
  zless_int ~> of_nat_less_iff
  zdiff_int ~> of_nat_diff
  zadd_int ~> of_nat_add
  int_mult ~> of_nat_mult
  int_Suc ~> of_nat_Suc
  inj_int ~> inj_of_nat
  int_1 ~> of_nat_1
  int_0 ~> of_nat_0
  Lcm_empty_nat ~> Lcm_empty
  Lcm_empty_int ~> Lcm_empty
  Lcm_insert_nat ~> Lcm_insert
  Lcm_insert_int ~> Lcm_insert
  comp_fun_idem_gcd_nat ~> comp_fun_idem_gcd
  comp_fun_idem_gcd_int ~> comp_fun_idem_gcd
  comp_fun_idem_lcm_nat ~> comp_fun_idem_lcm
  comp_fun_idem_lcm_int ~> comp_fun_idem_lcm
  Lcm_eq_0 ~> Lcm_eq_0_I
  Lcm0_iff ~> Lcm_0_iff
  Lcm_dvd_int ~> Lcm_least
  divides_mult_nat ~> divides_mult
  divides_mult_int ~> divides_mult
  lcm_0_nat ~> lcm_0_right
  lcm_0_int ~> lcm_0_right
  lcm_0_left_nat ~> lcm_0_left
  lcm_0_left_int ~> lcm_0_left
  dvd_gcd_D1_nat ~> dvd_gcdD1
  dvd_gcd_D1_int ~> dvd_gcdD1
  dvd_gcd_D2_nat ~> dvd_gcdD2
  dvd_gcd_D2_int ~> dvd_gcdD2
  coprime_dvd_mult_iff_nat ~> coprime_dvd_mult_iff
  coprime_dvd_mult_iff_int ~> coprime_dvd_mult_iff
  realpow_minus_mult ~> power_minus_mult
  realpow_Suc_le_self ~> power_Suc_le_self
  dvd_Gcd, dvd_Gcd_nat, dvd_Gcd_int removed in favour of Gcd_greatest

INCOMPATIBILITY.

* Renamed HOL/Quotient_Examples/FSet.thy to
HOL/Quotient_Examples/Quotient_FSet.thy. INCOMPATIBILITY.

* Session HOL-Library: theory FinFun bundles "finfun_syntax" and
"no_finfun_syntax" allow to control optional syntax in local contexts;
this supersedes former theory FinFun_Syntax. INCOMPATIBILITY, e.g. use
"unbundle finfun_syntax" to imitate import of
"~~/src/HOL/Library/FinFun_Syntax".

* Session HOL-Library: theory Multiset_Permutations (executably)
defines the set of permutations of a given set or multiset, i.e. the
set of all lists that contain every element of the carrier (multi-)set
exactly once.
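For example, assuming the entry point is the constant
permutations_of_set from that theory:

  value "permutations_of_set {1::nat, 2, 3}"  (*the six permutations*)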
* Session HOL-Library: multiset membership is now expressed using
set_mset rather than count.

  - Expressions "count M a > 0" and similar simplify to membership by
    default.

  - Converting between "count M a = 0" and non-membership happens using
    equations count_eq_zero_iff and not_in_iff.

  - Rules count_inI and in_countE obtain facts of the form
    "count M a = n" from membership.

  - Rules count_in_diffI and in_diff_countE obtain facts of the form
    "count M a = n + count N a" from membership on difference sets.

INCOMPATIBILITY.

* Session HOL-Library: theory LaTeXsugar uses new-style "dummy_pats"
for displaying equations in functional programming style --- variables
present on the left-hand but not on the right-hand side are replaced by
underscores.

* Session HOL-Library: theory Combinator_PER provides combinator to
build partial equivalence relations from a predicate and an equivalence
relation.

* Session HOL-Library: theory Perm provides basic facts about almost
everywhere fix bijections.

* Session HOL-Library: theory Normalized_Fraction allows viewing an
element of a field of fractions as a normalized fraction (i.e. a pair
of numerator and denominator such that the two are coprime and the
denominator is normalized wrt. unit factors).

* Session HOL-NSA has been renamed to HOL-Nonstandard_Analysis.

* Session HOL-Multivariate_Analysis has been renamed to HOL-Analysis.

* Session HOL-Analysis: measure theory has been moved here from
HOL-Probability. When importing HOL-Analysis some theorems need
additional name spaces prefixes due to name clashes. INCOMPATIBILITY.

* Session HOL-Analysis: more complex analysis including Cauchy's
inequality, Liouville theorem, open mapping theorem, maximum modulus
principle, Residue theorem, Schwarz Lemma.

* Session HOL-Analysis: Theory of polyhedra: faces, extreme points,
polytopes, and the Krein–Milman Minkowski theorem.

* Session HOL-Analysis: Numerous results ported from the HOL Light
libraries: homeomorphisms, continuous function extensions, invariance
of domain.

* Session HOL-Probability: the type of emeasure and nn_integral was
changed from ereal to ennreal, INCOMPATIBILITY.

  emeasure :: 'a measure \<Rightarrow> 'a set \<Rightarrow> ennreal
  nn_integral :: 'a measure \<Rightarrow> ('a \<Rightarrow> ennreal) \<Rightarrow> ennreal

* Session HOL-Probability: Code generation and QuickCheck for
Probability Mass Functions.

* Session HOL-Probability: theory Random_Permutations contains some
theory about choosing a permutation of a set uniformly at random and
folding over a list in random order.

* Session HOL-Probability: theory SPMF formalises discrete
subprobability distributions.
* Session HOL-Library: the names of multiset theorems have been
normalised to distinguish which ordering the theorems are about:

  mset_less_eqI ~> mset_subset_eqI
  mset_less_insertD ~> mset_subset_insertD
  mset_less_eq_count ~> mset_subset_eq_count
  mset_less_diff_self ~> mset_subset_diff_self
  mset_le_exists_conv ~> mset_subset_eq_exists_conv
  mset_le_mono_add_right_cancel ~> mset_subset_eq_mono_add_right_cancel
  mset_le_mono_add_left_cancel ~> mset_subset_eq_mono_add_left_cancel
  mset_le_mono_add ~> mset_subset_eq_mono_add
  mset_le_add_left ~> mset_subset_eq_add_left
  mset_le_add_right ~> mset_subset_eq_add_right
  mset_le_single ~> mset_subset_eq_single
  mset_le_multiset_union_diff_commute ~> mset_subset_eq_multiset_union_diff_commute
  diff_le_self ~> diff_subset_eq_self
  mset_leD ~> mset_subset_eqD
  mset_lessD ~> mset_subsetD
  mset_le_insertD ~> mset_subset_eq_insertD
  mset_less_of_empty ~> mset_subset_of_empty
  mset_less_size ~> mset_subset_size
  wf_less_mset_rel ~> wf_subset_mset_rel
  count_le_replicate_mset_le ~> count_le_replicate_mset_subset_eq
  mset_remdups_le ~> mset_remdups_subset_eq
  ms_lesseq_impl ~> subset_eq_mset_impl

Some functions have been renamed:

  ms_lesseq_impl -> subset_eq_mset_impl

* HOL-Library: multisets are now ordered with the multiset ordering:

  #\<subseteq># ~> \<le>
  #\<subset># ~> <
  le_multiset ~> less_eq_multiset
  less_multiset ~> le_multiset

INCOMPATIBILITY.

* Session HOL-Library: the prefix multiset_order has been discontinued:
the theorems can be directly accessed. As a consequence, the lemmas
"order_multiset" and "linorder_multiset" have been discontinued, and
the interpretations "multiset_linorder" and "multiset_wellorder" have
been replaced by instantiations. INCOMPATIBILITY.

* Session HOL-Library: some theorems about the multiset ordering have
been renamed:

  le_multiset_def ~> less_eq_multiset_def
  less_multiset_def ~> le_multiset_def
  less_eq_imp_le_multiset ~> subset_eq_imp_le_multiset
  mult_less_not_refl ~> mset_le_not_refl
  mult_less_trans ~> mset_le_trans
  mult_less_not_sym ~> mset_le_not_sym
  mult_less_asym ~> mset_le_asym
  mult_less_irrefl ~> mset_le_irrefl
  union_less_mono2{,1,2} ~> union_le_mono2{,1,2}
  le_multiset\<^sub>H\<^sub>O ~> less_eq_multiset\<^sub>H\<^sub>O
  le_multiset_total ~> less_eq_multiset_total
  less_multiset_right_total ~> subset_eq_imp_le_multiset
  le_multiset_empty_left ~> less_eq_multiset_empty_left
  le_multiset_empty_right ~> less_eq_multiset_empty_right
  less_multiset_empty_right ~> le_multiset_empty_left
  less_multiset_empty_left ~> le_multiset_empty_right
  union_less_diff_plus ~> union_le_diff_plus
  ex_gt_count_imp_less_multiset ~> ex_gt_count_imp_le_multiset
  less_multiset_plus_left_nonempty ~> le_multiset_plus_left_nonempty
  le_multiset_plus_right_nonempty ~> le_multiset_plus_right_nonempty

INCOMPATIBILITY.

* Session HOL-Library: the lemma mset_map has now the attribute [simp].
INCOMPATIBILITY.

* Session HOL-Library: some theorems about multisets have been removed.
INCOMPATIBILITY, use the following replacements:

  le_multiset_plus_plus_left_iff ~> add_less_cancel_right
  less_multiset_plus_plus_left_iff ~> add_less_cancel_right
  le_multiset_plus_plus_right_iff ~> add_less_cancel_left
  less_multiset_plus_plus_right_iff ~> add_less_cancel_left
  add_eq_self_empty_iff ~> add_cancel_left_right
  mset_subset_add_bothsides ~> subset_mset.add_less_cancel_right
  mset_less_add_bothsides ~> subset_mset.add_less_cancel_right
  mset_le_add_bothsides ~> subset_mset.add_less_cancel_right
  empty_inter ~> subset_mset.inf_bot_left
  inter_empty ~> subset_mset.inf_bot_right
  empty_sup ~> subset_mset.sup_bot_left
  sup_empty ~> subset_mset.sup_bot_right
  bdd_below_multiset ~> subset_mset.bdd_above_bot
  subset_eq_empty ~> subset_mset.le_zero_eq
  le_empty ~> subset_mset.le_zero_eq
  mset_subset_empty_nonempty ~> subset_mset.zero_less_iff_neq_zero
  mset_less_empty_nonempty ~> subset_mset.zero_less_iff_neq_zero

* Session HOL-Library: some typeclass constraints about multisets have
been reduced from ordered or linordered to preorder. Multisets have the
additional typeclasses order_bot, no_top,
ordered_ab_semigroup_add_imp_le, ordered_cancel_comm_monoid_add,
linordered_cancel_ab_semigroup_add, and
ordered_ab_semigroup_monoid_add_imp_le. INCOMPATIBILITY.

* Session HOL-Library: there are some new simplification rules about
multisets, the multiset ordering, and the subset ordering on multisets.
INCOMPATIBILITY.

* Session HOL-Library: the subset ordering on multisets has now the
interpretations ordered_ab_semigroup_monoid_add_imp_le and
bounded_lattice_bot. INCOMPATIBILITY.

* Session HOL-Library, theory Multiset: single has been removed in
favor of add_mset that roughly corresponds to Set.insert. Some theorems
have been removed or changed:

  single_not_empty ~> add_mset_not_empty or empty_not_add_mset
  fold_mset_insert ~> fold_mset_add_mset
  image_mset_insert ~> image_mset_add_mset
  union_single_eq_diff
  multi_self_add_other_not_self
  diff_single_eq_union

INCOMPATIBILITY.

* Session HOL-Library, theory Multiset: some theorems have been changed
to use add_mset instead of single:

  mset_add multi_self_add_other_not_self diff_single_eq_union
  union_single_eq_diff union_single_eq_member add_eq_conv_diff
  insert_noteq_member add_eq_conv_ex multi_member_split
  multiset_add_sub_el_shuffle mset_subset_eq_insertD
  mset_subset_insertD insert_subset_eq_iff insert_union_subset_iff
  multi_psub_of_add_self inter_add_left1 inter_add_left2
  inter_add_right1 inter_add_right2 sup_union_left1 sup_union_left2
  sup_union_right1 sup_union_right2 size_eq_Suc_imp_eq_union
  multi_nonempty_split mset_insort mset_update mult1I less_add
  mset_zip_take_Cons_drop_twice rel_mset_Zero msed_map_invL
  msed_map_invR msed_rel_invL msed_rel_invR le_multiset_right_total
  multiset_induct multiset_induct2_size multiset_induct2

INCOMPATIBILITY.

* Session HOL-Library, theory Multiset: the definitions of some
constants have changed to use add_mset instead of adding a single
element:

  image_mset mset replicate_mset mult1 pred_mset rel_mset' mset_insort

INCOMPATIBILITY.
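For example, a small sketch of add_mset playing the role of Set.insert
for multisets:

  lemma "add_mset (1::nat) {#2, 3#} = {#1, 2, 3#}"
    by simp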
* Session HOL-Library, theory Multiset: due to the above changes, the
attributes of some multiset theorems have been changed:

    insert_DiffM [] ~> [simp]
    insert_DiffM2 [simp] ~> []
    diff_add_mset_swap [simp]
    fold_mset_add_mset [simp]
    diff_diff_add [simp] (for multisets only)
    diff_cancel [simp] ~> []
    count_single [simp] ~> []
    set_mset_single [simp] ~> []
    size_multiset_single [simp] ~> []
    size_single [simp] ~> []
    image_mset_single [simp] ~> []
    mset_subset_eq_mono_add_right_cancel [simp] ~> []
    mset_subset_eq_mono_add_left_cancel [simp] ~> []
    fold_mset_single [simp] ~> []
    subset_eq_empty [simp] ~> []
    empty_sup [simp] ~> []
    sup_empty [simp] ~> []
    inter_empty [simp] ~> []
    empty_inter [simp] ~> []

INCOMPATIBILITY.

* Session HOL-Library, theory Multiset: the order of the variables in
the second cases of multiset_induct, multiset_induct2_size,
multiset_induct2 has been changed (e.g. Add A a ~> Add a A).
INCOMPATIBILITY.

* Session HOL-Library, theory Multiset: there is now a simplification
procedure on multisets. It mimics the behavior of the procedure on
natural numbers. INCOMPATIBILITY.

* Session HOL-Library, theory Multiset: renamed sums and products of
multisets:

    msetsum ~> sum_mset
    msetprod ~> prod_mset

* Session HOL-Library, theory Multiset: the notation for intersection
and union of multisets has been changed:

    #∩ ~> ∩#
    #∪ ~> ∪#

INCOMPATIBILITY.

* Session HOL-Library, theory Multiset: the lemma
one_step_implies_mult_aux on multisets has been removed, use
one_step_implies_mult instead. INCOMPATIBILITY.

* Session HOL-Library: theory Complete_Partial_Order2 provides
reasoning support for monotonicity and continuity in chain-complete
partial orders and about admissibility conditions for fixpoint
inductions.

* Session HOL-Library: theory Library/Polynomial also contains
derivation of polynomials (formerly in Library/Poly_Deriv), but not
gcd/lcm on polynomials over fields. This has been moved to a separate
theory Library/Polynomial_GCD_euclidean.thy, to pave the way for a
possible future different type class instantiation for polynomials
over factorial rings. INCOMPATIBILITY.

* Session HOL-Library: theory Sublist provides function "prefixes",
with the following renaming:

    prefixeq -> prefix
    prefix -> strict_prefix
    suffixeq -> suffix
    suffix -> strict_suffix

Added theory of longest common prefixes.

* Session HOL-Number_Theory: algebraic foundation for primes:
generalisation of predicate "prime" and introduction of predicates
"prime_elem" and "irreducible", a "prime_factorization" function, and
the "factorial_ring" typeclass with instance proofs for nat, int,
poly. Some theorems now have different names, most notably "prime_def"
is now "prime_nat_iff". INCOMPATIBILITY.

* Session Old_Number_Theory has been removed, after porting remaining
theories.

* Session HOL-Types_To_Sets provides an experimental extension of
Higher-Order Logic to allow translation of types to sets.


*** ML ***

* Integer.gcd and Integer.lcm use efficient operations from the
Poly/ML library (notably for big integers). Subtle change of
semantics: Integer.gcd and Integer.lcm both normalize the sign,
results are never negative. This coincides with the definitions in
HOL/GCD.thy. INCOMPATIBILITY.

* Structure Rat for rational numbers is now an integral part of
Isabelle/ML, with special notation @int/nat or @int for numerals (an
abbreviation for antiquotation @{Pure.rat argument}) and ML pretty
printing. Standard operations on type Rat.rat are provided via ad-hoc
overloading of + - * / < <= > >= ~ abs. INCOMPATIBILITY, need to use +
instead of +/ etc.
Moreover, exception Rat.DIVZERO has been superseded by General.Div.

* ML antiquotation @{path} is superseded by @{file}, which ensures
that the argument is a plain file. Minor INCOMPATIBILITY.

* Antiquotation @{make_string} is available during Pure bootstrap --
with approximative output quality.

* Low-level ML system structures (like PolyML and RunCall) are no
longer exposed to Isabelle/ML user-space. Potential INCOMPATIBILITY.

* The ML function "ML" provides easy access to run-time compilation.
This is particularly useful for conditional compilation, without
requiring separate files.

* Option ML_exception_debugger controls detailed exception trace via
the Poly/ML debugger. Relevant ML modules need to be compiled
beforehand with ML_file_debug, or with ML_file and option ML_debugger
enabled. Note that debugger information requires considerable time and
space: main Isabelle/HOL with full debugger support may need
ML_system_64.

* Local_Theory.restore has been renamed to Local_Theory.reset to
emphasize its disruptive impact on the cumulative context, notably the
scope of 'private' or 'qualified' names. Note that Local_Theory.reset
is only appropriate when targets are managed, e.g. starting from a
global theory and returning to it. Regular definitional packages
should use balanced blocks of Local_Theory.open_target versus
Local_Theory.close_target instead. Rare INCOMPATIBILITY.

* Structure TimeLimit (originally from the SML/NJ library) has been
replaced by structure Timeout, with slightly different signature.
INCOMPATIBILITY.

* Discontinued cd and pwd operations, which are not well-defined in a
multi-threaded environment. Note that files are usually located
relative to the master directory of a theory (see also
File.full_path). Potential INCOMPATIBILITY.

* Binding.empty_atts supersedes Thm.empty_binding and
Attrib.empty_binding. Minor INCOMPATIBILITY.


*** System ***

* SML/NJ and old versions of Poly/ML are no longer supported.

* Poly/ML heaps now follow the hierarchy of sessions, and thus require
much less disk space.

* The Isabelle ML process is now managed directly by Isabelle/Scala,
and shell scripts merely provide optional command-line access. In
particular:

  . Scala module ML_Process to connect to the raw ML process, with
    interaction via stdin/stdout/stderr or in batch mode;
  . command-line tool "isabelle console" as interactive wrapper;
  . command-line tool "isabelle process" as batch mode wrapper.

* The executable "isabelle_process" has been discontinued. Tools and
prover front-ends should use ML_Process or Isabelle_Process in
Isabelle/Scala. INCOMPATIBILITY.

* New command-line tool "isabelle process" supports ML evaluation of
literal expressions (option -e) or files (option -f) in the context of
a given heap image. Errors lead to premature exit of the ML process
with return code 1.

* The command-line tool "isabelle build" supports option -N for cyclic
shuffling of NUMA CPU nodes. This may help performance tuning on Linux
servers with separate CPU/memory modules.

* System option "threads" (for the size of the Isabelle/ML thread
farm) is also passed to the underlying ML runtime system as
--gcthreads, unless there is already a default provided via ML_OPTIONS
settings.

* System option "checkpoint" helps to fine-tune the global heap space
management of isabelle build. This is relevant for big sessions that
may exhaust the small 32-bit address space of the ML process (which is
used by default).

* System option "profiling" specifies the mode for global ML profiling
in "isabelle build".
Possible values are "time" and "allocations". The command-line tool
"isabelle profiling_report" helps to digest the resulting log files.

* System option "ML_process_policy" specifies an optional command
prefix for the underlying ML process, e.g. to control CPU affinity on
multiprocessor systems. The "isabelle jedit" tool allows to override
the implicit default via option -p.

* Command-line tool "isabelle console" provides option -r to help
bootstrapping Isabelle/Pure interactively.

* Command-line tool "isabelle yxml" has been discontinued.
INCOMPATIBILITY, use operations from the modules "XML" and "YXML" in
Isabelle/ML or Isabelle/Scala.

* Many Isabelle tools that require a Java runtime system refer to the
settings ISABELLE_TOOL_JAVA_OPTIONS32 / ISABELLE_TOOL_JAVA_OPTIONS64,
depending on the underlying platform. The settings for "isabelle
build" ISABELLE_BUILD_JAVA_OPTIONS32 / ISABELLE_BUILD_JAVA_OPTIONS64
have been discontinued. Potential INCOMPATIBILITY.

* The Isabelle system environment always ensures that the main
executables are found within the shell search path $PATH: "isabelle"
and "isabelle_scala_script".

* Isabelle tools may consist of .scala files: the Scala compiler is
invoked on the spot. The source needs to define some object that
extends Isabelle_Tool.Body.

* File.bash_string, File.bash_path etc. represent Isabelle/ML and
Isabelle/Scala strings authentically within GNU bash. This is useful
to produce robust shell scripts under program control, without
worrying about spaces or special characters. Note that user output
works via Path.print (ML) or Path.toString (Scala). INCOMPATIBILITY,
the old (and less versatile) operations File.shell_quote,
File.shell_path etc. have been discontinued.

* The isabelle_java executable allows to run a Java process within the
name space of Java and Scala components that are bundled with
Isabelle, but without the Isabelle settings environment.

* Isabelle/Scala: the SSH module supports ssh and sftp connections,
for remote command-execution and file-system access. This resembles
operations from module File and Isabelle_System to some extent. Note
that Path specifications need to be resolved remotely via
ssh.remote_path instead of File.standard_path: the implicit process
environment is different, Isabelle settings are not available
remotely.

* Isabelle/Scala: the Mercurial module supports repositories via the
regular hg command-line interface. The repository clone and working
directory may reside on a local or remote file-system (via ssh
connection).


New in Isabelle2016 (February 2016)
-----------------------------------

*** General ***

* Eisbach is now based on Pure instead of HOL. Object-logics may
import either the theory ~~/src/HOL/Eisbach/Eisbach (for HOL etc.) or
~~/src/HOL/Eisbach/Eisbach_Old_Appl_Syntax (for FOL, ZF etc.). Note
that the HOL-Eisbach session located in ~~/src/HOL/Eisbach/ contains
further examples that do require HOL.

* Better resource usage on all platforms (Linux, Windows, Mac OS X)
for both Isabelle/ML and Isabelle/Scala. Slightly reduced heap space
usage.

* Former "xsymbols" syntax with Isabelle symbols is used by default,
without any special print mode. Important ASCII replacement syntax
remains available under print mode "ASCII", but less important syntax
has been removed (see below).

* Support for more arrow symbols, with rendering in LaTeX and Isabelle
fonts: \ \ \ \ \ \.

* Special notation \ for the first implicit 'structure' in the context
has been discontinued.
Rare INCOMPATIBILITY, use explicit structure name instead, notably in
indexed notation with block-subscript (e.g. \\<^bsub>A\<^esub>).

* The glyph for \ in the IsabelleText font now corresponds better to
its counterpart \ as quantifier-like symbol. A small diamond is
available as \; the old symbol \ loses this rendering and any special
meaning.

* Syntax for formal comments "-- text" now also supports the symbolic
form "\ text". Command-line tool "isabelle update_cartouches -c" helps
to update old sources.

* Toplevel theorem statements have been simplified as follows:

    theorems             ~> lemmas
    schematic_lemma      ~> schematic_goal
    schematic_theorem    ~> schematic_goal
    schematic_corollary  ~> schematic_goal

Command-line tool "isabelle update_theorems" updates theory sources
accordingly.

* Toplevel theorem statement 'proposition' is another alias for
'theorem'.

* The old 'defs' command has been removed (legacy since Isabelle2014).
INCOMPATIBILITY, use regular 'definition' instead. Overloaded and/or
deferred definitions require a surrounding 'overloading' block.


*** Prover IDE -- Isabelle/Scala/jEdit ***

* IDE support for the source-level debugger of Poly/ML, to work with
Isabelle/ML and official Standard ML. Option "ML_debugger" and
commands 'ML_file_debug', 'ML_file_no_debug', 'SML_file_debug',
'SML_file_no_debug' control compilation of sources with or without
debugging information. The Debugger panel allows to set breakpoints
(via context menu), step through stopped threads, evaluate local ML
expressions etc. At least one Debugger view needs to be active to have
any effect on the running ML program.

* The State panel manages explicit proof state output, with dynamic
auto-update according to cursor movement. Alternatively, the jEdit
action "isabelle.update-state" (shortcut S+ENTER) triggers manual
update.

* The Output panel no longer shows proof state output by default, to
avoid GUI overcrowding. INCOMPATIBILITY, use the State panel instead
or enable option "editor_output_state".

* The text overview column (status of errors, warnings etc.) is
updated asynchronously, leading to much better editor reactivity.
Moreover, the full document node content is taken into account. The
width of the column is scaled according to the main text area font,
for improved visibility.

* The main text area no longer changes its color hue in outdated
situations. The text overview column takes over the role to indicate
unfinished edits in the PIDE pipeline. This avoids flashing text
display due to ad-hoc updates by auxiliary GUI components, such as the
State panel.

* Slightly improved scheduling for urgent print tasks (e.g. command
state output, interactive queries) wrt. long-running background tasks.

* Completion of symbols via prefix of \ or \<^name> or \name is always
possible, independently of the language context. It is never implicit:
a popup will show up unconditionally.

* Additional abbreviations for syntactic completion may be specified
in $ISABELLE_HOME/etc/abbrevs and $ISABELLE_HOME_USER/etc/abbrevs,
with support for simple templates using ASCII 007 (bell) as
placeholder.

* Symbols \, \, \, \, \, \, \, \ no longer provide abbreviations for
completion like "+o", "*o", ".o" etc. -- due to conflicts with other
ASCII syntax. INCOMPATIBILITY, use plain backslash-completion or
define suitable abbreviations in $ISABELLE_HOME_USER/etc/abbrevs.

* Action "isabelle-emph" (with keyboard shortcut C+e LEFT) controls
emphasized text style; the effect is visible in document output, not
in the editor.
* Action "isabelle-reset" now uses keyboard shortcut C+e BACK_SPACE, instead of former C+e LEFT. * The command-line tool "isabelle jedit" and the isabelle.Main application wrapper treat the default $USER_HOME/Scratch.thy more uniformly, and allow the dummy file argument ":" to open an empty buffer instead. * New command-line tool "isabelle jedit_client" allows to connect to an already running Isabelle/jEdit process. This achieves the effect of single-instance applications seen on common GUI desktops. * The default look-and-feel for Linux is the traditional "Metal", which works better with GUI scaling for very high-resolution displays (e.g. 4K). Moreover, it is generally more robust than "Nimbus". * Update to jedit-5.3.0, with improved GUI scaling and support of high-resolution displays (e.g. 4K). * The main Isabelle executable is managed as single-instance Desktop application uniformly on all platforms: Linux, Windows, Mac OS X. *** Document preparation *** * Commands 'paragraph' and 'subparagraph' provide additional section headings. Thus there are 6 levels of standard headings, as in HTML. * Command 'text_raw' has been clarified: input text is processed as in 'text' (with antiquotations and control symbols). The key difference is the lack of the surrounding isabelle markup environment in output. * Text is structured in paragraphs and nested lists, using notation that is similar to Markdown. The control symbols for list items are as follows: \<^item> itemize \<^enum> enumerate \<^descr> description * There is a new short form for antiquotations with a single argument that is a cartouche: \<^name>\...\ is equivalent to @{name \...\} and \...\ without control symbol is equivalent to @{cartouche \...\}. \<^name> without following cartouche is equivalent to @{name}. The standard Isabelle fonts provide glyphs to render important control symbols, e.g. "\<^verbatim>", "\<^emph>", "\<^bold>". * Antiquotations @{noindent}, @{smallskip}, @{medskip}, @{bigskip} with corresponding control symbols \<^noindent>, \<^smallskip>, \<^medskip>, \<^bigskip> specify spacing formally, using standard LaTeX macros of the same names. * Antiquotation @{cartouche} in Isabelle/Pure is the same as @{text}. Consequently, \...\ without any decoration prints literal quasi-formal text. Command-line tool "isabelle update_cartouches -t" helps to update old sources, by approximative patching of the content of string and cartouche tokens seen in theory sources. * The @{text} antiquotation now ignores the antiquotation option "source". The given text content is output unconditionally, without any surrounding quotes etc. Subtle INCOMPATIBILITY, put quotes into the argument where they are really intended, e.g. @{text \"foo"\}. Initial or terminal spaces are ignored. * Antiquotations @{emph} and @{bold} output LaTeX source recursively, adding appropriate text style markup. These may be used in the short form \<^emph>\...\ and \<^bold>\...\. * Document antiquotation @{footnote} outputs LaTeX source recursively, marked as \footnote{}. This may be used in the short form \<^footnote>\...\. * Antiquotation @{verbatim [display]} supports option "indent". * Antiquotation @{theory_text} prints uninterpreted theory source text (Isar outer syntax with command keywords etc.). This may be used in the short form \<^theory_text>\...\. @{theory_text [display]} supports option "indent". * Antiquotation @{doc ENTRY} provides a reference to the given documentation, with a hyperlink in the Prover IDE. 
* Antiquotations @{command}, @{method}, @{attribute} print checked
entities of the Isar language.

* HTML presentation uses the standard IsabelleText font and Unicode
rendering of Isabelle symbols like Isabelle/Scala/jEdit. The former
print mode "HTML" loses its special meaning.


*** Isar ***

* Local goals ('have', 'show', 'hence', 'thus') allow structured rule
statements like fixes/assumes/shows in theorem specifications, but the
notation is postfix with keywords 'if' (or 'when') and 'for'. For
example:

  have result: "C x y"
    if "A x" and "B y"
    for x :: 'a and y :: 'a

The local assumptions are bound to the name "that". The result is
exported from the context of the statement as usual. The above roughly
corresponds to a raw proof block like this:

  {
    fix x :: 'a and y :: 'a
    assume that: "A x" "B y"
    have "C x y"
  }
  note result = this

The keyword 'when' may be used instead of 'if', to indicate 'presume'
instead of 'assume' above.

* Assumptions ('assume', 'presume') allow structured rule statements
using 'if' and 'for', similar to 'have' etc. above. For example:

  assume result: "C x y"
    if "A x" and "B y"
    for x :: 'a and y :: 'a

This assumes "\x y::'a. A x \ B y \ C x y" and produces a general
result as usual: "A ?x \ B ?y \ C ?x ?y".

Vacuous quantification in assumptions is omitted, i.e. a for-context
only affects propositions according to actual use of variables. For
example:

  assume "A x" and "B y" for x and y

is equivalent to:

  assume "\x. A x" and "\y. B y"

* The meaning of 'show' with Pure rule statements has changed:
premises are treated in the sense of 'assume', instead of 'presume'.
This means, a goal like "\x. A x \ B x \ C x" can be solved completely
as follows:

  show "\x. A x \ B x \ C x"

or:

  show "C x" if "A x" "B x" for x

Rare INCOMPATIBILITY, the old behaviour may be recovered as follows:

  show "C x" when "A x" "B x" for x

* New command 'consider' states rules for generalized elimination and
case splitting. This is like a toplevel statement "theorem obtains"
used within a proof body; or like a multi-branch 'obtain' without
activation of the local context elements yet.

* Proof method "cases" allows to specify the rule as first entry of
chained facts. This is particularly useful with 'consider':

  consider (a) A | (b) B | (c) C
  then have something
  proof cases
    case a
    then show ?thesis
  next
    case b
    then show ?thesis
  next
    case c
    then show ?thesis
  qed

* Command 'case' allows fact name and attribute specification like
this:

  case a: (c xs)
  case a [attributes]: (c xs)

Facts that are introduced by invoking the case context are uniformly
qualified by "a"; the same name is used for the cumulative fact. The
old form "case (c xs) [attributes]" is no longer supported. Rare
INCOMPATIBILITY, need to adapt uses of case facts in exotic
situations, and always put attributes in front.

* The standard proof method of commands 'proof' and '..' is now called
"standard" to make semantically clear what it is; the old name
"default" is still available as legacy for some time. Documentation
now explains '..' more accurately as "by standard" instead of "by
rule".

* Nesting of Isar goal structure has been clarified: the context after
the initial backwards refinement is retained for the whole proof,
within all its context sections (as indicated via 'next'). This is
e.g. relevant for 'using', 'including', 'supply':

  have "A \ A" if a: A for A
    supply [simp] = a
  proof
    show A by simp
  next
    show A by simp
  qed

* Command 'obtain' binds term abbreviations (via 'is' patterns) in the
proof body as well, abstracted over relevant parameters.
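For illustration, a minimal sketch (the statement is arbitrary): the
'is' pattern introduces the term abbreviation ?th, which remains
usable in the proof body after the 'obtain':

  notepad
  begin
    obtain b :: bool where "b \<or> \<not> b" (is ?th)
      by blast
    have ?th by fact
  end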
* Improved type-inference for theorem statement 'obtains': separate
parameter scope for each clause.

* Term abbreviations via 'is' patterns also work for schematic
statements: result is abstracted over unknowns.

* Command 'subgoal' allows to impose some structure on backward
refinements, to avoid proof scripts degenerating into long 'apply'
sequences. Further explanations and examples are given in the isar-ref
manual.

* Command 'supply' supports fact definitions during goal refinement
('apply' scripts).

* Proof method "goal_cases" turns the current subgoals into cases
within the context; the conclusion is bound to variable ?case in each
case. For example:

  lemma "\x. A x \ B x \ C x"
    and "\y z. U y \ V z \ W y z"
  proof goal_cases
    case (1 x)
    then show ?case using \A x\ \B x\ sorry
  next
    case (2 y z)
    then show ?case using \U y\ \V z\ sorry
  qed

  lemma "\x. A x \ B x \ C x"
    and "\y z. U y \ V z \ W y z"
  proof goal_cases
    case prems: 1
    then show ?case using prems sorry
  next
    case prems: 2
    then show ?case using prems sorry
  qed

* The undocumented feature of implicit cases goal1, goal2, goal3, etc.
is marked as legacy, and will be removed eventually. The proof method
"goals" achieves a similar effect within regular Isar; often it can be
done more adequately by other means (e.g. 'consider').

* The vacuous fact "TERM x" may be established "by fact" or as `TERM
x` as well, not just "by this" or "." as before.

* Method "sleep" succeeds after a real-time delay (in seconds). This
is occasionally useful for demonstration and testing purposes.


*** Pure ***

* Qualifiers in locale expressions default to mandatory ('!')
regardless of the command. Previously, for 'locale' and 'sublocale'
the default was optional ('?'). The old syntax '!' has been
discontinued. INCOMPATIBILITY, remove '!' and add '?' as required.

* Keyword 'rewrites' identifies rewrite morphisms in interpretation
commands. Previously, the keyword was 'where'. INCOMPATIBILITY.

* More gentle suppression of syntax along locale morphisms while
printing terms. Previously 'abbreviation' and 'notation' declarations
would be suppressed for morphisms except term identity. Now
'abbreviation' is also kept for morphisms that only change the
involved parameters, and only 'notation' is suppressed. This can be of
great help when working with complex locale hierarchies, because proof
states are displayed much more succinctly. It also means that only
notation needs to be redeclared if desired, as illustrated by this
example:

  locale struct = fixes composition :: "'a => 'a => 'a" (infixl "\" 65)
  begin
    definition derived (infixl "\" 65) where ...
  end

  locale morphism =
    left: struct composition + right: struct composition'
    for composition (infix "\" 65)
    and composition' (infix "\''" 65)
  begin
    notation right.derived ("\''")
  end

* Command 'global_interpretation' issues interpretations into global
theories, with optional rewrite definitions following keyword
'defines'.

* Command 'sublocale' accepts optional rewrite definitions after
keyword 'defines'.

* Command 'permanent_interpretation' has been discontinued. Use
'global_interpretation' or 'sublocale' instead. INCOMPATIBILITY.

* Command 'print_definitions' prints dependencies of definitional
specifications. This functionality used to be part of 'print_theory'.

* Configuration option rule_insts_schematic has been discontinued
(intermediate legacy feature in Isabelle2015). INCOMPATIBILITY.

* Abbreviations in type classes now carry a proper sort constraint.
Rare INCOMPATIBILITY in situations where the previous misbehaviour has
been exploited.

* Refinement of user-space type system in type classes: pseudo-local
operations behave more similar to abbreviations. Potential
INCOMPATIBILITY in exotic situations.


*** HOL ***

* The 'typedef' command has been upgraded from a partially checked
"axiomatization", to a full definitional specification that takes the
global collection of overloaded constant / type definitions into
account. Type definitions with open dependencies on overloaded
definitions need to be specified as "typedef (overloaded)". This
provides extra robustness in theory construction. Rare
INCOMPATIBILITY.

* Qualification of various formal entities in the libraries is done
more uniformly via "context begin qualified definition ... end"
instead of old-style "hide_const (open) ...". Consequently, both the
defined constant and its defining fact become qualified, e.g.
Option.is_none and Option.is_none_def. Occasional INCOMPATIBILITY in
applications.

* Some old and rarely used ASCII replacement syntax has been removed.
INCOMPATIBILITY, standard syntax with symbols should be used instead.
The subsequent commands help to reproduce the old forms, e.g. to
simplify porting old theories:

  notation iff (infixr "<->" 25)

  notation Times (infixr "<*>" 80)

  type_notation Map.map (infixr "~=>" 0)
  notation Map.map_comp (infixl "o'_m" 55)

  type_notation FinFun.finfun ("(_ =>f /_)" [22, 21] 21)

  notation FuncSet.funcset (infixr "->" 60)
  notation FuncSet.extensional_funcset (infixr "->\<^sub>E" 60)

  notation Omega_Words_Fun.conc (infixr "conc" 65)

  notation Preorder.equiv ("op ~~")
    and Preorder.equiv ("(_/ ~~ _)" [51, 51] 50)

  notation (in topological_space) tendsto (infixr "--->" 55)
  notation (in topological_space) LIMSEQ ("((_)/ ----> (_))" [60, 60] 60)
  notation LIM ("((_)/ -- (_)/ --> (_))" [60, 0, 60] 60)

  notation NSA.approx (infixl "@=" 50)
  notation NSLIMSEQ ("((_)/ ----NS> (_))" [60, 60] 60)
  notation NSLIM ("((_)/ -- (_)/ --NS> (_))" [60, 0, 60] 60)

* The alternative notation "\" for type and sort constraints has been
removed: in LaTeX document output it looks the same as "::".
INCOMPATIBILITY, use plain "::" instead.

* Commands 'inductive' and 'inductive_set' work better when names for
intro rules are omitted: the "cases" and "induct" rules no longer
declare empty case_names, but no case_names at all. This allows to use
numbered cases in proofs, without requiring method "goal_cases".

* Inductive definitions ('inductive', 'coinductive', etc.) expose
low-level facts of the internal construction only if the option
"inductive_internals" is enabled. This refers to the internal
predicate definition and its monotonicity result. Rare
INCOMPATIBILITY.

* Recursive function definitions ('fun', 'function',
'partial_function') expose low-level facts of the internal
construction only if the option "function_internals" is enabled. Its
internal inductive definition is also subject to
"inductive_internals". Rare INCOMPATIBILITY.

* BNF datatypes ('datatype', 'codatatype', etc.) expose low-level
facts of the internal construction only if the option "bnf_internals"
is enabled. This supersedes the former option "bnf_note_all". Rare
INCOMPATIBILITY.

* Combinator to represent case distinction on products is named
"case_prod", uniformly, discontinuing any input aliases. Very popular
theorem aliases have been retained.
Consolidated facts:

    PairE ~> prod.exhaust
    Pair_eq ~> prod.inject
    pair_collapse ~> prod.collapse
    Pair_fst_snd_eq ~> prod_eq_iff
    split_twice ~> prod.case_distrib
    split_weak_cong ~> prod.case_cong_weak
    split_split ~> prod.split
    split_split_asm ~> prod.split_asm
    splitI ~> case_prodI
    splitD ~> case_prodD
    splitI2 ~> case_prodI2
    splitI2' ~> case_prodI2'
    splitE ~> case_prodE
    splitE' ~> case_prodE'
    split_pair ~> case_prod_Pair
    split_eta ~> case_prod_eta
    split_comp ~> case_prod_comp
    mem_splitI ~> mem_case_prodI
    mem_splitI2 ~> mem_case_prodI2
    mem_splitE ~> mem_case_prodE
    The_split ~> The_case_prod
    cond_split_eta ~> cond_case_prod_eta
    Collect_split_in_rel_leE ~> Collect_case_prod_in_rel_leE
    Collect_split_in_rel_leI ~> Collect_case_prod_in_rel_leI
    in_rel_Collect_split_eq ~> in_rel_Collect_case_prod_eq
    Collect_split_Grp_eqD ~> Collect_case_prod_Grp_eqD
    Collect_split_Grp_inD ~> Collect_case_prod_Grp_in
    Domain_Collect_split ~> Domain_Collect_case_prod
    Image_Collect_split ~> Image_Collect_case_prod
    Range_Collect_split ~> Range_Collect_case_prod
    Eps_split ~> Eps_case_prod
    Eps_split_eq ~> Eps_case_prod_eq
    split_rsp ~> case_prod_rsp
    curry_split ~> curry_case_prod
    split_curry ~> case_prod_curry

Changes in structure HOLogic:

    split_const ~> case_prod_const
    mk_split ~> mk_case_prod
    mk_psplits ~> mk_ptupleabs
    strip_psplits ~> strip_ptupleabs

INCOMPATIBILITY.

* The coercions to type 'real' have been reorganised. The function
'real' is no longer overloaded, but has type 'nat => real' and
abbreviates of_nat for that type. Also 'real_of_int :: int => real'
abbreviates of_int for that type. Other overloaded instances of 'real'
have been replaced by 'real_of_ereal' and 'real_of_float'.

Consolidated facts (among others):

    real_of_nat_le_iff -> of_nat_le_iff
    real_of_nat_numeral of_nat_numeral
    real_of_int_zero of_int_0
    real_of_nat_zero of_nat_0
    real_of_one of_int_1
    real_of_int_add of_int_add
    real_of_nat_add of_nat_add
    real_of_int_diff of_int_diff
    real_of_nat_diff of_nat_diff
    floor_subtract floor_diff_of_int
    real_of_int_inject of_int_eq_iff
    real_of_int_gt_zero_cancel_iff of_int_0_less_iff
    real_of_int_ge_zero_cancel_iff of_int_0_le_iff
    real_of_nat_ge_zero of_nat_0_le_iff
    real_of_int_ceiling_ge le_of_int_ceiling
    ceiling_less_eq ceiling_less_iff
    ceiling_le_eq ceiling_le_iff
    less_floor_eq less_floor_iff
    floor_less_eq floor_less_iff
    floor_divide_eq_div floor_divide_of_int_eq
    real_of_int_zero_cancel of_nat_eq_0_iff
    ceiling_real_of_int ceiling_of_int

INCOMPATIBILITY.

* Theory Map: lemma map_of_is_SomeD was a clone of map_of_SomeD and
has been removed. INCOMPATIBILITY.

* Quickcheck setup for finite sets.

* Discontinued simp_legacy_precond. Potential INCOMPATIBILITY.

* Sledgehammer:
  - The MaSh relevance filter has been sped up.
  - Proof reconstruction has been improved, to minimize the incidence
    of cases where Sledgehammer gives a proof that does not work.
  - Auto Sledgehammer now minimizes and preplays the results.
  - Handle Vampire 4.0 proof output without raising exception.
  - Eliminated "MASH" environment variable. Use the "MaSh" option in
    Isabelle/jEdit instead. INCOMPATIBILITY.
  - Eliminated obsolete "blocking" option and related subcommands.

* Nitpick:
  - Fixed soundness bug in translation of "finite" predicate.
  - Fixed soundness bug in "destroy_constrs" optimization.
  - Fixed soundness bug in translation of "rat" type.
  - Removed "check_potential" and "check_genuine" options.
  - Eliminated obsolete "blocking" option.
* (Co)datatype package:
  - New commands "lift_bnf" and "copy_bnf" for lifting (copying) a
    BNF structure on the raw type to an abstract type defined using
    typedef.
  - Always generate "case_transfer" theorem.
  - For mutual types, generate slightly stronger "rel_induct",
    "rel_coinduct", and "coinduct" theorems. INCOMPATIBILITY.
  - Allow discriminators and selectors with the same name as the type
    being defined.
  - Avoid various internal name clashes (e.g., 'datatype f = f').

* Transfer: new methods for interactive debugging of 'transfer' and
'transfer_prover': 'transfer_start', 'transfer_step', 'transfer_end',
'transfer_prover_start' and 'transfer_prover_end'.

* New diagnostic command print_record for displaying record
definitions.

* Division on integers is bootstrapped directly from division on
naturals and uses generic numeral algorithm for computations. Slight
INCOMPATIBILITY, simproc numeral_divmod replaces and generalizes
former simprocs binary_int_div and binary_int_mod.

* Tightened specification of class semiring_no_zero_divisors. Minor
INCOMPATIBILITY.

* Class algebraic_semidom introduces common algebraic notions of
integral (semi)domains, particularly units. Although logically
subsumed by fields, it is not a super class of these in order not to
burden fields with notions that are trivial there.

* Class normalization_semidom specifies canonical representatives for
equivalence classes of associated elements in an integral
(semi)domain. This formalizes associated elements as well.

* Abstract specification of gcd/lcm operations in classes
semiring_gcd, semiring_Gcd, semiring_Lcd. Minor INCOMPATIBILITY: facts
gcd_nat.commute and gcd_int.commute are subsumed by gcd.commute, as
well as gcd_nat.assoc and gcd_int.assoc by gcd.assoc.

* Former constants Fields.divide (_ / _) and Divides.div (_ div _) are
logically unified to Rings.divide in syntactic type class
Rings.divide, with infix syntax (_ div _). Infix syntax (_ / _) for
field division is added later as abbreviation in class Fields.inverse.
INCOMPATIBILITY, instantiations must refer to Rings.divide rather than
the former separate constants, hence infix syntax (_ / _) is usually
not available during instantiation.

* New cancellation simprocs for boolean algebras to cancel
complementary terms for sup and inf. For example, "sup x (sup y (-
x))" simplifies to "top". INCOMPATIBILITY.

* Class uniform_space introduces uniform spaces between topological
spaces and metric spaces. Minor INCOMPATIBILITY: open__def needs to be
introduced in the form of a uniformity. Some constants are more
general now, it may be necessary to add type class constraints.

    open_real_def \ open_dist
    open_complex_def \ open_dist

* Library/Monad_Syntax: notation uses symbols \ and \.
INCOMPATIBILITY.

* Library/Multiset:
  - Renamed multiset inclusion operators:
      <  ~> <#
      >  ~> >#
      <= ~> <=#
      >= ~> >=#
      \ ~> \#
      \ ~> \#
    INCOMPATIBILITY.
  - Added multiset inclusion operator syntax: \# \# \# \#
  - "'a multiset" is no longer an instance of the "order",
    "ordered_ab_semigroup_add_imp_le",
    "ordered_cancel_comm_monoid_diff", "semilattice_inf", and
    "semilattice_sup" type classes. The theorems previously provided
    by these type classes (directly or indirectly) are now available
    through the "subset_mset" interpretation
    (e.g. add_mono ~> subset_mset.add_mono). INCOMPATIBILITY.
  - Renamed conversions:
      multiset_of ~> mset
      multiset_of_set ~> mset_set
      set_of ~> set_mset
    INCOMPATIBILITY
  - Renamed lemmas:
      mset_le_def ~> subseteq_mset_def
      mset_less_def ~> subset_mset_def
      less_eq_multiset.rep_eq ~> subseteq_mset_def
    INCOMPATIBILITY
  - Removed lemmas generated by lift_definition:
    less_eq_multiset.abs_eq, less_eq_multiset.rsp,
    less_eq_multiset.transfer, less_eq_multiset_def.
    INCOMPATIBILITY

* Library/Omega_Words_Fun: Infinite words modeled as functions
nat \ 'a.

* Library/Bourbaki_Witt_Fixpoint: Added formalisation of the
Bourbaki-Witt fixpoint theorem for increasing functions in
chain-complete partial orders.

* Library/Old_Recdef: discontinued obsolete 'defer_recdef' command.
Minor INCOMPATIBILITY, use 'function' instead.

* Library/Periodic_Fun: a locale that provides convenient lemmas for
periodic functions.

* Library/Formal_Power_Series: proper definition of division (with
remainder) for formal power series; instances for Euclidean Ring and
GCD.

* HOL-Imperative_HOL: obsolete theory Legacy_Mrec has been removed.

* HOL-Statespace: command 'statespace' uses mandatory qualifier for
import of parent, as for general 'locale' expressions.
INCOMPATIBILITY, remove '!' and add '?' as required.

* HOL-Decision_Procs: The "approximation" method works with "powr"
(exponentiation on real numbers) again.

* HOL-Multivariate_Analysis: theory Cauchy_Integral_Thm with contour
integrals (= complex path integrals), Cauchy's integral theorem,
winding numbers and Cauchy's integral formula, Liouville theorem,
Fundamental Theorem of Algebra. Ported from HOL Light.

* HOL-Multivariate_Analysis: topological concepts such as connected
components, homotopic paths and the inside or outside of a set.

* HOL-Multivariate_Analysis: radius of convergence of power series and
various summability tests; Harmonic numbers and the Euler–Mascheroni
constant; the Generalised Binomial Theorem; the complex and real
Gamma/log-Gamma/Digamma/Polygamma functions and their most important
properties.

* HOL-Probability: The central limit theorem based on Levy's
uniqueness and continuity theorems, weak convergence, and
characteristic functions.

* HOL-Data_Structures: new and growing session of standard data
structures.


*** ML ***

* The following combinators for low-level profiling of the ML runtime
system are available:

  profile_time          (*CPU time*)
  profile_time_thread   (*CPU time on this thread*)
  profile_allocations   (*overall heap allocations*)

* Antiquotation @{undefined} or \<^undefined> inlines (raise Match).

* Antiquotation @{method NAME} inlines the (checked) name of the given
Isar proof method.

* Pretty printing of Poly/ML compiler output in Isabelle has been
improved: proper treatment of break offsets and blocks with consistent
breaks.

* The auxiliary module Pure/display.ML has been eliminated. Its
elementary thm print operations are now in Pure/more_thm.ML and thus
called Thm.pretty_thm, Thm.string_of_thm etc. INCOMPATIBILITY.

* Simproc programming interfaces have been simplified:
Simplifier.make_simproc and Simplifier.define_simproc supersede
various forms of Simplifier.mk_simproc, Simplifier.simproc_global etc.
Note that term patterns for the left-hand sides are specified with
implicitly fixed variables, like top-level theorem statements.
INCOMPATIBILITY.
* Instantiation rules have been re-organized as follows:

  Thm.instantiate  (*low-level instantiation with named arguments*)
  Thm.instantiate'  (*version with positional arguments*)
  Drule.infer_instantiate  (*instantiation with type inference*)
  Drule.infer_instantiate'  (*version with positional arguments*)

The LHS only requires variable specifications, instead of full terms.
Old cterm_instantiate is superseded by infer_instantiate.
INCOMPATIBILITY, need to re-adjust some ML names and types
accordingly.

* Old tactic shorthands atac, rtac, etac, dtac, ftac have been
discontinued. INCOMPATIBILITY, use regular assume_tac, resolve_tac
etc. instead (with proper context).

* Thm.instantiate (and derivatives) no longer require the LHS of the
instantiation to be certified: plain variables are given directly.

* Subgoal.SUBPROOF and Subgoal.FOCUS combinators use anonymous
quasi-bound variables (like the Simplifier), instead of accidentally
named local fixes. This has the potential to improve stability of
proof tools, but can also cause INCOMPATIBILITY for tools that don't
observe the proof context discipline.

* Isar proof methods are based on a slightly more general type
context_tactic, which allows to change the proof context dynamically
(e.g. to update cases) and indicate explicit Seq.Error results. Former
METHOD_CASES is superseded by CONTEXT_METHOD; further combinators are
provided in src/Pure/Isar/method.ML for convenience. INCOMPATIBILITY.


*** System ***

* Command-line tool "isabelle console" enables print mode "ASCII".

* Command-line tool "isabelle update_then" expands old Isar command
conflations:

    hence  ~>  then have
    thus   ~>  then show

This syntax is more orthogonal and improves readability and
maintainability of proofs.

* Global session timeout is multiplied by timeout_scale factor. This
allows to adjust large-scale tests (e.g. AFP) to overall hardware
performance.

* Property values in etc/symbols may contain spaces, if written with
the replacement character "␣" (Unicode point 0x2423). For example:

    \ code: 0x0022c6 group: operator font: Deja␣Vu␣Sans␣Mono

* Java runtime environment for x86_64-windows allows to use larger
heap space.

* Java runtime options are determined separately for 32bit vs. 64bit
platforms as follows.

  - Isabelle desktop application: platform-specific files that are
    associated with the main app bundle
  - isabelle jedit: settings JEDIT_JAVA_SYSTEM_OPTIONS,
    JEDIT_JAVA_OPTIONS32 vs. JEDIT_JAVA_OPTIONS64
  - isabelle build: settings ISABELLE_BUILD_JAVA_OPTIONS32 vs.
    ISABELLE_BUILD_JAVA_OPTIONS64

* Bash shell function "jvmpath" has been renamed to "platform_path":
it is relevant both for Poly/ML and JVM processes.

* Poly/ML default platform architecture may be changed from 32bit to
64bit via system option ML_system_64. A system restart (and rebuild)
is required after change.

* Poly/ML 5.6 runs natively on x86-windows and x86_64-windows, which
both allow larger heap space than former x86-cygwin.

* Heap images are 10-15% smaller due to less wasteful persistent
theory content (using ML type theory_id instead of theory).


New in Isabelle2015 (May 2015)
------------------------------

*** General ***

* Local theory specification commands may have a 'private' or
'qualified' modifier to restrict name space accesses to the local
scope, as provided by some "context begin ... end" block. For example:

  context
  begin

  private definition ...
  private lemma ...

  qualified definition ...
  qualified lemma ...

  lemma ...
  theorem ...
  end

* Command 'experiment' opens an anonymous locale context with private
naming policy.

* Command 'notepad' requires proper nesting of begin/end and its proof
structure in the body: 'oops' is no longer supported here. Minor
INCOMPATIBILITY, use 'sorry' instead.

* Command 'named_theorems' declares a dynamic fact within the context,
together with an attribute to maintain the content incrementally. This
supersedes functor Named_Thms in Isabelle/ML, but with a subtle change
of semantics due to external visual order vs. internal reverse order.

* 'find_theorems': search patterns which are abstractions are
schematically expanded before search. Search results match the naive
expectation more closely, particularly wrt. abbreviations.
INCOMPATIBILITY.

* Commands 'method_setup' and 'attribute_setup' now work within a
local theory context.

* Outer syntax commands are managed authentically within the theory
context, without implicit global state. Potential for accidental
INCOMPATIBILITY, make sure that required theories are really imported.

* Historical command-line terminator ";" is no longer accepted (and
already used differently in Isar). Minor INCOMPATIBILITY, use
"isabelle update_semicolons" to remove obsolete semicolons from old
theory sources.

* Structural composition of proof methods (meth1; meth2) in Isar
corresponds to (tac1 THEN_ALL_NEW tac2) in ML.

* The Eisbach proof method language allows to define new proof methods
by combining existing ones with their usual syntax. The "match" proof
method provides basic fact/term matching in addition to
premise/conclusion matching through Subgoal.focus, and binds fact
names from matches as well as term patterns within matches. The
Isabelle documentation provides an entry "eisbach" for the Eisbach
User Manual. Sources and various examples are in ~~/src/HOL/Eisbach/.


*** Prover IDE -- Isabelle/Scala/jEdit ***

* Improved folding mode "isabelle" based on Isar syntax.
Alternatively, the "sidekick" mode may be used for document structure.

* Extended bracket matching based on Isar language structure. System
option jedit_structure_limit determines maximum number of lines to
scan in the buffer.

* Support for BibTeX files: context menu, context-sensitive token
marker, SideKick parser.

* Document antiquotation @{cite} provides formal markup, which is
interpreted semi-formally based on .bib files that happen to be open
in the editor (hyperlinks, completion etc.).

* Less waste of vertical space via negative line spacing (see Global
Options / Text Area).

* Improved graphview panel with optional output of PNG or PDF, for
display of 'thy_deps', 'class_deps' etc.

* The commands 'thy_deps' and 'class_deps' allow optional bounds to
restrict the visualized hierarchy.

* Improved scheduling for asynchronous print commands (e.g. provers
managed by the Sledgehammer panel) wrt. ongoing document processing.


*** Document preparation ***

* Document markup commands 'chapter', 'section', 'subsection',
'subsubsection', 'text', 'txt', 'text_raw' work uniformly in any
context, even before the initial 'theory' command. Obsolete proof
commands 'sect', 'subsect', 'subsubsect', 'txt_raw' have been
discontinued, use 'section', 'subsection', 'subsubsection', 'text_raw'
instead. The old 'header' command is still retained for some time, but
should be replaced by 'chapter', 'section' etc. (using "isabelle
update_header"). Minor INCOMPATIBILITY.

* Official support for "tt" style variants, via \isatt{...} or
\begin{isabellett}...\end{isabellett}.
The somewhat fragile \verb or verbatim environment of LaTeX is no
longer used. This allows @{ML} etc. as argument to other macros (such
as footnotes).

* Document antiquotation @{verbatim} prints ASCII text literally in
"tt" style.

* Discontinued obsolete option "document_graph": session_graph.pdf is
produced unconditionally for HTML browser_info and PDF-LaTeX document.

* Diagnostic commands and document markup commands within a proof do
not affect the command tag for output. Thus commands like 'thm' are
subject to proof document structure, and no longer "stick out"
accidentally. Commands 'text' and 'txt' merely differ in the LaTeX
style, not their tags. Potential INCOMPATIBILITY in exotic situations.

* System option "pretty_margin" is superseded by "thy_output_margin",
which is also accessible via document antiquotation option "margin".
Only the margin for document output may be changed, but not the global
pretty printing: that is 76 for plain console output, and adapted
dynamically in GUI front-ends. Implementations of document
antiquotations need to observe the margin explicitly according to
Thy_Output.string_of_margin. Minor INCOMPATIBILITY.

* Specification of 'document_files' in the session ROOT file is
mandatory for document preparation. The legacy mode with implicit
copying of the document/ directory is no longer supported. Minor
INCOMPATIBILITY.


*** Pure ***

* Proof methods with explicit instantiation ("rule_tac", "subgoal_tac"
etc.) allow an optional context of local variables ('for'
declaration): these variables become schematic in the instantiated
theorem; this behaviour is analogous to 'for' in attributes "where"
and "of". Configuration option rule_insts_schematic (default false)
controls use of schematic variables outside the context. Minor
INCOMPATIBILITY, declare rule_insts_schematic = true temporarily and
update to use local variable declarations or dummy patterns instead.

* Explicit instantiation via attributes "where", "of", and proof
methods "rule_tac" with derivatives like "subgoal_tac" etc. admit
dummy patterns ("_") that stand for anonymous local variables.

* Generated schematic variables in standard format of exported facts
are incremented to avoid material in the proof context. Rare
INCOMPATIBILITY, explicit instantiation sometimes needs to refer to
different index.

* Lexical separation of signed and unsigned numerals: categories "num"
and "float" are unsigned. INCOMPATIBILITY: subtle change in precedence
of numeral signs, particularly in expressions involving infix syntax
like "(- 1) ^ n".

* Old inner token category "xnum" has been discontinued. Potential
INCOMPATIBILITY for exotic syntax: may use mixfix grammar with "num"
token category instead.


*** HOL ***

* New (co)datatype package:
  - The 'datatype_new' command has been renamed 'datatype'. The old
    command of that name is now called 'old_datatype' and is provided
    by "~~/src/HOL/Library/Old_Datatype.thy". See 'isabelle doc
    datatypes' for information on porting. INCOMPATIBILITY.
  - Renamed theorems:
      disc_corec ~> corec_disc
      disc_corec_iff ~> corec_disc_iff
      disc_exclude ~> distinct_disc
      disc_exhaust ~> exhaust_disc
      disc_map_iff ~> map_disc_iff
      sel_corec ~> corec_sel
      sel_exhaust ~> exhaust_sel
      sel_map ~> map_sel
      sel_set ~> set_sel
      sel_split ~> split_sel
      sel_split_asm ~> split_sel_asm
      strong_coinduct ~> coinduct_strong
      weak_case_cong ~> case_cong_weak
    INCOMPATIBILITY.
  - The "no_code" option to "free_constructors", "datatype_new", and
    "codatatype" has been renamed "plugins del: code".
    INCOMPATIBILITY.
- The rules "set_empty" have been removed. They are easy consequences of other set rules "by auto". INCOMPATIBILITY. - The rule "set_cases" is now registered with the "[cases set]" attribute. This can influence the behavior of the "cases" proof method when more than one case rule is applicable (e.g., an assumption is of the form "w : set ws" and the method "cases w" is invoked). The solution is to specify the case rule explicitly (e.g. "cases w rule: widget.exhaust"). INCOMPATIBILITY. - Renamed theories: BNF_Comp ~> BNF_Composition BNF_FP_Base ~> BNF_Fixpoint_Base BNF_GFP ~> BNF_Greatest_Fixpoint BNF_LFP ~> BNF_Least_Fixpoint BNF_Constructions_on_Wellorders ~> BNF_Wellorder_Constructions Cardinals/Constructions_on_Wellorders ~> Cardinals/Wellorder_Constructions INCOMPATIBILITY. - Lifting and Transfer setup for basic HOL types sum and prod (also option) is now performed by the BNF package. Theories Lifting_Sum, Lifting_Product and Lifting_Option from Main became obsolete and were removed. Changed definitions of the relators rel_prod and rel_sum (using inductive). INCOMPATIBILITY: use rel_prod.simps and rel_sum.simps instead of rel_prod_def and rel_sum_def. Minor INCOMPATIBILITY: (rarely used by name) transfer theorem names changed (e.g. map_prod_transfer ~> prod.map_transfer). - Parametricity theorems for map functions, relators, set functions, constructors, case combinators, discriminators, selectors and (co)recursors are automatically proved and registered as transfer rules. * Old datatype package: - The old 'datatype' command has been renamed 'old_datatype', and 'rep_datatype' has been renamed 'old_rep_datatype'. They are provided by "~~/src/HOL/Library/Old_Datatype.thy". See 'isabelle doc datatypes' for information on porting. INCOMPATIBILITY. - Renamed theorems: weak_case_cong ~> case_cong_weak INCOMPATIBILITY. - Renamed theory: ~~/src/HOL/Datatype.thy ~> ~~/src/HOL/Library/Old_Datatype.thy INCOMPATIBILITY. * Nitpick: - Fixed soundness bug related to the strict and non-strict subset operations. * Sledgehammer: - CVC4 is now included with Isabelle instead of CVC3 and run by default. - Z3 is now always enabled by default, now that it is fully open source. The "z3_non_commercial" option is discontinued. - Minimization is now always enabled by default. Removed sub-command: min - Proof reconstruction, both one-liners and Isar, has been dramatically improved. - Improved support for CVC4 and veriT. * Old and new SMT modules: - The old 'smt' method has been renamed 'old_smt' and moved to 'src/HOL/Library/Old_SMT.thy'. It is provided for compatibility, until applications have been ported to use the new 'smt' method. For the method to work, an older version of Z3 (e.g. Z3 3.2 or 4.0) must be installed, and the environment variable "OLD_Z3_SOLVER" must point to it. INCOMPATIBILITY. - The 'smt2' method has been renamed 'smt'. INCOMPATIBILITY. - New option 'smt_reconstruction_step_timeout' to limit the reconstruction time of Z3 proof steps in the new 'smt' method. - New option 'smt_statistics' to display statistics of the new 'smt' method, especially runtime statistics of Z3 proof reconstruction. * Lifting: command 'lift_definition' allows to execute lifted constants that have as a return type a datatype containing a subtype. This overcomes long-time limitations in the area of code generation and lifting, and avoids tedious workarounds. * Command and antiquotation "value" provide different evaluation slots (again), where the previous strategy (NBE after ML) serves as default. Minor INCOMPATIBILITY. 
* Add NO_MATCH-simproc, allows to check for syntactic non-equality.

* field_simps: Use NO_MATCH-simproc for distribution rules, to avoid
non-termination in case of distributing a division. With this change
field_simps is in some cases slightly less powerful; if it fails, try
to add algebra_simps, or use divide_simps. Minor INCOMPATIBILITY.

* Separate class no_zero_divisors has been given up in favour of fully
algebraic semiring_no_zero_divisors. INCOMPATIBILITY.

* Class linordered_semidom really requires no zero divisors.
INCOMPATIBILITY.

* Classes division_ring, field and linordered_field always demand
"inverse 0 = 0". Given up separate classes division_ring_inverse_zero,
field_inverse_zero and linordered_field_inverse_zero. INCOMPATIBILITY.

* Classes cancel_ab_semigroup_add / cancel_monoid_add specify explicit
additive inverse operation. INCOMPATIBILITY.

* Complex powers and square roots. The functions "ln" and "powr" are
now overloaded for types real and complex, and 0 powr y = 0 by
definition. INCOMPATIBILITY: type constraints may be necessary.

* The functions "sin" and "cos" are now defined for any type of sort
"{real_normed_algebra_1,banach}", so in particular on "real" and
"complex" uniformly. Minor INCOMPATIBILITY: type constraints may be
needed.

* New library of properties of the complex transcendental functions
sin, cos, tan, exp, Ln, Arctan, Arcsin, Arccos. Ported from HOL Light.

* The factorial function, "fact", now has type "nat => 'a" (of a sort
that admits numeric types including nat, int, real and complex).
INCOMPATIBILITY: an expression such as "fact 3 = 6" may require a type
constraint, and the combination "real (fact k)" is likely to be
unsatisfactory. If a type conversion is still necessary, then use
"of_nat (fact k)" or "real_of_nat (fact k)".

* Removed functions "natfloor" and "natceiling", use "nat o floor" and
"nat o ceiling" instead. A few of the lemmas have been retained and
adapted: in their names "natfloor"/"natceiling" has been replaced by
"nat_floor"/"nat_ceiling".

* Qualified some duplicated fact names required for bootstrapping the
type class hierarchy:

    ab_add_uminus_conv_diff ~> diff_conv_add_uminus
    field_inverse_zero ~> inverse_zero
    field_divide_inverse ~> divide_inverse
    field_inverse ~> left_inverse

Minor INCOMPATIBILITY.

* Eliminated fact duplicates:

    mult_less_imp_less_right ~> mult_right_less_imp_less
    mult_less_imp_less_left ~> mult_left_less_imp_less

Minor INCOMPATIBILITY.

* Fact consolidation: even_less_0_iff is subsumed by
double_add_less_zero_iff_single_add_less_zero (simp by default
anyway).

* Generalized and consolidated some theorems concerning divisibility:

    dvd_reduce ~> dvd_add_triv_right_iff
    dvd_plus_eq_right ~> dvd_add_right_iff
    dvd_plus_eq_left ~> dvd_add_left_iff

Minor INCOMPATIBILITY.

* "even" and "odd" are mere abbreviations for "2 dvd _" and "~ 2 dvd
_" and part of theory Main.

    even_def ~> even_iff_mod_2_eq_zero

INCOMPATIBILITY.

* Lemma name consolidation: divide_Numeral1 ~> divide_numeral_1. Minor
INCOMPATIBILITY.

* Bootstrap of listsum as special case of abstract product over lists.
Fact rename:

    listsum_def ~> listsum.eq_foldr

INCOMPATIBILITY.

* Product over lists via constant "listprod".

* Theory List: renamed drop_Suc_conv_tl and nth_drop' to
Cons_nth_drop_Suc.

* New infrastructure for compiling, running, evaluating and testing
generated code in target languages in HOL/Library/Code_Test. See
HOL/Codegenerator_Test/Code_Test* for examples.

* Library/Multiset:
  - Introduced "replicate_mset" operation.
  - Introduced alternative characterizations of the multiset ordering
    in "Library/Multiset_Order".
  - Renamed multiset ordering:
      <#  ~> #<#
      <=# ~> #<=#
      \# ~> #\#
      \# ~> #\#
    INCOMPATIBILITY.
  - Introduced abbreviations for ill-named multiset operations:
      <#, \# abbreviate < (strict subset)
      <=#, \#, \# abbreviate <= (subset or equal)
    INCOMPATIBILITY.
  - Renamed:
      in_multiset_of ~> in_multiset_in_set
      Multiset.fold ~> fold_mset
      Multiset.filter ~> filter_mset
    INCOMPATIBILITY.
  - Removed mcard, which is equal to size.
  - Added attributes:
      image_mset.id [simp]
      image_mset_id [simp]
      elem_multiset_of_set [simp, intro]
      comp_fun_commute_plus_mset [simp]
      comp_fun_commute.fold_mset_insert [OF comp_fun_commute_plus_mset, simp]
      in_mset_fold_plus_iff [iff]
      set_of_Union_mset [simp]
      in_Union_mset_iff [iff]
    INCOMPATIBILITY.

* Library/Sum_of_Squares: simplified and improved "sos" method. It
always uses the local CSDP executable, which is much faster than the
NEOS server. The "sos_cert" functionality is invoked as "sos" with an
additional argument. Minor INCOMPATIBILITY.

* HOL-Decision_Procs: New counterexample generator quickcheck
[approximation] for inequalities of transcendental functions. Uses
hardware floating point arithmetic to randomly discover potential
counterexamples. Counterexamples are certified with the
"approximation" method. See
HOL/Decision_Procs/ex/Approximation_Quickcheck_Ex.thy for examples.

* HOL-Probability: Reworked measurability prover:
  - applies destructor rules repeatedly
  - removed application splitting (replaced by destructor rule)
  - added congruence rules to rewrite measure spaces under the sets
    projection

* New proof method "rewrite" (in theory ~~/src/HOL/Library/Rewrite)
for single-step rewriting with subterm selection based on patterns.


*** ML ***

* Subtle change of name space policy: undeclared entries are now
considered inaccessible, instead of accessible via the fully-qualified
internal name. This mainly affects Name_Space.intern (and
derivatives), which may produce an unexpected Long_Name.hidden prefix.
Note that contemporary applications use the strict Name_Space.check
(and derivatives) instead, which is not affected by the change.
Potential INCOMPATIBILITY in rare applications of Name_Space.intern.

* Subtle change of error semantics of Toplevel.proof_of: regular user
ERROR instead of internal Toplevel.UNDEF.

* Basic combinators map, fold, fold_map, split_list, apply are
available as parameterized antiquotations, e.g. @{map 4} for lists of
quadruples.

* Renamed "pairself" to "apply2", in accordance with @{apply 2}.
INCOMPATIBILITY.

* Former combinators NAMED_CRITICAL and CRITICAL for central critical
sections have been discontinued, in favour of the more elementary
Multithreading.synchronized and its high-level derivative
Synchronized.var (which is usually sufficient in applications). Subtle
INCOMPATIBILITY: synchronized access needs to be atomic and cannot be
nested.

* Synchronized.value (ML) is actually synchronized (as in Scala):
subtle change of semantics with minimal potential for INCOMPATIBILITY.

* The main operations to certify logical entities are Thm.ctyp_of and
Thm.cterm_of with a local context; old-style global theory variants
are available as Thm.global_ctyp_of and Thm.global_cterm_of.
INCOMPATIBILITY.

* Elementary operations in module Thm are no longer pervasive.
INCOMPATIBILITY, need to use qualified Thm.prop_of, Thm.cterm_of,
Thm.term_of etc.
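For illustration, a minimal sketch of the qualified operations (an
arbitrary ML snippet within some theory):

  ML \<open>
    val ct = Thm.cterm_of @{context} @{term "x + y :: nat"};
    val t = Thm.prop_of @{thm refl};
  \<close>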
* Proper context for various elementary tactics: assume_tac,
resolve_tac, eresolve_tac, dresolve_tac, forward_tac, match_tac,
compose_tac, Splitter.split_tac etc. INCOMPATIBILITY.

* Tactical PARALLEL_ALLGOALS is the most common way to refer to
PARALLEL_GOALS.

* Goal.prove_multi is superseded by the fully general
Goal.prove_common, which also allows to specify a fork priority.

* Antiquotation @{command_spec "COMMAND"} is superseded by
@{command_keyword COMMAND} (usually without quotes and with PIDE
markup). Minor INCOMPATIBILITY.

* Cartouches within ML sources are turned into values of type
Input.source (with formal position information).

*** System ***

* The Isabelle tool "update_cartouches" changes theory files to use
cartouches instead of old-style {* verbatim *} or `alt_string` tokens.

* The Isabelle tool "build" provides new options -X, -k, -x.

* Discontinued old-fashioned "codegen" tool. Code generation can always
be externally triggered using an appropriate ROOT file plus a
corresponding theory. Parametrization is possible using environment
variables, or ML snippets in the most extreme cases. Minor
INCOMPATIBILITY.

* JVM system property "isabelle.threads" determines size of Scala
thread pool, like Isabelle system option "threads" for ML.

* JVM system property "isabelle.laf" determines the default Swing
look-and-feel, via internal class name or symbolic name as in the jEdit
menu Global Options / Appearance.

* Support for Proof General and Isar TTY loop has been discontinued.
Minor INCOMPATIBILITY, use standard PIDE infrastructure instead.


New in Isabelle2014 (August 2014)
---------------------------------

*** General ***

* Support for official Standard ML within the Isabelle context. Command
'SML_file' reads and evaluates the given Standard ML file. Toplevel
bindings are stored within the theory context; the initial environment
is restricted to the Standard ML implementation of Poly/ML, without the
add-ons of Isabelle/ML. Commands 'SML_import' and 'SML_export' allow to
exchange toplevel bindings between the two separate environments. See
also ~~/src/Tools/SML/Examples.thy for some examples.

* Standard tactics and proof methods such as "clarsimp", "auto" and
"safe" now preserve equality hypotheses "x = expr" where x is a free
variable. Locale assumptions and chained facts containing "x" continue
to be useful. The new method "hypsubst_thin" and the configuration
option "hypsubst_thin" (within the attribute name space) restore the
previous behavior. INCOMPATIBILITY, especially where induction is done
after these methods or when the names of free and bound variables
clash. As first approximation, old proofs may be repaired by "using
[[hypsubst_thin = true]]" in the critical spot.

* More static checking of proof methods, which allows the system to
form a closure over the concrete syntax. Method arguments should be
processed in the original proof context as far as possible, before
operating on the goal state. In any case, the standard discipline for
subgoal-addressing needs to be observed: no subgoals or a subgoal
number that is out of range produces an empty result sequence, not an
exception. Potential INCOMPATIBILITY for non-conformant tactical proof
tools.

* Lexical syntax (inner and outer) supports text cartouches with
arbitrary nesting, and without escapes of quotes etc. The Prover IDE
supports input via ` (backquote).
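For example, cartouche delimiters nest directly, without any escaping
(a minimal sketch):

  text \<open>Cartouches \<open>can be nested\<close> without escapes of quotes.\<close>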
* The outer syntax categories "text" (for formal comments and document
markup commands) and "altstring" (for literal fact references) allow
cartouches as well, in addition to the traditional mix of quotations.

* Syntax of document antiquotation @{rail} now uses \<newline> instead
of "\\", to avoid the optical illusion of escaped backslash within
string token. General renovation of its syntax using text cartouches.
Minor INCOMPATIBILITY.

* Discontinued legacy_isub_isup, which was a temporary workaround for
Isabelle/ML in Isabelle2013-1. The prover process no longer accepts old
identifier syntax with \<^isub> or \<^isup>. Potential INCOMPATIBILITY.

* Document antiquotation @{url} produces markup for the given URL,
which results in an active hyperlink within the text.

* Document antiquotation @{file_unchecked} is like @{file}, but does
not check existence within the file-system.

* Updated and extended manuals: codegen, datatypes, implementation,
isar-ref, jedit, system.

*** Prover IDE -- Isabelle/Scala/jEdit ***

* Improved Document panel: simplified interaction where every single
mouse click (re)opens document via desktop environment or as jEdit
buffer.

* Support for Navigator plugin (with toolbar buttons), with connection
to PIDE hyperlinks.

* Auxiliary files ('ML_file' etc.) are managed by the Prover IDE. Open
text buffers take precedence over copies within the file-system.

* Improved support for Isabelle/ML, with jEdit mode "isabelle-ml" for
auxiliary ML files.

* Improved syntactic and semantic completion mechanism, with simple
templates, completion language context, name-space completion,
file-name completion, spell-checker completion.

* Refined GUI popup for completion: more robust key/mouse event
handling and propagation to enclosing text area -- avoid losing
keystrokes with slow / remote graphics displays.

* Completion popup supports both ENTER and TAB (default) to select an
item, depending on Isabelle options.

* Refined insertion of completion items wrt. jEdit text: multiple
selections, rectangular selections, rectangular selection as "tall
caret".

* Integrated spell-checker for document text, comments etc. with
completion popup and context-menu.

* More general "Query" panel supersedes "Find" panel, with GUI access
to commands 'find_theorems' and 'find_consts', as well as print
operations for the context. Minor incompatibility in keyboard shortcuts
etc.: replace action isabelle-find by isabelle-query.

* Search field for all output panels ("Output", "Query", "Info" etc.)
to highlight text via regular expression.

* Option "jedit_print_mode" (see also "Plugin Options / Isabelle /
General") allows to specify additional print modes for the prover
process, without requiring old-fashioned command-line invocation of
"isabelle jedit -m MODE".

* More support for remote files (e.g. http) using standard Java
networking operations instead of jEdit virtual file-systems.

* Empty editor buffers that are no longer required (e.g. via theory
imports) are automatically removed from the document model.

* Improved monitor panel.

* Improved Console/Scala plugin: more uniform scala.Console output,
more robust treatment of threads and interrupts.

* Improved management of dockable windows: clarified keyboard focus and
window placement wrt. main editor view; optional menu item to "Detach"
a copy where this makes sense.

* New Simplifier Trace panel provides an interactive view of the
simplification process, enabled by the "simp_trace_new" attribute
within the context.
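For example, the new trace panel can be activated for a single proof
like this (a minimal sketch; the goal is arbitrary):

  lemma "2 + 2 = (4::nat)"
    using [[simp_trace_new mode=full]]
    by simp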
*** Pure ***

* Low-level type-class commands 'classes', 'classrel', 'arities' have
been discontinued to avoid the danger of non-trivial axiomatization
that is not immediately visible. INCOMPATIBILITY, use regular
'instance' command with proof. The required OFCLASS(...) theorem might
be postulated via 'axiomatization' beforehand, or the proof finished
trivially if the underlying class definition is made vacuous (without
any assumptions). See also Isabelle/ML operations
Axclass.class_axiomatization, Axclass.classrel_axiomatization,
Axclass.arity_axiomatization.

* Basic constants of Pure use more conventional names and are always
qualified. Rare INCOMPATIBILITY, but with potentially serious
consequences, notably for tools in Isabelle/ML. The following renaming
needs to be applied:

  ==             ~>  Pure.eq
  ==>            ~>  Pure.imp
  all            ~>  Pure.all
  TYPE           ~>  Pure.type
  dummy_pattern  ~>  Pure.dummy_pattern

Systematic porting works by using the following theory setup on a
*previous* Isabelle version to introduce the new name accesses for the
old constants:

setup {*
  fn thy => thy
    |> Sign.root_path
    |> Sign.const_alias (Binding.qualify true "Pure" @{binding eq}) "=="
    |> Sign.const_alias (Binding.qualify true "Pure" @{binding imp}) "==>"
    |> Sign.const_alias (Binding.qualify true "Pure" @{binding all}) "all"
    |> Sign.restore_naming thy
*}

Thus ML antiquotations like @{const_name Pure.eq} may be used already.
Later the application is moved to the current Isabelle version, and the
auxiliary aliases are deleted.

* Attributes "where" and "of" allow an optional context of local
variables ('for' declaration): these variables become schematic in the
instantiated theorem.

* Obsolete attribute "standard" has been discontinued (legacy since
Isabelle2012). Potential INCOMPATIBILITY, use explicit 'for' context
where instantiations with schematic variables are intended (for
declaration commands like 'lemmas' or attributes like "of"). The
following temporary definition may help to port old applications:

  attribute_setup standard =
    "Scan.succeed (Thm.rule_attribute (K Drule.export_without_context))"

* More thorough check of proof context for goal statements and
attributed fact expressions (concerning background theory, declared
hyps). Potential INCOMPATIBILITY, tools need to observe standard
context discipline. See also Assumption.add_assumes and the more
primitive Thm.assume_hyps.

* Inner syntax token language allows regular quoted strings "..."
(only makes sense in practice if outer syntax is delimited differently,
e.g. via cartouches).

* Command 'print_term_bindings' supersedes 'print_binds' for clarity,
but the latter is retained for some time as Proof General legacy.

* Code generator preprocessor: explicit control of simp tracing on a
per-constant basis. See attribute "code_preproc".

*** HOL ***

* Code generator: enforce case of identifiers only for strict target
language requirements. INCOMPATIBILITY.

* Code generator: explicit proof contexts in many ML interfaces.
INCOMPATIBILITY.

* Code generator: minimize exported identifiers by default. Minor
INCOMPATIBILITY.

* Code generation for SML and OCaml: dropped arcane "no_signatures"
option. Minor INCOMPATIBILITY.

* "declare [[code abort: ...]]" replaces "code_abort ...".
INCOMPATIBILITY.

* "declare [[code drop: ...]]" drops all code equations associated with
the given constants.

* Code generation is provided for make, fields, extend and truncate
operations on records.

* Command and antiquotation "value" are now hardcoded against nbe and
ML. Minor INCOMPATIBILITY.
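For example, a minimal sketch of 'value' usage:

  value "rev [1::nat, 2, 3]"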
* Renamed command 'enriched_type' to 'functor'. INCOMPATIBILITY.

* The symbol "\<newline>" may be used within char or string literals to
represent (Char Nibble0 NibbleA), i.e. ASCII newline.

* Qualified String.implode and String.explode. INCOMPATIBILITY.

* Simplifier: Enhanced solver of preconditions of rewrite rules can now
deal with conjunctions. For help with converting proofs, the old
behaviour of the simplifier can be restored like this: declare/using
[[simp_legacy_precond]]. This configuration option will disappear again
in the future. INCOMPATIBILITY.

* Simproc "finite_Collect" is no longer enabled by default, due to
spurious crashes and other surprises. Potential INCOMPATIBILITY.

* Moved new (co)datatype package and its dependencies from session
"HOL-BNF" to "HOL". The commands 'bnf', 'wrap_free_constructors',
'datatype_new', 'codatatype', 'primcorec', 'primcorecursive' are now
part of theory "Main".

  Theory renamings:
    FunDef.thy ~> Fun_Def.thy (and Fun_Def_Base.thy)
    Library/Wfrec.thy ~> Wfrec.thy
    Library/Zorn.thy ~> Zorn.thy
    Cardinals/Order_Relation.thy ~> Order_Relation.thy
    Library/Order_Union.thy ~> Cardinals/Order_Union.thy
    Cardinals/Cardinal_Arithmetic_Base.thy ~> BNF_Cardinal_Arithmetic.thy
    Cardinals/Cardinal_Order_Relation_Base.thy ~> BNF_Cardinal_Order_Relation.thy
    Cardinals/Constructions_on_Wellorders_Base.thy ~> BNF_Constructions_on_Wellorders.thy
    Cardinals/Wellorder_Embedding_Base.thy ~> BNF_Wellorder_Embedding.thy
    Cardinals/Wellorder_Relation_Base.thy ~> BNF_Wellorder_Relation.thy
    BNF/Ctr_Sugar.thy ~> Ctr_Sugar.thy
    BNF/Basic_BNFs.thy ~> Basic_BNFs.thy
    BNF/BNF_Comp.thy ~> BNF_Comp.thy
    BNF/BNF_Def.thy ~> BNF_Def.thy
    BNF/BNF_FP_Base.thy ~> BNF_FP_Base.thy
    BNF/BNF_GFP.thy ~> BNF_GFP.thy
    BNF/BNF_LFP.thy ~> BNF_LFP.thy
    BNF/BNF_Util.thy ~> BNF_Util.thy
    BNF/Coinduction.thy ~> Coinduction.thy
    BNF/More_BNFs.thy ~> Library/More_BNFs.thy
    BNF/Countable_Type.thy ~> Library/Countable_Set_Type.thy
    BNF/Examples/* ~> BNF_Examples/*

  New theories:
    Wellorder_Extension.thy (split from Zorn.thy)
    Library/Cardinal_Notations.thy
    Library/BNF_Axiomatization.thy
    BNF_Examples/Misc_Primcorec.thy
    BNF_Examples/Stream_Processor.thy

  Discontinued theories:
    BNF/BNF.thy
    BNF/Equiv_Relations_More.thy

INCOMPATIBILITY.

* New (co)datatype package:
  - Command 'primcorec' is fully implemented.
  - Command 'datatype_new' generates size functions ("size_xxx" and
    "size") as required by 'fun'.
  - BNFs are integrated with the Lifting tool and new-style
    (co)datatypes with Transfer.
  - Renamed commands:
      datatype_new_compat ~> datatype_compat
      primrec_new ~> primrec
      wrap_free_constructors ~> free_constructors
    INCOMPATIBILITY.
  - The generated constants "xxx_case" and "xxx_rec" have been renamed
    "case_xxx" and "rec_xxx" (e.g., "prod_case" ~> "case_prod").
    INCOMPATIBILITY.
  - The constant "xxx_(un)fold" and related theorems are no longer
    generated. Use "xxx_(co)rec" or define "xxx_(un)fold" manually
    using "prim(co)rec". INCOMPATIBILITY.
  - No discriminators are generated for nullary constructors by
    default, eliminating the need for the odd "=:" syntax.
    INCOMPATIBILITY.
  - No discriminators or selectors are generated by default by
    "datatype_new", unless custom names are specified or the new
    "discs_sels" option is passed. INCOMPATIBILITY.

* Old datatype package:
  - The generated theorems "xxx.cases" and "xxx.recs" have been renamed
    "xxx.case" and "xxx.rec" (e.g., "sum.cases" -> "sum.case").
    INCOMPATIBILITY.
  - The generated constants "xxx_case", "xxx_rec", and "xxx_size" have
    been renamed "case_xxx", "rec_xxx", and "size_xxx" (e.g.,
    "prod_case" ~> "case_prod"). INCOMPATIBILITY.

* The types "'a list" and "'a option", their set and map functions,
their relators, and their selectors are now produced using the new
BNF-based datatype package.

  Renamed constants:
    Option.set ~> set_option
    Option.map ~> map_option
    option_rel ~> rel_option

  Renamed theorems:
    set_def ~> set_rec[abs_def]
    map_def ~> map_rec[abs_def]
    Option.map_def ~> map_option_case[abs_def] (with "case_option"
      instead of "rec_option")
    option.recs ~> option.rec
    list_all2_def ~> list_all2_iff
    set.simps ~> set_simps (or the slightly different "list.set")
    map.simps ~> list.map
    hd.simps ~> list.sel(1)
    tl.simps ~> list.sel(2-3)
    the.simps ~> option.sel

INCOMPATIBILITY.

* The following map functions and relators have been renamed:
    sum_map ~> map_sum
    map_pair ~> map_prod
    prod_rel ~> rel_prod
    sum_rel ~> rel_sum
    fun_rel ~> rel_fun
    set_rel ~> rel_set
    filter_rel ~> rel_filter
    fset_rel ~> rel_fset (in "src/HOL/Library/FSet.thy")
    cset_rel ~> rel_cset (in "src/HOL/Library/Countable_Set_Type.thy")
    vset ~> rel_vset (in "src/HOL/Library/Quotient_Set.thy")

INCOMPATIBILITY.

* Lifting and Transfer:
  - a type variable as a raw type is supported
  - stronger reflexivity prover
  - rep_eq is always generated by lift_definition
  - setup for Lifting/Transfer is now automated for BNFs
    + holds for BNFs that do not contain a dead variable
    + relator_eq, relator_mono, relator_distr, relator_domain,
      relator_eq_onp, quot_map, transfer rules for bi_unique, bi_total,
      right_unique, right_total, left_unique, left_total are proved
      automatically
    + definition of a predicator is generated automatically
    + simplification rules for a predicator definition are proved
      automatically for datatypes
  - consolidation of the setup of Lifting/Transfer
    + the property that a relator preserves reflexivity is not needed
      any more
      Minor INCOMPATIBILITY.
    + left_total and left_unique rules are now transfer rules
      (reflexivity_rule attribute not needed anymore)
      INCOMPATIBILITY.
    + Domainp does not have to be a separate assumption in
      relator_domain theorems (=> more natural statement)
      INCOMPATIBILITY.
  - registration of code equations is more robust
    Potential INCOMPATIBILITY.
  - respectfulness proof obligation is preprocessed to a more readable
    form
    Potential INCOMPATIBILITY.
  - eq_onp is always unfolded in respectfulness proof obligation
    Potential INCOMPATIBILITY.
  - unregister lifting setup for Code_Numeral.integer and
    Code_Numeral.natural
    Potential INCOMPATIBILITY.
  - Lifting.invariant ~> eq_onp
    INCOMPATIBILITY.

* New internal SAT solver "cdclite" that produces models and proof
traces. This solver replaces the internal SAT solvers "enumerate" and
"dpll". Applications that explicitly used one of these two SAT solvers
should use "cdclite" instead. In addition, "cdclite" is now the default
SAT solver for the "sat" and "satx" proof methods and corresponding
tactics; the old default can be restored using "declare [[sat_solver =
zchaff_with_proofs]]". Minor INCOMPATIBILITY.

* SMT module: A new version of the SMT module, temporarily called
"SMT2", uses SMT-LIB 2 and supports recent versions of Z3 (e.g., 4.3).
The new proof method is called "smt2". CVC3 and CVC4 are also supported
as oracles. Yices is no longer supported, because no version of the
solver can handle both SMT-LIB 2 and quantifiers.
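For example, a minimal sketch of the new proof method (assuming a
supported Z3 version is installed and activated):

  lemma "(x::int) + y = y + x"
    by smt2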
* Activation of Z3 now works via "z3_non_commercial" system option
(without requiring restart), instead of former settings variable
"Z3_NON_COMMERCIAL". The option can be edited in Isabelle/jEdit menu
Plugin Options / Isabelle / General.

* Sledgehammer:
  - Z3 can now produce Isar proofs.
  - MaSh overhaul:
    . New SML-based learning algorithms eliminate the dependency on
      Python and increase performance and reliability.
    . MaSh and MeSh are now used by default together with the
      traditional MePo (Meng-Paulson) relevance filter. To disable
      MaSh, set the "MaSh" system option in Isabelle/jEdit Plugin
      Options / Isabelle / General to "none".
  - New option: smt_proofs
  - Renamed options:
      isar_compress ~> compress
      isar_try0 ~> try0
    INCOMPATIBILITY.

* Removed solvers remote_cvc3 and remote_z3. Use cvc3 and z3 instead.

* Nitpick:
  - Fixed soundness bug whereby mutually recursive datatypes could
    take infinite values.
  - Fixed soundness bug with low-level number functions such as
    "Abs_Integ" and "Rep_Integ".
  - Removed "std" option.
  - Renamed "show_datatypes" to "show_types" and "hide_datatypes" to
    "hide_types".

* Metis: Removed legacy proof method 'metisFT'. Use 'metis
(full_types)' instead. INCOMPATIBILITY.

* Try0: Added 'algebra' and 'meson' to the set of proof methods.

* Adjustment of INF and SUP operations:
  - Elongated constants INFI and SUPR to INFIMUM and SUPREMUM.
  - Consolidated theorem names containing INFI and SUPR: have INF and
    SUP instead uniformly.
  - More aggressive normalization of expressions involving INF and Inf
    or SUP and Sup.
  - INF_image and SUP_image do not unfold composition.
  - Dropped facts INF_comp, SUP_comp.
  - Default congruence rules strong_INF_cong and strong_SUP_cong, with
    simplifier implication in premises. These generalize and replace
    the former INT_cong, SUP_cong.
INCOMPATIBILITY.

* SUP and INF generalized to conditionally_complete_lattice.

* Swapped orientation of facts image_comp and vimage_comp:
    image_compose ~> image_comp [symmetric]
    image_comp ~> image_comp [symmetric]
    vimage_compose ~> vimage_comp [symmetric]
    vimage_comp ~> vimage_comp [symmetric]
INCOMPATIBILITY.

* Theory reorganization: split of Big_Operators.thy into Groups_Big.thy
and Lattices_Big.thy.
* Consolidated some facts about big group operators:

    setsum_0' ~> setsum.neutral
    setsum_0 ~> setsum.neutral_const
    setsum_addf ~> setsum.distrib
    setsum_cartesian_product ~> setsum.cartesian_product
    setsum_cases ~> setsum.If_cases
    setsum_commute ~> setsum.commute
    setsum_cong ~> setsum.cong
    setsum_delta ~> setsum.delta
    setsum_delta' ~> setsum.delta'
    setsum_diff1' ~> setsum.remove
    setsum_empty ~> setsum.empty
    setsum_infinite ~> setsum.infinite
    setsum_insert ~> setsum.insert
    setsum_inter_restrict'' ~> setsum.inter_filter
    setsum_mono_zero_cong_left ~> setsum.mono_neutral_cong_left
    setsum_mono_zero_cong_right ~> setsum.mono_neutral_cong_right
    setsum_mono_zero_left ~> setsum.mono_neutral_left
    setsum_mono_zero_right ~> setsum.mono_neutral_right
    setsum_reindex ~> setsum.reindex
    setsum_reindex_cong ~> setsum.reindex_cong
    setsum_reindex_nonzero ~> setsum.reindex_nontrivial
    setsum_restrict_set ~> setsum.inter_restrict
    setsum_Plus ~> setsum.Plus
    setsum_setsum_restrict ~> setsum.commute_restrict
    setsum_Sigma ~> setsum.Sigma
    setsum_subset_diff ~> setsum.subset_diff
    setsum_Un_disjoint ~> setsum.union_disjoint
    setsum_UN_disjoint ~> setsum.UNION_disjoint
    setsum_Un_Int ~> setsum.union_inter
    setsum_Union_disjoint ~> setsum.Union_disjoint
    setsum_UNION_zero ~> setsum.Union_comp
    setsum_Un_zero ~> setsum.union_inter_neutral
    strong_setprod_cong ~> setprod.strong_cong
    strong_setsum_cong ~> setsum.strong_cong
    setprod_1' ~> setprod.neutral
    setprod_1 ~> setprod.neutral_const
    setprod_cartesian_product ~> setprod.cartesian_product
    setprod_cong ~> setprod.cong
    setprod_delta ~> setprod.delta
    setprod_delta' ~> setprod.delta'
    setprod_empty ~> setprod.empty
    setprod_infinite ~> setprod.infinite
    setprod_insert ~> setprod.insert
    setprod_mono_one_cong_left ~> setprod.mono_neutral_cong_left
    setprod_mono_one_cong_right ~> setprod.mono_neutral_cong_right
    setprod_mono_one_left ~> setprod.mono_neutral_left
    setprod_mono_one_right ~> setprod.mono_neutral_right
    setprod_reindex ~> setprod.reindex
    setprod_reindex_cong ~> setprod.reindex_cong
    setprod_reindex_nonzero ~> setprod.reindex_nontrivial
    setprod_Sigma ~> setprod.Sigma
    setprod_subset_diff ~> setprod.subset_diff
    setprod_timesf ~> setprod.distrib
    setprod_Un2 ~> setprod.union_diff2
    setprod_Un_disjoint ~> setprod.union_disjoint
    setprod_UN_disjoint ~> setprod.UNION_disjoint
    setprod_Un_Int ~> setprod.union_inter
    setprod_Union_disjoint ~> setprod.Union_disjoint
    setprod_Un_one ~> setprod.union_inter_neutral

  Dropped setsum_cong2 (simple variant of setsum.cong).
  Dropped setsum_inter_restrict' (simple variant of
  setsum.inter_restrict).
  Dropped setsum_reindex_id, setprod_reindex_id (simple variants of
  setsum.reindex [symmetric], setprod.reindex [symmetric]).

INCOMPATIBILITY.

* Abolished slightly odd global lattice interpretation for min/max.
  Fact consolidations:
    min_max.inf_assoc ~> min.assoc
    min_max.inf_commute ~> min.commute
    min_max.inf_left_commute ~> min.left_commute
    min_max.inf_idem ~> min.idem
    min_max.inf_left_idem ~> min.left_idem
    min_max.inf_right_idem ~> min.right_idem
    min_max.sup_assoc ~> max.assoc
    min_max.sup_commute ~> max.commute
    min_max.sup_left_commute ~> max.left_commute
    min_max.sup_idem ~> max.idem
    min_max.sup_left_idem ~> max.left_idem
    min_max.sup_inf_distrib1 ~> max_min_distrib2
    min_max.sup_inf_distrib2 ~> max_min_distrib1
    min_max.inf_sup_distrib1 ~> min_max_distrib2
    min_max.inf_sup_distrib2 ~> min_max_distrib1
    min_max.distrib ~> min_max_distribs
    min_max.inf_absorb1 ~> min.absorb1
    min_max.inf_absorb2 ~> min.absorb2
    min_max.sup_absorb1 ~> max.absorb1
    min_max.sup_absorb2 ~> max.absorb2
    min_max.le_iff_inf ~> min.absorb_iff1
    min_max.le_iff_sup ~> max.absorb_iff2
    min_max.inf_le1 ~> min.cobounded1
    min_max.inf_le2 ~> min.cobounded2
    le_maxI1, min_max.sup_ge1 ~> max.cobounded1
    le_maxI2, min_max.sup_ge2 ~> max.cobounded2
    min_max.le_infI1 ~> min.coboundedI1
    min_max.le_infI2 ~> min.coboundedI2
    min_max.le_supI1 ~> max.coboundedI1
    min_max.le_supI2 ~> max.coboundedI2
    min_max.less_infI1 ~> min.strict_coboundedI1
    min_max.less_infI2 ~> min.strict_coboundedI2
    min_max.less_supI1 ~> max.strict_coboundedI1
    min_max.less_supI2 ~> max.strict_coboundedI2
    min_max.inf_mono ~> min.mono
    min_max.sup_mono ~> max.mono
    min_max.le_infI, min_max.inf_greatest ~> min.boundedI
    min_max.le_supI, min_max.sup_least ~> max.boundedI
    min_max.le_inf_iff ~> min.bounded_iff
    min_max.le_sup_iff ~> max.bounded_iff

  For min_max.inf_sup_aci, prefer (one of) min.commute, min.assoc,
  min.left_commute, min.left_idem, max.commute, max.assoc,
  max.left_commute, max.left_idem directly.

  For min_max.inf_sup_ord, prefer (one of) min.cobounded1,
  min.cobounded2, max.cobounded1, max.cobounded2 directly.

  For min_ac or max_ac, prefer the more general collection ac_simps.

INCOMPATIBILITY.

* Theorem disambiguation Inf_le_Sup (on finite sets) ~>
Inf_fin_le_Sup_fin. INCOMPATIBILITY.

* Qualified constant names Wellfounded.acc, Wellfounded.accp.
INCOMPATIBILITY.

* Fact generalization and consolidation:
    neq_one_mod_two, mod_2_not_eq_zero_eq_one_int ~> not_mod_2_eq_0_eq_1
INCOMPATIBILITY.

* Purely algebraic definition of even. Fact generalization and
consolidation:
    nat_even_iff_2_dvd, int_even_iff_2_dvd ~> even_iff_2_dvd
    even_zero_(nat|int) ~> even_zero
INCOMPATIBILITY.

* Abolished neg_numeral.
  - Canonical representation for minus one is "- 1".
  - Canonical representation for other negative numbers is
    "- (numeral _)".
  - When devising rule sets for number calculation, consider the
    following canonical cases: 0, 1, numeral _, - 1, - numeral _.
  - HOLogic.dest_number also recognizes numerals in non-canonical forms
    like "numeral One", "- numeral One", "- 0" and even "- ... - _".
  - Syntax for negative numerals is mere input syntax.
INCOMPATIBILITY.

* Reduced name variants for rules on associativity and commutativity:
    add_assoc ~> add.assoc
    add_commute ~> add.commute
    add_left_commute ~> add.left_commute
    mult_assoc ~> mult.assoc
    mult_commute ~> mult.commute
    mult_left_commute ~> mult.left_commute
    nat_add_assoc ~> add.assoc
    nat_add_commute ~> add.commute
    nat_add_left_commute ~> add.left_commute
    nat_mult_assoc ~> mult.assoc
    nat_mult_commute ~> mult.commute
    eq_assoc ~> iff_assoc
    eq_left_commute ~> iff_left_commute
INCOMPATIBILITY.

* Fact collections add_ac and mult_ac are considered old-fashioned.
Prefer ac_simps instead, or specify rules
(add|mult).(assoc|commute|left_commute) individually.
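For example, a minimal sketch using the preferred collection:

  lemma "a + (b + c) = b + (a + (c::nat))"
    by (simp add: ac_simps)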
* Elimination of fact duplicates:
    equals_zero_I ~> minus_unique
    diff_eq_0_iff_eq ~> right_minus_eq
    nat_infinite ~> infinite_UNIV_nat
    int_infinite ~> infinite_UNIV_int
INCOMPATIBILITY.

* Fact name consolidation:
    diff_def, diff_minus, ab_diff_minus ~> diff_conv_add_uminus
    minus_le_self_iff ~> neg_less_eq_nonneg
    le_minus_self_iff ~> less_eq_neg_nonpos
    neg_less_nonneg ~> neg_less_pos
    less_minus_self_iff ~> less_neg_neg [simp]
INCOMPATIBILITY.

* More simplification rules on unary and binary minus:
add_diff_cancel, add_diff_cancel_left, add_le_same_cancel1,
add_le_same_cancel2, add_less_same_cancel1, add_less_same_cancel2,
add_minus_cancel, diff_add_cancel, le_add_same_cancel1,
le_add_same_cancel2, less_add_same_cancel1, less_add_same_cancel2,
minus_add_cancel, uminus_add_conv_diff. These have correspondingly been
taken away from fact collections algebra_simps and field_simps.
INCOMPATIBILITY.

To restore proofs, the following patterns are helpful:

a) Arbitrary failing proof not involving "diff_def": Consider
simplification with algebra_simps or field_simps.

b) Lifting rules from addition to subtraction: Try with
"using <rule> of [... "- _" ...]" by simp".

c) Simplification with "diff_def": just drop "diff_def". Consider
simplification with algebra_simps or field_simps; or the brute way with
"simp add: diff_conv_add_uminus del: add_uminus_conv_diff".

* Introduce bdd_above and bdd_below in theory
Conditionally_Complete_Lattices, use them instead of explicitly stating
boundedness of sets.

* ccpo.admissible quantifies only over non-empty chains to allow more
syntax-directed proof rules; the case of the empty chain shows up as
additional case in fixpoint induction proofs. INCOMPATIBILITY.

* Removed and renamed theorems in Series:
    summable_le ~> suminf_le
    suminf_le ~> suminf_le_const
    series_pos_le ~> setsum_le_suminf
    series_pos_less ~> setsum_less_suminf
    suminf_ge_zero ~> suminf_nonneg
    suminf_gt_zero ~> suminf_pos
    suminf_gt_zero_iff ~> suminf_pos_iff
    summable_sumr_LIMSEQ_suminf ~> summable_LIMSEQ
    suminf_0_le ~> suminf_nonneg [rotate]
    pos_summable ~> summableI_nonneg_bounded
    ratio_test ~> summable_ratio_test

  removed series_zero, replaced by sums_finite

  removed auxiliary lemmas: sumr_offset, sumr_offset2, sumr_offset3,
  sumr_offset4, sumr_group, half, le_Suc_ex_iff,
  lemma_realpow_diff_sumr, real_setsum_nat_ivl_bounded, summable_le2,
  ratio_test_lemma2, sumr_minus_one_realpow_zerom,
  sumr_one_lb_realpow_zero, summable_convergent_sumr_iff,
  sumr_diff_mult_const
INCOMPATIBILITY.

* Replace (F)DERIV syntax by has_derivative:
  - "(f has_derivative f') (at x within s)" replaces
    "FDERIV f x : s : f'"
  - "(f has_field_derivative f') (at x within s)" replaces
    "DERIV f x : s : f'"
  - "f differentiable at x within s" replaces "_ differentiable _ in _"
    syntax
  - removed constant isDiff
  - "DERIV f x : f'" and "FDERIV f x : f'" syntax is only available as
    input syntax.
  - "DERIV f x : s : f'" and "FDERIV f x : s : f'" syntax removed.
  - Renamed FDERIV_... lemmas to has_derivative_...
  - renamed deriv (the syntax constant used for "DERIV _ _ :> _") to
    DERIV
  - removed DERIV_intros, has_derivative_eq_intros
  - introduced derivative_intros and derivative_eq_intros, which now
    include rules for DERIV, has_derivative and has_vector_derivative.
  - Other renamings:
      differentiable_def ~> real_differentiable_def
      differentiableE ~> real_differentiableE
      fderiv_def ~> has_derivative_at
      field_fderiv_def ~> field_has_derivative_at
      isDiff_der ~> differentiable_def
      deriv_fderiv ~> has_field_derivative_def
      deriv_def ~> DERIV_def
INCOMPATIBILITY.
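For example, a minimal sketch of the new syntax and rule collections
(the constant c is arbitrary):

  lemma "((\<lambda>x. c * x) has_field_derivative c) (at x)"
    by (auto intro!: derivative_eq_intros)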
* Include more theorems in continuous_intros. Remove the
continuous_on_intros, isCont_intros collections; these facts are now in
continuous_intros.

* Theorems about complex numbers are now stated only using Re and Im;
the Complex constructor is not used anymore. It is possible to use
primcorec to define the behaviour of a complex-valued function.

  Removed theorems about the Complex constructor from the simpset; they
  are available as the lemma collection legacy_Complex_simps. This
  especially removes
    i_complex_of_real: "ii * complex_of_real r = Complex 0 r".
  Instead the reverse direction is supported with
    Complex_eq: "Complex a b = a + \<i> * b"

  Moved csqrt from Fundamental_Theorem_Algebra to Complex.

  Renamings:
    Re/Im ~> complex.sel
    complex_Re/Im_zero ~> zero_complex.sel
    complex_Re/Im_add ~> plus_complex.sel
    complex_Re/Im_minus ~> uminus_complex.sel
    complex_Re/Im_diff ~> minus_complex.sel
    complex_Re/Im_one ~> one_complex.sel
    complex_Re/Im_mult ~> times_complex.sel
    complex_Re/Im_inverse ~> inverse_complex.sel
    complex_Re/Im_scaleR ~> scaleR_complex.sel
    complex_Re/Im_i ~> ii.sel
    complex_Re/Im_cnj ~> cnj.sel
    Re/Im_cis ~> cis.sel
    complex_divide_def ~> divide_complex_def
    complex_norm_def ~> norm_complex_def
    cmod_def ~> norm_complex_def

  Removed theorems:
    complex_zero_def
    complex_add_def
    complex_minus_def
    complex_diff_def
    complex_one_def
    complex_mult_def
    complex_inverse_def
    complex_scaleR_def

INCOMPATIBILITY.

* Theory Lubs has been moved out of the HOL image to HOL-Library; it is
replaced by Conditionally_Complete_Lattices. INCOMPATIBILITY.

* HOL-Library: new theory src/HOL/Library/Tree.thy.

* HOL-Library: removed theory src/HOL/Library/Kleene_Algebra.thy; it is
subsumed by session Kleene_Algebra in AFP.

* HOL-Library / theory RBT: various constants and facts are hidden;
lifting setup is unregistered. INCOMPATIBILITY.

* HOL-Cardinals: new theory src/HOL/Cardinals/Ordinal_Arithmetic.thy.

* HOL-Word: bit representations prefer type bool over type bit.
INCOMPATIBILITY.

* HOL-Word:
  - Abandoned fact collection "word_arith_alts", which is a duplicate
    of "word_arith_wis".
  - Dropped first (duplicated) element in fact collections
    "sint_word_ariths", "word_arith_alts", "uint_word_ariths",
    "uint_word_arith_bintrs".

* HOL-Number_Theory:
  - consolidated the proofs of the binomial theorem
  - the function fib is again of type nat => nat and not overloaded
  - no more references to Old_Number_Theory in the HOL libraries
    (except the AFP)
INCOMPATIBILITY.

* HOL-Multivariate_Analysis:
  - Type class ordered_real_vector for ordered vector spaces.
  - New theory Complex_Basic_Analysis defining complex derivatives,
    holomorphic functions, etc., ported from HOL Light's canal.ml.
  - Changed order of ordered_euclidean_space to be compatible with
    pointwise ordering on products. Therefore instance of
    conditionally_complete_lattice and ordered_real_vector.
    INCOMPATIBILITY: use box instead of greaterThanLessThan or explicit
    set-comprehensions with eucl_less for other (half-)open intervals.
  - removed dependencies on type class ordered_euclidean_space with
    introduction of "cbox" on euclidean_space
  - renamed theorems:
      interval ~> box
      mem_interval ~> mem_box
      interval_eq_empty ~> box_eq_empty
      interval_ne_empty ~> box_ne_empty
      interval_sing(1) ~> cbox_sing
      interval_sing(2) ~> box_sing
      subset_interval_imp ~> subset_box_imp
      subset_interval ~> subset_box
      open_interval ~> open_box
      closed_interval ~> closed_cbox
      interior_closed_interval ~> interior_cbox
      bounded_closed_interval ~> bounded_cbox
      compact_interval ~> compact_cbox
      bounded_subset_closed_interval_symmetric ~> bounded_subset_cbox_symmetric
      bounded_subset_closed_interval ~> bounded_subset_cbox
      mem_interval_componentwiseI ~> mem_box_componentwiseI
      convex_box ~> convex_prod
      rel_interior_real_interval ~> rel_interior_real_box
      convex_interval ~> convex_box
      convex_hull_eq_real_interval ~> convex_hull_eq_real_cbox
      frechet_derivative_within_closed_interval ~> frechet_derivative_within_cbox
      content_closed_interval' ~> content_cbox'
      elementary_subset_interval ~> elementary_subset_box
      diameter_closed_interval ~> diameter_cbox
      frontier_closed_interval ~> frontier_cbox
      frontier_open_interval ~> frontier_box
      bounded_subset_open_interval_symmetric ~> bounded_subset_box_symmetric
      closure_open_interval ~> closure_box
      open_closed_interval_convex ~> open_cbox_convex
      open_interval_midpoint ~> box_midpoint
      content_image_affinity_interval ~> content_image_affinity_cbox
      is_interval_interval ~> is_interval_cbox + is_interval_box +
        is_interval_closed_interval
      bounded_interval ~> bounded_closed_interval + bounded_boxes
  - respective theorems for intervals over the reals:
      content_closed_interval + content_cbox
      has_integral + has_integral_real
      fine_division_exists + fine_division_exists_real
      has_integral_null + has_integral_null_real
      tagged_division_union_interval + tagged_division_union_interval_real
      has_integral_const + has_integral_const_real
      integral_const + integral_const_real
      has_integral_bound + has_integral_bound_real
      integrable_continuous + integrable_continuous_real
      integrable_subinterval + integrable_subinterval_real
      has_integral_reflect_lemma + has_integral_reflect_lemma_real
      integrable_reflect + integrable_reflect_real
      integral_reflect + integral_reflect_real
      image_affinity_interval + image_affinity_cbox
      image_smult_interval + image_smult_cbox
      integrable_const + integrable_const_ivl
      integrable_on_subinterval + integrable_on_subcbox
  - renamed theorems:
      derivative_linear ~> has_derivative_bounded_linear
      derivative_is_linear ~> has_derivative_linear
      bounded_linear_imp_linear ~> bounded_linear.linear

* HOL-Probability:
  - Renamed positive_integral to nn_integral:
    . Renamed all lemmas "*positive_integral*" to "*nn_integral*"
        positive_integral_positive ~> nn_integral_nonneg
    . Renamed abbreviation integral\<^sup>P to integral\<^sup>N.
  - replaced the Lebesgue integral on real numbers by the more general
    Bochner integral for functions into a real-normed vector space.
      integral_zero ~> integral_zero / integrable_zero
      integral_minus ~> integral_minus / integrable_minus
      integral_add ~> integral_add / integrable_add
      integral_diff ~> integral_diff / integrable_diff
      integral_setsum ~> integral_setsum / integrable_setsum
      integral_multc ~> integral_mult_left / integrable_mult_left
      integral_cmult ~> integral_mult_right / integrable_mult_right
      integral_triangle_inequality ~> integral_norm_bound
      integrable_nonneg ~> integrableI_nonneg
      integral_positive ~> integral_nonneg_AE
      integrable_abs_iff ~> integrable_abs_cancel
      positive_integral_lim_INF ~> nn_integral_liminf
      lebesgue_real_affine ~> lborel_real_affine
      borel_integral_has_integral ~> has_integral_lebesgue_integral
      integral_indicator ~> integral_real_indicator / integrable_real_indicator
      positive_integral_fst ~> nn_integral_fst'
      positive_integral_fst_measurable ~> nn_integral_fst
      positive_integral_snd_measurable ~> nn_integral_snd
      integrable_fst_measurable ~> integral_fst / integrable_fst / AE_integrable_fst
      integrable_snd_measurable ~> integral_snd / integrable_snd / AE_integrable_snd
      integral_monotone_convergence ~> integral_monotone_convergence / integrable_monotone_convergence
      integral_monotone_convergence_at_top ~> integral_monotone_convergence_at_top / integrable_monotone_convergence_at_top
      has_integral_iff_positive_integral_lebesgue ~> has_integral_iff_has_bochner_integral_lebesgue_nonneg
      lebesgue_integral_has_integral ~> has_integral_integrable_lebesgue_nonneg
      positive_integral_lebesgue_has_integral ~> integral_has_integral_lebesgue_nonneg / integrable_has_integral_lebesgue_nonneg
      lebesgue_integral_real_affine ~> nn_integral_real_affine
      has_integral_iff_positive_integral_lborel ~> integral_has_integral_nonneg / integrable_has_integral_nonneg

    The following theorems were removed:
      lebesgue_integral_nonneg
      lebesgue_integral_uminus
      lebesgue_integral_cmult
      lebesgue_integral_multc
      lebesgue_integral_cmult_nonneg
      integral_cmul_indicator
      integral_real

  - Formalized properties about exponentially, Erlang, and normally
    distributed random variables.

* HOL-Decision_Procs: Separate command 'approximate' for approximative
computation in src/HOL/Decision_Procs/Approximation. Minor
INCOMPATIBILITY.

*** Scala ***

* The signature and semantics of Document.Snapshot.cumulate_markup /
select_markup have been clarified. Markup is now traversed in the order
of reports given by the prover: later markup is usually more specific
and may override results accumulated so far. The elements guard is
mandatory and checked precisely. Subtle INCOMPATIBILITY.

* Substantial reworking of internal PIDE protocol communication
channels. INCOMPATIBILITY.

*** ML ***

* Subtle change of semantics of Thm.eq_thm: theory stamps are not
compared (according to Thm.thm_ord), but assumed to be covered by the
current background theory. Thus equivalent data produced in different
branches of the theory graph usually coincides (e.g. relevant for
theory merge). Note that the softer Thm.eq_thm_prop is often more
appropriate than Thm.eq_thm.

* Proper context for basic Simplifier operations: rewrite_rule,
rewrite_goals_rule, rewrite_goals_tac etc. INCOMPATIBILITY, need to
pass runtime Proof.context (and ensure that the simplified entity
actually belongs to it).

* Proper context discipline for read_instantiate and instantiate_tac:
variables that are meant to become schematic need to be given as fixed,
and are generalized by the explicit context of local variables. This
corresponds to Isar attributes "where" and "of" with 'for' declaration.
INCOMPATIBILITY, also due to potential change of indices of schematic variables. * Moved ML_Compiler.exn_trace and other operations on exceptions to structure Runtime. Minor INCOMPATIBILITY. * Discontinued old Toplevel.debug in favour of system option "ML_exception_trace", which may be also declared within the context via "declare [[ML_exception_trace = true]]". Minor INCOMPATIBILITY. * Renamed configuration option "ML_trace" to "ML_source_trace". Minor INCOMPATIBILITY. * Configuration option "ML_print_depth" controls the pretty-printing depth of the ML compiler within the context. The old print_depth in ML is still available as default_print_depth, but rarely used. Minor INCOMPATIBILITY. * Toplevel function "use" refers to raw ML bootstrap environment, without Isar context nor antiquotations. Potential INCOMPATIBILITY. Note that 'ML_file' is the canonical command to load ML files into the formal context. * Simplified programming interface to define ML antiquotations, see structure ML_Antiquotation. Minor INCOMPATIBILITY. * ML antiquotation @{here} refers to its source position, which is occasionally useful for experimentation and diagnostic purposes. * ML antiquotation @{path} produces a Path.T value, similarly to Path.explode, but with compile-time check against the file-system and some PIDE markup. Note that unlike theory source, ML does not have a well-defined master directory, so an absolute symbolic path specification is usually required, e.g. "~~/src/HOL". * ML antiquotation @{print} inlines a function to print an arbitrary ML value, which is occasionally useful for diagnostic or demonstration purposes. *** System *** * Proof General with its traditional helper scripts is now an optional Isabelle component, e.g. see ProofGeneral-4.2-2 from the Isabelle component repository http://isabelle.in.tum.de/components/. Note that the "system" manual provides general explanations about add-on components, especially those that are not bundled with the release. * The raw Isabelle process executable has been renamed from "isabelle-process" to "isabelle_process", which conforms to common shell naming conventions, and allows to define a shell function within the Isabelle environment to avoid dynamic path lookup. Rare incompatibility for old tools that do not use the ISABELLE_PROCESS settings variable. * Former "isabelle tty" has been superseded by "isabelle console", with implicit build like "isabelle jedit", and without the mostly obsolete Isar TTY loop. * Simplified "isabelle display" tool. Settings variables DVI_VIEWER and PDF_VIEWER now refer to the actual programs, not shell command-lines. Discontinued option -c: invocation may be asynchronous via desktop environment, without any special precautions. Potential INCOMPATIBILITY with ambitious private settings. * Removed obsolete "isabelle unsymbolize". Note that the usual format for email communication is the Unicode rendering of Isabelle symbols, as produced by Isabelle/jEdit, for example. * Removed obsolete tool "wwwfind". Similar functionality may be integrated into Isabelle/jEdit eventually. * Improved 'display_drafts' concerning desktop integration and repeated invocation in PIDE front-end: re-use single file $ISABELLE_HOME_USER/tmp/drafts.pdf and corresponding views. * Session ROOT specifications require explicit 'document_files' for robust dependencies on LaTeX sources. Only these explicitly given files are copied to the document output directory, before document processing is started. 
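For example, a minimal ROOT entry with explicit document sources
(session and file names are hypothetical):

  session Example = HOL +
    theories Example
    document_files
      "root.tex"
      "root.bib"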
* Windows: support for regular TeX installation (e.g. MiKTeX) instead
of TeX Live from Cygwin.


New in Isabelle2013-2 (December 2013)
-------------------------------------

*** Prover IDE -- Isabelle/Scala/jEdit ***

* More robust editing of running commands with internal forks, e.g.
non-terminating 'by' steps.

* More relaxed Sledgehammer panel: avoid repeated application of query
after edits surrounding the command location.

* More status information about commands that are interrupted
accidentally (via physical event or Poly/ML runtime system signal,
e.g. out-of-memory).

*** System ***

* More robust termination of external processes managed by Isabelle/ML:
support cancellation of tasks within the range of milliseconds, as
required for PIDE document editing with automatically tried tools (e.g.
Sledgehammer).

* Reactivated Isabelle/Scala kill command for external processes on Mac
OS X, which was accidentally broken in Isabelle2013-1 due to a
workaround for some Debian/Ubuntu Linux versions from 2013.


New in Isabelle2013-1 (November 2013)
-------------------------------------

*** General ***

* Discontinued obsolete 'uses' within theory header. Note that commands
like 'ML_file' work without separate declaration of file dependencies.
Minor INCOMPATIBILITY.

* Discontinued redundant 'use' command, which was superseded by
'ML_file' in Isabelle2013. Minor INCOMPATIBILITY.

* Simplified subscripts within identifiers, using plain \<^sub> instead
of the second copy \<^isub> and \<^isup>. Superscripts are only for
literal tokens within notation; explicit mixfix annotations for consts
or fixed variables may be used as fall-back for unusual names. Obsolete
\<twosuperior> has been expanded to \<^sup>2 in Isabelle/HOL.
INCOMPATIBILITY, use "isabelle update_sub_sup" to standardize symbols
as a starting point for further manual cleanup. The ML reference
variable "legacy_isub_isup" may be set as temporary workaround, to make
the prover accept a subset of the old identifier syntax.

* Document antiquotations: term style "isub" has been renamed to "sub".
Minor INCOMPATIBILITY.

* Uniform management of "quick_and_dirty" as system option (see also
"isabelle options"), configuration option within the context (see also
Config.get in Isabelle/ML), and attribute in Isabelle/Isar. Minor
INCOMPATIBILITY, need to use more official Isabelle means to access
quick_and_dirty, instead of historical poking into mutable reference.

* Renamed command 'print_configs' to 'print_options'. Minor
INCOMPATIBILITY.

* Proper diagnostic command 'print_state'. Old 'pr' (with its implicit
change of some global references) is retained for now as control
command, e.g. for ProofGeneral 3.7.x.

* Discontinued 'print_drafts' command with its old-fashioned PS output
and Unix command-line print spooling. Minor INCOMPATIBILITY: use
'display_drafts' instead and print via the regular document viewer.

* Updated and extended "isar-ref" and "implementation" manual,
eliminated old "ref" manual.

*** Prover IDE -- Isabelle/Scala/jEdit ***

* New manual "jedit" for Isabelle/jEdit, see isabelle doc or
Documentation panel.

* Dockable window "Documentation" provides access to Isabelle
documentation.

* Dockable window "Find" provides query operations for formal entities
(GUI front-end to 'find_theorems' command).

* Dockable window "Sledgehammer" manages asynchronous / parallel
sledgehammer runs over existing document sources, independently of
normal editing and checking process.
* Dockable window "Timing" provides an overview of relevant command timing information, depending on option jedit_timing_threshold. The same timing information is shown in the extended tooltip of the command keyword, when hovering the mouse over it while the CONTROL or COMMAND modifier is pressed. * Improved dockable window "Theories": Continuous checking of proof document (visible and required parts) may be controlled explicitly, using check box or shortcut "C+e ENTER". Individual theory nodes may be marked explicitly as required and checked in full, using check box or shortcut "C+e SPACE". * Improved completion mechanism, which is now managed by the Isabelle/jEdit plugin instead of SideKick. Refined table of Isabelle symbol abbreviations (see $ISABELLE_HOME/etc/symbols). * Standard jEdit keyboard shortcut C+b complete-word is remapped to isabelle.complete for explicit completion in Isabelle sources. INCOMPATIBILITY wrt. jEdit defaults, may have to invent new shortcuts to resolve conflict. * Improved support of various "minor modes" for Isabelle NEWS, options, session ROOT etc., with completion and SideKick tree view. * Strictly monotonic document update, without premature cancellation of running transactions that are still needed: avoid reset/restart of such command executions while editing. * Support for asynchronous print functions, as overlay to existing document content. * Support for automatic tools in HOL, which try to prove or disprove toplevel theorem statements. * Action isabelle.reset-font-size resets main text area font size according to Isabelle/Scala plugin option "jedit_font_reset_size" (see also "Plugin Options / Isabelle / General"). It can be bound to some keyboard shortcut by the user (e.g. C+0 and/or C+NUMPAD0). * File specifications in jEdit (e.g. file browser) may refer to $ISABELLE_HOME and $ISABELLE_HOME_USER on all platforms. Discontinued obsolete $ISABELLE_HOME_WINDOWS variable. * Improved support for Linux look-and-feel "GTK+", see also "Utilities / Global Options / Appearance". * Improved support of native Mac OS X functionality via "MacOSX" plugin, which is now enabled by default. *** Pure *** * Commands 'interpretation' and 'sublocale' are now target-sensitive. In particular, 'interpretation' allows for non-persistent interpretation within "context ... begin ... end" blocks offering a light-weight alternative to 'sublocale'. See "isar-ref" manual for details. * Improved locales diagnostic command 'print_dependencies'. * Discontinued obsolete 'axioms' command, which has been marked as legacy since Isabelle2009-2. INCOMPATIBILITY, use 'axiomatization' instead, while observing its uniform scope for polymorphism. * Discontinued empty name bindings in 'axiomatization'. INCOMPATIBILITY. * System option "proofs" has been discontinued. Instead the global state of Proofterm.proofs is persistently compiled into logic images as required, notably HOL-Proofs. Users no longer need to change Proofterm.proofs dynamically. Minor INCOMPATIBILITY. * Syntax translation functions (print_translation etc.) always depend on Proof.context. Discontinued former "(advanced)" option -- this is now the default. Minor INCOMPATIBILITY. * Former global reference trace_unify_fail is now available as configuration option "unify_trace_failure" (global context only). * SELECT_GOAL now retains the syntactic context of the overall goal state (schematic variables etc.). Potential INCOMPATIBILITY in rare situations. 
*** HOL ***

* Stronger precedence of syntax for big intersection and union on sets,
in accordance with corresponding lattice operations. INCOMPATIBILITY.

* Notation "{p:A. P}" now allows tuple patterns as well.

* Nested case expressions are now translated in a separate check phase
rather than during parsing. The data for case combinators is separated
from the datatype package. The declaration attribute
"case_translation" can be used to register new case combinators:

  declare [[case_translation case_combinator constructor1 ... constructorN]]

* Code generator:
  - 'code_printing' unifies 'code_const' / 'code_type' / 'code_class' /
    'code_instance'.
  - 'code_identifier' declares name hints for arbitrary identifiers in
    generated code, subsuming 'code_modulename'.
  See the isar-ref manual for syntax diagrams, and the HOL theories for
  examples.

* Attribute 'code': 'code' now declares concrete and abstract code
equations uniformly. Use explicit 'code equation' and 'code abstract'
to distinguish both when desired.

* Discontinued theories Code_Integer and Efficient_Nat in favour of a
more fine-grained stack of theories Code_Target_Int, Code_Binary_Nat,
Code_Target_Nat and Code_Target_Numeral. See the tutorial on code
generation for details. INCOMPATIBILITY.

* Numeric types are mapped by default to target language numerals:
natural (replaces former code_numeral) and integer (replaces former
code_int). Conversions are available as integer_of_natural /
natural_of_integer / integer_of_nat / nat_of_integer (in HOL) and
Code_Numeral.integer_of_natural / Code_Numeral.natural_of_integer (in
ML). INCOMPATIBILITY.

* Function package: For mutually recursive functions f and g, separate
cases rules f.cases and g.cases are generated instead of unusable
f_g.cases which exposed internal sum types. Potential INCOMPATIBILITY,
in the case that the unusable rule was used nevertheless.

* Function package: For each function f, new rules f.elims are
generated, which eliminate equalities of the form "f x = t".

* New command 'fun_cases' derives ad-hoc elimination rules for function
equations as simplified instances of f.elims, analogous to
inductive_cases. See ~~/src/HOL/ex/Fundefs.thy for some examples.

* Lifting:
  - parametrized correspondence relations are now supported:
    + parametricity theorems for the raw term can be specified in the
      command lift_definition, which allow us to generate stronger
      transfer rules
    + setup_lifting generates stronger transfer rules if parametric
      correspondence relation can be generated
    + various new properties of the relator must be specified to
      support parametricity
    + parametricity theorem for the Quotient relation can be specified
  - setup_lifting generates domain rules for the Transfer package
  - stronger reflexivity prover of respectfulness theorems for type
    copies
  - ===> and --> are now local. The symbols can be introduced by
    interpreting the locale lifting_syntax (typically in an anonymous
    context)
  - Lifting/Transfer relevant parts of Library/Quotient_* are now in
    Main. Potential INCOMPATIBILITY
  - new commands for restoring and deleting Lifting/Transfer context:
    lifting_forget, lifting_update
  - the command print_quotmaps was renamed to print_quot_maps.
    INCOMPATIBILITY

* Transfer:
  - better support for domains in Transfer: replace Domainp T by the
    actual invariant in a transferred goal
  - transfer rules can have as assumptions other transfer rules
  - Experimental support for transferring from the raw level to the
    abstract level: Transfer.transferred attribute
  - Attribute version of the transfer method: untransferred attribute

* Reification and reflection:
  - Reification is now directly available in HOL-Main in structure
    "Reification".
  - Reflection now handles multiple lists with variables also.
  - The whole reflection stack has been decomposed into conversions.
INCOMPATIBILITY.

* Revised devices for recursive definitions over finite sets:
  - Only one fundamental fold combinator on finite set remains:
      Finite_Set.fold :: ('a => 'b => 'b) => 'b => 'a set => 'b
    This is now identity on infinite sets.
  - Locales ("mini packages") for fundamental definitions with
    Finite_Set.fold: folding, folding_idem.
  - Locales comm_monoid_set, semilattice_order_set and
    semilattice_neutr_order_set for big operators on sets. See theory
    Big_Operators for canonical examples. Note that foundational
    constants comm_monoid_set.F and semilattice_set.F correspond to
    former combinators fold_image and fold1 respectively. These are now
    gone. You may use those foundational constants as substitutes, but
    it is preferable to interpret the above locales accordingly.
  - Dropped class ab_semigroup_idem_mult (special case of lattice, no
    longer needed in connection with Finite_Set.fold etc.)
  - Fact renames:
      card.union_inter ~> card_Un_Int [symmetric]
      card.union_disjoint ~> card_Un_disjoint
INCOMPATIBILITY.

* Locale hierarchy for abstract orderings and (semi)lattices.

* Complete_Partial_Order.admissible is defined outside the type class
ccpo, but with mandatory prefix ccpo. Admissibility theorems lose the
class predicate assumption or sort constraint when possible.
INCOMPATIBILITY.

* Introduce type class "conditionally_complete_lattice": Like a
complete lattice but does not assume the existence of the top and
bottom elements. Allows to generalize some lemmas about reals and
extended reals. Removed SupInf and replaced it by the instantiation of
conditionally_complete_lattice for real. Renamed lemmas about
conditionally-complete lattices from Sup_... to cSup_... and from
Inf_... to cInf_... to avoid hiding of similar complete lattice lemmas.

* Introduce type class linear_continuum as combination of
conditionally-complete lattices and inner dense linorders which have
more than one element. INCOMPATIBILITY.

* Introduced type classes order_top and order_bot. The old classes top
and bot only contain the syntax without assumptions. INCOMPATIBILITY:
Rename bot -> order_bot, top -> order_top.

* Introduce type classes "no_top" and "no_bot" for orderings without
top and bottom elements.

* Split dense_linorder into inner_dense_order and no_top, no_bot.

* Complex_Main: Unify and move various concepts from
HOL-Multivariate_Analysis to HOL-Complex_Main.
  - Introduce type class (lin)order_topology and
    linear_continuum_topology. Allows to generalize theorems about
    limits and order. Instances are reals and extended reals.
  - continuous and continuous_on from Multivariate_Analysis:
    "continuous" is the continuity of a function at a filter. "isCont"
    is now an abbreviation: "isCont f x == continuous (at x) f".
    Generalized continuity lemmas from isCont to continuous on an
    arbitrary filter.
  - compact from Multivariate_Analysis. Use Bolzano's lemma to prove
    compactness of closed intervals on reals.
    Continuous functions attain infimum and supremum on compact sets.
    The inverse of a continuous function is continuous, when the
    function is continuous on a compact set.
  - connected from Multivariate_Analysis. Use it to prove the
    intermediate value theorem. Show connectedness of intervals on
    linear_continuum_topology.
  - first_countable_topology from Multivariate_Analysis. Is used to
    show equivalence of properties on the neighbourhood filter of x and
    on all sequences converging to x.
  - FDERIV: Definition of has_derivative moved to Deriv.thy. Moved
    theorems from Library/FDERIV.thy to Deriv.thy and base the
    definition of DERIV on FDERIV. Add variants of DERIV and FDERIV
    which are restricted to sets, i.e. to represent derivatives from
    left or right.
  - Removed the within-filter. It is replaced by the principal filter:
      F within X = inf F (principal X)
  - Introduce "at x within U" as a single constant, "at x" is now an
    abbreviation for "at x within UNIV".
  - Introduce named theorem collections tendsto_intros,
    continuous_intros, continuous_on_intros and FDERIV_intros. Theorems
    in tendsto_intros (or FDERIV_intros) are also available as
    tendsto_eq_intros (or FDERIV_eq_intros) where the right-hand side
    is replaced by a congruence rule. This allows to apply them as
    intro rules and then prove equivalence by the simplifier.
  - Restructured theories in HOL-Complex_Main:
    + Moved RealDef and RComplete into Real
    + Introduced Topological_Spaces and moved theorems about
      topological spaces, filters, limits and continuity to it
    + Renamed RealVector to Real_Vector_Spaces
    + Split Lim, SEQ, Series into Topological_Spaces,
      Real_Vector_Spaces, and Limits
    + Moved Ln and Log to Transcendental
    + Moved theorems about continuity from Deriv to Topological_Spaces
  - Remove various auxiliary lemmas.
INCOMPATIBILITY.

* Nitpick:
  - Added option "spy".
  - Reduce incidence of "too high arity" errors.

* Sledgehammer:
  - Renamed option:
      isar_shrink ~> isar_compress
    INCOMPATIBILITY.
  - Added options "isar_try0", "spy".
  - Better support for "isar_proofs".
  - MaSh has been fine-tuned and now runs as a local server.

* Improved support for ad hoc overloading of constants (see also
isar-ref manual and ~~/src/HOL/ex/Adhoc_Overloading_Examples.thy).

* Library/Polynomial.thy:
  - Use lifting for primitive definitions.
  - Explicit conversions from and to lists of coefficients, used for
    generated code.
  - Replaced recursion operator poly_rec by fold_coeffs.
  - Prefer pre-existing gcd operation for gcd.
  - Fact renames:
      poly_eq_iff ~> poly_eq_poly_eq_iff
      poly_ext ~> poly_eqI
      expand_poly_eq ~> poly_eq_iff
INCOMPATIBILITY.

* New Library/Simps_Case_Conv.thy: Provides commands simps_of_case and
case_of_simps to convert function definitions between a list of
equations with patterns on the lhs and a single equation with case
expressions on the rhs. See also Ex/Simps_Case_Conv_Examples.thy.

* New Library/FSet.thy: type of finite sets defined as a subtype of
sets defined by Lifting/Transfer.

* Discontinued theory src/HOL/Library/Eval_Witness. INCOMPATIBILITY.

* Consolidation of library theories on product orders:
    Product_Lattice ~> Product_Order -- pointwise order on products
    Product_ord ~> Product_Lexorder -- lexicographic order on products
INCOMPATIBILITY.

* Imperative-HOL: The MREC combinator is considered legacy and no
longer included by default. INCOMPATIBILITY, use partial_function
instead, or import theory Legacy_Mrec as a fallback.

* HOL-Algebra: Discontinued theories ~~/src/HOL/Algebra/abstract and
~~/src/HOL/Algebra/poly.
Existing theories should be based on ~~/src/HOL/Library/Polynomial
instead.  The latter provides integration with HOL's type classes for
rings.  INCOMPATIBILITY.

* HOL-BNF:
  - Various improvements to BNF-based (co)datatype package, including
    new commands "primrec_new", "primcorec", and "datatype_new_compat",
    as well as documentation.  See "datatypes.pdf" for details.
  - New "coinduction" method to avoid some boilerplate (compared to
    coinduct).
  - Renamed keywords:
      data ~> datatype_new
      codata ~> codatatype
      bnf_def ~> bnf
  - Renamed many generated theorems, including
      discs ~> disc
      map_comp' ~> map_comp
      map_id' ~> map_id
      sels ~> sel
      set_map' ~> set_map
      sets ~> set
  INCOMPATIBILITY.


*** ML ***

* Spec_Check is a Quickcheck tool for Isabelle/ML.  The ML function
"check_property" allows to check specifications of the form "ALL x y
z. prop x y z".  See also ~~/src/Tools/Spec_Check/ with its
Examples.thy in particular.

* Improved printing of exception trace in Poly/ML 5.5.1, with regular
tracing output in the command transaction context instead of physical
stdout.  See also Toplevel.debug, Toplevel.debugging and
ML_Compiler.exn_trace.

* ML type "theory" is now immutable, without any special treatment of
drafts or linear updates (which could lead to "stale theory" errors in
the past).  Discontinued obsolete operations like Theory.copy,
Theory.checkpoint, and the auxiliary type theory_ref.  Minor
INCOMPATIBILITY.

* More uniform naming of goal functions for skipped proofs:
    Skip_Proof.prove ~> Goal.prove_sorry
    Skip_Proof.prove_global ~> Goal.prove_sorry_global
Minor INCOMPATIBILITY.

* Simplifier tactics and tools use proper Proof.context instead of
historic type simpset.  Old-style declarations like addsimps,
addsimprocs etc. operate directly on Proof.context.  Raw type simpset
retains its use as snapshot of the main Simplifier context, using
simpset_of and put_simpset on Proof.context.  INCOMPATIBILITY -- port
old tools by making them depend on (ctxt : Proof.context) instead of
(ss : simpset), then turn (simpset_of ctxt) into ctxt (see also the
porting sketch below, after the System section).

* Modifiers for classical wrappers (e.g. addWrapper, delWrapper)
operate on Proof.context instead of claset, for uniformity with addIs,
addEs, addDs etc.  Note that claset_of and put_claset allow to manage
clasets separately from the context.

* Discontinued obsolete ML antiquotations @{claset} and @{simpset}.
INCOMPATIBILITY, use @{context} instead.

* Antiquotation @{theory_context A} is similar to @{theory A}, but
presents the result as initial Proof.context.


*** System ***

* Discontinued obsolete isabelle usedir, mkdir, make -- superseded by
"isabelle build" in Isabelle2013.  INCOMPATIBILITY.

* Discontinued obsolete isabelle-process options -f and -u (former
administrative aliases of option -e).  Minor INCOMPATIBILITY.

* Discontinued obsolete isabelle print tool, and PRINT_COMMAND
settings variable.

* Discontinued ISABELLE_DOC_FORMAT settings variable and historic
document formats: dvi.gz, ps, ps.gz -- the default document format is
always pdf.

* Isabelle settings variable ISABELLE_BUILD_JAVA_OPTIONS allows to
specify global resources of the JVM process run by isabelle build.

* Toplevel executable $ISABELLE_HOME/bin/isabelle_scala_script allows
to run Isabelle/Scala source files as standalone programs.

* Improved "isabelle keywords" tool (for old-style ProofGeneral
keyword tables): use Isabelle/Scala operations, which inspect outer
syntax without requiring to build sessions first.
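As an illustration of the Simplifier context change above, a minimal
porting sketch (the tactic "my_tac" and fact list "my_rules" are
hypothetical, for illustration only):

  (* before: fun my_tac (ss: simpset) =
       asm_full_simp_tac (ss addsimps my_rules) 1 *)

  (* after: the hypothetical tool depends on Proof.context directly *)
  fun my_tac (ctxt: Proof.context) =
    asm_full_simp_tac (ctxt addsimps my_rules) 1;

Callers that used to pass (simpset_of ctxt) now pass ctxt itself;
genuine simpset snapshots remain available via simpset_of /
put_simpset.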
* Sessions may be organized via 'chapter' specifications in the ROOT
file, which determines a two-level hierarchy of browser info.  The old
tree-like organization via implicit sub-session relation (with its
tendency towards erratic fluctuation of URLs) has been discontinued.
The default chapter is called "Unsorted".  Potential INCOMPATIBILITY
for HTML presentation of theories.



New in Isabelle2013 (February 2013)
-----------------------------------

*** General ***

* Theorem status about oracles and unfinished/failed future proofs is
no longer printed by default, since it is incompatible with
incremental / parallel checking of the persistent document model.  ML
function Thm.peek_status may be used to inspect a snapshot of the
ongoing evaluation process.  Note that in batch mode --- notably
isabelle build --- the system ensures that future proofs of all
accessible theorems in the theory context are finished (as before).

* Configuration option show_markup controls direct inlining of markup
into the printed representation of formal entities --- notably type
and sort constraints.  This enables Prover IDE users to retrieve that
information via tooltips in the output window, for example.

* Command 'ML_file' evaluates ML text from a file directly within the
theory, without any predeclaration via 'uses' in the theory header.

* The old 'use' command and the corresponding keyword 'uses' in the
theory header are legacy features and will be discontinued soon.
Tools that load their additional source files may imitate the
'ML_file' implementation, such that the system can take care of
dependencies properly.

* Discontinued obsolete method fastsimp / tactic fast_simp_tac, which
have been called fastforce / fast_force_tac since Isabelle2011-1.

* Updated and extended "isar-ref" and "implementation" manual, reduced
remaining material in old "ref" manual.

* Improved support for auxiliary contexts that indicate block
structure for specifications.  Nesting of "context fixes ... context
assumes ..." and "class ... context ...".

* Attribute "consumes" allows a negative value as well, which is
interpreted relatively to the total number of premises of the rule in
the target context.  This form of declaration is stable when exported
from a nested 'context' with additional assumptions.  It is the
preferred form for definitional packages, notably cases/rules produced
in HOL/inductive and HOL/function.

* More informative error messages for Isar proof commands involving
lazy enumerations (method applications etc.).

* Refined 'help' command to retrieve outer syntax commands according
to name patterns (with clickable results).


*** Prover IDE -- Isabelle/Scala/jEdit ***

* Parallel terminal proofs ('by') are enabled by default, likewise
proofs that are built into packages like 'datatype', 'function'.  This
allows to "run ahead" checking the theory specifications on the
surface, while the prover is still crunching on internal
justifications.  Unfinished / cancelled proofs are restarted as
required to complete full proof checking eventually.

* Improved output panel with tooltips, hyperlinks etc. based on the
same Rich_Text_Area as regular Isabelle/jEdit buffers.  Activation of
tooltips leads to some window that supports the same recursively,
which can lead to stacks of tooltips as the semantic document content
is explored.  ESCAPE closes the whole stack, individual windows may be
closed separately, or detached to become independent jEdit dockables.
* Improved support for commands that produce graph output: the text
message contains a clickable area to open a new instance of the graph
browser on demand.

* More robust incremental parsing of outer syntax (partial comments,
malformed symbols).  Changing the balance of open/close quotes and
comment delimiters works more conveniently with unfinished situations
that frequently occur in user interaction.

* More efficient painting and improved reactivity when editing large
files.  More scalable management of formal document content.

* Smarter handling of tracing messages: the prover process pauses
after a certain number of messages per command transaction, with some
user dialog to stop or continue.  This avoids swamping the front-end
with potentially infinite message streams.

* More plugin options and preferences, based on Isabelle/Scala.  The
jEdit plugin option panel provides access to some Isabelle/Scala
options, including tuning parameters for editor reactivity and color
schemes.

* Dockable window "Symbols" provides some editing support for Isabelle
symbols.

* Dockable window "Monitor" shows ML runtime statistics.  Note that
continuous display of the chart slows down the system.

* Improved editing support for control styles: subscript, superscript,
bold, reset of style -- operating on single symbols or text
selections.  Cf. keyboard shortcuts C+e DOWN/UP/RIGHT/LEFT.

* Actions isabelle.increase-font-size and isabelle.decrease-font-size
adjust the main text area font size, and its derivatives for output,
tooltips etc.  Cf. keyboard shortcuts C-PLUS and C-MINUS, which often
need to be adapted to local keyboard layouts.

* More reactive completion popup by default: use \t (TAB) instead of
\n (NEWLINE) to minimize intrusion into regular flow of editing.  See
also "Plugin Options / SideKick / General / Code Completion Options".

* Implicit check and build dialog of the specified logic session
image.  For example, HOL, HOLCF, HOL-Nominal can be produced on
demand, without bundling big platform-dependent heap images in the
Isabelle distribution.

* Uniform Java 7 platform on Linux, Mac OS X, Windows: recent updates
from Oracle provide better multi-platform experience.  This version is
now bundled exclusively with Isabelle.


*** Pure ***

* Code generation for Haskell: restrict unqualified imports from
Haskell Prelude to a small set of fundamental operations.

* Command 'export_code': relative file names are interpreted
relatively to the master directory of the current theory, rather than
the rather arbitrary current working directory.  INCOMPATIBILITY.

* Discontinued obsolete attribute "COMP".  Potential INCOMPATIBILITY,
use regular rule composition via "OF" / "THEN", or explicit proof
structure instead.  Note that Isabelle/ML provides a variety of
operators like COMP, INCR_COMP, COMP_INCR, which need to be applied
with some care where this is really required.

* Command 'typ' supports an additional variant with explicit sort
constraint, to infer and check the most general type conforming to a
given sort.  Example (in HOL):

  typ "_ * _ * bool * unit" :: finite

* Command 'locale_deps' visualizes all locales and their relations as
a Hasse diagram.


*** HOL ***

* Sledgehammer:
  - Added MaSh relevance filter based on machine-learning; see the
    Sledgehammer manual for details.
  - Polished Isar proofs generated with "isar_proofs" option.
  - Rationalized type encodings ("type_enc" option).
  - Renamed "kill_provers" subcommand to "kill_all".
  - Renamed options:
      isar_proof ~> isar_proofs
      isar_shrink_factor ~> isar_shrink
      max_relevant ~> max_facts
      relevance_thresholds ~> fact_thresholds

* Quickcheck: added an optimisation for equality premises.  It is
switched on by default, and can be switched off by setting the
configuration quickcheck_optimise_equality to false.

* Quotient: only one quotient can be defined by quotient_type.
INCOMPATIBILITY.

* Lifting:
  - generation of an abstraction function equation in lift_definition
  - quot_del attribute
  - renamed no_abs_code -> no_code (INCOMPATIBILITY)

* Simproc "finite_Collect" rewrites set comprehensions into pointfree
expressions.

* Preprocessing of the code generator rewrites set comprehensions into
pointfree expressions.

* The SMT solver Z3 has now by default a restricted set of directly
supported features.  For the full set of features (div/mod, nonlinear
arithmetic, datatypes/records) with potential proof reconstruction
failures, enable the configuration option "z3_with_extensions".  Minor
INCOMPATIBILITY.

* Simplified 'typedef' specifications: historical options for implicit
set definition and alternative name have been discontinued.  The
former behavior of "typedef (open) t = A" is now the default, but
written just "typedef t = A".  INCOMPATIBILITY, need to adapt theories
accordingly.

* Removed constant "chars"; prefer "Enum.enum" on type "char"
directly.  INCOMPATIBILITY.

* Moved operations product, sublists and n_lists from theory Enum to
List.  INCOMPATIBILITY.

* Theorem UN_o generalized to SUP_comp.  INCOMPATIBILITY.

* Class "comm_monoid_diff" formalises properties of bounded
subtraction, with natural numbers and multisets as typical instances.

* Added combinator "Option.these" with type "'a option set => 'a set".

* Theory "Transitive_Closure": renamed lemmas
    reflcl_tranclp -> reflclp_tranclp
    rtranclp_reflcl -> rtranclp_reflclp
INCOMPATIBILITY.

* Theory "Rings": renamed lemmas (in class semiring)
    left_distrib ~> distrib_right
    right_distrib ~> distrib_left
INCOMPATIBILITY.

* Generalized the definition of limits:
  - Introduced the predicate filterlim (LIM x F. f x :> G) which
    expresses that when the input values x converge to F then the
    output f x converges to G.
  - Added filters for convergence to positive (at_top) and negative
    infinity (at_bot).
  - Moved infinity in the norm (at_infinity) from
    Multivariate_Analysis to Complex_Main.
  - Removed real_tendsto_inf, it is superseded by "LIM x F. f x :>
    at_top".
INCOMPATIBILITY.

* Theory "Library/Option_ord" provides instantiation of option type to
lattice type classes.

* Theory "Library/Multiset": renamed
    constant fold_mset ~> Multiset.fold
    fact fold_mset_commute ~> fold_mset_comm
INCOMPATIBILITY.

* Renamed theory Library/List_Prefix to Library/Sublist, with related
changes as follows.
  - Renamed constants (and related lemmas)
      prefix ~> prefixeq
      strict_prefix ~> prefix
  - Replaced constant "postfix" by "suffixeq" with swapped argument
    order (i.e., "postfix xs ys" is now "suffixeq ys xs") and dropped
    old infix syntax "xs >>= ys"; use "suffixeq ys xs" instead.
    Renamed lemmas accordingly.
  - Added constant "list_hembeq" for homeomorphic embedding on lists.
    Added abbreviation "sublisteq" for special case
    "list_hembeq (op =)".
  - Theory Library/Sublist no longer provides "order" and "bot" type
    class instances for the prefix order (merely corresponding locale
    interpretations).  The type class instances are now in theory
    Library/Prefix_Order.
  - The sublist relation of theory Library/Sublist_Order is now based
    on "Sublist.sublisteq".
    Renamed lemmas accordingly:
      le_list_append_le_same_iff ~> Sublist.sublisteq_append_le_same_iff
      le_list_append_mono ~> Sublist.list_hembeq_append_mono
      le_list_below_empty ~> Sublist.list_hembeq_Nil,
        Sublist.list_hembeq_Nil2
      le_list_Cons_EX ~> Sublist.list_hembeq_ConsD
      le_list_drop_Cons2 ~> Sublist.sublisteq_Cons2'
      le_list_drop_Cons_neq ~> Sublist.sublisteq_Cons2_neq
      le_list_drop_Cons ~> Sublist.sublisteq_Cons'
      le_list_drop_many ~> Sublist.sublisteq_drop_many
      le_list_filter_left ~> Sublist.sublisteq_filter_left
      le_list_rev_drop_many ~> Sublist.sublisteq_rev_drop_many
      le_list_rev_take_iff ~> Sublist.sublisteq_append
      le_list_same_length ~> Sublist.sublisteq_same_length
      le_list_take_many_iff ~> Sublist.sublisteq_append'
      less_eq_list.drop ~> less_eq_list_drop
      less_eq_list.induct ~> less_eq_list_induct
      not_le_list_length ~> Sublist.not_sublisteq_length
    INCOMPATIBILITY.

* New theory Library/Countable_Set.

* Theories Library/Debug and Library/Parallel provide debugging and
parallel execution for code generated towards Isabelle/ML.

* Theory Library/FuncSet: Extended support for Pi and extensional, and
introduce the extensional dependent function space "PiE".  Replaced
extensional_funcset by an abbreviation, and renamed lemmas from
extensional_funcset to PiE as follows:
    extensional_empty ~> PiE_empty
    extensional_funcset_empty_domain ~> PiE_empty_domain
    extensional_funcset_empty_range ~> PiE_empty_range
    extensional_funcset_arb ~> PiE_arb
    extensional_funcset_mem ~> PiE_mem
    extensional_funcset_extend_domainI ~> PiE_fun_upd
    extensional_funcset_restrict_domain ~> fun_upd_in_PiE
    extensional_funcset_extend_domain_eq ~> PiE_insert_eq
    card_extensional_funcset ~> card_PiE
    finite_extensional_funcset ~> finite_PiE
INCOMPATIBILITY.

* Theory Library/FinFun: theory of almost everywhere constant
functions (supersedes the AFP entry "Code Generation for Functions as
Data").

* Theory Library/Phantom: generic phantom type to make a type
parameter appear in a constant's type.  This alternative to adding
TYPE('a) as another parameter avoids unnecessary closures in generated
code.

* Theory Library/RBT_Impl: efficient construction of red-black trees
from sorted associative lists.  Merging two trees with rbt_union may
return a structurally different tree than before.  Potential
INCOMPATIBILITY.

* Theory Library/IArray: immutable arrays with code generation.

* Theory Library/Finite_Lattice: theory of finite lattices.

* HOL/Multivariate_Analysis: replaced

  "basis :: 'a::euclidean_space => nat => real"
  "\<chi>\<chi> :: (nat => real) => 'a::euclidean_space"

on euclidean spaces by using the inner product "_ \<bullet> _" with
vectors from the Basis set: "\<chi>\<chi> i. f i" is superseded by
"SUM i : Basis. f i * r i".

With this change the following constants are also changed or removed:

    DIM('a) :: nat ~> card (Basis :: 'a set) (is an abbreviation)
    a $$ i ~> inner a i (where i : Basis)
    cart_base i  removed
    \<pi>, \<pi>'  removed

Theorems about these constants were removed.

Renamed lemmas:

    component_le_norm ~> Basis_le_norm
    euclidean_eq ~> euclidean_eq_iff
    differential_zero_maxmin_component ~> differential_zero_maxmin_cart
    euclidean_simps ~> inner_simps
    independent_basis ~> independent_Basis
    span_basis ~> span_Basis
    in_span_basis ~> in_span_Basis
    norm_bound_component_le ~> norm_boound_Basis_le
    norm_bound_component_lt ~> norm_boound_Basis_lt
    component_le_infnorm ~> Basis_le_infnorm

INCOMPATIBILITY.

* HOL/Probability:
  - Added simproc "measurable" to automatically prove measurability.
  - Added induction rules for sigma sets with disjoint union
    (sigma_sets_induct_disjoint) and for Borel-measurable functions
    (borel_measurable_induct).
  - Added the Daniell-Kolmogorov theorem (the existence of the limit
    of a projective family).

* HOL/Cardinals: Theories of ordinals and cardinals (supersedes the
AFP entry "Ordinals_and_Cardinals").

* HOL/BNF: New (co)datatype package based on bounded natural functors
with support for mixed, nested recursion and interesting non-free
datatypes.

* HOL/Finite_Set and Relation: added new set and relation operations
expressed by Finite_Set.fold.

* New theory HOL/Library/RBT_Set: implementation of sets by red-black
trees for the code generator.

* HOL/Library/RBT and HOL/Library/Mapping have been converted to
Lifting/Transfer.  Possible INCOMPATIBILITY.

* HOL/Set: renamed Set.project -> Set.filter.  INCOMPATIBILITY.


*** Document preparation ***

* Dropped legacy antiquotations "term_style" and "thm_style", since
styles may be given as arguments to "term" and "thm" already.
Discontinued legacy styles "prem1" .. "prem19".

* Default LaTeX rendering for \<euro> is now based on eurosym package,
instead of slightly exotic babel/greek.

* Document variant NAME may use different LaTeX entry point
document/root_NAME.tex if that file exists, instead of the common
document/root.tex.

* Simplified custom document/build script, instead of old-style
document/IsaMakefile.  Minor INCOMPATIBILITY.


*** ML ***

* The default limit for maximum number of worker threads is now 8,
instead of 4, in correspondence to capabilities of contemporary
hardware and Poly/ML runtime system.

* Type Seq.results and related operations support embedded error
messages within lazy enumerations, and thus allow to provide
informative errors in the absence of any usable results.

* Renamed Position.str_of to Position.here to emphasize that this is a
formal device to inline positions into message text, but not
necessarily printing visible text.


*** System ***

* Advanced support for Isabelle sessions and build management, see
"system" manual for the chapter of that name, especially the "isabelle
build" tool and its examples.  The "isabelle mkroot" tool prepares
session root directories for use with "isabelle build", similar to
former "isabelle mkdir" for "isabelle usedir".  (A minimal ROOT sketch
is given at the end of this section.)  Note that this affects document
preparation as well.  INCOMPATIBILITY, isabelle usedir / mkdir / make
are rendered obsolete.

* Discontinued obsolete Isabelle/build script, it is superseded by the
regular isabelle build tool.  For example:

  isabelle build -s -b HOL

* Discontinued obsolete "isabelle makeall".

* Discontinued obsolete IsaMakefile and ROOT.ML files from the
Isabelle distribution, except for rudimentary src/HOL/IsaMakefile that
provides some traditional targets that invoke "isabelle build".  Note
that this is inefficient!  Applications of Isabelle/HOL involving
"isabelle make" should be upgraded to use "isabelle build" directly.

* The "isabelle options" tool prints Isabelle system options, as
required for "isabelle build", for example.

* The "isabelle logo" tool produces EPS and PDF format simultaneously.
Minor INCOMPATIBILITY in command-line options.

* The "isabelle install" tool has now a simpler command-line.  Minor
INCOMPATIBILITY.

* The "isabelle components" tool helps to resolve add-on components
that are not bundled, or referenced from a bare-bones repository
version of Isabelle.

* Settings variable ISABELLE_PLATFORM_FAMILY refers to the general
platform family: "linux", "macos", "windows".
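As a supplement to the "isabelle build" entry above, a minimal ROOT
sketch (the session and theory names are hypothetical, for
illustration only):

  (* hypothetical session specification *)
  session "My_Session" = HOL +
    options [document = pdf]
    theories
      My_Theory

Such a ROOT file can be processed e.g. via "isabelle build -D DIR" for
the directory DIR that contains it.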
* The ML system is configured as regular component, and no longer
picked up from some surrounding directory.  Potential INCOMPATIBILITY
for home-made settings.

* Improved ML runtime statistics (heap, threads, future tasks etc.).

* Discontinued support for Poly/ML 5.2.1, which was the last version
without exception positions and advanced ML compiler/toplevel
configuration.

* Discontinued special treatment of Proof General -- no longer guess
PROOFGENERAL_HOME based on accidental file-system layout.  Minor
INCOMPATIBILITY: provide PROOFGENERAL_HOME and PROOFGENERAL_OPTIONS
settings manually, or use a Proof General version that has been
bundled as Isabelle component.



New in Isabelle2012 (May 2012)
------------------------------

*** General ***

* Prover IDE (PIDE) improvements:
  - more robust Sledgehammer integration (as before the sledgehammer
    command-line needs to be typed into the source buffer)
  - markup for bound variables
  - markup for types of term variables (displayed as tooltips)
  - support for user-defined Isar commands within the running session
  - improved support for Unicode outside original 16bit range, e.g.
    glyph for \<A> (thanks to jEdit 4.5.1)

* Forward declaration of outer syntax keywords within the theory
header -- minor INCOMPATIBILITY for user-defined commands.  Allow new
commands to be used in the same theory where defined.

* Auxiliary contexts indicate block structure for specifications with
additional parameters and assumptions.  Such unnamed contexts may be
nested within other targets, like 'theory', 'locale', 'class',
'instantiation' etc.  Results from the local context are generalized
accordingly and applied to the enclosing target context.  Example:

  context
    fixes x y z :: 'a
    assumes xy: "x = y" and yz: "y = z"
  begin

  lemma my_trans: "x = z" using xy yz by simp

  end

  thm my_trans

The most basic application is to factor-out context elements of
several fixes/assumes/shows theorem statements, e.g. see
~~/src/HOL/Isar_Examples/Group_Context.thy

Any other local theory specification element works within the "context
... begin ... end" block as well.

* Bundled declarations associate attributed fact expressions with a
given name in the context.  These may be later included in other
contexts.  This allows to manage context extensions casually, without
the logical dependencies of locales and locale interpretation.  See
commands 'bundle', 'include', 'including' etc. in the isar-ref manual,
and the sketch at the end of this section.

* Commands 'lemmas' and 'theorems' allow local variables using 'for'
declaration, and results are standardized before being stored.  Thus
old-style "standard" after instantiation or composition of facts
becomes obsolete.  Minor INCOMPATIBILITY, due to potential change of
indices of schematic variables.

* Rule attributes in local theory declarations (e.g. locale or class)
are now statically evaluated: the resulting theorem is stored instead
of the original expression.  INCOMPATIBILITY in rare situations, where
the historic accident of dynamic re-evaluation in interpretations etc.
was exploited.

* New tutorial "Programming and Proving in Isabelle/HOL"
("prog-prove").  It completely supersedes "A Tutorial Introduction to
Structured Isar Proofs" ("isar-overview"), which has been removed.  It
also supersedes "Isabelle/HOL, A Proof Assistant for Higher-Order
Logic" as the recommended beginners tutorial, but does not cover all
of the material of that old tutorial.

* Updated and extended reference manuals: "isar-ref",
"implementation", "system"; reduced remaining material in old "ref"
manual.
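A minimal sketch of the 'bundle' mechanism above (the fact names
foo_def and bar_def are hypothetical, for illustration only):

  bundle my_simps = foo_def [simp] bar_def [simp]  (* hypothetical facts *)

  lemma "..."
    including my_simps
    by simp

The bundle merely records the given fact expressions together with
their attributes; they take effect wherever the bundle is included.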
*** Pure ***

* Command 'definition' no longer exports the foundational "raw_def"
into the user context.  Minor INCOMPATIBILITY, may use the regular
"def" result with attribute "abs_def" to imitate the old version.

* Attribute "abs_def" turns an equation of the form "f x y == t" into
"f == %x y. t", which ensures that "simp" or "unfold" steps always
expand it.  This also works for object-logic equality.  (Formerly
undocumented feature.)

* Sort constraints are now propagated in simultaneous statements, just
like type constraints.  INCOMPATIBILITY in rare situations, where
distinct sorts used to be assigned accidentally.  For example:

  lemma "P (x::'a::foo)" and "Q (y::'a::bar)"
    -- "now illegal"

  lemma "P (x::'a)" and "Q (y::'a::bar)"
    -- "now uniform 'a::bar instead of default sort for first
       occurrence (!)"

* Rule composition via attribute "OF" (or ML functions OF/MRS) is more
tolerant against multiple unifiers, as long as the final result is
unique.  (As before, rules are composed in canonical right-to-left
order to accommodate newly introduced premises.)

* Renamed some inner syntax categories:
    num ~> num_token
    xnum ~> xnum_token
    xstr ~> str_token
Minor INCOMPATIBILITY.  Note that in practice "num_const" or
"num_position" etc. are mainly used instead (which also include
position information via constraints).

* Simplified configuration options for syntax ambiguity: see
"syntax_ambiguity_warning" and "syntax_ambiguity_limit" in isar-ref
manual.  Minor INCOMPATIBILITY.

* Discontinued configuration option "syntax_positions": atomic terms
in parse trees are always annotated by position constraints.

* Old code generator for SML and its commands 'code_module',
'code_library', 'consts_code', 'types_code' have been discontinued.
Use commands of the generic code generator instead.  INCOMPATIBILITY.

* Redundant attribute "code_inline" has been discontinued.  Use
"code_unfold" instead.  INCOMPATIBILITY.

* Dropped attribute "code_unfold_post" in favor of its dual
"code_abbrev", which yields a common pattern in definitions like

  definition [code_abbrev]: "f = t"

INCOMPATIBILITY.

* Obsolete 'types' command has been discontinued.  Use 'type_synonym'
instead.  INCOMPATIBILITY.

* Discontinued old "prems" fact, which used to refer to the accidental
collection of foundational premises in the context (already marked as
legacy since Isabelle2011).


*** HOL ***

* Type 'a set is now a proper type constructor (just as before
Isabelle2008).  Definitions mem_def and Collect_def have disappeared.
Non-trivial INCOMPATIBILITY.  For developments keeping predicates and
sets separate, it is often sufficient to rephrase some set S that has
been accidentally used as predicate by "%x. x : S", and some predicate
P that has been accidentally used as set by "{x. P x}".  Corresponding
proofs in a first step should be pruned from any tinkering with former
theorems mem_def and Collect_def as far as possible.

For developments which deliberately mix predicates and sets, a
planning step is necessary to determine what should become a predicate
and what a set.  It can be helpful to carry out that step in
Isabelle2011-1 before jumping right into the current release.

* Code generation by default implements sets as container type rather
than predicates.  INCOMPATIBILITY.

* New type synonym 'a rel = ('a * 'a) set

* The representation of numerals has changed.  Datatype "num"
represents strictly positive binary numerals, along with functions
"numeral :: num => 'a" and "neg_numeral :: num => 'a" to represent
positive and negated numeric literals, respectively.
See also definitions in ~~/src/HOL/Num.thy.  Potential
INCOMPATIBILITY, some user theories may require adaptations as
follows:

  - Theorems with number_ring or number_semiring constraints: These
    classes are gone; use comm_ring_1 or comm_semiring_1 instead.

  - Theories defining numeric types: Remove number, number_semiring,
    and number_ring instances.  Defer all theorems about numerals
    until after classes one and semigroup_add have been instantiated.

  - Numeral-only simp rules: Replace each rule having a "number_of v"
    pattern with two copies, one for numeral and one for neg_numeral.

  - Theorems about subclasses of semiring_1 or ring_1: These classes
    automatically support numerals now, so more simp rules and
    simprocs may now apply within the proof.

  - Definitions and theorems using old constructors Pls/Min/Bit0/Bit1:
    Redefine using other integer operations.

* Transfer: New package intended to generalize the existing
"descending" method and related theorem attributes from the Quotient
package.  (Not all functionality is implemented yet, but future
development will focus on Transfer as an eventual replacement for the
corresponding parts of the Quotient package.)

  - transfer_rule attribute: Maintains a collection of transfer rules,
    which relate constants at two different types.  Transfer rules may
    relate different type instances of the same polymorphic constant,
    or they may relate an operation on a raw type to a corresponding
    operation on an abstract type (quotient or subtype).  For example:

      ((A ===> B) ===> list_all2 A ===> list_all2 B) map map
      (cr_int ===> cr_int ===> cr_int) (%(x,y) (u,v). (x+u, y+v)) plus_int

  - transfer method: Replaces a subgoal on abstract types with an
    equivalent subgoal on the corresponding raw types.  Constants are
    replaced with corresponding ones according to the transfer rules.
    Goals are generalized over all free variables by default; this is
    necessary for variables whose types change, but can be overridden
    for specific variables with e.g. "transfer fixing: x y z".  The
    variant transfer' method allows replacing a subgoal with one that
    is logically stronger (rather than equivalent).

  - relator_eq attribute: Collects identity laws for relators of
    various type constructors, e.g. "list_all2 (op =) = (op =)".  The
    transfer method uses these lemmas to infer transfer rules for
    non-polymorphic constants on the fly.

  - transfer_prover method: Assists with proving a transfer rule for a
    new constant, provided the constant is defined in terms of other
    constants that already have transfer rules.  It should be applied
    after unfolding the constant definitions.

  - HOL/ex/Transfer_Int_Nat.thy: Example theory demonstrating transfer
    from type nat to type int.

* Lifting: New package intended to generalize the quotient_definition
facility of the Quotient package; designed to work with Transfer.

  - lift_definition command: Defines operations on an abstract type in
    terms of a corresponding operation on a representation type.
    Example syntax:

      lift_definition dlist_insert :: "'a => 'a dlist => 'a dlist"
        is List.insert

    Users must discharge a respectfulness proof obligation when each
    constant is defined.  (For a type copy, i.e. a typedef with UNIV,
    the proof is discharged automatically.)  The obligation is
    presented in a user-friendly, readable form; a respectfulness
    theorem in the standard format and a transfer rule are generated
    by the package.

  - Integration with code_abstype: For typedefs (e.g.
subtypes corresponding to a datatype invariant, such as dlist), lift_definition generates a code certificate theorem and sets up code generation for each constant. - setup_lifting command: Sets up the Lifting package to work with a user-defined type. The user must provide either a quotient theorem or a type_definition theorem. The package configures transfer rules for equality and quantifiers on the type, and sets up the lift_definition command to work with the type. - Usage examples: See Quotient_Examples/Lift_DList.thy, Quotient_Examples/Lift_RBT.thy, Quotient_Examples/Lift_FSet.thy, Word/Word.thy and Library/Float.thy. * Quotient package: - The 'quotient_type' command now supports a 'morphisms' option with rep and abs functions, similar to typedef. - 'quotient_type' sets up new types to work with the Lifting and Transfer packages, as with 'setup_lifting'. - The 'quotient_definition' command now requires the user to prove a respectfulness property at the point where the constant is defined, similar to lift_definition; INCOMPATIBILITY. - Renamed predicate 'Quotient' to 'Quotient3', and renamed theorems accordingly, INCOMPATIBILITY. * New diagnostic command 'find_unused_assms' to find potentially superfluous assumptions in theorems using Quickcheck. * Quickcheck: - Quickcheck returns variable assignments as counterexamples, which allows to reveal the underspecification of functions under test. For example, refuting "hd xs = x", it presents the variable assignment xs = [] and x = a1 as a counterexample, assuming that any property is false whenever "hd []" occurs in it. These counterexample are marked as potentially spurious, as Quickcheck also returns "xs = []" as a counterexample to the obvious theorem "hd xs = hd xs". After finding a potentially spurious counterexample, Quickcheck continues searching for genuine ones. By default, Quickcheck shows potentially spurious and genuine counterexamples. The option "genuine_only" sets quickcheck to only show genuine counterexamples. - The command 'quickcheck_generator' creates random and exhaustive value generators for a given type and operations. It generates values by using the operations as if they were constructors of that type. - Support for multisets. - Added "use_subtype" options. - Added "quickcheck_locale" configuration to specify how to process conjectures in a locale context. * Nitpick: Fixed infinite loop caused by the 'peephole_optim' option and affecting 'rat' and 'real'. * Sledgehammer: - Integrated more tightly with SPASS, as described in the ITP 2012 paper "More SPASS with Isabelle". - Made it try "smt" as a fallback if "metis" fails or times out. - Added support for the following provers: Alt-Ergo (via Why3 and TFF1), iProver, iProver-Eq. - Sped up the minimizer. - Added "lam_trans", "uncurry_aliases", and "minimize" options. - Renamed "slicing" ("no_slicing") option to "slice" ("dont_slice"). - Renamed "sound" option to "strict". * Metis: Added possibility to specify lambda translations scheme as a parenthesized argument (e.g., "by (metis (lifting) ...)"). * SMT: Renamed "smt_fixed" option to "smt_read_only_certificates". * Command 'try0': Renamed from 'try_methods'. INCOMPATIBILITY. * New "case_product" attribute to generate a case rule doing multiple case distinctions at the same time. E.g. list.exhaust [case_product nat.exhaust] produces a rule which can be used to perform case distinction on both a list and a nat. * New "eventually_elim" method as a generalized variant of the eventually_elim* rules. 
Supports structured proofs.

* Typedef with implicit set definition is considered legacy.  Use
"typedef (open)" form instead, which will eventually become the
default.

* Record: code generation can be switched off manually with

  declare [[record_codegen = false]]  -- "default true"

* Datatype: type parameters allow explicit sort constraints.

* Concrete syntax for case expressions includes constraints for source
positions, and thus produces Prover IDE markup for its bindings.
INCOMPATIBILITY for old-style syntax translations that augment the
pattern notation; e.g. see src/HOL/HOLCF/One.thy for translations of
one_case.

* Clarified attribute "mono_set": pure declaration without modifying
the result of the fact expression.

* More default pred/set conversions on a couple of relation operations
and predicates.  Added powers of predicate relations.  Consolidation
of some relation theorems:

    converse_def ~> converse_unfold
    rel_comp_def ~> relcomp_unfold
    symp_def ~> (modified, use symp_def and sym_def instead)
    transp_def ~> transp_trans
    Domain_def ~> Domain_unfold
    Range_def ~> Domain_converse [symmetric]

Generalized theorems INF_INT_eq, INF_INT_eq2, SUP_UN_eq, SUP_UN_eq2.

See theory "Relation" for examples for making use of pred/set
conversions by means of attributes "to_set" and "to_pred".

INCOMPATIBILITY.

* Renamed facts about the power operation on relations, i.e., relpow
to match the constant's name:

    rel_pow_1 ~> relpow_1
    rel_pow_0_I ~> relpow_0_I
    rel_pow_Suc_I ~> relpow_Suc_I
    rel_pow_Suc_I2 ~> relpow_Suc_I2
    rel_pow_0_E ~> relpow_0_E
    rel_pow_Suc_E ~> relpow_Suc_E
    rel_pow_E ~> relpow_E
    rel_pow_Suc_D2 ~> relpow_Suc_D2
    rel_pow_Suc_E2 ~> relpow_Suc_E2
    rel_pow_Suc_D2' ~> relpow_Suc_D2'
    rel_pow_E2 ~> relpow_E2
    rel_pow_add ~> relpow_add
    rel_pow_commute ~> relpow
    rel_pow_empty ~> relpow_empty
    rtrancl_imp_UN_rel_pow ~> rtrancl_imp_UN_relpow
    rel_pow_imp_rtrancl ~> relpow_imp_rtrancl
    rtrancl_is_UN_rel_pow ~> rtrancl_is_UN_relpow
    rtrancl_imp_rel_pow ~> rtrancl_imp_relpow
    rel_pow_fun_conv ~> relpow_fun_conv
    rel_pow_finite_bounded1 ~> relpow_finite_bounded1
    rel_pow_finite_bounded ~> relpow_finite_bounded
    rtrancl_finite_eq_rel_pow ~> rtrancl_finite_eq_relpow
    trancl_finite_eq_rel_pow ~> trancl_finite_eq_relpow
    single_valued_rel_pow ~> single_valued_relpow

INCOMPATIBILITY.

* Theory Relation: Consolidated constant name for relation composition
and corresponding theorem names:

  - Renamed constant rel_comp to relcomp.

  - Dropped abbreviation pred_comp.  Use relcompp instead.

  - Renamed theorems:

      rel_compI ~> relcompI
      rel_compEpair ~> relcompEpair
      rel_compE ~> relcompE
      pred_comp_rel_comp_eq ~> relcompp_relcomp_eq
      rel_comp_empty1 ~> relcomp_empty1
      rel_comp_mono ~> relcomp_mono
      rel_comp_subset_Sigma ~> relcomp_subset_Sigma
      rel_comp_distrib ~> relcomp_distrib
      rel_comp_distrib2 ~> relcomp_distrib2
      rel_comp_UNION_distrib ~> relcomp_UNION_distrib
      rel_comp_UNION_distrib2 ~> relcomp_UNION_distrib2
      single_valued_rel_comp ~> single_valued_relcomp
      rel_comp_def ~> relcomp_unfold
      converse_rel_comp ~> converse_relcomp
      pred_compI ~> relcomppI
      pred_compE ~> relcomppE
      pred_comp_bot1 ~> relcompp_bot1
      pred_comp_bot2 ~> relcompp_bot2
      transp_pred_comp_less_eq ~> transp_relcompp_less_eq
      pred_comp_mono ~> relcompp_mono
      pred_comp_distrib ~> relcompp_distrib
      pred_comp_distrib2 ~> relcompp_distrib2
      converse_pred_comp ~> converse_relcompp
      finite_rel_comp ~> finite_relcomp
      set_rel_comp ~> set_relcomp

INCOMPATIBILITY.

* Theory Divides: Discontinued redundant theorems about div and mod.
INCOMPATIBILITY, use the corresponding generic theorems instead.
  DIVISION_BY_ZERO ~> div_by_0, mod_by_0
  zdiv_self ~> div_self
  zmod_self ~> mod_self
  zdiv_zero ~> div_0
  zmod_zero ~> mod_0
  zdiv_zmod_equality ~> div_mod_equality2
  zdiv_zmod_equality2 ~> div_mod_equality
  zmod_zdiv_trivial ~> mod_div_trivial
  zdiv_zminus_zminus ~> div_minus_minus
  zmod_zminus_zminus ~> mod_minus_minus
  zdiv_zminus2 ~> div_minus_right
  zmod_zminus2 ~> mod_minus_right
  zdiv_minus1_right ~> div_minus1_right
  zmod_minus1_right ~> mod_minus1_right
  zdvd_mult_div_cancel ~> dvd_mult_div_cancel
  zmod_zmult1_eq ~> mod_mult_right_eq
  zpower_zmod ~> power_mod
  zdvd_zmod ~> dvd_mod
  zdvd_zmod_imp_zdvd ~> dvd_mod_imp_dvd
  mod_mult_distrib ~> mult_mod_left
  mod_mult_distrib2 ~> mult_mod_right

* Removed redundant theorems nat_mult_2 and nat_mult_2_right; use
generic mult_2 and mult_2_right instead.  INCOMPATIBILITY.

* Finite_Set.fold now qualified.  INCOMPATIBILITY.

* Consolidated theorem names concerning fold combinators:

  inf_INFI_fold_inf ~> inf_INF_fold_inf
  sup_SUPR_fold_sup ~> sup_SUP_fold_sup
  INFI_fold_inf ~> INF_fold_inf
  SUPR_fold_sup ~> SUP_fold_sup
  union_set ~> union_set_fold
  minus_set ~> minus_set_fold
  INFI_set_fold ~> INF_set_fold
  SUPR_set_fold ~> SUP_set_fold
  INF_code ~> INF_set_foldr
  SUP_code ~> SUP_set_foldr
  foldr.simps ~> foldr.simps (in point-free formulation)
  foldr_fold_rev ~> foldr_conv_fold
  foldl_fold ~> foldl_conv_fold
  foldr_foldr ~> foldr_conv_foldl
  foldl_foldr ~> foldl_conv_foldr
  fold_set_remdups ~> fold_set_fold_remdups
  fold_set ~> fold_set_fold
  fold1_set ~> fold1_set_fold

INCOMPATIBILITY.

* Dropped rarely useful theorems concerning fold combinators:
foldl_apply, foldl_fun_comm, foldl_rev, fold_weak_invariant,
rev_foldl_cons, fold_set_remdups, fold_set, fold_set1,
concat_conv_foldl, foldl_weak_invariant, foldl_invariant,
foldr_invariant, foldl_absorb0, foldl_foldr1_lemma, foldl_foldr1,
listsum_conv_fold, listsum_foldl, sort_foldl_insort, foldl_assoc,
foldr_conv_foldl, start_le_sum, elem_le_sum, sum_eq_0_conv.
INCOMPATIBILITY.  For the common phrases "%xs. List.foldr plus xs 0"
and "List.foldl plus 0", prefer "List.listsum".  Otherwise it can be
useful to boil down "List.foldr" and "List.foldl" to "List.fold" by
unfolding "foldr_conv_fold" and "foldl_conv_fold".

* Dropped lemmas minus_set_foldr, union_set_foldr, union_coset_foldr,
inter_coset_foldr, Inf_fin_set_foldr, Sup_fin_set_foldr,
Min_fin_set_foldr, Max_fin_set_foldr, Inf_set_foldr, Sup_set_foldr,
INF_set_foldr, SUP_set_foldr.  INCOMPATIBILITY.  Prefer corresponding
lemmas over fold rather than foldr, or make use of lemmas
fold_conv_foldr and fold_rev.

* Congruence rules Option.map_cong and Option.bind_cong for recursion
through option types.

* "Transitive_Closure.ntrancl": bounded transitive closure on
relations.

* Constant "Set.not_member" now qualified.  INCOMPATIBILITY.

* Theory Int: Discontinued many legacy theorems specific to type int.
INCOMPATIBILITY, use the corresponding generic theorems instead.
  zminus_zminus ~> minus_minus
  zminus_0 ~> minus_zero
  zminus_zadd_distrib ~> minus_add_distrib
  zadd_commute ~> add_commute
  zadd_assoc ~> add_assoc
  zadd_left_commute ~> add_left_commute
  zadd_ac ~> add_ac
  zmult_ac ~> mult_ac
  zadd_0 ~> add_0_left
  zadd_0_right ~> add_0_right
  zadd_zminus_inverse2 ~> left_minus
  zmult_zminus ~> mult_minus_left
  zmult_commute ~> mult_commute
  zmult_assoc ~> mult_assoc
  zadd_zmult_distrib ~> left_distrib
  zadd_zmult_distrib2 ~> right_distrib
  zdiff_zmult_distrib ~> left_diff_distrib
  zdiff_zmult_distrib2 ~> right_diff_distrib
  zmult_1 ~> mult_1_left
  zmult_1_right ~> mult_1_right
  zle_refl ~> order_refl
  zle_trans ~> order_trans
  zle_antisym ~> order_antisym
  zle_linear ~> linorder_linear
  zless_linear ~> linorder_less_linear
  zadd_left_mono ~> add_left_mono
  zadd_strict_right_mono ~> add_strict_right_mono
  zadd_zless_mono ~> add_less_le_mono
  int_0_less_1 ~> zero_less_one
  int_0_neq_1 ~> zero_neq_one
  zless_le ~> less_le
  zpower_zadd_distrib ~> power_add
  zero_less_zpower_abs_iff ~> zero_less_power_abs_iff
  zero_le_zpower_abs ~> zero_le_power_abs

* Theory Deriv: Renamed

  DERIV_nonneg_imp_nonincreasing ~> DERIV_nonneg_imp_nondecreasing

* Theory Library/Multiset: Improved code generation of multisets.

* Theory HOL/Library/Set_Algebras: Addition and multiplication on sets
are expressed via type classes again.  The special syntax
\<oplus>/\<otimes> has been replaced by plain +/*.  Removed constant
setsum_set, which is now subsumed by Big_Operators.setsum.
INCOMPATIBILITY.

* Theory HOL/Library/Diagonalize has been removed.  INCOMPATIBILITY,
use theory HOL/Library/Nat_Bijection instead.

* Theory HOL/Library/RBT_Impl: Backing implementation of red-black
trees is now inside a type class context.  Names of affected
operations and lemmas have been prefixed by rbt_.  INCOMPATIBILITY for
theories working directly with raw red-black trees, adapt the names as
follows:

  Operations:
    bulkload -> rbt_bulkload
    del_from_left -> rbt_del_from_left
    del_from_right -> rbt_del_from_right
    del -> rbt_del
    delete -> rbt_delete
    ins -> rbt_ins
    insert -> rbt_insert
    insertw -> rbt_insert_with
    insert_with_key -> rbt_insert_with_key
    map_entry -> rbt_map_entry
    lookup -> rbt_lookup
    sorted -> rbt_sorted
    tree_greater -> rbt_greater
    tree_less -> rbt_less
    tree_less_symbol -> rbt_less_symbol
    union -> rbt_union
    union_with -> rbt_union_with
    union_with_key -> rbt_union_with_key

  Lemmas:
    balance_left_sorted -> balance_left_rbt_sorted
    balance_left_tree_greater -> balance_left_rbt_greater
    balance_left_tree_less -> balance_left_rbt_less
    balance_right_sorted -> balance_right_rbt_sorted
    balance_right_tree_greater -> balance_right_rbt_greater
    balance_right_tree_less -> balance_right_rbt_less
    balance_sorted -> balance_rbt_sorted
    balance_tree_greater -> balance_rbt_greater
    balance_tree_less -> balance_rbt_less
    bulkload_is_rbt -> rbt_bulkload_is_rbt
    combine_sorted -> combine_rbt_sorted
    combine_tree_greater -> combine_rbt_greater
    combine_tree_less -> combine_rbt_less
    delete_in_tree -> rbt_delete_in_tree
    delete_is_rbt -> rbt_delete_is_rbt
    del_from_left_tree_greater -> rbt_del_from_left_rbt_greater
    del_from_left_tree_less -> rbt_del_from_left_rbt_less
    del_from_right_tree_greater -> rbt_del_from_right_rbt_greater
    del_from_right_tree_less -> rbt_del_from_right_rbt_less
    del_in_tree -> rbt_del_in_tree
    del_inv1_inv2 -> rbt_del_inv1_inv2
    del_sorted -> rbt_del_rbt_sorted
    del_tree_greater -> rbt_del_rbt_greater
    del_tree_less -> rbt_del_rbt_less
    dom_lookup_Branch -> dom_rbt_lookup_Branch
    entries_lookup -> entries_rbt_lookup
    finite_dom_lookup -> finite_dom_rbt_lookup
    insert_sorted -> rbt_insert_rbt_sorted
    insertw_is_rbt -> rbt_insertw_is_rbt
    insertwk_is_rbt -> rbt_insertwk_is_rbt
    insertwk_sorted -> rbt_insertwk_rbt_sorted
    insertw_sorted -> rbt_insertw_rbt_sorted
    ins_sorted -> ins_rbt_sorted
    ins_tree_greater -> ins_rbt_greater
    ins_tree_less -> ins_rbt_less
    is_rbt_sorted -> is_rbt_rbt_sorted
    lookup_balance -> rbt_lookup_balance
    lookup_bulkload -> rbt_lookup_rbt_bulkload
    lookup_delete -> rbt_lookup_rbt_delete
    lookup_Empty -> rbt_lookup_Empty
    lookup_from_in_tree -> rbt_lookup_from_in_tree
    lookup_in_tree -> rbt_lookup_in_tree
    lookup_ins -> rbt_lookup_ins
    lookup_insert -> rbt_lookup_rbt_insert
    lookup_insertw -> rbt_lookup_rbt_insertw
    lookup_insertwk -> rbt_lookup_rbt_insertwk
    lookup_keys -> rbt_lookup_keys
    lookup_map -> rbt_lookup_map
    lookup_map_entry -> rbt_lookup_rbt_map_entry
    lookup_tree_greater -> rbt_lookup_rbt_greater
    lookup_tree_less -> rbt_lookup_rbt_less
    lookup_union -> rbt_lookup_rbt_union
    map_entry_color_of -> rbt_map_entry_color_of
    map_entry_inv1 -> rbt_map_entry_inv1
    map_entry_inv2 -> rbt_map_entry_inv2
    map_entry_is_rbt -> rbt_map_entry_is_rbt
    map_entry_sorted -> rbt_map_entry_rbt_sorted
    map_entry_tree_greater -> rbt_map_entry_rbt_greater
    map_entry_tree_less -> rbt_map_entry_rbt_less
    map_tree_greater -> map_rbt_greater
    map_tree_less -> map_rbt_less
    map_sorted -> map_rbt_sorted
    paint_sorted -> paint_rbt_sorted
    paint_lookup -> paint_rbt_lookup
    paint_tree_greater -> paint_rbt_greater
    paint_tree_less -> paint_rbt_less
    sorted_entries -> rbt_sorted_entries
    tree_greater_eq_trans -> rbt_greater_eq_trans
    tree_greater_nit -> rbt_greater_nit
    tree_greater_prop -> rbt_greater_prop
    tree_greater_simps -> rbt_greater_simps
    tree_greater_trans -> rbt_greater_trans
    tree_less_eq_trans -> rbt_less_eq_trans
    tree_less_nit -> rbt_less_nit
    tree_less_prop -> rbt_less_prop
    tree_less_simps -> rbt_less_simps
    tree_less_trans -> rbt_less_trans
    tree_ord_props -> rbt_ord_props
    union_Branch -> rbt_union_Branch
    union_is_rbt -> rbt_union_is_rbt
    unionw_is_rbt -> rbt_unionw_is_rbt
    unionwk_is_rbt -> rbt_unionwk_is_rbt
    unionwk_sorted -> rbt_unionwk_rbt_sorted

* Theory HOL/Library/Float: Floating point numbers are now defined as
a subset of the real numbers.  All operations are defined using the
lifting framework and proofs use the transfer method.
INCOMPATIBILITY.
  Changed Operations:
    float_abs -> abs
    float_nprt -> nprt
    float_pprt -> pprt
    pow2 -> use powr
    round_down -> float_round_down
    round_up -> float_round_up
    scale -> exponent

  Removed Operations:
    ceiling_fl, lb_mult, lb_mod, ub_mult, ub_mod

  Renamed Lemmas:
    abs_float_def -> Float.compute_float_abs
    bitlen_ge0 -> bitlen_nonneg
    bitlen.simps -> Float.compute_bitlen
    float_components -> Float_mantissa_exponent
    float_divl.simps -> Float.compute_float_divl
    float_divr.simps -> Float.compute_float_divr
    float_eq_odd -> mult_powr_eq_mult_powr_iff
    float_power -> real_of_float_power
    lapprox_posrat_def -> Float.compute_lapprox_posrat
    lapprox_rat.simps -> Float.compute_lapprox_rat
    le_float_def' -> Float.compute_float_le
    le_float_def -> less_eq_float.rep_eq
    less_float_def' -> Float.compute_float_less
    less_float_def -> less_float.rep_eq
    normfloat_def -> Float.compute_normfloat
    normfloat_imp_odd_or_zero -> mantissa_not_dvd and mantissa_noteq_0
    normfloat -> normfloat_def
    normfloat_unique -> use normfloat_def
    number_of_float_Float -> Float.compute_float_numeral,
      Float.compute_float_neg_numeral
    one_float_def -> Float.compute_float_one
    plus_float_def -> Float.compute_float_plus
    rapprox_posrat_def -> Float.compute_rapprox_posrat
    rapprox_rat.simps -> Float.compute_rapprox_rat
    real_of_float_0 -> zero_float.rep_eq
    real_of_float_1 -> one_float.rep_eq
    real_of_float_abs -> abs_float.rep_eq
    real_of_float_add -> plus_float.rep_eq
    real_of_float_minus -> uminus_float.rep_eq
    real_of_float_mult -> times_float.rep_eq
    real_of_float_simp -> Float.rep_eq
    real_of_float_sub -> minus_float.rep_eq
    round_down.simps -> Float.compute_float_round_down
    round_up.simps -> Float.compute_float_round_up
    times_float_def -> Float.compute_float_times
    uminus_float_def -> Float.compute_float_uminus
    zero_float_def -> Float.compute_float_zero

  Lemmas not necessary anymore, use the transfer method:
    bitlen_B0, bitlen_B1, bitlen_ge1, bitlen_Min, bitlen_Pls,
    float_divl, float_divr, float_le_simp, float_less1_mantissa_bound,
    float_less_simp, float_less_zero, float_le_zero,
    float_pos_less1_e_neg, float_pos_m_pos, float_split, float_split2,
    floor_pos_exp, lapprox_posrat, lapprox_posrat_bottom, lapprox_rat,
    lapprox_rat_bottom, normalized_float, rapprox_posrat,
    rapprox_posrat_le1, rapprox_rat, real_of_float_ge0_exp,
    real_of_float_neg_exp, real_of_float_nge0_exp, round_down,
    floor_fl, round_up, zero_le_float, zero_less_float

* New theory HOL/Library/DAList provides an abstract type for
association lists with distinct keys.

* Session HOL/IMP: Added new theory of abstract interpretation of
annotated commands.

* Session HOL-Import: Re-implementation from scratch is faster,
simpler, and more scalable.  Requires a proof bundle, which is
available as an external component.  Discontinued old (and mostly
dead) Importer for HOL4 and HOL Light.  INCOMPATIBILITY.

* Session HOL-Word: Discontinued many redundant theorems specific to
type 'a word.  INCOMPATIBILITY, use the corresponding generic theorems
instead.
  word_sub_alt ~> word_sub_wi
  word_add_alt ~> word_add_def
  word_mult_alt ~> word_mult_def
  word_minus_alt ~> word_minus_def
  word_0_alt ~> word_0_wi
  word_1_alt ~> word_1_wi
  word_add_0 ~> add_0_left
  word_add_0_right ~> add_0_right
  word_mult_1 ~> mult_1_left
  word_mult_1_right ~> mult_1_right
  word_add_commute ~> add_commute
  word_add_assoc ~> add_assoc
  word_add_left_commute ~> add_left_commute
  word_mult_commute ~> mult_commute
  word_mult_assoc ~> mult_assoc
  word_mult_left_commute ~> mult_left_commute
  word_left_distrib ~> left_distrib
  word_right_distrib ~> right_distrib
  word_left_minus ~> left_minus
  word_diff_0_right ~> diff_0_right
  word_diff_self ~> diff_self
  word_sub_def ~> diff_minus
  word_diff_minus ~> diff_minus
  word_add_ac ~> add_ac
  word_mult_ac ~> mult_ac
  word_plus_ac0 ~> add_0_left add_0_right add_ac
  word_times_ac1 ~> mult_1_left mult_1_right mult_ac
  word_order_trans ~> order_trans
  word_order_refl ~> order_refl
  word_order_antisym ~> order_antisym
  word_order_linear ~> linorder_linear
  lenw1_zero_neq_one ~> zero_neq_one
  word_number_of_eq ~> number_of_eq
  word_of_int_add_hom ~> wi_hom_add
  word_of_int_sub_hom ~> wi_hom_sub
  word_of_int_mult_hom ~> wi_hom_mult
  word_of_int_minus_hom ~> wi_hom_neg
  word_of_int_succ_hom ~> wi_hom_succ
  word_of_int_pred_hom ~> wi_hom_pred
  word_of_int_0_hom ~> word_0_wi
  word_of_int_1_hom ~> word_1_wi

* Session HOL-Word: New proof method "word_bitwise" for splitting
machine word equalities and inequalities into logical circuits,
defined in HOL/Word/WordBitwise.thy.  Supports addition, subtraction,
multiplication, shifting by constants, bitwise operators and numeric
constants.  Requires fixed-length word types, not 'a word.  Solves
many standard word identities outright and converts more into first
order problems amenable to blast or similar.  See also examples in
HOL/Word/Examples/WordExamples.thy.

* Session HOL-Probability: Introduced the type "'a measure" to
represent measures; this replaces the records 'a algebra and
'a measure_space.  The locales based on subset_class now have two
locale parameters: the space \<Omega> and the set of measurable sets
M.  The product of probability spaces now uses the same constant as
the finite product of sigma-finite measure spaces "PiM :: ('i => 'a)
measure".  Most constants are now defined outside of locales and gain
an additional parameter, like null_sets, almost_eventually or \<mu>'.
Measure space constructions for distributions and densities now have
their own constants distr and density.  Instead of using locales to
describe measure spaces with a finite space, the measures count_space
and point_measure are introduced.  INCOMPATIBILITY.
  Renamed constants:
    measure -> emeasure
    finite_measure.\<mu>' -> measure
    product_algebra_generator -> prod_algebra
    product_prob_space.emb -> prod_emb
    product_prob_space.infprod_algebra -> PiM

  Removed locales:
    completeable_measure_space
    finite_measure_space
    finite_prob_space
    finite_product_finite_prob_space
    finite_product_sigma_algebra
    finite_sigma_algebra
    measure_space
    pair_finite_prob_space
    pair_finite_sigma_algebra
    pair_finite_space
    pair_sigma_algebra
    product_sigma_algebra

  Removed constants:
    conditional_space
    distribution -> use distr measure, or distributed predicate
    image_space
    joint_distribution -> use distr measure, or distributed predicate
    pair_measure_generator
    product_prob_space.infprod_algebra -> use PiM
    subvimage

  Replacement theorems:
    finite_additivity_sufficient -> ring_of_sets.countably_additiveI_finite
    finite_measure.empty_measure -> measure_empty
    finite_measure.finite_continuity_from_above -> finite_measure.finite_Lim_measure_decseq
    finite_measure.finite_continuity_from_below -> finite_measure.finite_Lim_measure_incseq
    finite_measure.finite_measure_countably_subadditive -> finite_measure.finite_measure_subadditive_countably
    finite_measure.finite_measure_eq -> finite_measure.emeasure_eq_measure
    finite_measure.finite_measure -> finite_measure.emeasure_finite
    finite_measure.finite_measure_finite_singleton -> finite_measure.finite_measure_eq_setsum_singleton
    finite_measure.positive_measure' -> measure_nonneg
    finite_measure.real_measure -> finite_measure.emeasure_real
    finite_product_prob_space.finite_measure_times -> finite_product_prob_space.finite_measure_PiM_emb
    finite_product_sigma_algebra.in_P -> sets_PiM_I_finite
    finite_product_sigma_algebra.P_empty -> space_PiM_empty, sets_PiM_empty
    information_space.conditional_entropy_eq -> information_space.conditional_entropy_simple_distributed
    information_space.conditional_entropy_positive -> information_space.conditional_entropy_nonneg_simple
    information_space.conditional_mutual_information_eq_mutual_information -> information_space.conditional_mutual_information_eq_mutual_information_simple
    information_space.conditional_mutual_information_generic_positive -> information_space.conditional_mutual_information_nonneg_simple
    information_space.conditional_mutual_information_positive -> information_space.conditional_mutual_information_nonneg_simple
    information_space.entropy_commute -> information_space.entropy_commute_simple
    information_space.entropy_eq -> information_space.entropy_simple_distributed
    information_space.entropy_generic_eq -> information_space.entropy_simple_distributed
    information_space.entropy_positive -> information_space.entropy_nonneg_simple
    information_space.entropy_uniform_max -> information_space.entropy_uniform
    information_space.KL_eq_0_imp -> information_space.KL_eq_0_iff_eq
    information_space.KL_eq_0 -> information_space.KL_same_eq_0
    information_space.KL_ge_0 -> information_space.KL_nonneg
    information_space.mutual_information_eq -> information_space.mutual_information_simple_distributed
    information_space.mutual_information_positive -> information_space.mutual_information_nonneg_simple
    Int_stable_cuboids -> Int_stable_atLeastAtMost
    Int_stable_product_algebra_generator -> positive_integral
    measure_preserving -> equality "distr M N f = N" "f : measurable M N"
    measure_space.additive -> emeasure_additive
    measure_space.AE_iff_null_set -> AE_iff_null
    measure_space.almost_everywhere_def -> eventually_ae_filter
    measure_space.almost_everywhere_vimage -> AE_distrD
    measure_space.continuity_from_above -> INF_emeasure_decseq
    measure_space.continuity_from_above_Lim -> Lim_emeasure_decseq
    measure_space.continuity_from_below_Lim -> Lim_emeasure_incseq
    measure_space.continuity_from_below -> SUP_emeasure_incseq
    measure_space_density -> emeasure_density
    measure_space.density_is_absolutely_continuous -> absolutely_continuousI_density
    measure_space.integrable_vimage -> integrable_distr
    measure_space.integral_translated_density -> integral_density
    measure_space.integral_vimage -> integral_distr
    measure_space.measure_additive -> plus_emeasure
    measure_space.measure_compl -> emeasure_compl
    measure_space.measure_countable_increasing -> emeasure_countable_increasing
    measure_space.measure_countably_subadditive -> emeasure_subadditive_countably
    measure_space.measure_decseq -> decseq_emeasure
    measure_space.measure_Diff -> emeasure_Diff
    measure_space.measure_Diff_null_set -> emeasure_Diff_null_set
    measure_space.measure_eq_0 -> emeasure_eq_0
    measure_space.measure_finitely_subadditive -> emeasure_subadditive_finite
    measure_space.measure_finite_singleton -> emeasure_eq_setsum_singleton
    measure_space.measure_incseq -> incseq_emeasure
    measure_space.measure_insert -> emeasure_insert
    measure_space.measure_mono -> emeasure_mono
    measure_space.measure_not_negative -> emeasure_not_MInf
    measure_space.measure_preserving_Int_stable -> measure_eqI_generator_eq
    measure_space.measure_setsum -> setsum_emeasure
    measure_space.measure_setsum_split -> setsum_emeasure_cover
    measure_space.measure_space_vimage -> emeasure_distr
    measure_space.measure_subadditive_finite -> emeasure_subadditive_finite
    measure_space.measure_subadditive -> subadditive
    measure_space.measure_top -> emeasure_space
    measure_space.measure_UN_eq_0 -> emeasure_UN_eq_0
    measure_space.measure_Un_null_set -> emeasure_Un_null_set
    measure_space.positive_integral_translated_density -> positive_integral_density
    measure_space.positive_integral_vimage -> positive_integral_distr
    measure_space.real_continuity_from_above -> Lim_measure_decseq
    measure_space.real_continuity_from_below -> Lim_measure_incseq
    measure_space.real_measure_countably_subadditive -> measure_subadditive_countably
    measure_space.real_measure_Diff -> measure_Diff
    measure_space.real_measure_finite_Union -> measure_finite_Union
    measure_space.real_measure_setsum_singleton -> measure_eq_setsum_singleton
    measure_space.real_measure_subadditive -> measure_subadditive
    measure_space.real_measure_Union -> measure_Union
    measure_space.real_measure_UNION -> measure_UNION
    measure_space.simple_function_vimage -> simple_function_comp
    measure_space.simple_integral_vimage -> simple_integral_distr
    measure_unique_Int_stable -> measure_eqI_generator_eq
    measure_unique_Int_stable_vimage -> measure_eqI_generator_eq
    pair_sigma_algebra.measurable_cut_fst -> sets_Pair1
    pair_sigma_algebra.measurable_cut_snd -> sets_Pair2
    pair_sigma_algebra.measurable_pair_image_fst -> measurable_Pair1
    pair_sigma_algebra.measurable_pair_image_snd -> measurable_Pair2
    pair_sigma_algebra.measurable_product_swap -> measurable_pair_swap_iff
    pair_sigma_algebra.pair_sigma_algebra_measurable -> measurable_pair_swap
    pair_sigma_algebra.pair_sigma_algebra_swap_measurable -> measurable_pair_swap'
    pair_sigma_algebra.sets_swap -> sets_pair_swap
    pair_sigma_finite.measure_cut_measurable_fst -> pair_sigma_finite.measurable_emeasure_Pair1
    pair_sigma_finite.measure_cut_measurable_snd -> pair_sigma_finite.measurable_emeasure_Pair2
    pair_sigma_finite.measure_preserving_swap -> pair_sigma_finite.distr_pair_swap
    pair_sigma_finite.pair_measure_alt2 -> pair_sigma_finite.emeasure_pair_measure_alt2
    pair_sigma_finite.pair_measure_alt -> pair_sigma_finite.emeasure_pair_measure_alt
    pair_sigma_finite.pair_measure_times -> pair_sigma_finite.emeasure_pair_measure_Times
    prob_space.indep_distribution_eq_measure -> prob_space.indep_vars_iff_distr_eq_PiM
    prob_space.indep_var_distributionD -> prob_space.indep_var_distribution_eq
    prob_space.measure_space_1 -> prob_space.emeasure_space_1
    prob_space.prob_space_vimage -> prob_space_distr
    prob_space.random_variable_restrict -> measurable_restrict
    prob_space_unique_Int_stable -> measure_eqI_prob_space
    product_algebraE -> prod_algebraE_all
    product_algebra_generator_der -> prod_algebra_eq_finite
    product_algebra_generator_into_space -> prod_algebra_sets_into_space
    product_algebraI -> sets_PiM_I_finite
    product_measure_exists -> product_sigma_finite.sigma_finite
    product_prob_space.finite_index_eq_finite_product -> product_prob_space.sets_PiM_generator
    product_prob_space.finite_measure_infprod_emb_Pi -> product_prob_space.measure_PiM_emb
    product_prob_space.infprod_spec -> product_prob_space.emeasure_PiM_emb_not_empty
    product_prob_space.measurable_component -> measurable_component_singleton
    product_prob_space.measurable_emb -> measurable_prod_emb
    product_prob_space.measurable_into_infprod_algebra -> measurable_PiM_single
    product_prob_space.measurable_singleton_infprod -> measurable_component_singleton
    product_prob_space.measure_emb -> emeasure_prod_emb
    product_prob_space.measure_preserving_restrict -> product_prob_space.distr_restrict
    product_sigma_algebra.product_algebra_into_space -> space_closed
    product_sigma_finite.measure_fold -> product_sigma_finite.distr_merge
    product_sigma_finite.measure_preserving_component_singelton -> product_sigma_finite.distr_singleton
    product_sigma_finite.measure_preserving_merge -> product_sigma_finite.distr_merge
    sequence_space.measure_infprod -> sequence_space.measure_PiM_countable
    sets_product_algebra -> sets_PiM
    sigma_algebra.measurable_sigma -> measurable_measure_of
    sigma_finite_measure.disjoint_sigma_finite -> sigma_finite_disjoint
    sigma_finite_measure.RN_deriv_vimage -> sigma_finite_measure.RN_deriv_distr
    sigma_product_algebra_sigma_eq -> sigma_prod_algebra_sigma_eq
    space_product_algebra -> space_PiM

* Session HOL-TPTP: support to parse and import TPTP problems (all
languages) into Isabelle/HOL.


*** FOL ***

* New "case_product" attribute (see HOL).


*** ZF ***

* Greater support for structured proofs involving induction or case
analysis.

* Much greater use of mathematical symbols.

* Removal of many ML theorem bindings. INCOMPATIBILITY.


*** ML ***

* Antiquotation @{keyword "name"} produces a parser for outer syntax
from a minor keyword introduced via theory header declaration.

* Antiquotation @{command_spec "name"} produces the
Outer_Syntax.command_spec from a major keyword introduced via theory
header declaration; it can be passed to Outer_Syntax.command etc.

* Local_Theory.define no longer hard-wires default theorem name
"foo_def", but retains the binding as given. If that is Binding.empty
/ Attrib.empty_binding, the result is not registered as user-level
fact. The Local_Theory.define_internal variant allows to specify a
non-empty name (used for the foundation in the background theory),
while omitting the fact binding in the user-context. Potential
INCOMPATIBILITY for derived definitional packages: need to specify
naming policy for primitive definitions more explicitly.
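
For example, a package that wants to keep the conventional "foo_def"
naming now has to produce that binding itself -- a minimal sketch (the
auxiliary ML function is illustrative, not part of the distribution):

  (*illustrative helper: retain conventional "<name>_def" naming*)
  fun define_with_def_name binding rhs lthy =
    lthy |> Local_Theory.define
      ((binding, NoSyn), ((Thm.def_binding binding, []), rhs));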

* Renamed Thm.capply to Thm.apply, and Thm.cabs to Thm.lambda in
conformance with similar operations in structure Term and Logic.

* Antiquotation @{attributes [...]} embeds attribute source
representation into the ML text, which is particularly useful with
declarations like Local_Theory.note.
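
For example, a minimal sketch (the binding and fact are illustrative):

  local_setup {* fn lthy =>
    lthy
    |> Local_Theory.note
        ((@{binding my_refl}, @{attributes [simp]}), @{thms refl})
    |> snd *}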

* Structure Proof_Context follows standard naming scheme. Old
ProofContext has been discontinued. INCOMPATIBILITY.

* Refined Local_Theory.declaration {syntax, pervasive}, with subtle
change of semantics: update is applied to auxiliary local theory
context as well.

* Modernized some old-style infix operations:

  addeqcongs ~> Simplifier.add_eqcong
  deleqcongs ~> Simplifier.del_eqcong
  addcongs ~> Simplifier.add_cong
  delcongs ~> Simplifier.del_cong
  setmksimps ~> Simplifier.set_mksimps
  setmkcong ~> Simplifier.set_mkcong
  setmksym ~> Simplifier.set_mksym
  setmkeqTrue ~> Simplifier.set_mkeqTrue
  settermless ~> Simplifier.set_termless
  setsubgoaler ~> Simplifier.set_subgoaler
  addsplits ~> Splitter.add_split
  delsplits ~> Splitter.del_split


*** System ***

* USER_HOME settings variable points to cross-platform user home
directory, which coincides with HOME on POSIX systems only. Likewise,
the Isabelle path specification "~" now expands to $USER_HOME, instead
of former $HOME. A different default for USER_HOME may be set
explicitly in shell environment, before Isabelle settings are
evaluated. Minor INCOMPATIBILITY: need to adapt Isabelle path where
the generic user home was intended.

* ISABELLE_HOME_WINDOWS refers to ISABELLE_HOME in windows file name
notation, which is useful for the jEdit file browser, for example.

* ISABELLE_JDK_HOME settings variable points to JDK with javac and jar
(not just JRE).



New in Isabelle2011-1 (October 2011)
------------------------------------

*** General ***

* Improved Isabelle/jEdit Prover IDE (PIDE), which can be invoked as
"isabelle jedit" or "ISABELLE_HOME/Isabelle" on the command line.

  - Management of multiple theory files directly from the editor
    buffer store -- bypassing the file-system (no requirement to save
    files for checking).

  - Markup of formal entities within the text buffer, with semantic
    highlighting, tooltips and hyperlinks to jump to defining source
    positions.

  - Improved text rendering, with sub/superscripts in the source
    buffer (including support for copy/paste wrt. output panel, HTML
    theory output and other non-Isabelle text boxes).

  - Refined scheduling of proof checking and printing of results,
    based on interactive editor view. (Note: jEdit folding and
    narrowing allows to restrict buffer perspectives explicitly.)

  - Reduced CPU performance requirements, usable on machines with few
    cores.

  - Reduced memory requirements due to pruning of unused document
    versions (garbage collection).

See also ~~/src/Tools/jEdit/README.html for further information,
including some remaining limitations.

* Theory loader: source files are exclusively located via the master
directory of each theory node (where the .thy file itself resides).
The global load path (such as src/HOL/Library) has been discontinued.
Note that the path element ~~ may be used to reference theories in the
Isabelle home folder -- for instance, "~~/src/HOL/Library/FuncSet".
INCOMPATIBILITY.

* Theory loader: source files are identified by content via SHA1
digests. Discontinued former path/modtime identification and optional
ISABELLE_FILE_IDENT plugin scripts.

* Parallelization of nested Isar proofs is subject to
Goal.parallel_proofs_threshold (default 100). See also isabelle usedir
option -Q.

* Name space: former unsynchronized references are now proper
configuration options, with more conventional names:

  long_names ~> names_long
  short_names ~> names_short
  unique_names ~> names_unique

Minor INCOMPATIBILITY, need to declare options in context like this:

  declare [[names_unique = false]]

* Literal facts `prop` may contain dummy patterns, e.g. `_ = _`. Note
that the result needs to be unique, which means fact specifications
may have to be refined after enriching a proof context.
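
For example, a minimal sketch of referencing an assumption via a dummy
pattern:

  lemma "x = y ==> y = x"
  proof -
    assume "x = y"
    from `_ = _` show "y = x" by (rule sym)
  qed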

* Attribute "case_names" has been refined: the assumptions in each
case can be named now by following the case name with [name1 name2
...].

* Isabelle/Isar reference manual has been updated and extended:

  - "Synopsis" provides a catalog of main Isar language concepts.

  - Formal references in syntax diagrams, via @{rail} antiquotation.

  - Updated material from classic "ref" manual, notably about
    "Classical Reasoner".


*** HOL ***

* Class bot and top require underlying partial order rather than
preorder: uniqueness of bot and top is guaranteed. INCOMPATIBILITY.

* Class complete_lattice: generalized a couple of lemmas from sets;
generalized theorems INF_cong and SUP_cong. New type classes for
complete boolean algebras and complete linear orders. Lemmas
Inf_less_iff, less_Sup_iff, INF_less_iff, less_SUP_iff now reside in
class complete_linorder.

Changed proposition of lemmas Inf_bool_def, Sup_bool_def, Inf_fun_def,
Sup_fun_def, Inf_apply, Sup_apply.

Removed redundant lemmas (the right hand side gives hints how to
replace them for (metis ...), or (simp only: ...) proofs):

  Inf_singleton ~> Inf_insert [where A="{}", unfolded Inf_empty inf_top_right]
  Sup_singleton ~> Sup_insert [where A="{}", unfolded Sup_empty sup_bot_right]
  Inf_binary ~> Inf_insert, Inf_empty, and inf_top_right
  Sup_binary ~> Sup_insert, Sup_empty, and sup_bot_right
  Int_eq_Inter ~> Inf_insert, Inf_empty, and inf_top_right
  Un_eq_Union ~> Sup_insert, Sup_empty, and sup_bot_right
  Inter_def ~> INF_def, image_def
  Union_def ~> SUP_def, image_def
  INT_eq ~> INF_def, and image_def
  UN_eq ~> SUP_def, and image_def
  INF_subset ~> INF_superset_mono [OF _ order_refl]

More consistent and comprehensive names:

  INTER_eq_Inter_image ~> INF_def
  UNION_eq_Union_image ~> SUP_def
  INFI_def ~> INF_def
  SUPR_def ~> SUP_def
  INF_leI ~> INF_lower
  INF_leI2 ~> INF_lower2
  le_INFI ~> INF_greatest
  le_SUPI ~> SUP_upper
  le_SUPI2 ~> SUP_upper2
  SUP_leI ~> SUP_least
  INFI_bool_eq ~> INF_bool_eq
  SUPR_bool_eq ~> SUP_bool_eq
  INFI_apply ~> INF_apply
  SUPR_apply ~> SUP_apply
  INTER_def ~> INTER_eq
  UNION_def ~> UNION_eq

INCOMPATIBILITY.

* Renamed theory Complete_Lattice to Complete_Lattices.
INCOMPATIBILITY.

* Theory Complete_Lattices: lemmas Inf_eq_top_iff, INF_eq_top_iff,
INF_image, Inf_insert, INF_top, Inf_top_conv, INF_top_conv, SUP_bot,
Sup_bot_conv, SUP_bot_conv, Sup_eq_top_iff, SUP_eq_top_iff, SUP_image,
Sup_insert are now declared as [simp]. INCOMPATIBILITY.

* Theory Lattice: lemmas compl_inf_bot, compl_le_compl_iff,
compl_sup_top, inf_idem, inf_left_idem, inf_sup_absorb, sup_idem,
sup_inf_absorb, sup_left_idem are now declared as [simp]. Minor
INCOMPATIBILITY.

* Added syntactic classes "inf" and "sup" for the respective
constants. INCOMPATIBILITY: Changes in the argument order of the
(mostly internal) locale predicates for some derived classes.

* Theorem collections ball_simps and bex_simps do not contain theorems
referring to UNION any longer; these have been moved to collection
UN_ball_bex_simps. INCOMPATIBILITY.

* Theory Archimedean_Field: floor now is defined as parameter of a
separate type class floor_ceiling.

* Theory Finite_Set: more coherent development of fold_set locales:

  locale fun_left_comm ~> locale comp_fun_commute
  locale fun_left_comm_idem ~> locale comp_fun_idem

Both use point-free characterization; interpretation proofs may need
adjustment. INCOMPATIBILITY.

* Theory Limits: Type "'a net" has been renamed to "'a filter", in
accordance with standard mathematical terminology. INCOMPATIBILITY.

* Theory Complex_Main: The locale interpretations for the
bounded_linear and bounded_bilinear locales have been removed, in
order to reduce the number of duplicate lemmas. Users must use the
original names for distributivity theorems, potential
INCOMPATIBILITY.

  divide.add ~> add_divide_distrib
  divide.diff ~> diff_divide_distrib
  divide.setsum ~> setsum_divide_distrib
  mult.add_right ~> right_distrib
  mult.diff_right ~> right_diff_distrib
  mult_right.setsum ~> setsum_right_distrib
  mult_left.diff ~> left_diff_distrib

* Theory Complex_Main: Several redundant theorems have been removed or
replaced by more general versions. INCOMPATIBILITY.

  real_diff_def ~> minus_real_def
  real_divide_def ~> divide_real_def
  real_less_def ~> less_le
  real_abs_def ~> abs_real_def
  real_sgn_def ~> sgn_real_def
  real_mult_commute ~> mult_commute
  real_mult_assoc ~> mult_assoc
  real_mult_1 ~> mult_1_left
  real_add_mult_distrib ~> left_distrib
  real_zero_not_eq_one ~> zero_neq_one
  real_mult_inverse_left ~> left_inverse
  INVERSE_ZERO ~> inverse_zero
  real_le_refl ~> order_refl
  real_le_antisym ~> order_antisym
  real_le_trans ~> order_trans
  real_le_linear ~> linear
  real_le_eq_diff ~> le_iff_diff_le_0
  real_add_left_mono ~> add_left_mono
  real_mult_order ~> mult_pos_pos
  real_mult_less_mono2 ~> mult_strict_left_mono
  real_of_int_real_of_nat ~> real_of_int_of_nat_eq
  real_0_le_divide_iff ~> zero_le_divide_iff
  realpow_two_disj ~> power2_eq_iff
  real_squared_diff_one_factored ~> square_diff_one_factored
  realpow_two_diff ~> square_diff_square_factored
  reals_complete2 ~> complete_real
  real_sum_squared_expand ~> power2_sum
  exp_ln_eq ~> ln_unique
  expi_add ~> exp_add
  expi_zero ~> exp_zero
  lemma_DERIV_subst ~> DERIV_cong
  LIMSEQ_Zfun_iff ~> tendsto_Zfun_iff
  LIMSEQ_const ~> tendsto_const
  LIMSEQ_norm ~> tendsto_norm
  LIMSEQ_add ~> tendsto_add
  LIMSEQ_minus ~> tendsto_minus
  LIMSEQ_minus_cancel ~> tendsto_minus_cancel
  LIMSEQ_diff ~> tendsto_diff
  bounded_linear.LIMSEQ ~> bounded_linear.tendsto
  bounded_bilinear.LIMSEQ ~> bounded_bilinear.tendsto
  LIMSEQ_mult ~> tendsto_mult
  LIMSEQ_inverse ~> tendsto_inverse
  LIMSEQ_divide ~> tendsto_divide
  LIMSEQ_pow ~> tendsto_power
  LIMSEQ_setsum ~> tendsto_setsum
  LIMSEQ_setprod ~> tendsto_setprod
  LIMSEQ_norm_zero ~> tendsto_norm_zero_iff
  LIMSEQ_rabs_zero ~> tendsto_rabs_zero_iff
  LIMSEQ_imp_rabs ~> tendsto_rabs
  LIMSEQ_add_minus ~> tendsto_add [OF _ tendsto_minus]
  LIMSEQ_add_const ~> tendsto_add [OF _ tendsto_const]
  LIMSEQ_diff_const ~> tendsto_diff [OF _ tendsto_const]
  LIMSEQ_Complex ~> tendsto_Complex
  LIM_ident ~> tendsto_ident_at
  LIM_const ~> tendsto_const
  LIM_add ~> tendsto_add
  LIM_add_zero ~> tendsto_add_zero
  LIM_minus ~> tendsto_minus
  LIM_diff ~> tendsto_diff
  LIM_norm ~> tendsto_norm
  LIM_norm_zero ~> tendsto_norm_zero
  LIM_norm_zero_cancel ~> tendsto_norm_zero_cancel
  LIM_norm_zero_iff ~> tendsto_norm_zero_iff
  LIM_rabs ~> tendsto_rabs
  LIM_rabs_zero ~> tendsto_rabs_zero
  LIM_rabs_zero_cancel ~> tendsto_rabs_zero_cancel
  LIM_rabs_zero_iff ~> tendsto_rabs_zero_iff
  LIM_compose ~> tendsto_compose
  LIM_mult ~> tendsto_mult
  LIM_scaleR ~> tendsto_scaleR
  LIM_of_real ~> tendsto_of_real
  LIM_power ~> tendsto_power
  LIM_inverse ~> tendsto_inverse
  LIM_sgn ~> tendsto_sgn
  isCont_LIM_compose ~> isCont_tendsto_compose
  bounded_linear.LIM ~> bounded_linear.tendsto
  bounded_linear.LIM_zero ~> bounded_linear.tendsto_zero
  bounded_bilinear.LIM ~> bounded_bilinear.tendsto
  bounded_bilinear.LIM_prod_zero ~> bounded_bilinear.tendsto_zero
  bounded_bilinear.LIM_left_zero ~> bounded_bilinear.tendsto_left_zero
  bounded_bilinear.LIM_right_zero ~> bounded_bilinear.tendsto_right_zero
  LIM_inverse_fun ~> tendsto_inverse [OF tendsto_ident_at]

* Theory Complex_Main: The definition of infinite series was
generalized. Now it is defined on the type class {topological_space,
comm_monoid_add}. Hence it is usable also for extended real numbers.

* Theory Complex_Main: The complex exponential function "expi" is now
a type-constrained abbreviation for "exp :: complex => complex"; thus
several polymorphic lemmas about "exp" are now applicable to "expi".

* Code generation:

  - Theory Library/Code_Char_ord provides native ordering of
    characters in the target language.

  - Commands code_module and code_library are legacy, use export_code
    instead.

  - Method "evaluation" is legacy, use method "eval" instead.

  - Legacy evaluator "SML" is deactivated by default. May be
    reactivated by the following theory command:

      setup {* Value.add_evaluator ("SML", Codegen.eval_term) *}

* Declare ext [intro] by default. Rare INCOMPATIBILITY.

* New proof method "induction" that gives induction hypotheses the
name "IH", thus distinguishing them from further hypotheses that come
from rule induction. The latter are still called "hyps". Method
"induction" is a thin wrapper around "induct" and follows the same
syntax.

* Method "fastsimp" has been renamed to "fastforce", but "fastsimp" is
still available as a legacy feature for some time.

* Nitpick:
  - Added "need" and "total_consts" options.
  - Reintroduced "show_skolems" option by popular demand.
  - Renamed attribute: nitpick_def ~> nitpick_unfold.
    INCOMPATIBILITY.

* Sledgehammer:
  - Use quasi-sound (and efficient) translations by default.
  - Added support for the following provers: E-ToFoF, LEO-II,
    Satallax, SNARK, Waldmeister, and Z3 with TPTP syntax.
  - Automatically preplay and minimize proofs before showing them if
    this can be done within reasonable time.
  - sledgehammer available_provers ~> sledgehammer supported_provers.
    INCOMPATIBILITY.
  - Added "preplay_timeout", "slicing", "type_enc", "sound",
    "max_mono_iters", and "max_new_mono_instances" options.
  - Removed "explicit_apply" and "full_types" options as well as
    "Full Types" Proof General menu item. INCOMPATIBILITY.

* Metis:
  - Removed "metisF" -- use "metis" instead. INCOMPATIBILITY.
  - Obsoleted "metisFT" -- use "metis (full_types)" instead.
    INCOMPATIBILITY.

* Command 'try':
  - Renamed 'try_methods' and added "simp:", "intro:", "dest:", and
    "elim:" options. INCOMPATIBILITY.
  - Introduced 'try' that not only runs 'try_methods' but also
    'solve_direct', 'sledgehammer', 'quickcheck', and 'nitpick'.

* Quickcheck:
  - Added "eval" option to evaluate terms for the found counterexample
    (currently only supported by the default (exhaustive) tester).
  - Added post-processing of terms to obtain readable counterexamples
    (currently only supported by the default (exhaustive) tester).
  - New counterexample generator quickcheck[narrowing] enables
    narrowing-based testing. Requires the Glasgow Haskell compiler
    with its installation location defined in the Isabelle settings
    environment as ISABELLE_GHC.
  - Removed quickcheck tester "SML" based on the SML code generator
    (formerly in HOL/Library).
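
For example, a minimal sketch of invoking the new narrowing tester on
a false conjecture (assuming ISABELLE_GHC is configured):

  lemma "rev xs = xs"
    quickcheck [narrowing]
    oops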

* Function package: discontinued option "tailrec". INCOMPATIBILITY,
use 'partial_function' instead.

* Theory Library/Extended_Reals now replaces the positive extended
reals found in probability theory. This file is extended by
Multivariate_Analysis/Extended_Real_Limits.

* Theory Library/Old_Recdef: old 'recdef' package has been moved here,
from where it must be imported explicitly if it is really required.
INCOMPATIBILITY.

* Theory Library/Wfrec: well-founded recursion combinator "wfrec" has
been moved here. INCOMPATIBILITY.

* Theory Library/Saturated provides type of numbers with saturated
arithmetic.

* Theory Library/Product_Lattice defines a pointwise ordering for the
product type 'a * 'b, and provides instance proofs for various order
and lattice type classes.

* Theory Library/Countable now provides the "countable_datatype" proof
method for proving "countable" class instances for datatypes.
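
For example, a minimal sketch with an illustrative datatype (assuming
theory Library/Countable is imported):

  datatype tree = Leaf | Node tree tree

  instance tree :: countable
    by countable_datatype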

* Theory Library/Cset_Monad allows do notation for computable sets
(cset) via the generic monad ad-hoc overloading facility.

* Library: Theories of common data structures are split into theories
for implementation, an invariant-ensuring type, and connection to an
abstract type. INCOMPATIBILITY.

  - RBT is split into RBT and RBT_Mapping.
  - AssocList is split and renamed into AList and AList_Mapping.
  - DList is split into DList_Impl, DList, and DList_Cset.
  - Cset is split into Cset and List_Cset.

* Theory Library/Nat_Infinity has been renamed to
Library/Extended_Nat, with name changes of the following types and
constants:

  type inat ~> type enat
  Fin ~> enat
  Infty ~> infinity (overloaded)
  iSuc ~> eSuc
  the_Fin ~> the_enat

Every theorem name containing "inat", "Fin", "Infty", or "iSuc" has
been renamed accordingly. INCOMPATIBILITY.

* Session Multivariate_Analysis: The euclidean_space type class now
fixes a constant "Basis :: 'a set" consisting of the standard
orthonormal basis for the type. Users now have the option of
quantifying over this set instead of using the "basis" function, e.g.
"ALL x:Basis. P x" vs "ALL i<DIM('a). P (basis i)".

* Session Multivariate_Analysis: Type "('a, 'b) cart" has been renamed
to "('a, 'b) vec"; theorems mentioning the old names have been renamed
accordingly (INCOMPATIBILITY):

  Cart_eq ~> vec_eq_iff
  dist_nth_le_cart ~> dist_vec_nth_le
  tendsto_vector ~> vec_tendstoI
  Cauchy_vector ~> vec_CauchyI

* Session Multivariate_Analysis: Several duplicate theorems have been
removed, and other theorems have been renamed or replaced with more
general versions. INCOMPATIBILITY.

  finite_choice ~> finite_set_choice
  eventually_conjI ~> eventually_conj
  eventually_and ~> eventually_conj_iff
  eventually_false ~> eventually_False
  setsum_norm ~> norm_setsum
  Lim_sequentially ~> LIMSEQ_def
  Lim_ident_at ~> LIM_ident
  Lim_const ~> tendsto_const
  Lim_cmul ~> tendsto_scaleR [OF tendsto_const]
  Lim_neg ~> tendsto_minus
  Lim_add ~> tendsto_add
  Lim_sub ~> tendsto_diff
  Lim_mul ~> tendsto_scaleR
  Lim_vmul ~> tendsto_scaleR [OF _ tendsto_const]
  Lim_null_norm ~> tendsto_norm_zero_iff [symmetric]
  Lim_linear ~> bounded_linear.tendsto
  Lim_component ~> tendsto_euclidean_component
  Lim_component_cart ~> tendsto_vec_nth
  Lim_inner ~> tendsto_inner [OF tendsto_const]
  dot_lsum ~> inner_setsum_left
  dot_rsum ~> inner_setsum_right
  continuous_cmul ~> continuous_scaleR [OF continuous_const]
  continuous_neg ~> continuous_minus
  continuous_sub ~> continuous_diff
  continuous_vmul ~> continuous_scaleR [OF _ continuous_const]
  continuous_mul ~> continuous_scaleR
  continuous_inv ~> continuous_inverse
  continuous_at_within_inv ~> continuous_at_within_inverse
  continuous_at_inv ~> continuous_at_inverse
  continuous_at_norm ~> continuous_norm [OF continuous_at_id]
  continuous_at_infnorm ~> continuous_infnorm [OF continuous_at_id]
  continuous_at_component ~> continuous_component [OF continuous_at_id]
  continuous_on_neg ~> continuous_on_minus
  continuous_on_sub ~> continuous_on_diff
  continuous_on_cmul ~> continuous_on_scaleR [OF continuous_on_const]
  continuous_on_vmul ~> continuous_on_scaleR [OF _ continuous_on_const]
  continuous_on_mul ~> continuous_on_scaleR
  continuous_on_mul_real ~> continuous_on_mult
  continuous_on_inner ~> continuous_on_inner [OF continuous_on_const]
  continuous_on_norm ~> continuous_on_norm [OF continuous_on_id]
  continuous_on_inverse ~> continuous_on_inv
  uniformly_continuous_on_neg ~> uniformly_continuous_on_minus
  uniformly_continuous_on_sub ~> uniformly_continuous_on_diff
  subset_interior ~> interior_mono
  subset_closure ~> closure_mono
  closure_univ ~> closure_UNIV
  real_arch_lt ~> reals_Archimedean2
  real_arch ~> reals_Archimedean3
  real_abs_norm ~> abs_norm_cancel
  real_abs_sub_norm ~> norm_triangle_ineq3
  norm_cauchy_schwarz_abs ~> Cauchy_Schwarz_ineq2

* Session HOL-Probability:
  - Caratheodory's extension lemma is now proved for ring_of_sets.
  - Infinite products of probability measures are now available.
  - Sigma closure is independent, if the generator is independent.
  - Use extended reals instead of positive extended reals.
    INCOMPATIBILITY.

* Session HOLCF: Discontinued legacy theorem names, INCOMPATIBILITY.

  expand_fun_below ~> fun_below_iff
  below_fun_ext ~> fun_belowI
  expand_cfun_eq ~> cfun_eq_iff
  ext_cfun ~> cfun_eqI
  expand_cfun_below ~> cfun_below_iff
  below_cfun_ext ~> cfun_belowI
  monofun_fun_fun ~> fun_belowD
  monofun_fun_arg ~> monofunE
  monofun_lub_fun ~> adm_monofun [THEN admD]
  cont_lub_fun ~> adm_cont [THEN admD]
  cont2cont_Rep_CFun ~> cont2cont_APP
  cont_Rep_CFun_app ~> cont_APP_app
  cont_Rep_CFun_app_app ~> cont_APP_app_app
  cont_cfun_fun ~> cont_Rep_cfun1 [THEN contE]
  cont_cfun_arg ~> cont_Rep_cfun2 [THEN contE]
  contlub_cfun ~> lub_APP [symmetric]
  contlub_LAM ~> lub_LAM [symmetric]
  thelubI ~> lub_eqI
  UU_I ~> bottomI
  lift_distinct1 ~> lift.distinct(1)
  lift_distinct2 ~> lift.distinct(2)
  Def_not_UU ~> lift.distinct(2)
  Def_inject ~> lift.inject
  below_UU_iff ~> below_bottom_iff
  eq_UU_iff ~> eq_bottom_iff


*** Document preparation ***

* Antiquotation @{rail} layouts railroad syntax diagrams, see also
isar-ref manual, both for description and actual application of the
same.

* Antiquotation @{value} evaluates the given term and presents its
result.
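
For example, within document text (a minimal sketch):

  text {* Reversal yields @{value "rev [1::nat, 2, 3]"}. *}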

* Antiquotations: term style "isub" provides ad-hoc conversion of
variables x1, y23 into subscripted form x\<^isub>1, y\<^isub>2\<^isub>3.

* Predefined LaTeX macros for Isabelle symbols \<bind> and \<then>
(e.g. see ~~/src/HOL/Library/Monad_Syntax.thy).

* Localized \isabellestyle switch can be used within blocks or groups
like this:

  \isabellestyle{it}  %preferred default
  {\isabellestylett @{text "typewriter stuff"}}

* Discontinued special treatment of hard tabulators. Implicit
tab-width is now defined as 1. Potential INCOMPATIBILITY for visual
layouts.


*** ML ***

* The inner syntax of sort/type/term/prop supports inlined YXML
representations within quoted string tokens. By encoding logical
entities via Term_XML (in ML or Scala) concrete syntax can be
bypassed, which is particularly useful for producing bits of text
under external program control.

* Antiquotations for ML and document preparation are managed as theory
data, which requires explicit setup.

* Isabelle_Process.is_active allows tools to check if the official
process wrapper is running (Isabelle/Scala/jEdit) or the old TTY loop
(better known as Proof General).

* Structure Proof_Context follows standard naming scheme. Old
ProofContext is still available for some time as legacy alias.

* Structure Timing provides various operations for timing; supersedes
former start_timing/end_timing etc.

* Path.print is the official way to show file-system paths to users
(including quotes etc.).

* Inner syntax: identifiers in parse trees of generic categories
"logic", "aprop", "idt" etc. carry position information (disguised as
type constraints). Occasional INCOMPATIBILITY with non-compliant
translations that choke on unexpected type constraints. Positions can
be stripped in ML translations via Syntax.strip_positions /
Syntax.strip_positions_ast, or via the syntax constant
"_strip_positions" within parse trees. As last resort, positions can
be disabled via the configuration option Syntax.positions, which is
called "syntax_positions" in Isar attribute syntax.

* Discontinued special status of various ML structures that contribute
to structure Syntax (Ast, Lexicon, Mixfix, Parser, Printer etc.): less
pervasive content, no inclusion in structure Syntax. INCOMPATIBILITY,
refer directly to Ast.Constant, Lexicon.is_identifier,
Syntax_Trans.mk_binder_tr etc.

* Typed print translation: discontinued show_sorts argument, which is
already available via context of "advanced" translation.

* Refined PARALLEL_GOALS tactical: degrades gracefully for schematic
goal states; body tactic needs to address all subgoals uniformly.

* Slightly more special eq_list/eq_set, with shortcut involving
pointer equality (assumes that eq relation is reflexive).

* Classical tactics use proper Proof.context instead of historic types
claset/clasimpset. Old-style declarations like addIs, addEs, addDs
operate directly on Proof.context. Raw type claset retains its use as
snapshot of the classical context, which can be recovered via
(put_claset HOL_cs) etc. Type clasimpset has been discontinued.
INCOMPATIBILITY, classical tactics and derived proof methods require
proper Proof.context.


*** System ***

* Discontinued support for Poly/ML 5.2, which was the last version
without proper multithreading and TimeLimit implementation.

* Discontinued old lib/scripts/polyml-platform, which has been
obsolete since Isabelle2009-2.

* Various optional external tools are referenced more robustly and
uniformly by explicit Isabelle settings as follows:

  ISABELLE_CSDP   (formerly CSDP_EXE)
  ISABELLE_GHC    (formerly EXEC_GHC or GHC_PATH)
  ISABELLE_OCAML  (formerly EXEC_OCAML)
  ISABELLE_SWIPL  (formerly EXEC_SWIPL)
  ISABELLE_YAP    (formerly EXEC_YAP)

Note that automated detection from the file-system or search path has
been discontinued. INCOMPATIBILITY.

* Scala layer provides JVM method invocation service for static
methods of type (String)String, see Invoke_Scala.method in ML. For
example:

  Invoke_Scala.method "java.lang.System.getProperty" "java.home"

Together with YXML.string_of_body/parse_body and XML.Encode/Decode
this allows to pass structured values between ML and Scala.

* The IsabelleText font includes some further glyphs to support the
Prover IDE. Potential INCOMPATIBILITY: users who happen to have
installed a local copy (which is normally *not* required) need to
delete or update it from ~~/lib/fonts/.



New in Isabelle2011 (January 2011)
----------------------------------

*** General ***

* Experimental Prover IDE based on Isabelle/Scala and jEdit (see
src/Tools/jEdit). This also serves as IDE for Isabelle/ML, with useful
tooltips and hyperlinks produced from its static analysis. The bundled
component provides an executable Isabelle tool that can be run like
this:

  Isabelle2011/bin/isabelle jedit

* Significantly improved Isabelle/Isar implementation manual.

* System settings: ISABELLE_HOME_USER now includes ISABELLE_IDENTIFIER
(and thus refers to something like $HOME/.isabelle/Isabelle2011),
while the default heap location within that directory lacks that extra
suffix. This isolates multiple Isabelle installations from each other,
avoiding problems with old settings in new versions. INCOMPATIBILITY,
need to copy/upgrade old user settings manually.

* Source files are always encoded as UTF-8, instead of old-fashioned
ISO-Latin-1. INCOMPATIBILITY. Isabelle LaTeX documents might require
the following package declarations:

  \usepackage[utf8]{inputenc}
  \usepackage{textcomp}

* Explicit treatment of UTF-8 sequences as Isabelle symbols, such that
a Unicode character is treated as a single symbol, not a sequence of
non-ASCII bytes as before. Since Isabelle/ML string literals may
contain symbols without further backslash escapes, Unicode can now be
used here as well. Recall that Symbol.explode in ML provides a
consistent view on symbols, while raw explode (or String.explode)
merely give a byte-oriented representation.

* Theory loader: source files are primarily located via the master
directory of each theory node (where the .thy file itself resides).
The global load path is still partially available as legacy feature.
Minor INCOMPATIBILITY due to subtle change in file lookup: use
explicit paths, relatively to the theory.

* Special treatment of ML file names has been discontinued.
Historically, optional extensions .ML or .sml were added on demand --
at the cost of clarity of file dependencies. Recall that Isabelle/ML
files exclusively use the .ML extension. Minor INCOMPATIBILITY.

* Various options that affect pretty printing etc. are now properly
handled within the context via configuration options, instead of
unsynchronized references or print modes. There are both ML Config.T
entities and Isar declaration attributes to access these.

  ML (Config.T)                 Isar (attribute)

  eta_contract                  eta_contract
  show_brackets                 show_brackets
  show_sorts                    show_sorts
  show_types                    show_types
  show_question_marks           show_question_marks
  show_consts                   show_consts
  show_abbrevs                  show_abbrevs

  Syntax.ast_trace              syntax_ast_trace
  Syntax.ast_stat               syntax_ast_stat
  Syntax.ambiguity_level        syntax_ambiguity_level

  Goal_Display.goals_limit      goals_limit
  Goal_Display.show_main_goal   show_main_goal

  Method.rule_trace             rule_trace

  Thy_Output.display            thy_output_display
  Thy_Output.quotes             thy_output_quotes
  Thy_Output.indent             thy_output_indent
  Thy_Output.source             thy_output_source
  Thy_Output.break              thy_output_break

Note that corresponding "..._default" references in ML may only be
changed globally at the ROOT session setup, but *not* within a theory.
The option "show_abbrevs" supersedes the former print mode "no_abbrevs"
with inverted meaning.
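
For example, such an option can be modified declaratively in Isar, or
inspected from ML via the Config.T entity of the same name (a minimal
sketch):

  declare [[show_types = true]]

  ML {* Config.get @{context} show_types *}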

* More systematic naming of some configuration options.
INCOMPATIBILITY.

  trace_simp ~> simp_trace
  debug_simp ~> simp_debug

* Support for real valued configuration options, using simplistic
floating-point notation that coincides with the inner syntax for
float_token.

* Support for real valued preferences (with approximative PGIP type):
front-ends need to accept "pgint" values in float notation.
INCOMPATIBILITY.

* The IsabelleText font now includes Cyrillic, Hebrew, Arabic from
DejaVu Sans.

* Discontinued support for Poly/ML 5.0 and 5.1 versions.


*** Pure ***

* Command 'type_synonym' (with single argument) replaces somewhat
outdated 'types', which is still available as legacy feature for some
time.

* Command 'nonterminal' (with 'and' separated list of arguments)
replaces somewhat outdated 'nonterminals'. INCOMPATIBILITY.

* Command 'notepad' replaces former 'example_proof' for
experimentation in Isar without any result. INCOMPATIBILITY.

* Locale interpretation commands 'interpret' and 'sublocale' accept
lists of equations to map definitions in a locale to appropriate
entities in the context of the interpretation. The 'interpretation'
command already provided this functionality.

* Diagnostic command 'print_dependencies' prints the locale instances
that would be activated if the specified expression was interpreted in
the current context. Variant "print_dependencies!" assumes a context
without interpretations.

* Diagnostic command 'print_interps' prints interpretations in proofs
in addition to interpretations in theories.

* Discontinued obsolete 'global' and 'local' commands to manipulate
the theory name space. Rare INCOMPATIBILITY. The ML functions
Sign.root_path and Sign.local_path may be applied directly where this
feature is still required for historical reasons.

* Discontinued obsolete 'constdefs' command. INCOMPATIBILITY, use
'definition' instead.

* The "prems" fact, which refers to the accidental collection of
foundational premises in the context, is now explicitly marked as
legacy feature and will be discontinued soon. Consider using "assms"
of the head statement or reference facts by explicit names.

* Document antiquotations @{class} and @{type} print classes and type
constructors.

* Document antiquotation @{file} checks file/directory entries within
the local file system.


*** HOL ***

* Coercive subtyping: functions can be declared as coercions and type
inference will add them as necessary upon input of a term. Theory
Complex_Main declares real :: nat => real and real :: int => real as
coercions. A coercion function f is declared like this:

  declare [[coercion f]]

To lift coercions through type constructors (e.g. from nat => real to
nat list => real list), map functions can be declared, e.g.

  declare [[coercion_map map]]

Currently coercion inference is activated only in theories including
real numbers, i.e. descendants of Complex_Main. This is controlled by
the configuration option "coercion_enabled", e.g. it can be enabled in
other theories like this:

  declare [[coercion_enabled]]

* Command 'partial_function' provides basic support for recursive
function definitions over complete partial orders. Concrete instances
are provided for i) the option type, ii) tail recursion on arbitrary
types, and iii) the heap monad of Imperative_HOL. See
src/HOL/ex/Fundefs.thy and src/HOL/Imperative_HOL/ex/Linked_Lists.thy
for examples.
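
For example, a minimal sketch in the option mode (the function is
illustrative):

  partial_function (option) collatz :: "nat => nat option"
  where
    "collatz n =
      (if n <= 1 then Some n
       else if n mod 2 = 0 then collatz (n div 2)
       else collatz (3 * n + 1))"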

* Function package: f.psimps rules are no longer implicitly declared
as [simp]. INCOMPATIBILITY.

* Datatype package: theorems generated for executable equality (class
"eq") carry proper names and are treated as default code equations.

* Inductive package: now offers command 'inductive_simps' to
automatically derive instantiated and simplified equations for
inductive predicates, similar to 'inductive_cases'.

* Command 'enriched_type' allows to register properties of the
functorial structure of types.

* Improved infrastructure for term evaluation using code generator
techniques, in particular static evaluation conversions.

* Code generator: Scala (2.8 or higher) has been added to the target
languages.

* Code generator: globbing constant expressions "*" and "Theory.*"
have been replaced by the more idiomatic "_" and "Theory._".
INCOMPATIBILITY.

* Code generator: export_code without explicit file declaration prints
to standard output. INCOMPATIBILITY.

* Code generator: do not print function definitions for case
combinators any longer.

* Code generator: simplification with rules determined with
src/Tools/Code/code_simp.ML and method "code_simp".

* Code generator for records: more idiomatic representation of record
types. Warning: records are not covered by ancient SML code generation
any longer. INCOMPATIBILITY. In cases of need, a suitable rep_datatype
declaration helps to succeed then:

  record 'a foo = ...
  ...
  rep_datatype foo_ext ...

* Records: logical foundation type for records does not carry a
'_type' suffix any longer (obsolete due to authentic syntax).
INCOMPATIBILITY.

* Quickcheck now by default uses exhaustive testing instead of random
testing. Random testing can be invoked by "quickcheck [random]",
exhaustive testing by "quickcheck [exhaustive]".

* Quickcheck instantiates polymorphic types with small finite
datatypes by default. This enables a simple execution mechanism to
handle quantifiers and function equality over the finite datatypes.

* Quickcheck random generator has been renamed from "code" to
"random". INCOMPATIBILITY.

* Quickcheck now has a configurable time limit which is set to 30
seconds by default. This can be changed by adding [timeout = n] to the
quickcheck command. The time limit for Auto Quickcheck is still set
independently.

* Quickcheck in locales considers interpretations of that locale for
counter example search.

* Sledgehammer:
  - Added "smt" and "remote_smt" provers based on the "smt" proof
    method. See the Sledgehammer manual for details ("isabelle doc
    sledgehammer").
  - Renamed commands:
      sledgehammer atp_info ~> sledgehammer running_provers
      sledgehammer atp_kill ~> sledgehammer kill_provers
      sledgehammer available_atps ~> sledgehammer available_provers
    INCOMPATIBILITY.
  - Renamed options:
      sledgehammer [atps = ...] ~> sledgehammer [provers = ...]
      sledgehammer [atp = ...] ~> sledgehammer [prover = ...]
      sledgehammer [timeout = 77 s] ~> sledgehammer [timeout = 77]
        (and "ms" and "min" are no longer supported)
    INCOMPATIBILITY.

* Nitpick:
  - Renamed options:
      nitpick [timeout = 77 s] ~> nitpick [timeout = 77]
      nitpick [tac_timeout = 777 ms] ~> nitpick [tac_timeout = 0.777]
    INCOMPATIBILITY.
  - Added support for partial quotient types.
  - Added local versions of the "Nitpick.register_xxx" functions.
  - Added "whack" option.
  - Allow registration of quotient types as codatatypes.
  - Improved "merge_type_vars" option to merge more types.
  - Removed unsound "fast_descrs" option.
  - Added custom symmetry breaking for datatypes, making it possible
    to reach higher cardinalities.
  - Prevent the expansion of too large definitions.

* Proof methods "metis" and "meson" now have configuration options
"meson_trace", "metis_trace", and "metis_verbose" that can be enabled
to diagnose these tools. E.g. using

  [[metis_trace = true]]

* Auto Solve: Renamed "Auto Solve Direct". The tool is now available
manually as command 'solve_direct'.

* The default SMT solver Z3 must be enabled explicitly (due to
licensing issues) by setting the environment variable
Z3_NON_COMMERCIAL in etc/settings of the component, for example. For
commercial applications, the SMT solver CVC3 is provided as fall-back;
changing the SMT solver is done via the configuration option
"smt_solver".

* Remote SMT solvers need to be referred to by the "remote_" prefix,
i.e. "remote_cvc3" and "remote_z3".

* Added basic SMT support for datatypes, records, and typedefs using
the oracle mode (no proofs). Direct support of pairs has been dropped
in exchange (pass theorems fst_conv snd_conv pair_collapse to the SMT
support for a similar behavior). Minor INCOMPATIBILITY.

* Changed SMT configuration options:
  - Renamed:
      z3_proofs ~> smt_oracle (with inverted meaning)
      z3_trace_assms ~> smt_trace_used_facts
    INCOMPATIBILITY.
  - Added:
      smt_verbose
      smt_random_seed
      smt_datatypes
      smt_infer_triggers
      smt_monomorph_limit
      cvc3_options
      remote_cvc3_options
      remote_z3_options
      yices_options

* Boogie output files (.b2i files) need to be declared in the theory
header.

* Simplification procedure "list_to_set_comprehension" rewrites list
comprehensions applied to List.set to set comprehensions. Occasional
INCOMPATIBILITY, may be deactivated like this:

  declare [[simproc del: list_to_set_comprehension]]

* Removed old version of primrec package. INCOMPATIBILITY.

* Removed simplifier congruence rule of "prod_case", as has for long
been the case with "split". INCOMPATIBILITY.

* String.literal is a type, but not a datatype. INCOMPATIBILITY.

* Removed [split_format ... and ... and ...] version of
[split_format]. Potential INCOMPATIBILITY.

* Predicate "sorted" now defined inductively, with nice induction
rules. INCOMPATIBILITY: former sorted.simps now named sorted_simps.

* Constant "contents" renamed to "the_elem", to free the generic name
contents for other uses. INCOMPATIBILITY.

* Renamed class eq and constant eq (for code generation) to class
equal and constant equal, plus renaming of related facts and various
tuning. INCOMPATIBILITY.

* Dropped type classes mult_mono and mult_mono1. INCOMPATIBILITY.

* Removed output syntax "'a ~=> 'b" for "'a => 'b option".
INCOMPATIBILITY.

* Renamed theory Fset to Cset, type Fset.fset to Cset.set, in order to
avoid confusion with finite sets. INCOMPATIBILITY.

* Abandoned locales equiv, congruent and congruent2 for equivalence
relations.
INCOMPATIBILITY: use equivI rather than equiv_intro (same for
congruent(2)).

* Some previously unqualified names have been qualified:

  types
    bool ~> HOL.bool
    nat ~> Nat.nat

  constants
    Trueprop ~> HOL.Trueprop
    True ~> HOL.True
    False ~> HOL.False
    op & ~> HOL.conj
    op | ~> HOL.disj
    op --> ~> HOL.implies
    op = ~> HOL.eq
    Not ~> HOL.Not
    The ~> HOL.The
    All ~> HOL.All
    Ex ~> HOL.Ex
    Ex1 ~> HOL.Ex1
    Let ~> HOL.Let
    If ~> HOL.If
    Ball ~> Set.Ball
    Bex ~> Set.Bex
    Suc ~> Nat.Suc
    Pair ~> Product_Type.Pair
    fst ~> Product_Type.fst
    snd ~> Product_Type.snd
    curry ~> Product_Type.curry
    op : ~> Set.member
    Collect ~> Set.Collect

INCOMPATIBILITY.

* More canonical naming convention for some fundamental definitions:

  bot_bool_eq ~> bot_bool_def
  top_bool_eq ~> top_bool_def
  inf_bool_eq ~> inf_bool_def
  sup_bool_eq ~> sup_bool_def
  bot_fun_eq ~> bot_fun_def
  top_fun_eq ~> top_fun_def
  inf_fun_eq ~> inf_fun_def
  sup_fun_eq ~> sup_fun_def

INCOMPATIBILITY.

* More stylized fact names:

  expand_fun_eq ~> fun_eq_iff
  expand_set_eq ~> set_eq_iff
  set_ext ~> set_eqI
  nat_number ~> eval_nat_numeral

INCOMPATIBILITY.

* Refactoring of code-generation specific operations in theory List:

  constants
    null ~> List.null

  facts
    mem_iff ~> member_def
    null_empty ~> null_def

INCOMPATIBILITY. Note that these were not supposed to be used
regularly unless for striking reasons; their main purpose was code
generation. Various operations from the Haskell prelude are used for
generating Haskell code.

* Term "bij f" is now an abbreviation of "bij_betw f UNIV UNIV". Term
"surj f" is now an abbreviation of "range f = UNIV". The theorems
bij_def and surj_def are unchanged. INCOMPATIBILITY.

* Abolished some non-alphabetic type names: "prod" and "sum" replace
"*" and "+" respectively. INCOMPATIBILITY.

* Name "Plus" of disjoint sum operator "<+>" is now hidden. Write
"Sum_Type.Plus" instead.

* Constant "split" has been merged with constant "prod_case"; names of
ML functions, facts etc. involving split have been retained so far,
though. INCOMPATIBILITY.

* Dropped old infix syntax "_ mem _" for List.member; use "_ : set _"
instead. INCOMPATIBILITY.

* Removed lemma "Option.is_none_none" which duplicates "is_none_def".
INCOMPATIBILITY.

* Former theory Library/Enum is now part of the HOL-Main image.
INCOMPATIBILITY: all constants of the Enum theory now have to be
referred to by its qualified name.

  enum ~> Enum.enum
  nlists ~> Enum.nlists
  product ~> Enum.product

* Theory Library/Monad_Syntax provides do-syntax for monad types.
Syntax in Library/State_Monad has been changed to avoid ambiguities.
INCOMPATIBILITY.

* Theory Library/SetsAndFunctions has been split into
Library/Function_Algebras and Library/Set_Algebras; canonical names
for instance definitions for functions; various improvements.
INCOMPATIBILITY.

* Theory Library/Multiset provides stable quicksort implementation of
sort_key.

* Theory Library/Multiset: renamed empty_idemp ~> empty_neutral.
INCOMPATIBILITY.

* Session Multivariate_Analysis: introduced a type class for euclidean
space. Most theorems are now stated in terms of euclidean spaces
instead of finite cartesian products.

  types
    real ^ 'n ~> 'a::real_vector
              ~> 'a::euclidean_space
              ~> 'a::ordered_euclidean_space
        (depends on your needs)

  constants
    _ $ _ ~> _ $$ _
    \<chi> x. _ ~> \<chi>\<chi> x. _
    CARD('n) ~> DIM('a)

Also note that the indices are now natural numbers and not from some
finite type. Finite cartesian products of euclidean spaces, products
of euclidean spaces, and the real and complex numbers are instantiated
to be euclidean_spaces. INCOMPATIBILITY.

* Session Probability: introduced pextreal as positive extended real
numbers. Use pextreal as value for measures. Introduce the
Radon-Nikodym derivative, product spaces and Fubini's theorem for
arbitrary sigma finite measures. Introduces Lebesgue measure based on
the integral in Multivariate Analysis. INCOMPATIBILITY.

* Session Imperative_HOL: revamped, corrected dozens of inadequacies.
INCOMPATIBILITY.

* Session SPARK (with image HOL-SPARK) provides commands to load and
prove verification conditions generated by the SPARK Ada program
verifier. See also src/HOL/SPARK and src/HOL/SPARK/Examples.


*** HOL-Algebra ***

* Theorems for additive ring operations (locale abelian_monoid and
descendants) are generated by interpretation from their multiplicative
counterparts. Names (in particular theorem names) have the mandatory
qualifier 'add'. Previous theorem names are redeclared for
compatibility.

* Structure "int_ring" is now an abbreviation (previously a
definition). This fits more naturally with advanced interpretations.


*** HOLCF ***

* The domain package now runs in definitional mode by default: The
former command 'new_domain' is now called 'domain'. To use the domain
package in its original axiomatic mode, use 'domain (unsafe)'.
INCOMPATIBILITY.

* The new class "domain" is now the default sort. Class "predomain" is
an unpointed version of "domain". Theories can be updated by replacing
sort annotations as shown below. INCOMPATIBILITY.

  'a::type ~> 'a::countable
  'a::cpo ~> 'a::predomain
  'a::pcpo ~> 'a::domain

* The old type class "rep" has been superseded by class "domain".
Accordingly, users of the definitional package must remove any
"default_sort rep" declarations. INCOMPATIBILITY.

* The domain package (definitional mode) now supports unpointed
predomain argument types, as long as they are marked 'lazy'. (Strict
arguments must be in class "domain".) For example, the following
domain definition now works:

  domain natlist = nil | cons (lazy "nat discr") (lazy "natlist")

* Theory HOLCF/Library/HOL_Cpo provides cpo and predomain class
instances for types from main HOL: bool, nat, int, char, 'a + 'b,
'a option, and 'a list. Additionally, it configures fixrec and the
domain package to work with these types. For example:

  fixrec isInl :: "('a + 'b) u -> tr"
    where "isInl$(up$(Inl x)) = TT" | "isInl$(up$(Inr y)) = FF"

  domain V = VFun (lazy "V -> V") | VCon (lazy "nat") (lazy "V list")

* The "(permissive)" option of fixrec has been replaced with a
per-equation "(unchecked)" option. See
src/HOL/HOLCF/Tutorial/Fixrec_ex.thy for examples. INCOMPATIBILITY.

* The "bifinite" class no longer fixes a constant "approx"; the class
now just asserts that such a function exists. INCOMPATIBILITY.

* Former type "alg_defl" has been renamed to "defl". HOLCF no longer
defines an embedding of type 'a defl into udom by default; instances
of "bifinite" and "domain" classes are available in
src/HOL/HOLCF/Library/Defl_Bifinite.thy.

* The syntax "REP('a)" has been replaced with "DEFL('a)".

* The predicate "directed" has been removed. INCOMPATIBILITY.

* The type class "finite_po" has been removed. INCOMPATIBILITY.

* The function "cprod_map" has been renamed to "prod_map".
INCOMPATIBILITY.

* The monadic bind operator on each powerdomain has new binder syntax
similar to sets, e.g. "\<Union>\<sharp>x\<in>xs. t" represents
"upper_bind\<cdot>xs\<cdot>(\<Lambda> x. t)".

* The infix syntax for binary union on each powerdomain has changed
from e.g. "+\<sharp>" to "\<union>\<sharp>", for consistency with set
syntax. INCOMPATIBILITY.

* The constant "UU" has been renamed to "bottom".
The syntax "UU" is still supported as an input translation.

* Renamed some theorems (the original names are also still available).

  expand_fun_below ~> fun_below_iff
  below_fun_ext ~> fun_belowI
  expand_cfun_eq ~> cfun_eq_iff
  ext_cfun ~> cfun_eqI
  expand_cfun_below ~> cfun_below_iff
  below_cfun_ext ~> cfun_belowI
  cont2cont_Rep_CFun ~> cont2cont_APP

* The Abs and Rep functions for various types have changed names.
Related theorem names have also changed to match. INCOMPATIBILITY.

  Rep_CFun ~> Rep_cfun
  Abs_CFun ~> Abs_cfun
  Rep_Sprod ~> Rep_sprod
  Abs_Sprod ~> Abs_sprod
  Rep_Ssum ~> Rep_ssum
  Abs_Ssum ~> Abs_ssum

* Lemmas with names of the form *_defined_iff or *_strict_iff have
been renamed to *_bottom_iff. INCOMPATIBILITY.

* Various changes to bisimulation/coinduction with domain package:
  - Definitions of "bisim" constants no longer mention definedness.
  - With mutual recursion, "bisim" predicate is now curried.
  - With mutual recursion, each type gets a separate coind theorem.
  - Variable names in bisim_def and coinduct rules have changed.
INCOMPATIBILITY.

* Case combinators generated by the domain package for type "foo" are
now named "foo_case" instead of "foo_when". INCOMPATIBILITY.

* Several theorems have been renamed to more accurately reflect the
names of constants and types involved. INCOMPATIBILITY.

  thelub_const ~> lub_const
  lub_const ~> is_lub_const
  thelubI ~> lub_eqI
  is_lub_lub ~> is_lubD2
  lubI ~> is_lub_lub
  unique_lub ~> is_lub_unique
  is_ub_lub ~> is_lub_rangeD1
  lub_bin_chain ~> is_lub_bin_chain
  lub_fun ~> is_lub_fun
  thelub_fun ~> lub_fun
  thelub_cfun ~> lub_cfun
  thelub_Pair ~> lub_Pair
  lub_cprod ~> is_lub_prod
  thelub_cprod ~> lub_prod
  minimal_cprod ~> minimal_prod
  inst_cprod_pcpo ~> inst_prod_pcpo
  UU_I ~> bottomI
  compact_UU ~> compact_bottom
  deflation_UU ~> deflation_bottom
  finite_deflation_UU ~> finite_deflation_bottom

* Many legacy theorem names have been discontinued. INCOMPATIBILITY.

  sq_ord_less_eq_trans ~> below_eq_trans
  sq_ord_eq_less_trans ~> eq_below_trans
  refl_less ~> below_refl
  trans_less ~> below_trans
  antisym_less ~> below_antisym
  antisym_less_inverse ~> po_eq_conv [THEN iffD1]
  box_less ~> box_below
  rev_trans_less ~> rev_below_trans
  not_less2not_eq ~> not_below2not_eq
  less_UU_iff ~> below_UU_iff
  flat_less_iff ~> flat_below_iff
  adm_less ~> adm_below
  adm_not_less ~> adm_not_below
  adm_compact_not_less ~> adm_compact_not_below
  less_fun_def ~> below_fun_def
  expand_fun_less ~> fun_below_iff
  less_fun_ext ~> fun_belowI
  less_discr_def ~> below_discr_def
  discr_less_eq ~> discr_below_eq
  less_unit_def ~> below_unit_def
  less_cprod_def ~> below_prod_def
  prod_lessI ~> prod_belowI
  Pair_less_iff ~> Pair_below_iff
  fst_less_iff ~> fst_below_iff
  snd_less_iff ~> snd_below_iff
  expand_cfun_less ~> cfun_below_iff
  less_cfun_ext ~> cfun_belowI
  injection_less ~> injection_below
  less_up_def ~> below_up_def
  not_Iup_less ~> not_Iup_below
  Iup_less ~> Iup_below
  up_less ~> up_below
  Def_inject_less_eq ~> Def_below_Def
  Def_less_is_eq ~> Def_below_iff
  spair_less_iff ~> spair_below_iff
  less_sprod ~> below_sprod
  spair_less ~> spair_below
  sfst_less_iff ~> sfst_below_iff
  ssnd_less_iff ~> ssnd_below_iff
  fix_least_less ~> fix_least_below
  dist_less_one ~> dist_below_one
  less_ONE ~> below_ONE
  ONE_less_iff ~> ONE_below_iff
  less_sinlD ~> below_sinlD
  less_sinrD ~> below_sinrD


*** FOL and ZF ***

* All constant names are now qualified internally and use proper
identifiers, e.g. "IFOL.eq" instead of "op =". INCOMPATIBILITY.


*** ML ***

* Antiquotation @{assert} inlines a function bool -> unit that raises
Fail if the argument is false. Due to inlining the source position of
failed assertions is included in the error output.
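
For example (a minimal sketch):

  ML {* @{assert} (length [1, 2, 3] = 3) *}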

* Discontinued antiquotation @{theory_ref}, which is obsolete since ML
text is in practice always evaluated with a stable theory checkpoint.
Minor INCOMPATIBILITY, use (Theory.check_thy @{theory}) instead.

* Antiquotation @{theory A} refers to theory A from the ancestry of
the current context, not any accidental theory loader state as before.
Potential INCOMPATIBILITY, subtle change in semantics.

* Syntax.pretty_priority (default 0) configures the required priority
of pretty-printed output and thus affects insertion of parentheses.

* Syntax.default_root (default "any") configures the inner syntax
category (nonterminal symbol) for parsing of terms.

* Former exception Library.UnequalLengths now coincides with
ListPair.UnequalLengths.

* Renamed structure MetaSimplifier to Raw_Simplifier. Note that the
main functionality is provided by structure Simplifier.

* Renamed raw "explode" function to "raw_explode" to emphasize its
meaning. Note that internally to Isabelle, Symbol.explode is used in
almost all situations.

* Discontinued obsolete function sys_error and exception SYS_ERROR.
See implementation manual for further details on exceptions in
Isabelle/ML.

* Renamed setmp_noncritical to Unsynchronized.setmp to emphasize its
meaning.

* Renamed structure PureThy to Pure_Thy and moved most of its
operations to structure Global_Theory, to emphasize that this is
rarely-used global-only stuff.

* Discontinued Output.debug. Minor INCOMPATIBILITY, use plain writeln
instead (or tracing for high-volume output).

* Configuration option show_question_marks only affects regular pretty
printing of types and terms, not raw Term.string_of_vname.

* ML_Context.thm and ML_Context.thms are no longer pervasive. Rare
INCOMPATIBILITY, superseded by static antiquotations @{thm} and
@{thms} for most purposes.

* ML structure Unsynchronized is never opened, not even in Isar
interaction mode as before. Old Unsynchronized.set etc. have been
discontinued -- use plain := instead. This should be *rare* anyway,
since modern tools always work via official context data, notably
configuration options.

* Parallel and asynchronous execution requires special care concerning
interrupts. Structure Exn provides some convenience functions that
avoid working directly with raw Interrupt. User code must not absorb
interrupts -- intermediate handling (for cleanup etc.) needs to be
followed by re-raising of the original exception. Another common
source of mistakes are "handle _" patterns, which make the meaning of
the program subject to physical effects of the environment.



New in Isabelle2009-2 (June 2010)
---------------------------------

*** General ***

* Authentic syntax for *all* logical entities (type classes, type
constructors, term constants): provides simple and robust
correspondence between formal entities and concrete syntax. Within the
parse tree / AST representations, "constants" are decorated by their
category (class, type, const) and spelled out explicitly with their
full internal name. Substantial INCOMPATIBILITY concerning low-level
syntax declarations and translations (translation rules and
translation functions in ML). Some hints on upgrading:

  - Many existing uses of 'syntax' and 'translations' can be replaced
    by more modern 'type_notation', 'notation' and 'abbreviation',
    which are independent of this issue.

  - 'translations' require markup within the AST; the term syntax
    provides the following special forms:

      CONST c   -- produces syntax version of constant c from context
      XCONST c  -- literally c, checked as constant from context
      c         -- literally c, if declared by 'syntax'

    Plain identifiers are treated as AST variables -- occasionally the
    system indicates accidental variables via the error "rhs contains
    extra variables".

    Type classes and type constructors are marked according to their
    concrete syntax. Some old translations rules need to be written
    for the "type" category, using type constructor application
    instead of pseudo-term application of the default category
    "logic".

  - 'parse_translation' etc. in ML may use the following
    antiquotations:

      @{class_syntax c}  -- type class c within parse tree / AST
      @{term_syntax c}   -- type constructor c within parse tree / AST
      @{const_syntax c}  -- ML version of "CONST c" above
      @{syntax_const c}  -- literally c (checked wrt. 'syntax' declarations)

  - Literal types within 'typed_print_translations', i.e. those *not*
    represented as pseudo-terms are represented verbatim. Use @{class
    c} or @{type_name c} here instead of the above syntax
    antiquotations.

Note that old non-authentic syntax was based on unqualified base
names, so all of the above "constant" names would coincide. Recall
that 'print_syntax' and ML_command "set Syntax.trace_ast" help to
diagnose syntax problems.
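
For example, a minimal sketch of a translation rule with the required
CONST markup (the constant and syntax form are illustrative):

  definition sq :: "nat => nat" where "sq n = n * n"

  syntax
    "_sq" :: "nat => nat"  ("SQ _")
  translations
    "SQ n" == "CONST sq n"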

* Type constructors admit general mixfix syntax, not just infix.

* Concrete syntax may be attached to local entities without a proof
body, too. This works via regular mixfix annotations for 'fix', 'def',
'obtain' etc. or via the explicit 'write' command, which is similar to
the 'notation' command in theory specifications.

* Discontinued unnamed infix syntax (legacy feature for many years) --
need to specify constant name and syntax separately. Internal ML
datatype constructors have been renamed from InfixName to Infix etc.
Minor INCOMPATIBILITY.

* Schematic theorem statements need to be explicitly marked up as
such, via commands 'schematic_lemma', 'schematic_theorem',
'schematic_corollary'. Thus the relevance of the proof is made
syntactically clear, which impacts performance in a parallel or
asynchronous interactive environment. Minor INCOMPATIBILITY.

* Use of cumulative prems via "!" in some proof methods has been
discontinued (old legacy feature).

* References 'trace_simp' and 'debug_simp' have been replaced by
configuration options stored in the context. Enabling tracing (the
case of debugging is similar) in proofs works via

  using [[trace_simp = true]]

Tracing is then active for all invocations of the simplifier in
subsequent goal refinement steps. Tracing may also still be enabled or
disabled via the ProofGeneral settings menu.

* Separate commands 'hide_class', 'hide_type', 'hide_const',
'hide_fact' replace the former 'hide' KIND command. Minor
INCOMPATIBILITY.

* Improved parallelism of proof term normalization: usedir -p2 -q0 is
more efficient than combinations with -q1 or -q2.


*** Pure ***

* Proofterms record type-class reasoning explicitly, using the
"unconstrain" operation internally. This eliminates all sort
constraints from a theorem and proof, introducing explicit
OFCLASS-premises. On the proof term level, this operation is
automatically applied at theorem boundaries, such that closed proofs
are always free of sort constraints. INCOMPATIBILITY for tools that
inspect proof terms.
* Local theory specifications may depend on extra type variables that are not present in the result type -- arguments TYPE('a) :: 'a itself are added internally. For example:

  definition unitary :: bool where "unitary = (ALL (x::'a) y. x = y)"

* Predicates of locales introduced by classes carry a mandatory "class" prefix. INCOMPATIBILITY.

* Vacuous class specifications observe default sort. INCOMPATIBILITY.

* Old 'axclass' command has been discontinued. INCOMPATIBILITY, use 'class' instead.

* Command 'code_reflect' allows to incorporate generated ML code into the runtime environment; replaces immature code_datatype antiquotation. INCOMPATIBILITY.

* Code generator: simple concept for abstract datatypes obeying invariants.

* Code generator: details of internal data cache have no impact on the user space functionality any longer.

* Methods "unfold_locales" and "intro_locales" ignore non-locale subgoals. This is more appropriate for interpretations with 'where'. INCOMPATIBILITY.

* Command 'example_proof' opens an empty proof body. This allows to experiment with Isar, without producing any persistent result -- see the example at the end of this group of entries.

* Commands 'type_notation' and 'no_type_notation' declare type syntax within a local theory context, with explicit checking of the constructors involved (in contrast to the raw 'syntax' versions).

* Commands 'types' and 'typedecl' now work within a local theory context -- without introducing dependencies on parameters or assumptions, which is not possible in Isabelle/Pure.

* Command 'defaultsort' has been renamed to 'default_sort', it works within a local theory context. Minor INCOMPATIBILITY.

*** HOL ***

* Command 'typedef' now works within a local theory context -- without introducing dependencies on parameters or assumptions, which is not possible in Isabelle/Pure/HOL. Note that the logical environment may contain multiple interpretations of local typedefs (with different non-emptiness proofs), even in a global theory context.

* New package for quotient types. Commands 'quotient_type' and 'quotient_definition' may be used for defining types and constants by quotient constructions. An example is the type of integers created by quotienting pairs of natural numbers:

  fun intrel :: "(nat * nat) => (nat * nat) => bool"
  where
    "intrel (x, y) (u, v) = (x + v = u + y)"

  quotient_type int = "nat * nat" / intrel
    by (auto simp add: equivp_def expand_fun_eq)

  quotient_definition
    "0::int" is "(0::nat, 0::nat)"

The method "lifting" can be used to lift theorems from the underlying "raw" type to the quotient type. The example src/HOL/Quotient_Examples/FSet.thy includes such a quotient construction and provides a reasoning infrastructure for finite sets.

* Renamed Library/Quotient.thy to Library/Quotient_Type.thy to avoid clash with new theory Quotient in Main HOL.

* Moved the SMT binding into the main HOL session, eliminating separate HOL-SMT session.

* List membership infix mem operation is only an input abbreviation. INCOMPATIBILITY.

* Theory Library/Word.thy has been removed. Use library Word/Word.thy for future developments; former Library/Word.thy is still present in the AFP entry RSAPPS.

* Theorem Int.int_induct renamed to Int.int_of_nat_induct and is no longer shadowed. INCOMPATIBILITY.

* Dropped theorem duplicate comp_arith; use semiring_norm instead. INCOMPATIBILITY.

* Dropped theorem RealPow.real_sq_order; use power2_le_imp_le instead. INCOMPATIBILITY.

* Dropped normalizing_semiring etc; use the facts in semiring classes instead. INCOMPATIBILITY.
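As promised above, a throw-away illustration of 'example_proof' (a sketch, assuming the body is concluded by a regular 'qed'):

  example_proof
    fix x y :: nat
    assume "x = y"
    then have "y = x" by (rule sym)
  qed

Nothing of this survives the final 'qed'; the command is only meant for experimentation.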
* Dropped several real-specific versions of lemmas about floor and ceiling; use the generic lemmas from theory "Archimedean_Field" instead. INCOMPATIBILITY. floor_number_of_eq ~> floor_number_of le_floor_eq_number_of ~> number_of_le_floor le_floor_eq_zero ~> zero_le_floor le_floor_eq_one ~> one_le_floor floor_less_eq_number_of ~> floor_less_number_of floor_less_eq_zero ~> floor_less_zero floor_less_eq_one ~> floor_less_one less_floor_eq_number_of ~> number_of_less_floor less_floor_eq_zero ~> zero_less_floor less_floor_eq_one ~> one_less_floor floor_le_eq_number_of ~> floor_le_number_of floor_le_eq_zero ~> floor_le_zero floor_le_eq_one ~> floor_le_one floor_subtract_number_of ~> floor_diff_number_of floor_subtract_one ~> floor_diff_one ceiling_number_of_eq ~> ceiling_number_of ceiling_le_eq_number_of ~> ceiling_le_number_of ceiling_le_zero_eq ~> ceiling_le_zero ceiling_le_eq_one ~> ceiling_le_one less_ceiling_eq_number_of ~> number_of_less_ceiling less_ceiling_eq_zero ~> zero_less_ceiling less_ceiling_eq_one ~> one_less_ceiling ceiling_less_eq_number_of ~> ceiling_less_number_of ceiling_less_eq_zero ~> ceiling_less_zero ceiling_less_eq_one ~> ceiling_less_one le_ceiling_eq_number_of ~> number_of_le_ceiling le_ceiling_eq_zero ~> zero_le_ceiling le_ceiling_eq_one ~> one_le_ceiling ceiling_subtract_number_of ~> ceiling_diff_number_of ceiling_subtract_one ~> ceiling_diff_one * Theory "Finite_Set": various folding_XXX locales facilitate the application of the various fold combinators on finite sets. * Library theory "RBT" renamed to "RBT_Impl"; new library theory "RBT" provides abstract red-black tree type which is backed by "RBT_Impl" as implementation. INCOMPATIBILITY. * Theory Library/Coinductive_List has been removed -- superseded by AFP/thys/Coinductive. * Theory PReal, including the type "preal" and related operations, has been removed. INCOMPATIBILITY. * Real: new development using Cauchy Sequences. * Split off theory "Big_Operators" containing setsum, setprod, Inf_fin, Sup_fin, Min, Max from theory Finite_Set. INCOMPATIBILITY. * Theory "Rational" renamed to "Rat", for consistency with "Nat", "Int" etc. INCOMPATIBILITY. * Constant Rat.normalize needs to be qualified. INCOMPATIBILITY. * New set of rules "ac_simps" provides combined assoc / commute rewrites for all interpretations of the appropriate generic locales. * Renamed theory "OrderedGroup" to "Groups" and split theory "Ring_and_Field" into theories "Rings" and "Fields"; for more appropriate and more consistent names suitable for name prefixes within the HOL theories. INCOMPATIBILITY. * Some generic constants have been put to appropriate theories: - less_eq, less: Orderings - zero, one, plus, minus, uminus, times, abs, sgn: Groups - inverse, divide: Rings INCOMPATIBILITY. 
* More consistent naming of type classes involving orderings (and lattices): lower_semilattice ~> semilattice_inf upper_semilattice ~> semilattice_sup dense_linear_order ~> dense_linorder pordered_ab_group_add ~> ordered_ab_group_add pordered_ab_group_add_abs ~> ordered_ab_group_add_abs pordered_ab_semigroup_add ~> ordered_ab_semigroup_add pordered_ab_semigroup_add_imp_le ~> ordered_ab_semigroup_add_imp_le pordered_cancel_ab_semigroup_add ~> ordered_cancel_ab_semigroup_add pordered_cancel_comm_semiring ~> ordered_cancel_comm_semiring pordered_cancel_semiring ~> ordered_cancel_semiring pordered_comm_monoid_add ~> ordered_comm_monoid_add pordered_comm_ring ~> ordered_comm_ring pordered_comm_semiring ~> ordered_comm_semiring pordered_ring ~> ordered_ring pordered_ring_abs ~> ordered_ring_abs pordered_semiring ~> ordered_semiring ordered_ab_group_add ~> linordered_ab_group_add ordered_ab_semigroup_add ~> linordered_ab_semigroup_add ordered_cancel_ab_semigroup_add ~> linordered_cancel_ab_semigroup_add ordered_comm_semiring_strict ~> linordered_comm_semiring_strict ordered_field ~> linordered_field ordered_field_no_lb ~> linordered_field_no_lb ordered_field_no_ub ~> linordered_field_no_ub ordered_field_dense_linear_order ~> dense_linordered_field ordered_idom ~> linordered_idom ordered_ring ~> linordered_ring ordered_ring_le_cancel_factor ~> linordered_ring_le_cancel_factor ordered_ring_less_cancel_factor ~> linordered_ring_less_cancel_factor ordered_ring_strict ~> linordered_ring_strict ordered_semidom ~> linordered_semidom ordered_semiring ~> linordered_semiring ordered_semiring_1 ~> linordered_semiring_1 ordered_semiring_1_strict ~> linordered_semiring_1_strict ordered_semiring_strict ~> linordered_semiring_strict The following slightly odd type classes have been moved to a separate theory Library/Lattice_Algebras: lordered_ab_group_add ~> lattice_ab_group_add lordered_ab_group_add_abs ~> lattice_ab_group_add_abs lordered_ab_group_add_meet ~> semilattice_inf_ab_group_add lordered_ab_group_add_join ~> semilattice_sup_ab_group_add lordered_ring ~> lattice_ring INCOMPATIBILITY. * Refined field classes: - classes division_ring_inverse_zero, field_inverse_zero, linordered_field_inverse_zero include rule inverse 0 = 0 -- subsumes former division_by_zero class; - numerous lemmas have been ported from field to division_ring. INCOMPATIBILITY. * Refined algebra theorem collections: - dropped theorem group group_simps, use algebra_simps instead; - dropped theorem group ring_simps, use field_simps instead; - proper theorem collection field_simps subsumes former theorem groups field_eq_simps and field_simps; - dropped lemma eq_minus_self_iff which is a duplicate for equal_neg_zero. INCOMPATIBILITY. * Theory Finite_Set and List: some lemmas have been generalized from sets to lattices: fun_left_comm_idem_inter ~> fun_left_comm_idem_inf fun_left_comm_idem_union ~> fun_left_comm_idem_sup inter_Inter_fold_inter ~> inf_Inf_fold_inf union_Union_fold_union ~> sup_Sup_fold_sup Inter_fold_inter ~> Inf_fold_inf Union_fold_union ~> Sup_fold_sup inter_INTER_fold_inter ~> inf_INFI_fold_inf union_UNION_fold_union ~> sup_SUPR_fold_sup INTER_fold_inter ~> INFI_fold_inf UNION_fold_union ~> SUPR_fold_sup * Theory "Complete_Lattice": lemmas top_def and bot_def have been replaced by the more convenient lemmas Inf_empty and Sup_empty. Dropped lemmas Inf_insert_simp and Sup_insert_simp, which are subsumed by Inf_insert and Sup_insert. Lemmas Inf_UNIV and Sup_UNIV replace former Inf_Univ and Sup_Univ. 
Lemmas inf_top_right and sup_bot_right subsume inf_top and sup_bot respectively. INCOMPATIBILITY.

* Reorganized theory Multiset: swapped notation of pointwise and multiset order:

  - pointwise ordering is instance of class order with standard syntax <= and <;
  - multiset ordering has syntax <=# and <#; partial order properties are provided by means of interpretation with prefix multiset_order;
  - less duplication, less historical organization of sections, conversion from association lists to multisets, rudimentary code generation;
  - use insert_DiffM2 [symmetric] instead of elem_imp_eq_diff_union, if needed.

  Renamed:
    multiset_eq_conv_count_eq ~> multiset_ext_iff
    multi_count_ext ~> multiset_ext
    diff_union_inverse2 ~> diff_union_cancelR

INCOMPATIBILITY.

* Theory Permutation: replaced local "remove" by List.remove1.

* Code generation: ML and OCaml code is decorated with signatures.

* Theory List: added transpose.

* Library/Nat_Bijection.thy is a collection of bijective functions between nat and other types, which supersedes the older libraries Library/Nat_Int_Bij.thy and HOLCF/NatIso.thy. INCOMPATIBILITY.

  Constants:
    Nat_Int_Bij.nat2_to_nat ~> prod_encode
    Nat_Int_Bij.nat_to_nat2 ~> prod_decode
    Nat_Int_Bij.int_to_nat_bij ~> int_encode
    Nat_Int_Bij.nat_to_int_bij ~> int_decode
    Countable.pair_encode ~> prod_encode
    NatIso.prod2nat ~> prod_encode
    NatIso.nat2prod ~> prod_decode
    NatIso.sum2nat ~> sum_encode
    NatIso.nat2sum ~> sum_decode
    NatIso.list2nat ~> list_encode
    NatIso.nat2list ~> list_decode
    NatIso.set2nat ~> set_encode
    NatIso.nat2set ~> set_decode

  Lemmas:
    Nat_Int_Bij.bij_nat_to_int_bij ~> bij_int_decode
    Nat_Int_Bij.nat2_to_nat_inj ~> inj_prod_encode
    Nat_Int_Bij.nat2_to_nat_surj ~> surj_prod_encode
    Nat_Int_Bij.nat_to_nat2_inj ~> inj_prod_decode
    Nat_Int_Bij.nat_to_nat2_surj ~> surj_prod_decode
    Nat_Int_Bij.i2n_n2i_id ~> int_encode_inverse
    Nat_Int_Bij.n2i_i2n_id ~> int_decode_inverse
    Nat_Int_Bij.surj_nat_to_int_bij ~> surj_int_encode
    Nat_Int_Bij.surj_int_to_nat_bij ~> surj_int_decode
    Nat_Int_Bij.inj_nat_to_int_bij ~> inj_int_encode
    Nat_Int_Bij.inj_int_to_nat_bij ~> inj_int_decode
    Nat_Int_Bij.bij_nat_to_int_bij ~> bij_int_encode
    Nat_Int_Bij.bij_int_to_nat_bij ~> bij_int_decode

* Sledgehammer:
  - Renamed ATP commands:
      atp_info ~> sledgehammer running_atps
      atp_kill ~> sledgehammer kill_atps
      atp_messages ~> sledgehammer messages
      atp_minimize ~> sledgehammer minimize
      print_atps ~> sledgehammer available_atps
    INCOMPATIBILITY.
  - Added user's manual ("isabelle doc sledgehammer").
  - Added option syntax and "sledgehammer_params" to customize Sledgehammer's behavior. See the manual for details.
  - Modified the Isar proof reconstruction code so that it produces direct proofs rather than proofs by contradiction. (This feature is still experimental.)
  - Made Isar proof reconstruction work for SPASS, remote ATPs, and in full-typed mode.
  - Added support for TPTP syntax for SPASS via the "spass_tptp" ATP.

* Nitpick:
  - Added and implemented "binary_ints" and "bits" options.
  - Added "std" option and implemented support for nonstandard models.
  - Added and implemented "finitize" option to improve the precision of infinite datatypes based on a monotonicity analysis.
  - Added support for quotient types.
  - Added support for "specification" and "ax_specification" constructs.
  - Added support for local definitions (for "function" and "termination" proofs).
  - Added support for term postprocessors.
  - Optimized "Multiset.multiset" and "FinFun.finfun".
  - Improved efficiency of "destroy_constrs" optimization.
- Fixed soundness bugs related to "destroy_constrs" optimization and record getters. - Fixed soundness bug related to higher-order constructors. - Fixed soundness bug when "full_descrs" is enabled. - Improved precision of set constructs. - Added "atoms" option. - Added cache to speed up repeated Kodkod invocations on the same problems. - Renamed "MiniSatJNI", "zChaffJNI", "BerkMinAlloy", and "SAT4JLight" to "MiniSat_JNI", "zChaff_JNI", "BerkMin_Alloy", and "SAT4J_Light". INCOMPATIBILITY. - Removed "skolemize", "uncurry", "sym_break", "flatten_prop", "sharing_depth", and "show_skolems" options. INCOMPATIBILITY. - Removed "nitpick_intro" attribute. INCOMPATIBILITY. * Method "induct" now takes instantiations of the form t, where t is not a variable, as a shorthand for "x == t", where x is a fresh variable. If this is not intended, t has to be enclosed in parentheses. By default, the equalities generated by definitional instantiations are pre-simplified, which may cause parameters of inductive cases to disappear, or may even delete some of the inductive cases. Use "induct (no_simp)" instead of "induct" to restore the old behaviour. The (no_simp) option is also understood by the "cases" and "nominal_induct" methods, which now perform pre-simplification, too. INCOMPATIBILITY. *** HOLCF *** * Variable names in lemmas generated by the domain package have changed; the naming scheme is now consistent with the HOL datatype package. Some proof scripts may be affected, INCOMPATIBILITY. * The domain package no longer defines the function "foo_copy" for recursive domain "foo". The reach lemma is now stated directly in terms of "foo_take". Lemmas and proofs that mention "foo_copy" must be reformulated in terms of "foo_take", INCOMPATIBILITY. * Most definedness lemmas generated by the domain package (previously of the form "x ~= UU ==> foo$x ~= UU") now have an if-and-only-if form like "foo$x = UU <-> x = UU", which works better as a simp rule. Proofs that used definedness lemmas as intro rules may break, potential INCOMPATIBILITY. * Induction and casedist rules generated by the domain package now declare proper case_names (one called "bottom", and one named for each constructor). INCOMPATIBILITY. * For mutually-recursive domains, separate "reach" and "take_lemma" rules are generated for each domain, INCOMPATIBILITY. foo_bar.reach ~> foo.reach bar.reach foo_bar.take_lemmas ~> foo.take_lemma bar.take_lemma * Some lemmas generated by the domain package have been renamed for consistency with the datatype package, INCOMPATIBILITY. foo.ind ~> foo.induct foo.finite_ind ~> foo.finite_induct foo.coind ~> foo.coinduct foo.casedist ~> foo.exhaust foo.exhaust ~> foo.nchotomy * For consistency with other definition packages, the fixrec package now generates qualified theorem names, INCOMPATIBILITY. foo_simps ~> foo.simps foo_unfold ~> foo.unfold foo_induct ~> foo.induct * The "fixrec_simp" attribute has been removed. The "fixrec_simp" method and internal fixrec proofs now use the default simpset instead. INCOMPATIBILITY. * The "contlub" predicate has been removed. Proof scripts should use lemma contI2 in place of monocontlub2cont, INCOMPATIBILITY. * The "admw" predicate has been removed, INCOMPATIBILITY. * The constants cpair, cfst, and csnd have been removed in favor of Pair, fst, and snd from Isabelle/HOL, INCOMPATIBILITY. 
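For example, given a hypothetical lazy list domain

  domain 'a llist = LNil | LCons (lazy 'a) (lazy "'a llist")

the generated rules now follow the datatype naming scheme described above: llist.induct, llist.finite_induct, llist.coinduct, llist.exhaust and llist.nchotomy, with the reach lemma stated in terms of llist.take.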
*** ML *** * Antiquotations for basic formal entities: @{class NAME} -- type class @{class_syntax NAME} -- syntax representation of the above @{type_name NAME} -- logical type @{type_abbrev NAME} -- type abbreviation @{nonterminal NAME} -- type of concrete syntactic category @{type_syntax NAME} -- syntax representation of any of the above @{const_name NAME} -- logical constant (INCOMPATIBILITY) @{const_abbrev NAME} -- abbreviated constant @{const_syntax NAME} -- syntax representation of any of the above * Antiquotation @{syntax_const NAME} ensures that NAME refers to a raw syntax constant (cf. 'syntax' command). * Antiquotation @{make_string} inlines a function to print arbitrary values similar to the ML toplevel. The result is compiler dependent and may fall back on "?" in certain situations. * Diagnostic commands 'ML_val' and 'ML_command' may refer to antiquotations @{Isar.state} and @{Isar.goal}. This replaces impure Isar.state() and Isar.goal(), which belong to the old TTY loop and do not work with the asynchronous Isar document model. * Configuration options now admit dynamic default values, depending on the context or even global references. * SHA1.digest digests strings according to SHA-1 (see RFC 3174). It uses an efficient external library if available (for Poly/ML). * Renamed some important ML structures, while keeping the old names for some time as aliases within the structure Legacy: OuterKeyword ~> Keyword OuterLex ~> Token OuterParse ~> Parse OuterSyntax ~> Outer_Syntax PrintMode ~> Print_Mode SpecParse ~> Parse_Spec ThyInfo ~> Thy_Info ThyLoad ~> Thy_Load ThyOutput ~> Thy_Output TypeInfer ~> Type_Infer Note that "open Legacy" simplifies porting of sources, but forgetting to remove it again will complicate porting again in the future. * Most operations that refer to a global context are named accordingly, e.g. Simplifier.global_context or ProofContext.init_global. There are some situations where a global context actually works, but under normal circumstances one needs to pass the proper local context through the code! * Discontinued old TheoryDataFun with its copy/init operation -- data needs to be pure. Functor Theory_Data_PP retains the traditional Pretty.pp argument to merge, which is absent in the standard Theory_Data version. * Sorts.certify_sort and derived "cert" operations for types and terms no longer minimize sorts. Thus certification at the boundary of the inference kernel becomes invariant under addition of class relations, which is an important monotonicity principle. Sorts are now minimized in the syntax layer only, at the boundary between the end-user and the system. Subtle INCOMPATIBILITY, may have to use Sign.minimize_sort explicitly in rare situations. * Renamed old-style Drule.standard to Drule.export_without_context, to emphasize that this is in no way a standard operation. INCOMPATIBILITY. * Subgoal.FOCUS (and variants): resulting goal state is normalized as usual for resolution. Rare INCOMPATIBILITY. * Renamed varify/unvarify operations to varify_global/unvarify_global to emphasize that these only work in a global situation (which is quite rare). * Curried take and drop in library.ML; negative length is interpreted as infinity (as in chop). Subtle INCOMPATIBILITY. * Proof terms: type substitutions on proof constants now use canonical order of type variables. INCOMPATIBILITY for tools working with proof terms. * Raw axioms/defs may no longer carry sort constraints, and raw defs may no longer carry premises. 
User-level specifications are transformed accordingly by Thm.add_axiom/add_def.

*** System ***

* Discontinued special HOL_USEDIR_OPTIONS for the main HOL image; ISABELLE_USEDIR_OPTIONS applies uniformly to all sessions. Note that proof terms are enabled unconditionally in the new HOL-Proofs image.

* Discontinued old ISABELLE and ISATOOL environment settings (legacy feature since Isabelle2009). Use ISABELLE_PROCESS and ISABELLE_TOOL, respectively.

* Old lib/scripts/polyml-platform is superseded by the ISABELLE_PLATFORM setting variable, which defaults to the 32 bit variant, even on a 64 bit machine. The following example setting prefers 64 bit if available:

  ML_PLATFORM="${ISABELLE_PLATFORM64:-$ISABELLE_PLATFORM}"

* The preliminary Isabelle/jEdit application demonstrates the emerging Isabelle/Scala layer for advanced prover interaction and integration. See src/Tools/jEdit or "isabelle jedit" provided by the properly built component.

* "IsabelleText" is a Unicode font derived from Bitstream Vera Mono and Bluesky TeX fonts. It provides the usual Isabelle symbols, similar to the default assignment of the document preparation system (cf. isabellesym.sty). The Isabelle/Scala class Isabelle_System provides some operations for direct access to the font without asking the user for manual installation.


New in Isabelle2009-1 (December 2009)
-------------------------------------

*** General ***

* Discontinued old form of "escaped symbols" such as \\. Only one backslash should be used, even in ML sources.

*** Pure ***

* Locale interpretation propagates mixins along the locale hierarchy. Currently, the only available mixins are the equations used to map local definitions to terms of the target domain of an interpretation.

* Reactivated diagnostic command 'print_interps'. Use "print_interps loc" to print all interpretations of locale "loc" in the theory. Interpretations in proofs are not shown.

* Thoroughly revised locales tutorial. New section on conditional interpretation.

* On instantiation of classes, remaining undefined class parameters are formally declared. INCOMPATIBILITY.

*** Document preparation ***

* New generalized style concept for printing terms: @{foo (style) ...} instead of @{foo_style style ...} (old form is still retained for backward compatibility). Styles can be also applied for antiquotations prop, term_type and typeof.

*** HOL ***

* New proof method "smt" for a combination of first-order logic with equality, linear and nonlinear (natural/integer/real) arithmetic, and fixed-size bitvectors; there is also basic support for higher-order features (esp. lambda abstractions). It is an incomplete decision procedure based on external SMT solvers using the oracle mechanism; for the SMT solver Z3, this method is proof-producing. Certificates are provided to avoid calling the external solvers solely for re-checking proofs. Due to a remote SMT service there is no need for installing SMT solvers locally. See src/HOL/SMT.

* New commands to load and prove verification conditions generated by the Boogie program verifier or derived systems (e.g. the Verifying C Compiler (VCC) or Spec#). See src/HOL/Boogie.

* New counterexample generator tool 'nitpick' based on the Kodkod relational model finder. See src/HOL/Tools/Nitpick and src/HOL/Nitpick_Examples.

* New commands 'code_pred' and 'values' to invoke the predicate compiler and to enumerate values of inductive predicates -- see the example below.

* A tabled implementation of the reflexive transitive closure.
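A small sketch of 'code_pred' and 'values' as announced above; the predicate "even" and the enumeration limit are illustrative only:

  inductive even :: "nat => bool" where
    "even 0"
  | "even n ==> even (Suc (Suc n))"

  code_pred even .

  values 10 "{n. even n}"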
* New implementation of quickcheck uses generic code generator; default generators are provided for all suitable HOL types, records and datatypes. Old quickcheck can be re-activated by importing theory Library/SML_Quickcheck.

* New testing tool Mirabelle for automated proof tools. Applies several tools and tactics like sledgehammer, metis, or quickcheck, to every proof step in a theory. To be used in batch mode via the "mirabelle" utility.

* New proof method "sos" (sum of squares) for nonlinear real arithmetic (originally due to John Harrison). It requires theory Library/Sum_Of_Squares. It is not a complete decision procedure but works well in practice on quantifier-free real arithmetic with +, -, *, ^, =, <= and <, i.e. boolean combinations of equalities and inequalities between polynomials. It makes use of external semidefinite programming solvers. Method "sos" generates a certificate that can be pasted into the proof thus avoiding the need to call an external tool every time the proof is checked. See src/HOL/Library/Sum_Of_Squares.

* New method "linarith" invokes the existing linear arithmetic decision procedure only.

* New command 'atp_minimal' reduces the result produced by Sledgehammer.

* New Sledgehammer option "Full Types" in Proof General settings menu. Causes full type information to be output to the ATPs. This slows ATPs down considerably but eliminates a source of unsound "proofs" that fail later.

* New method "metisFT": A version of metis that uses full type information in order to avoid failures of proof reconstruction.

* New evaluator "approximate" approximates a real-valued term using the same method as the approximation method.

* Method "approximate" now supports arithmetic expressions as boundaries of intervals and implements interval splitting and Taylor series expansion.

* ML antiquotation @{code_datatype} inserts definition of a datatype generated by the code generator; e.g. see src/HOL/Predicate.thy.

* New theory SupInf of the supremum and infimum operators for sets of reals.

* New theory Probability, which contains a development of measure theory, eventually leading to Lebesgue integration and probability.

* Extended Multivariate Analysis to include differentiation and Brouwer's fixpoint theorem.

* Reorganization of number theory, INCOMPATIBILITY:
  - new number theory development for nat and int, in theories Divides and GCD as well as in new session Number_Theory
  - some constants and facts now suffixed with _nat and _int accordingly
  - former session NumberTheory now named Old_Number_Theory, including theories Legacy_GCD and Primes (prefer Number_Theory if possible)
  - moved theory Pocklington from src/HOL/Library to src/HOL/Old_Number_Theory

* Theory GCD includes functions Gcd/GCD and Lcm/LCM for the gcd and lcm of finite and infinite sets. It is shown that they form a complete lattice.

* Class semiring_div requires superclass no_zero_divisors and proof of div_mult_mult1; theorems div_mult_mult1, div_mult_mult2 and div_mult_mult1_if have been generalized to class semiring_div, subsuming former theorems zdiv_zmult_zmult1, zdiv_zmult_zmult1_if and zdiv_zmult_zmult2. div_mult_mult1 is now [simp] by default. INCOMPATIBILITY.
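For instance, since div_mult_mult1 is a default simp rule now, cancelling a non-zero common factor should work by plain simplification; a sketch:

  lemma fixes a b :: nat
    shows "(2 * a) div (2 * b) = a div b"
    by simp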
* Refinements to lattice classes and sets: - less default intro/elim rules in locale variant, more default intro/elim rules in class variant: more uniformity - lemma ge_sup_conv renamed to le_sup_iff, in accordance with le_inf_iff - dropped lemma alias inf_ACI for inf_aci (same for sup_ACI and sup_aci) - renamed ACI to inf_sup_aci - new class "boolean_algebra" - class "complete_lattice" moved to separate theory "Complete_Lattice"; corresponding constants (and abbreviations) renamed and with authentic syntax: Set.Inf ~> Complete_Lattice.Inf Set.Sup ~> Complete_Lattice.Sup Set.INFI ~> Complete_Lattice.INFI Set.SUPR ~> Complete_Lattice.SUPR Set.Inter ~> Complete_Lattice.Inter Set.Union ~> Complete_Lattice.Union Set.INTER ~> Complete_Lattice.INTER Set.UNION ~> Complete_Lattice.UNION - authentic syntax for Set.Pow Set.image - mere abbreviations: Set.empty (for bot) Set.UNIV (for top) Set.inter (for inf, formerly Set.Int) Set.union (for sup, formerly Set.Un) Complete_Lattice.Inter (for Inf) Complete_Lattice.Union (for Sup) Complete_Lattice.INTER (for INFI) Complete_Lattice.UNION (for SUPR) - object-logic definitions as far as appropriate INCOMPATIBILITY. Care is required when theorems Int_subset_iff or Un_subset_iff are explicitly deleted as default simp rules; then also their lattice counterparts le_inf_iff and le_sup_iff have to be deleted to achieve the desired effect. * Rules inf_absorb1, inf_absorb2, sup_absorb1, sup_absorb2 are no simp rules by default any longer; the same applies to min_max.inf_absorb1 etc. INCOMPATIBILITY. * Rules sup_Int_eq and sup_Un_eq are no longer declared as pred_set_conv by default. INCOMPATIBILITY. * Power operations on relations and functions are now one dedicated constant "compow" with infix syntax "^^". Power operation on multiplicative monoids retains syntax "^" and is now defined generic in class power. INCOMPATIBILITY. * Relation composition "R O S" now has a more standard argument order: "R O S = {(x, z). EX y. (x, y) : R & (y, z) : S}". INCOMPATIBILITY, rewrite propositions with "S O R" --> "R O S". Proofs may occasionally break, since the O_assoc rule was not rewritten like this. Fix using O_assoc[symmetric]. The same applies to the curried version "R OO S". * Function "Inv" is renamed to "inv_into" and function "inv" is now an abbreviation for "inv_into UNIV". Lemmas are renamed accordingly. INCOMPATIBILITY. * Most rules produced by inductive and datatype package have mandatory prefixes. INCOMPATIBILITY. * Changed "DERIV_intros" to a dynamic fact, which can be augmented by the attribute of the same name. Each of the theorems in the list DERIV_intros assumes composition with an additional function and matches a variable to the derivative, which has to be solved by the Simplifier. Hence (auto intro!: DERIV_intros) computes the derivative of most elementary terms. Former Maclauren.DERIV_tac and Maclauren.deriv_tac should be replaced by (auto intro!: DERIV_intros). INCOMPATIBILITY. * Code generator attributes follow the usual underscore convention: code_unfold replaces code unfold code_post replaces code post etc. INCOMPATIBILITY. * Renamed methods: sizechange -> size_change induct_scheme -> induction_schema INCOMPATIBILITY. * Discontinued abbreviation "arbitrary" of constant "undefined". INCOMPATIBILITY, use "undefined" directly. * Renamed theorems: Suc_eq_add_numeral_1 -> Suc_eq_plus1 Suc_eq_add_numeral_1_left -> Suc_eq_plus1_left Suc_plus1 -> Suc_eq_plus1 *anti_sym -> *antisym* vector_less_eq_def -> vector_le_def INCOMPATIBILITY. 
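To illustrate the DERIV_intros mechanism described above, a sketch; the final arithmetic is left to auto's simplifier and may need auxiliary simp rules for more complex terms:

  lemma "DERIV (%x::real. x^2 + 3 * x) x :> 2 * x + 3"
    by (auto intro!: DERIV_intros)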
* Added theorem List.map_map as [simp]. Removed List.map_compose. INCOMPATIBILITY.

* Removed predicate "M hassize n" (<--> card M = n & finite M). INCOMPATIBILITY.

*** HOLCF ***

* Theory Representable defines a class "rep" of domains that are representable (via an ep-pair) in the universal domain type "udom". Instances are provided for all type constructors defined in HOLCF.

* The 'new_domain' command is a purely definitional version of the domain package, for representable domains. Syntax is identical to the old domain package. The 'new_domain' package also supports indirect recursion using previously-defined type constructors. See src/HOLCF/ex/New_Domain.thy for examples.

* Method "fixrec_simp" unfolds one step of a fixrec-defined constant on the left-hand side of an equation, and then performs simplification. Rewriting is done using rules declared with the "fixrec_simp" attribute. The "fixrec_simp" method is intended as a replacement for "fixpat"; see src/HOLCF/ex/Fixrec_ex.thy for examples.

* The pattern-match compiler in 'fixrec' can now handle constructors with HOL function types. Pattern-match combinators for the Pair constructor are pre-configured.

* The 'fixrec' package now produces better fixed-point induction rules for mutually-recursive definitions: Induction rules have conclusions of the form "P foo bar" instead of "P <foo, bar>".

* The constant "sq_le" (with infix syntax "<<" or "\<sqsubseteq>") has been renamed to "below". The name "below" now replaces "less" in many theorem names. (Legacy theorem names using "less" are still supported as well.)

* The 'fixrec' package now supports "bottom patterns". Bottom patterns can be used to generate strictness rules, or to make functions more strict (much like the bang-patterns supported by the Glasgow Haskell Compiler). See src/HOLCF/ex/Fixrec_ex.thy for examples.

*** ML ***

* Support for Poly/ML 5.3.0, with improved reporting of compiler errors and run-time exceptions, including detailed source positions.

* Structure Name_Space (formerly NameSpace) now manages uniquely identified entries, with some additional information such as source position, logical grouping etc.

* Theory and context data is now introduced by the simplified and modernized functors Theory_Data, Proof_Data, Generic_Data. Data needs to be pure, but the old TheoryDataFun for mutable data (with explicit copy operation) is still available for some time.

* Structure Synchronized (cf. src/Pure/Concurrent/synchronized.ML) provides a high-level programming interface to synchronized state variables with atomic update. This works via pure function application within a critical section -- its runtime should be as short as possible; beware of deadlocks if critical code is nested, either directly or indirectly via other synchronized variables!

* Structure Unsynchronized (cf. src/Pure/ML-Systems/unsynchronized.ML) wraps raw ML references, explicitly indicating their non-thread-safe behaviour. The Isar toplevel keeps this structure open, to accommodate Proof General as well as quick and dirty interactive experiments with references.

* PARALLEL_CHOICE and PARALLEL_GOALS provide basic support for parallel tactical reasoning.

* Tacticals Subgoal.FOCUS, Subgoal.FOCUS_PREMS, Subgoal.FOCUS_PARAMS are similar to SUBPROOF, but are slightly more flexible: only the specified parts of the subgoal are imported into the context, and the body tactic may introduce new subgoals and schematic variables.
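A minimal sketch of Subgoal.FOCUS; the tactic name is made up, and resolve_tac refers to the plain thm-list tactic of this era:

  ML {*
    (*solve the focused subgoal by one of its own premises*)
    fun prem_tac ctxt =
      Subgoal.FOCUS (fn {prems, ...} => resolve_tac prems 1) ctxt;
  *}

Such a tactic may then be invoked via apply (tactic {* prem_tac @{context} 1 *}).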
* Old tactical METAHYPS, which does not observe the proof context, has been renamed to Old_Goals.METAHYPS and awaits deletion. Use SUBPROOF or Subgoal.FOCUS etc.

* Renamed functor TableFun to Table, and GraphFun to Graph. (Since functors have their own ML name space there is no point in marking them separately.) Minor INCOMPATIBILITY.

* Renamed NamedThmsFun to Named_Thms. INCOMPATIBILITY.

* Renamed several structures FooBar to Foo_Bar. Occasional INCOMPATIBILITY.

* Operations of structure Skip_Proof no longer require quick_and_dirty mode, which avoids critical setmp.

* Eliminated old Attrib.add_attributes, Method.add_methods and related combinators for "args". INCOMPATIBILITY, need to use simplified Attrib/Method.setup introduced in Isabelle2009.

* Proper context for simpset_of, claset_of, clasimpset_of. May fall back on global_simpset_of, global_claset_of, global_clasimpset_of as last resort. INCOMPATIBILITY.

* Display.pretty_thm now requires a proper context (cf. former ProofContext.pretty_thm). May fall back on Display.pretty_thm_global or even Display.pretty_thm_without_context as last resort. INCOMPATIBILITY.

* Discontinued Display.pretty_ctyp/cterm etc. INCOMPATIBILITY, use Syntax.pretty_typ/term directly, preferably with proper context instead of global theory.

*** System ***

* Further fine tuning of parallel proof checking, scales up to 8 cores (max. speedup factor 5.0). See also Goal.parallel_proofs in ML and usedir option -q.

* Support for additional "Isabelle components" via etc/components, see also the system manual.

* The isabelle makeall tool now operates on all components with IsaMakefile, not just hardwired "logics".

* Removed "compress" option from isabelle-process and isabelle usedir; this is always enabled.

* Discontinued support for Poly/ML 4.x versions.

* Isabelle tool "wwwfind" provides web interface for 'find_theorems' on a given logic image. This requires the lighttpd webserver and is currently supported on Linux only.


New in Isabelle2009 (April 2009)
--------------------------------

*** General ***

* Simplified main Isabelle executables, with fewer surprises on case-insensitive file-systems (such as Mac OS).

  - The main Isabelle tool wrapper is now called "isabelle" instead of "isatool."

  - The former "isabelle" alias for "isabelle-process" has been removed (should rarely occur to regular users).

  - The former "isabelle-interface" and its alias "Isabelle" have been removed (interfaces are now regular Isabelle tools).

Within scripts and make files, the Isabelle environment variables ISABELLE_TOOL and ISABELLE_PROCESS replace old ISATOOL and ISABELLE, respectively. (The latter are still available as legacy feature.)

The old isabelle-interface wrapper could react in confusing ways if the interface was uninstalled or changed otherwise. Individual interface tool configuration is now more explicit, see also the Isabelle system manual. In particular, Proof General is now available via "isabelle emacs".

INCOMPATIBILITY, need to adapt derivative scripts. Users may need to purge installed copies of Isabelle executables and re-run "isabelle install -p ...", or use symlinks.

* The default for ISABELLE_HOME_USER is now ~/.isabelle instead of the old ~/isabelle, which was slightly non-standard and apt to cause surprises on case-insensitive file-systems (such as Mac OS).

INCOMPATIBILITY, need to move existing ~/isabelle/etc, ~/isabelle/heaps, ~/isabelle/browser_info to the new place. Special care is required when using older releases of Isabelle.
Note that ISABELLE_HOME_USER can be changed in Isabelle/etc/settings of any Isabelle distribution, in order to use the new ~/.isabelle uniformly.

* Proofs of fully specified statements are run in parallel on multi-core systems. A speedup factor of 2.5 to 3.2 can be expected on a regular 4-core machine, if the initial heap space is made reasonably large (cf. Poly/ML option -H). (Requires Poly/ML 5.2.1 or later.)

* The main reference manuals ("isar-ref", "implementation", and "system") have been updated and extended. Formally checked references as hyperlinks are now available uniformly.

*** Pure ***

* Complete re-implementation of locales. INCOMPATIBILITY in several respects. The most important changes are listed below. See the Tutorial on Locales ("locales" manual) for details.

  - In locale expressions, instantiation replaces renaming. Parameters must be declared in a for clause. To aid compatibility with previous parameter inheritance, in locale declarations, parameters that are not 'touched' (instantiation position "_" or omitted) are implicitly added with their syntax at the beginning of the for clause.

  - Syntax from abbreviations and definitions in locales is available in locale expressions and context elements. The latter is particularly useful in locale declarations.

  - More flexible mechanisms to qualify names generated by locale expressions. Qualifiers (prefixes) may be specified in locale expressions, and can be marked as mandatory (syntax: "name!:") or optional (syntax "name?:"). The default for plain "name:" depends on the situation where a locale expression is used: in commands 'locale' and 'sublocale' prefixes are optional, in 'interpretation' and 'interpret' prefixes are mandatory. The old implicit qualifiers derived from the parameter names of a locale are no longer generated.

  - Command "sublocale l < e" replaces "interpretation l < e". The instantiation clause in "interpretation" and "interpret" (square brackets) is no longer available. Use locale expressions.

  - When converting proof scripts, mandatory qualifiers in 'interpretation' and 'interpret' should be retained by default, even if this is an INCOMPATIBILITY compared to former behavior. In the worst case, use the "name?:" form for non-mandatory ones. Qualifiers in locale expressions range over a single locale instance only.

  - Dropped locale element "includes". This is a major INCOMPATIBILITY. In existing theorem specifications replace the includes element by the respective context elements of the included locale, omitting those that are already present in the theorem specification. Multiple assume elements of a locale should be replaced by a single one involving the locale predicate. In the proof body, declarations (most notably theorems) may be regained by interpreting the respective locales in the proof context as required (command "interpret").

    If using "includes" in replacement of a target solely because the parameter types in the theorem are not as general as in the target, consider declaring a new locale with additional type constraints on the parameters (context element "constrains").

  - Discontinued "locale (open)". INCOMPATIBILITY.

  - Locale interpretation commands no longer attempt to simplify goal. INCOMPATIBILITY: in rare situations the generated goal differs. Use methods intro_locales and unfold_locales to clarify.

  - Locale interpretation commands no longer accept interpretation attributes. INCOMPATIBILITY.
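An illustration of the new locale expressions with mandatory qualifiers; the locale "semi" and the qualifier "nat_mult" are made up for this sketch:

  locale semi =
    fixes prod :: "'a => 'a => 'a"  (infixl "**" 70)
    assumes assoc: "(x ** y) ** z = x ** (y ** z)"

  interpretation nat_mult: semi "op * :: nat => nat => nat"
    by unfold_locales (simp add: mult_assoc)

Since prefixes are mandatory in 'interpretation', the imported fact is accessed as nat_mult.assoc.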
* Class declaration: so-called "base sort" must not be given in import list any longer, but is inferred from the specification. Particularly in HOL, write

  class foo = ...

instead of

  class foo = type + ...

* Class target: global versions of theorems do not carry a parameter prefix any longer. INCOMPATIBILITY.

* Class 'instance' command no longer accepts attached definitions. INCOMPATIBILITY, use proper 'instantiation' target instead.

* Recovered hiding of consts, which was accidentally broken in Isabelle2007. Potential INCOMPATIBILITY, ``hide const c'' really makes c inaccessible; consider using ``hide (open) const c'' instead.

* Slightly more coherent Pure syntax, with updated documentation in isar-ref manual. Removed locales meta_term_syntax and meta_conjunction_syntax: TERM and &&& (formerly &&) are now permanent, INCOMPATIBILITY in rare situations. Note that &&& should not be used directly in regular applications.

* There is a new syntactic category "float_const" for signed decimal fractions (e.g. 123.45 or -123.45).

* Removed exotic 'token_translation' command. INCOMPATIBILITY, use ML interface with 'setup' command instead.

* Command 'local_setup' is similar to 'setup', but operates on a local theory context.

* The 'axiomatization' command now only works within a global theory context. INCOMPATIBILITY.

* Goal-directed proof now enforces strict proof irrelevance wrt. sort hypotheses. Sorts required in the course of reasoning need to be covered by the constraints in the initial statement, completed by the type instance information of the background theory. Non-trivial sort hypotheses, which rarely occur in practice, may be specified via vacuous propositions of the form SORT_CONSTRAINT('a::c). For example:

  lemma assumes "SORT_CONSTRAINT('a::empty)" shows False ...

The result contains implicit sort hypotheses as before -- SORT_CONSTRAINT premises are eliminated as part of the canonical rule normalization.

* Generalized Isar history, with support for linear undo, direct state addressing etc.

* Changed defaults for unify configuration options:

  unify_trace_bound = 50 (formerly 25)
  unify_search_bound = 60 (formerly 30)

* Different bookkeeping for code equations (INCOMPATIBILITY):

  a) On theory merge, the last set of code equations for a particular constant is taken (in accordance with the policy applied by other parts of the code generator framework).

  b) Code equations stemming from explicit declarations (e.g. code attribute) gain priority over default code equations stemming from definition, primrec, fun etc.

* Keyword 'code_exception' now named 'code_abort'. INCOMPATIBILITY.

* Unified theorem tables for both code generators. Thus [code func] has disappeared and only [code] remains. INCOMPATIBILITY.

* Command 'find_consts' searches for constants based on type and name patterns, e.g.

  find_consts "_ => bool"

By default, matching is against subtypes, but it may be restricted to the whole type. Searching by name is possible. Multiple queries are conjunctive and queries may be negated by prefixing them with a hyphen:

  find_consts strict: "_ => bool" name: "Int" -"int => int"

* New 'find_theorems' criterion "solves" matches theorems that directly solve the current goal (modulo higher-order unification).

* Auto solve feature for main theorem statements: whenever a new goal is stated, "find_theorems solves" is called; any theorems that could solve the lemma directly are listed as part of the goal state. Cf.
associated options in Proof General Isabelle settings menu, enabled by default, with reasonable timeout for pathological cases of higher-order unification.

*** Document preparation ***

* Antiquotation @{lemma} now imitates a regular terminal proof, demanding keyword 'by' and supporting the full method expression syntax just like the Isar command 'by'.

*** HOL ***

* Integrated main parts of former image HOL-Complex with HOL. Entry points Main and Complex_Main remain as before.

* Logic image HOL-Plain provides a minimal HOL with the most important tools available (inductive, datatype, primrec, ...). This facilitates experimentation and tool development. Note that user applications (and library theories) should never refer to anything below theory Main, as before.

* Logic image HOL-Main stops at theory Main, and thus facilitates experimentation due to shorter build times.

* Logic image HOL-NSA contains theories of nonstandard analysis which were previously part of former HOL-Complex. Entry point Hyperreal remains valid, but theories formerly using Complex_Main should now use new entry point Hypercomplex.

* Generic ATP manager for Sledgehammer, based on ML threads instead of Posix processes. Avoids potentially expensive forking of the ML process. New thread-based implementation also works on non-Unix platforms (Cygwin). Provers are no longer hardwired, but defined within the theory via plain ML wrapper functions. Basic Sledgehammer commands are covered in the isar-ref manual.

* Wrapper scripts for remote SystemOnTPTP service allow to use sledgehammer without local ATP installation (Vampire etc.). Other provers may be included via suitable ML wrappers, see also src/HOL/ATP_Linkup.thy.

* ATP selection (E/Vampire/Spass) is now via Proof General's settings menu.

* The metis method no longer fails because the theorem is too trivial (contains the empty clause).

* The metis method now fails in the usual manner, rather than raising an exception, if it determines that it cannot prove the theorem.

* Method "coherent" implements a prover for coherent logic (see also src/Tools/coherent.ML).

* Constants "undefined" and "default" replace "arbitrary". Usually "undefined" is the right choice to replace "arbitrary", though logically there is no difference. INCOMPATIBILITY.

* Command "value" now integrates different evaluation mechanisms. The result of the first successful evaluation mechanism is printed. In square brackets a particular named evaluation mechanism may be specified (currently, [SML], [code] or [nbe]). See further src/HOL/ex/Eval_Examples.thy.

* Normalization by evaluation now allows non-leftlinear equations. Declare with attribute [code nbe].

* Methods "case_tac" and "induct_tac" now refer to the very same rules as the structured Isar versions "cases" and "induct", cf. the corresponding "cases" and "induct" attributes. Mutual induction rules are now presented as a list of individual projections (e.g. foo_bar.inducts for types foo and bar); the old format with explicit HOL conjunction is no longer supported. INCOMPATIBILITY, in rare situations a different rule is selected --- notably nested tuple elimination instead of former prod.exhaust: use explicit (case_tac t rule: prod.exhaust) here.

* Attributes "cases", "induct", "coinduct" support "del" option.

* Removed fact "case_split_thm", which duplicates "case_split".

* The option datatype has been moved to a new theory Option. Renamed option_map to Option.map, and o2s to Option.set, INCOMPATIBILITY.
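Combining two of the items above, the renamed Option.map can be evaluated directly via 'value' (expected results indicated in comments; the [nbe] variant is optional):

  value "Option.map Suc (Some 0)"    (*Some (Suc 0)*)
  value [nbe] "Option.map Suc None"  (*None*)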
* New predicate "strict_mono" classifies strict functions on partial orders. With strict functions on linear orders, reasoning about (in)equalities is facilitated by theorems "strict_mono_eq", "strict_mono_less_eq" and "strict_mono_less".

* Some set operations are now proper qualified constants with authentic syntax. INCOMPATIBILITY:

    op Int ~> Set.Int
    op Un ~> Set.Un
    INTER ~> Set.INTER
    UNION ~> Set.UNION
    Inter ~> Set.Inter
    Union ~> Set.Union
    {} ~> Set.empty
    UNIV ~> Set.UNIV

* Class complete_lattice with operations Inf, Sup, INFI, SUPR now in theory Set.

* Auxiliary class "itself" has disappeared -- classes without any parameter are treated as expected by the 'class' command.

* Leibniz's series for pi and the arc tangent and logarithm series.

* Common decision procedures (Cooper, MIR, Ferrack, Approximation, Dense_Linear_Order) are now in directory HOL/Decision_Procs.

* Theory src/HOL/Decision_Procs/Approximation provides the new proof method "approximation". It proves formulas on real values by using interval arithmetic. The formulas may also contain the transcendental functions sin, cos, tan, atan, ln, exp and the constant pi. For examples see src/HOL/Decision_Procs/ex/Approximation_Ex.thy.

* Theory "Reflection" now resides in HOL/Library.

* Entry point to Word library now simply named "Word". INCOMPATIBILITY.

* Made source layout more coherent with logical distribution structure:

    src/HOL/Library/RType.thy ~> src/HOL/Typerep.thy
    src/HOL/Library/Code_Message.thy ~> src/HOL/
    src/HOL/Library/GCD.thy ~> src/HOL/
    src/HOL/Library/Order_Relation.thy ~> src/HOL/
    src/HOL/Library/Parity.thy ~> src/HOL/
    src/HOL/Library/Univ_Poly.thy ~> src/HOL/
    src/HOL/Real/ContNotDenum.thy ~> src/HOL/Library/
    src/HOL/Real/Lubs.thy ~> src/HOL/
    src/HOL/Real/PReal.thy ~> src/HOL/
    src/HOL/Real/Rational.thy ~> src/HOL/
    src/HOL/Real/RComplete.thy ~> src/HOL/
    src/HOL/Real/RealDef.thy ~> src/HOL/
    src/HOL/Real/RealPow.thy ~> src/HOL/
    src/HOL/Real/Real.thy ~> src/HOL/
    src/HOL/Complex/Complex_Main.thy ~> src/HOL/
    src/HOL/Complex/Complex.thy ~> src/HOL/
    src/HOL/Complex/FrechetDeriv.thy ~> src/HOL/Library/
    src/HOL/Complex/Fundamental_Theorem_Algebra.thy ~> src/HOL/Library/
    src/HOL/Hyperreal/Deriv.thy ~> src/HOL/
    src/HOL/Hyperreal/Fact.thy ~> src/HOL/
    src/HOL/Hyperreal/Integration.thy ~> src/HOL/
    src/HOL/Hyperreal/Lim.thy ~> src/HOL/
    src/HOL/Hyperreal/Ln.thy ~> src/HOL/
    src/HOL/Hyperreal/Log.thy ~> src/HOL/
    src/HOL/Hyperreal/MacLaurin.thy ~> src/HOL/
    src/HOL/Hyperreal/NthRoot.thy ~> src/HOL/
    src/HOL/Hyperreal/Series.thy ~> src/HOL/
    src/HOL/Hyperreal/SEQ.thy ~> src/HOL/
    src/HOL/Hyperreal/Taylor.thy ~> src/HOL/
    src/HOL/Hyperreal/Transcendental.thy ~> src/HOL/
    src/HOL/Real/Float ~> src/HOL/Library/
    src/HOL/Real/HahnBanach ~> src/HOL/HahnBanach
    src/HOL/Real/RealVector.thy ~> src/HOL/
    src/HOL/arith_data.ML ~> src/HOL/Tools
    src/HOL/hologic.ML ~> src/HOL/Tools
    src/HOL/simpdata.ML ~> src/HOL/Tools
    src/HOL/int_arith1.ML ~> src/HOL/Tools/int_arith.ML
    src/HOL/int_factor_simprocs.ML ~> src/HOL/Tools
    src/HOL/nat_simprocs.ML ~> src/HOL/Tools
    src/HOL/Real/float_arith.ML ~> src/HOL/Tools
    src/HOL/Real/float_syntax.ML ~> src/HOL/Tools
    src/HOL/Real/rat_arith.ML ~> src/HOL/Tools
    src/HOL/Real/real_arith.ML ~> src/HOL/Tools
    src/HOL/Library/Array.thy ~> src/HOL/Imperative_HOL
    src/HOL/Library/Heap_Monad.thy ~> src/HOL/Imperative_HOL
    src/HOL/Library/Heap.thy ~> src/HOL/Imperative_HOL
    src/HOL/Library/Imperative_HOL.thy ~> src/HOL/Imperative_HOL
    src/HOL/Library/Ref.thy ~> src/HOL/Imperative_HOL
    src/HOL/Library/Relational.thy ~>
    src/HOL/Imperative_HOL

* If methods "eval" and "evaluation" encounter a structured proof state with !!/==>, only the conclusion is evaluated to True (if possible), avoiding strange error messages.

* Method "sizechange" automates termination proofs using (a modification of) the size-change principle. Requires SAT solver. See src/HOL/ex/Termination.thy for examples.

* Simplifier: simproc for let expressions now unfolds if the bound variable occurs at most once in the let expression body. INCOMPATIBILITY.

* Method "arith": Linear arithmetic now ignores all inequalities when fast_arith_neq_limit is exceeded, instead of giving up entirely.

* New attribute "arith" for facts that should always be used automatically by arithmetic. It is intended to be used locally in proofs, e.g.

  assumes [arith]: "x > 0"

Global usage is discouraged because of possible performance impact.

* New classes "top" and "bot" with corresponding operations "top" and "bot" in theory Orderings; instantiation of class "complete_lattice" requires instantiation of classes "top" and "bot". INCOMPATIBILITY.

* Changed definition lemma "less_fun_def" in order to provide an instance for preorders on functions; use lemma "less_le" instead. INCOMPATIBILITY.

* Theory Orderings: class "wellorder" moved here, with explicit induction rule "less_induct" as assumption. For instantiation of "wellorder" by means of predicate "wf", use rule wf_wellorderI. INCOMPATIBILITY.

* Theory Orderings: added class "preorder" as superclass of "order". INCOMPATIBILITY: Instantiation proofs for order, linorder etc. slightly changed. Some theorems named order_class.* now named preorder_class.*.

* Theory Relation: renamed "refl" to "refl_on", "reflexive" to "refl", "diag" to "Id_on".

* Theory Finite_Set: added a new fold combinator of type

  ('a => 'b => 'b) => 'b => 'a set => 'b

Occasionally this is more convenient than the old fold combinator which is now defined in terms of the new one and renamed to fold_image.

* Theories Ring_and_Field and OrderedGroup: The lemmas "group_simps" and "ring_simps" have been replaced by "algebra_simps" (which can be extended with further lemmas!). At the moment both still exist but the former will disappear at some point.

* Theory Power: Lemma power_Suc is now declared as a simp rule in class recpower. Type-specific simp rules for various recpower types have been removed. INCOMPATIBILITY, rename old lemmas as follows:

  rat_power_0 -> power_0
  rat_power_Suc -> power_Suc
  realpow_0 -> power_0
  realpow_Suc -> power_Suc
  complexpow_0 -> power_0
  complexpow_Suc -> power_Suc
  power_poly_0 -> power_0
  power_poly_Suc -> power_Suc

* Theories Ring_and_Field and Divides: Definition of "op dvd" has been moved to separate class dvd in Ring_and_Field; a couple of lemmas on dvd have been generalized to class comm_semiring_1. Likewise a bunch of lemmas from Divides have been generalized from nat to class semiring_div. INCOMPATIBILITY. This involves the following theorem renames resulting from duplicate elimination:

  dvd_def_mod ~> dvd_eq_mod_eq_0
  zero_dvd_iff ~> dvd_0_left_iff
  dvd_0 ~> dvd_0_right
  DIVISION_BY_ZERO_DIV ~> div_by_0
  DIVISION_BY_ZERO_MOD ~> mod_by_0
  mult_div ~> div_mult_self2_is_id
  mult_mod ~> mod_mult_self2_is_0

* Theory IntDiv: removed many lemmas that are instances of class-based generalizations (from Divides and Ring_and_Field).
INCOMPATIBILITY, rename old lemmas as follows:

  dvd_diff -> nat_dvd_diff
  dvd_zminus_iff -> dvd_minus_iff
  mod_add1_eq -> mod_add_eq
  mod_mult1_eq -> mod_mult_right_eq
  mod_mult1_eq' -> mod_mult_left_eq
  mod_mult_distrib_mod -> mod_mult_eq
  nat_mod_add_left_eq -> mod_add_left_eq
  nat_mod_add_right_eq -> mod_add_right_eq
  nat_mod_div_trivial -> mod_div_trivial
  nat_mod_mod_trivial -> mod_mod_trivial
  zdiv_zadd_self1 -> div_add_self1
  zdiv_zadd_self2 -> div_add_self2
  zdiv_zmult_self1 -> div_mult_self2_is_id
  zdiv_zmult_self2 -> div_mult_self1_is_id
  zdvd_triv_left -> dvd_triv_left
  zdvd_triv_right -> dvd_triv_right
  zdvd_zmult_cancel_disj -> dvd_mult_cancel_left
  zmod_eq0_zdvd_iff -> dvd_eq_mod_eq_0[symmetric]
  zmod_zadd_left_eq -> mod_add_left_eq
  zmod_zadd_right_eq -> mod_add_right_eq
  zmod_zadd_self1 -> mod_add_self1
  zmod_zadd_self2 -> mod_add_self2
  zmod_zadd1_eq -> mod_add_eq
  zmod_zdiff1_eq -> mod_diff_eq
  zmod_zdvd_zmod -> mod_mod_cancel
  zmod_zmod_cancel -> mod_mod_cancel
  zmod_zmult_self1 -> mod_mult_self2_is_0
  zmod_zmult_self2 -> mod_mult_self1_is_0
  zmod_1 -> mod_by_1
  zdiv_1 -> div_by_1
  zdvd_abs1 -> abs_dvd_iff
  zdvd_abs2 -> dvd_abs_iff
  zdvd_refl -> dvd_refl
  zdvd_trans -> dvd_trans
  zdvd_zadd -> dvd_add
  zdvd_zdiff -> dvd_diff
  zdvd_zminus_iff -> dvd_minus_iff
  zdvd_zminus2_iff -> minus_dvd_iff
  zdvd_zmultD -> dvd_mult_right
  zdvd_zmultD2 -> dvd_mult_left
  zdvd_zmult_mono -> mult_dvd_mono
  zdvd_0_right -> dvd_0_right
  zdvd_0_left -> dvd_0_left_iff
  zdvd_1_left -> one_dvd
  zminus_dvd_iff -> minus_dvd_iff

* Theory Rational: 'Fract k 0' now equals '0'. INCOMPATIBILITY.

* The real numbers offer decimal input syntax: 12.34 is translated into 1234/10^2. This translation is not reversed upon output.

* Theory Library/Polynomial defines an abstract type 'a poly of univariate polynomials with coefficients of type 'a. In addition to the standard ring operations, it also supports div and mod. Code generation is also supported, using list-style constructors.

* Theory Library/Inner_Product defines a class real_inner for real inner product spaces, with an overloaded operation inner :: 'a => 'a => real. Class real_inner is a subclass of real_normed_vector from theory RealVector.

* Theory Library/Product_Vector provides instances for the product type 'a * 'b of several classes from RealVector and Inner_Product. Definitions of addition, subtraction, scalar multiplication, norms, and inner products are included.

* Theory Library/Bit defines the field "bit" of integers modulo 2. In addition to the field operations, numerals and case syntax are also supported.

* Theory Library/Diagonalize provides a constructive version of Cantor's first diagonalization argument.

* Theory Library/GCD: Curried operations gcd, lcm (for nat) and zgcd, zlcm (for int); carried together from various gcd/lcm developments in the HOL distribution. Constants zgcd and zlcm replace former igcd and ilcm; corresponding theorems renamed accordingly. INCOMPATIBILITY, may recover tupled syntax as follows:

  hide (open) const gcd
  abbreviation gcd where "gcd == (%(a, b). GCD.gcd a b)"
  notation (output) GCD.gcd ("gcd '(_, _')")

The same works for lcm, zgcd, zlcm.

* Theory Library/Nat_Infinity: added addition, numeral syntax and more instantiations for algebraic structures. Removed some duplicate theorems. Changes in simp rules. INCOMPATIBILITY.

* ML antiquotation @{code} takes a constant as argument and generates corresponding code in background and inserts name of the corresponding resulting ML value/function/datatype constructor binding in place.
All occurrences of @{code} with a single ML block are generated simultaneously. Provides a generic and safe interface for instrumenting code generation. See src/HOL/Decision_Procs/Ferrack.thy for a more ambitious application. In the future, you ought to refrain from ad-hoc compiling generated SML code on the ML toplevel. Note that (for technical reasons) @{code} cannot refer to constants for which user-defined serializations are set. Refer to the corresponding ML counterpart directly in those cases.

* Command 'rep_datatype': instead of theorem names the command now takes a list of terms denoting the constructors of the type to be represented as datatype. The characteristic theorems have to be proven. INCOMPATIBILITY. Also observe that the following theorems have disappeared in favour of existing ones:

  unit_induct               ~> unit.induct
  prod_induct               ~> prod.induct
  sum_induct                ~> sum.induct
  Suc_Suc_eq                ~> nat.inject
  Suc_not_Zero Zero_not_Suc ~> nat.distinct


*** HOL-Algebra ***

* New locales for orders and lattices where the equivalence relation is not restricted to equality. INCOMPATIBILITY: all order and lattice locales use a record structure with field eq for the equivalence.

* New theory of factorial domains.

* Units_l_inv and Units_r_inv are now simp rules by default. INCOMPATIBILITY. Simplifier proofs that require deletion of l_inv and/or r_inv will now also require deletion of these lemmas.

* Renamed the following theorems, INCOMPATIBILITY:

  UpperD               ~> Upper_memD
  LowerD               ~> Lower_memD
  least_carrier        ~> least_closed
  greatest_carrier     ~> greatest_closed
  greatest_Lower_above ~> greatest_Lower_below
  one_zero             ~> carrier_one_zero
  one_not_zero         ~> carrier_one_not_zero (collision with assumption)


*** HOL-Nominal ***

* Nominal datatypes can now contain type-variables.

* Commands 'nominal_inductive' and 'equivariance' work with local theory targets.

* Nominal primrec now works with local theory targets and its specification syntax now conforms to the general format as seen in 'inductive' etc.

* Method "perm_simp" honours the standard simplifier attributes (no_asm), (no_asm_use) etc.

* The new predicate #* is defined like freshness, except that the left hand side can be a set or list of atoms.

* Experimental command 'nominal_inductive2' derives strong induction principles for inductive definitions. In contrast to 'nominal_inductive', which can only deal with a fixed number of binders, it can deal with arbitrary expressions standing for sets of atoms to be avoided. The only inductive definition we have at the moment that needs this generalisation is the typing rule for Lets in the algorithm W:

  Gamma |- t1 : T1   (x,close Gamma T1)::Gamma |- t2 : T2   x#Gamma
  -----------------------------------------------------------------
          Gamma |- Let x be t1 in t2 : T2

In this rule one wants to avoid all the binders that are introduced by "close Gamma T1". We are looking for other examples where this feature might be useful. Please let us know.


*** HOLCF ***

* Reimplemented the simplification procedure for proving continuity subgoals. The new simproc is extensible; users can declare additional continuity introduction rules with the attribute [cont2cont].

* The continuity simproc now uses a different introduction rule for solving continuity subgoals on terms with lambda abstractions. In some rare cases the new simproc may fail to solve subgoals that the old one could solve, and "simp add: cont2cont_LAM" may be necessary. Potential INCOMPATIBILITY.
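Such a repair typically looks as follows (a schematic sketch only: "g" stands for some operation built from continuous functions, so the statement is not literally provable as shown):

  lemma "cont (%x. LAM y. g x y)"
    by (simp add: cont2cont_LAM)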
* Command 'fixrec': specification syntax now conforms to the general format as seen in 'inductive' etc. See src/HOLCF/ex/Fixrec_ex.thy for examples. INCOMPATIBILITY.


*** ZF ***

* Proof of Zorn's Lemma for partial orders.


*** ML ***

* Multithreading for Poly/ML 5.1/5.2 is no longer supported, only for Poly/ML 5.2.1 or later. Important note: the TimeLimit facility depends on multithreading, so timeouts will not work before Poly/ML 5.2.1!

* High-level support for concurrent ML programming, see src/Pure/Concurrent. The data-oriented model of "future values" is particularly convenient to organize independent functional computations. The concept of "synchronized variables" provides a higher-order interface for components with shared state, avoiding the delicate details of mutexes and condition variables. (Requires Poly/ML 5.2.1 or later.)

* ML bindings produced via Isar commands are stored within the Isar context (theory or proof). Consequently, commands like 'use' and 'ML' become thread-safe and work with undo as expected (concerning top-level bindings, not side-effects on global references). INCOMPATIBILITY, need to provide proper Isar context when invoking the compiler at runtime; really global bindings need to be given outside a theory. (Requires Poly/ML 5.2 or later.)

* Command 'ML_prf' is analogous to 'ML' but works within a proof context. Top-level ML bindings are stored within the proof context in a purely sequential fashion, disregarding the nested proof structure. ML bindings introduced by 'ML_prf' are discarded at the end of the proof. (Requires Poly/ML 5.2 or later.)

* Simplified ML attribute and method setup, cf. functions Attrib.setup and Method.setup, as well as Isar commands 'attribute_setup' and 'method_setup'. INCOMPATIBILITY for 'method_setup', need to simplify existing code accordingly, or use plain 'setup' together with old Method.add_method.

* Simplified ML oracle interface Thm.add_oracle promotes 'a -> cterm to 'a -> thm, while results are always tagged with an authentic oracle name. The Isar command 'oracle' is now polymorphic, no argument type is specified. INCOMPATIBILITY, need to simplify existing oracle code accordingly. Note that extra performance may be gained by producing the cterm carefully, avoiding slow Thm.cterm_of.

* Simplified interface for defining document antiquotations via ThyOutput.antiquotation, ThyOutput.output, and optionally ThyOutput.maybe_pretty_source. INCOMPATIBILITY, need to simplify user antiquotations accordingly, see src/Pure/Thy/thy_output.ML for common examples.

* More systematic treatment of long names, abstract name bindings, and name space operations. Basic operations on qualified names have been moved from structure NameSpace to Long_Name, e.g. Long_Name.base_name, Long_Name.append. Old type bstring has been mostly replaced by abstract type binding (see structure Binding), which supports precise qualification by packages and local theory targets, as well as proper tracking of source positions. INCOMPATIBILITY, need to wrap old bstring values into Binding.name, or better pass through abstract bindings everywhere; a minimal porting sketch appears below. See further src/Pure/General/long_name.ML, src/Pure/General/binding.ML and src/Pure/General/name_space.ML.

* Result facts (from PureThy.note_thms, ProofContext.note_thms, LocalTheory.note etc.) now refer to the *full* internal name, not the bstring as before. INCOMPATIBILITY, not detected by ML type-checking!
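The porting step for the name binding changes above typically amounts to one wrapper per fact name (a schematic ML sketch; the fact name and surrounding code are hypothetical):

  (* before: *)  PureThy.add_thms [(("my_fact", th), [])] thy
  (* after:  *)  PureThy.add_thms [((Binding.name "my_fact", th), [])] thy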
* Disposed old type and term read functions (Sign.read_def_typ, Sign.read_typ, Sign.read_def_terms, Sign.read_term, Thm.read_def_cterms, Thm.read_cterm etc.). INCOMPATIBILITY, should use regular Syntax.read_typ, Syntax.read_term, Syntax.read_typ_global, Syntax.read_term_global etc.; see also OldGoals.read_term as last resort for legacy applications.

* Disposed old declarations, tactics, tactic combinators that refer to the simpset or claset of an implicit theory (such as Addsimps, Simp_tac, SIMPSET). INCOMPATIBILITY, should use @{simpset} etc. in embedded ML text, or local_simpset_of with a proper context passed as explicit runtime argument.

* Rules and tactics that read instantiations (read_instantiate, res_inst_tac, thin_tac, subgoal_tac etc.) now demand a proper proof context, which is required for parsing and type-checking. Moreover, the variables are specified as plain indexnames, not string encodings thereof. INCOMPATIBILITY.

* Generic Toplevel.add_hook interface allows to analyze the result of transactions. E.g. see src/Pure/ProofGeneral/proof_general_pgip.ML for theorem dependency output of transactions resulting in a new theory state.

* ML antiquotations: block-structured compilation context indicated by \<lbrace> ... \<rbrace>; additional antiquotation forms:

  @{binding name}                         - basic name binding
  @{let ?pat = term}                      - term abbreviation (HO matching)
  @{note name = fact}                     - fact abbreviation
  @{thm fact}                             - singleton fact (with attributes)
  @{thms fact}                            - general fact (with attributes)
  @{lemma prop by method}                 - singleton goal
  @{lemma prop by meth1 meth2}            - singleton goal
  @{lemma prop1 ... propN by method}      - general goal
  @{lemma prop1 ... propN by meth1 meth2} - general goal
  @{lemma (open) ...}                     - open derivation


*** System ***

* The Isabelle "emacs" tool provides a specific interface to invoke Proof General / Emacs, with more explicit failure if that is not installed (the old isabelle-interface script silently falls back on isabelle-process). The PROOFGENERAL_HOME setting determines the installation location of the Proof General distribution.

* Isabelle/lib/classes/Pure.jar provides basic support to integrate the Isabelle process into a JVM/Scala application. See Isabelle/lib/jedit/plugin for a minimal example. (The obsolete Java process wrapper has been discontinued.)

* Added homegrown Isabelle font with unicode layout, see lib/fonts.

* Various status messages (with exact source position information) are emitted, if proper markup print mode is enabled. This allows user-interface components to provide detailed feedback on internal prover operations.



New in Isabelle2008 (June 2008)
-------------------------------

*** General ***

* The Isabelle/Isar Reference Manual (isar-ref) has been reorganized and updated, with formally checked references as hyperlinks.

* Theory loader: use_thy (and similar operations) no longer set the implicit ML context, which was occasionally hard to predict and in conflict with concurrency. INCOMPATIBILITY, use ML within Isar which provides a proper context already.

* Theory loader: old-style ML proof scripts being *attached* to a thy file are no longer supported. INCOMPATIBILITY, regular 'uses' and 'use' within a theory file will do the job.

* Name space merge now observes canonical order, i.e. the second space is inserted into the first one, while existing entries in the first space take precedence. INCOMPATIBILITY in rare situations, may try to swap theory imports.

* Syntax: symbol \ is now considered a letter. Potential INCOMPATIBILITY in identifier syntax etc.
* Outer syntax: string tokens no longer admit escaped white space, which was an accidental (undocumented) feature. INCOMPATIBILITY, use white space without escapes.

* Outer syntax: string tokens may contain arbitrary character codes specified via 3 decimal digits (as in SML). E.g. "foo\095bar" for "foo_bar".


*** Pure ***

* Context-dependent token translations. Default setup reverts locally fixed variables, and adds hilite markup for undeclared frees.

* Unused theorems can be found using the new command 'unused_thms'. There are three ways of invoking it:

  (1) unused_thms
      Only finds unused theorems in the current theory.

  (2) unused_thms thy_1 ... thy_n -
      Finds unused theorems in the current theory and all of its ancestors, excluding the theories thy_1 ... thy_n and all of their ancestors.

  (3) unused_thms thy_1 ... thy_n - thy'_1 ... thy'_m
      Finds unused theorems in the theories thy'_1 ... thy'_m and all of their ancestors, excluding the theories thy_1 ... thy_n and all of their ancestors.

In order to increase the readability of the list produced by unused_thms, theorems that have been created by a particular instance of a theory command such as 'inductive' or 'function' are considered to belong to the same "group", meaning that if at least one theorem in this group is used, the other theorems in the same group are no longer reported as unused. Moreover, if all theorems in the group are unused, only one theorem in the group is displayed.

Note that proof objects have to be switched on in order for unused_thms to work properly (i.e. !proofs must be >= 1, which is usually the case when using Proof General with the default settings).

* Authentic naming of facts disallows ad-hoc overwriting of previous theorems within the same name space. INCOMPATIBILITY, need to remove duplicate fact bindings, or even accidental fact duplications. Note that tools may maintain dynamically scoped facts systematically, using PureThy.add_thms_dynamic.

* Command 'hide' now allows to hide from "fact" name space as well.

* Eliminated destructive theorem database, simpset, claset, and clasimpset. Potential INCOMPATIBILITY, really need to observe linear update of theories within ML code.

* Eliminated theory ProtoPure and CPure, leaving just one Pure theory. INCOMPATIBILITY, object-logics depending on former Pure require additional setup PureThy.old_appl_syntax_setup; object-logics depending on former CPure need to refer to Pure.

* Commands 'use' and 'ML' are now purely functional, operating on theory/local_theory. Removed former 'ML_setup' (on theory), use 'ML' instead. Added 'ML_val' as mere diagnostic replacement for 'ML'. INCOMPATIBILITY.

* Command 'setup': discontinued implicit version with ML reference.

* Instantiation target allows for simultaneous specification of class instance operations together with an instantiation proof. Type-checking phase allows to refer to class operations uniformly. See src/HOL/Complex/Complex.thy for an Isar example and src/HOL/Library/Eval.thy for an ML example; a schematic sketch appears at the end of this list of entries.

* Indexing of literal facts: be more serious about including only facts from the visible specification/proof context, but not the background context (locale etc.). Affects `prop` notation and method "fact". INCOMPATIBILITY: need to name facts explicitly in rare situations.

* Method "cases", "induct", "coinduct": removed obsolete/undocumented "(open)" option, which used to expose internal bound variables to the proof text.

* Isar statements: removed obsolete case "rule_context". INCOMPATIBILITY, better use explicit fixes/assumes.
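The 'instantiation' target mentioned above follows this pattern (a minimal sketch with a hypothetical type "flag"; class "plus" carries no axioms, so a trivial instance proof suffices):

  datatype flag = Low | High

  instantiation flag :: plus
  begin

  definition plus_flag_def: "x + y = (if x = Low then y else High)"

  instance ..

  end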
* Locale proofs: default proof step now includes 'unfold_locales'; hence 'proof' without argument may be used to unfold locale predicates.


*** Document preparation ***

* Simplified pdfsetup.sty: color/hyperref is used unconditionally for both pdf and dvi (hyperlinks usually work in xdvi as well); removed obsolete thumbpdf setup (contemporary PDF viewers do this on the spot); renamed link color from "darkblue" to "linkcolor" (default value unchanged, can be redefined via \definecolor); no longer sets "a4paper" option (unnecessary or even intrusive).

* Antiquotation @{lemma A method} proves proposition A by the given method (either a method name or a method name plus (optional) method arguments in parentheses) and prints A just like @{prop A}.


*** HOL ***

* New primrec package. Specification syntax conforms in style to definition/function/.... No separate induction rule is provided. The "primrec" command distinguishes old-style and new-style specifications by syntax. The former primrec package is now named OldPrimrecPackage. When adjusting theories, beware: constants stemming from new-style primrec specifications have authentic syntax.

* Metis prover is now an order of magnitude faster, and also works with multithreading.

* Metis: the maximum number of clauses that can be produced from a theorem is now given by the attribute max_clauses. Theorems that exceed this number are ignored, with a warning printed.

* Sledgehammer no longer produces structured proofs by default. To enable, declare [[sledgehammer_full = true]]. Attributes reconstruction_modulus, reconstruction_sorts renamed sledgehammer_modulus, sledgehammer_sorts. INCOMPATIBILITY.

* Method "induct_scheme" derives user-specified induction rules from well-founded induction and completeness of patterns. This factors out some operations that are done internally by the function package and makes them available separately. See src/HOL/ex/Induction_Scheme.thy for examples.

* More flexible generation of measure functions for termination proofs: Measure functions can be declared by proving a rule of the form "is_measure f" and giving it the [measure_function] attribute. The "is_measure" predicate is logically meaningless (always true), and just guides the heuristic. To find suitable measure functions, the termination prover sets up the goal "is_measure ?f" of the appropriate type and generates all solutions by Prolog-style backward proof using the declared rules. This setup also deals with rules like

  "is_measure f ==> is_measure (list_size f)"

which accommodates nested datatypes that recurse through lists. Similar rules are predeclared for products and option types.

* Turned the type of sets "'a set" into an abbreviation for "'a => bool". INCOMPATIBILITIES:

- Definitions of overloaded constants on sets have to be replaced by definitions on => and bool.

- Some definitions of overloaded operators on sets can now be proved using the definitions of the operators on => and bool. Therefore, the following theorems have been renamed:

    subset_def   -> subset_eq
    psubset_def  -> psubset_eq
    set_diff_def -> set_diff_eq
    Compl_def    -> Compl_eq
    Sup_set_def  -> Sup_set_eq
    Inf_set_def  -> Inf_set_eq
    sup_set_def  -> sup_set_eq
    inf_set_def  -> inf_set_eq

- Due to the incompleteness of the HO unification algorithm, some rules such as subst may require manual instantiation, if one of the unknowns in the rule is a set.

- Higher order unification and forward proofs: The proof pattern

    have "P (S::'a set)" <...>
    then have "EX S. P S" ..
no longer works (due to the incompleteness of the HO unification algorithm) and must be replaced by the pattern

    have "EX S. P S"
    proof
      show "P S" <...>
    qed

- Calculational reasoning with subst (or similar rules): The proof pattern

    have "P (S::'a set)" <...>
    also have "S = T" <...>
    finally have "P T" .

no longer works (for similar reasons as the previous example) and must be replaced by something like

    have "P (S::'a set)" <...>
    moreover have "S = T" <...>
    ultimately have "P T" by simp

- Tactics or packages written in ML code: Code performing pattern matching on types via

    Type ("set", [T]) => ...

must be rewritten (see the sketch at the end of this entry list). Moreover, functions like strip_type or binder_types no longer return the right value when applied to a type of the form

    T1 => ... => Tn => U => bool

rather than

    T1 => ... => Tn => U set

* Merged theories Wellfounded_Recursion, Accessible_Part and Wellfounded_Relations to theory Wellfounded.

* Explicit class "eq" for executable equality. INCOMPATIBILITY.

* Class finite no longer treats UNIV as class parameter. Use class enum from theory Library/Enum instead to achieve a similar effect. INCOMPATIBILITY.

* Theory List: rule list_induct2 now has explicitly named cases "Nil" and "Cons". INCOMPATIBILITY.

* HOL (and FOL): renamed variables in rules imp_elim and swap. Potential INCOMPATIBILITY.

* Theory Product_Type: duplicated lemmas split_Pair_apply and injective_fst_snd removed, use split_eta and prod_eqI instead. Renamed upd_fst to apfst and upd_snd to apsnd. INCOMPATIBILITY.

* Theory Nat: removed redundant lemmas that merely duplicate lemmas of the same name in theory Orderings:

  less_trans
  less_linear
  le_imp_less_or_eq
  le_less_trans
  less_le_trans
  less_not_sym
  less_asym

Renamed less_imp_le to less_imp_le_nat, and less_irrefl to less_irrefl_nat. Potential INCOMPATIBILITY due to more general types and different variable names.

* Library/Option_ord.thy: Canonical order on option type.

* Library/RBT.thy: Red-black trees, an efficient implementation of finite maps.

* Library/Countable.thy: Type class for countable types.

* Theory Int: The representation of numerals has changed. The infix operator BIT and the bit datatype with constructors B0 and B1 have disappeared. INCOMPATIBILITY, use "Int.Bit0 x" and "Int.Bit1 y" in place of "x BIT bit.B0" and "y BIT bit.B1", respectively. Theorems involving BIT, B0, or B1 have been renamed with "Bit0" or "Bit1" accordingly.

* Theory Nat: definition of <= and < on natural numbers no longer depend on well-founded relations. INCOMPATIBILITY. Definitions le_def and less_def have disappeared. Consider lemmas not_less [symmetric, where ?'a = nat] and less_eq [symmetric] instead.

* Theory Finite_Set: locales ACf, ACe, ACIf, ACIfSL and ACIfSLlin (whose purpose mainly is for various fold_set functionals) have been abandoned in favor of the existing algebraic classes ab_semigroup_mult, comm_monoid_mult, ab_semigroup_idem_mult, lower_semilattice (resp. upper_semilattice) and linorder. INCOMPATIBILITY.

* Theory Transitive_Closure: induct and cases rules now declare proper case_names ("base" and "step"). INCOMPATIBILITY.

* Theorem Inductive.lfp_ordinal_induct generalized to complete lattices. The set-specific version is available as Inductive.lfp_ordinal_induct_set.

* Renamed theorems "power.simps" to "power_int.simps". INCOMPATIBILITY.

* Class semiring_div provides basic abstract properties of semirings with division and modulo operations. Subsumes former class dvd_mod.

* Merged theories IntDef, Numeral and IntArith into unified theory Int. INCOMPATIBILITY.
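The ML rewrite for the former "set" type constructor mentioned above typically amounts to matching the function space into bool instead (a schematic sketch; the function name dest_setT is hypothetical):

  (* before: *)  fun dest_setT (Type ("set", [T])) = T
  (* after:  *)  fun dest_setT (Type ("fun", [T, Type ("bool", [])])) = T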
* Theory Library/Code_Index: type "index" now represents natural numbers rather than integers. INCOMPATIBILITY.

* New class "uminus" with operation "uminus" (split off from class "minus", which now only has operation "minus", binary). INCOMPATIBILITY.

* Constants "card", "internal_split", "option_map" now with authentic syntax. INCOMPATIBILITY.

* Definitions subset_def, psubset_def, set_diff_def, Compl_def, le_bool_def, less_bool_def, le_fun_def, less_fun_def, inf_bool_def, sup_bool_def, Inf_bool_def, Sup_bool_def, inf_fun_def, sup_fun_def, Inf_fun_def, Sup_fun_def, inf_set_def, sup_set_def, Inf_set_def, Sup_set_def, le_def, less_def, option_map_def now with object equality. INCOMPATIBILITY.

* Records. Removed K_record, and replaced it by pure lambda term %x. c. The simplifier setup is now more robust against eta expansion. INCOMPATIBILITY: in cases explicitly referring to K_record.

* Library/Multiset: {#a, b, c#} abbreviates {#a#} + {#b#} + {#c#}.

* Library/ListVector: new theory of arithmetic vector operations.

* Library/Order_Relation: new theory of various orderings as sets of pairs. Defines preorders, partial orders, linear orders and well-orders on sets and on types.


*** ZF ***

* Renamed some theories to allow loading both ZF and HOL in the same session:

  Datatype  -> Datatype_ZF
  Inductive -> Inductive_ZF
  Int       -> Int_ZF
  IntDiv    -> IntDiv_ZF
  Nat       -> Nat_ZF
  List      -> List_ZF
  Main      -> Main_ZF

INCOMPATIBILITY: ZF theories that import individual theories below Main might need to be adapted. Regular theory Main is still available, as trivial extension of Main_ZF.


*** ML ***

* ML within Isar: antiquotation @{const name} or @{const name(typargs)} produces statically-checked Const term.

* Functor NamedThmsFun: data is available to the user as dynamic fact (of the same name). Removed obsolete print command.

* Removed obsolete "use_legacy_bindings" function.

* The ``print mode'' is now a thread-local value derived from a global template (the former print_mode reference), thus access becomes non-critical. The global print_mode reference is for session management only; user-code should use print_mode_value, print_mode_active, PrintMode.setmp etc. INCOMPATIBILITY.

* Functions system/system_out provide a robust way to invoke external shell commands, with propagation of interrupts (requires Poly/ML 5.2.1). Do not use OS.Process.system etc. from the basis library!


*** System ***

* Default settings: PROOFGENERAL_OPTIONS no longer impose xemacs --- in accordance with Proof General 3.7, which prefers GNU emacs.

* isatool tty runs Isabelle process with plain tty interaction; optional line editor may be specified via ISABELLE_LINE_EDITOR setting, the default settings attempt to locate "ledit" and "rlwrap".

* isatool browser now works with Cygwin as well, using general "javapath" function defined in Isabelle process environment.

* YXML notation provides a simple and efficient alternative to standard XML transfer syntax. See src/Pure/General/yxml.ML and isatool yxml as described in the Isabelle system manual.

* JVM class isabelle.IsabelleProcess (located in Isabelle/lib/classes) provides general wrapper for managing an Isabelle process in a robust fashion, with ``cooked'' output from stdout/stderr.

* Rudimentary Isabelle plugin for jEdit (see Isabelle/lib/jedit), based on Isabelle/JVM process wrapper (see Isabelle/lib/classes).

* Removed obsolete THIS_IS_ISABELLE_BUILD feature. NB: the documented way of changing the user's settings is via ISABELLE_HOME_USER/etc/settings, which is a fully featured bash script.
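Such a settings file is ordinary bash source, e.g. (a hypothetical fragment, picking up the ISABELLE_LINE_EDITOR option mentioned above):

  # $ISABELLE_HOME_USER/etc/settings
  ISABELLE_LINE_EDITOR=rlwrap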
* Multithreading.max_threads := 0 refers to the number of actual CPU cores of the underlying machine, which is a good starting point for optimal performance tuning. The corresponding usedir option -M allows "max" as an alias for "0". WARNING: does not work on certain versions of Mac OS (with Poly/ML 5.1).

* isabelle-process: non-ML sessions are run with "nice", to reduce the adverse effect of Isabelle flooding interactive front-ends (notably ProofGeneral / XEmacs).



New in Isabelle2007 (November 2007)
-----------------------------------

*** General ***

* More uniform information about legacy features, notably a warning/error of "Legacy feature: ...", depending on the state of the tolerate_legacy_features flag (default true). FUTURE INCOMPATIBILITY: legacy features will disappear eventually.

* Theory syntax: the header format ``theory A = B + C:'' has been discontinued in favour of ``theory A imports B C begin''. Use isatool fixheaders to convert existing theory files. INCOMPATIBILITY.

* Theory syntax: the old non-Isar theory file format has been discontinued altogether. Note that ML proof scripts may still be used with Isar theories; migration is usually quite simple with the ML function use_legacy_bindings. INCOMPATIBILITY.

* Theory syntax: some popular names (e.g. 'class', 'declaration', 'fun', 'help', 'if') are now keywords. INCOMPATIBILITY, use double quotes.

* Theory loader: be more serious about observing the static theory header specifications (including optional directories), but not the accidental file locations of previously successful loads. The strict update policy of former update_thy is now already performed by use_thy, so the former has been removed; use_thys updates several theories simultaneously, just as 'imports' within a theory header specification, but without merging the results. Potential INCOMPATIBILITY: may need to refine theory headers and commands ROOT.ML which depend on load order.

* Theory loader: optional support for content-based file identification, instead of the traditional scheme of full physical path plus date stamp; configured by the ISABELLE_FILE_IDENT setting (cf. the system manual). The new scheme allows to work with non-finished theories in persistent session images, such that source files may be moved later on without requiring reloads.

* Theory loader: old-style ML proof scripts being *attached* to a thy file (with the same base name as the theory) are considered a legacy feature, which will disappear eventually. Even now, the theory loader no longer maintains dependencies on such files.

* Syntax: the scope for resolving ambiguities via type-inference is now limited to individual terms, instead of whole simultaneous specifications as before. This greatly reduces the complexity of the syntax module and improves flexibility by separating parsing and type-checking. INCOMPATIBILITY: additional type-constraints (explicit 'fixes' etc.) are required in rare situations.

* Syntax: constants introduced by new-style packages ('definition', 'abbreviation' etc.) are passed through the syntax module in ``authentic mode''. This means that associated mixfix annotations really stick to such constants, independently of potential name space ambiguities introduced later on. INCOMPATIBILITY: constants in parse trees are represented slightly differently, may need to adapt syntax translations accordingly. Use CONST marker in 'translations' and @{const_syntax} antiquotation in 'parse_translation' etc.
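An adapted translation rule might read as follows (a schematic sketch; both the concrete syntax "~~>" and the constant "rel_space" are hypothetical):

  translations "A ~~> B" == "CONST rel_space A B"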
* Legacy goal package: reduced interface to the bare minimum required to keep existing proof scripts running. Most other user-level functions are now part of the OldGoals structure, which is *not* open by default (consider isatool expandshort before open OldGoals). Removed top_sg, prin, printyp, pprint_term/typ altogether, because these tend to cause confusion about the actual goal (!) context being used here, which is not necessarily the same as the_context().

* Command 'find_theorems': supports "*" wild-card in "name:" criterion; "with_dups" option. Certain ProofGeneral versions might support a specific search form (see ProofGeneral/CHANGES).

* The ``prems limit'' option (cf. ProofContext.prems_limit) is now -1 by default, which means that "prems" (and also "fixed variables") are suppressed from proof state output. Note that the ProofGeneral settings mechanism allows to change and save options persistently, but older versions of Isabelle will fail to start up if a negative prems limit is imposed.

* Local theory targets may be specified by non-nested blocks of ``context/locale/class ... begin'' followed by ``end''. The body may contain definitions, theorems etc., including any derived mechanism that has been implemented on top of these primitives. This concept generalizes the existing ``theorem (in ...)'' towards more versatility and scalability.

* Proof General interface: proper undo of final 'end' command; discontinued Isabelle/classic mode (ML proof scripts).


*** Document preparation ***

* Added antiquotation @{theory name} which prints the given name, after checking that it refers to a valid ancestor theory in the current context.

* Added antiquotations @{ML_type text} and @{ML_struct text} which check the given source text as ML type/structure, printing verbatim.

* Added antiquotation @{abbrev "c args"} which prints the abbreviation "c args == rhs" given in the current context. (Any number of arguments may be given on the LHS.)


*** Pure ***

* The 'class' package offers a combination of axclass and locale to achieve Haskell-like type classes in Isabelle. Definitions and theorems within a class context produce both relative results (with implicit parameters according to the locale context), and polymorphic constants with qualified polymorphism (according to the class context). Within the body context of a 'class' target, a separate syntax layer ("user space type system") takes care of converting between global polymorphic consts and internal locale representation. See src/HOL/ex/Classpackage.thy for examples (as well as main HOL). "isatool doc classes" provides a tutorial.

* Generic code generator framework allows to generate executable code for ML and Haskell (including Isabelle classes). A short usage sketch:

  internal compilation:
    export_code <list of constants> in SML
  writing SML code to a file:
    export_code <list of constants> in SML <filename>
  writing OCaml code to a file:
    export_code <list of constants> in OCaml <filename>
  writing Haskell code to a bunch of files:
    export_code <list of constants> in Haskell <filename>

  evaluating closed propositions to True/False using code generation:
    method ``eval''

Reasonable default setup of framework in HOL.
Theorem attributes for selecting and transforming function equation theorems:

  [code fun]:        select a theorem as function equation for a
                     specific constant
  [code fun del]:    deselect a theorem as function equation for a
                     specific constant
  [code inline]:     select an equation theorem for unfolding
                     (inlining) in place
  [code inline del]: deselect an equation theorem for unfolding
                     (inlining) in place

User-defined serializations (target in {SML, OCaml, Haskell}):

  code_const <and-list of constants>
    {(target) <and-list of const syntax>}+

  code_type <and-list of type constructors>
    {(target) <and-list of type syntax>}+

  code_instance <and-list of instances>
    {(target)}+
    where instance ::= <type constructor> :: <class>

  code_class <and-list of classes>
    {(target) <and-list of class syntax>}+
    where class target syntax ::= <class name> {where {<classop> == <target syntax>}+}?

code_instance and code_class are only effective for target Haskell.

For example usage see src/HOL/ex/Codegenerator.thy and src/HOL/ex/Codegenerator_Pretty.thy. A separate tutorial on code generation from Isabelle/HOL theories is available via "isatool doc codegen".

* Code generator: consts in 'consts_code' Isar commands are now referred to by usual term syntax (including optional type annotations).

* Command 'no_translations' removes translation rules from theory syntax.

* Overloaded definitions are now actually checked for acyclic dependencies. The overloading scheme is slightly more general than that of Haskell98, although Isabelle does not demand an exact correspondence to type class and instance declarations. INCOMPATIBILITY, use ``defs (unchecked overloaded)'' to admit more exotic versions of overloading -- at the discretion of the user!

Polymorphic constants are represented via type arguments, i.e. the instantiation that matches an instance against the most general declaration given in the signature. For example, with the declaration c :: 'a => 'a => 'a, an instance c :: nat => nat => nat is represented as c(nat). Overloading is essentially simultaneous structural recursion over such type arguments. Incomplete specification patterns impose global constraints on all occurrences, e.g. c('a * 'a) on the LHS means that more general c('a * 'b) will be disallowed on any RHS. Command 'print_theory' outputs the normalized system of recursive equations, see section "definitions".

* Configuration options are maintained within the theory or proof context (with name and type bool/int/string), providing a very simple interface to a poor-man's version of general context data. Tools may declare options in ML (e.g. using Attrib.config_int) and then refer to these values using Config.get etc. Users may change options via an associated attribute of the same name. This form of context declaration works particularly well with commands 'declare' or 'using', for example ``declare [[foo = 42]]''. Thus it has become very easy to avoid global references, which would not observe Isar toplevel undo/redo and fail to work with multithreading.

Various global ML references of Pure and HOL have been turned into configuration options:

  Unify.search_bound          unify_search_bound
  Unify.trace_bound           unify_trace_bound
  Unify.trace_simp            unify_trace_simp
  Unify.trace_types           unify_trace_types
  Simplifier.simp_depth_limit simp_depth_limit
  Blast.depth_limit           blast_depth_limit
  DatatypeProp.dtK            datatype_distinctness_limit
  fast_arith_neq_limit        fast_arith_neq_limit
  fast_arith_split_limit      fast_arith_split_limit

* Named collections of theorems may be easily installed as context data using the functor NamedThmsFun (see also src/Pure/Tools/named_thms.ML). The user may add or delete facts via attributes; there is also a toplevel print command.
This facility is just a common case of general context data, which is the preferred way for anything more complex than just a list of facts in canonical order.

* Isar: command 'declaration' augments a local theory by generic declaration functions written in ML. This enables arbitrary content being added to the context, depending on a morphism that tells the difference of the original declaration context wrt. the application context encountered later on.

* Isar: proper interfaces for simplification procedures. Command 'simproc_setup' declares named simprocs (with match patterns, and body text in ML). Attribute "simproc" adds/deletes simprocs in the current context. ML antiquotation @{simproc name} retrieves named simprocs. (A minimal 'simproc_setup' example appears at the end of this list of entries.)

* Isar: an extra pair of brackets around attribute declarations abbreviates a theorem reference involving an internal dummy fact, which will be ignored later --- only the effect of the attribute on the background context will persist. This form of in-place declarations is particularly useful with commands like 'declare' and 'using', for example ``have A using [[simproc a]] by simp''.

* Isar: method "assumption" (and implicit closing of subproofs) now takes simple non-atomic goal assumptions into account: after applying an assumption as a rule the resulting subgoals are solved by atomic assumption steps. This is particularly useful to finish 'obtain' goals, such as "!!x. (!!x. P x ==> thesis) ==> P x ==> thesis", without referring to the original premise "!!x. P x ==> thesis" in the Isar proof context. POTENTIAL INCOMPATIBILITY: method "assumption" is more permissive.

* Isar: implicit use of prems from the Isar proof context is considered a legacy feature. Common applications like ``have A .'' may be replaced by ``have A by fact'' or ``note `A`''. In general, referencing facts explicitly here improves readability and maintainability of proof texts.

* Isar: improper proof element 'guess' is like 'obtain', but derives the obtained context from the course of reasoning! For example:

  assume "EX x y. A x & B y" -- "any previous fact"
  then guess x and y by clarify

This technique is potentially adventurous, depending on the facts and proof tools being involved here.

* Isar: known facts from the proof context may be specified as literal propositions, using ASCII back-quote syntax. This works wherever named facts used to be allowed so far, in proof commands, proof methods, attributes etc. Literal facts are retrieved from the context according to unification of type and term parameters. For example, provided that "A" and "A ==> B" and "!!x. P x ==> Q x" are known theorems in the current context, then these are valid literal facts: `A` and `A ==> B` and `!!x. P x ==> Q x` as well as `P a ==> Q a` etc.

There is also a proof method "fact" which does the same composition for explicit goal states, e.g. the following proof texts coincide with certain special cases of literal facts:

  have "A" by fact                 ==  note `A`
  have "A ==> B" by fact           ==  note `A ==> B`
  have "!!x. P x ==> Q x" by fact  ==  note `!!x. P x ==> Q x`
  have "P a ==> Q a" by fact       ==  note `P a ==> Q a`

* Isar: ":" (colon) is no longer a symbolic identifier character in outer syntax. Thus symbolic identifiers may be used without additional white space in declarations like this: ``assume *: A''.

* Isar: 'print_facts' prints all local facts of the current context, both named and unnamed ones.
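The 'simproc_setup' command mentioned above follows this pattern (a minimal sketch of a no-op simproc that matches but never rewrites; the name "dummy_nat" is hypothetical):

  simproc_setup dummy_nat ("n + (0::nat)") =
    {* fn _ => fn _ => fn _ => NONE *}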
* Isar: 'def' now admits simultaneous definitions, e.g.:

  def x == "t" and y == "u"

* Isar: added command 'unfolding', which is structurally similar to 'using', but affects both the goal state and facts by unfolding given rewrite rules. Thus many occurrences of the 'unfold' method or 'unfolded' attribute may be replaced by first-class proof text.

* Isar: methods 'unfold' / 'fold', attributes 'unfolded' / 'folded', and command 'unfolding' now all support object-level equalities (potentially conditional). The underlying notion of rewrite rule is analogous to the 'rule_format' attribute, but *not* that of the Simplifier (which is usually more generous).

* Isar: the new attribute [rotated n] (default n = 1) rotates the premises of a theorem by n. Useful in conjunction with drule.

* Isar: the goal restriction operator [N] (default N = 1) evaluates a method expression within a sandbox consisting of the first N sub-goals, which need to exist. For example, ``simp_all [3]'' simplifies the first three sub-goals, while (rule foo, simp_all)[] simplifies all new goals that emerge from applying rule foo to the originally first one.

* Isar: schematic goals are no longer restricted to higher-order patterns; e.g. ``lemma "?P(?x)" by (rule TrueI)'' now works as expected.

* Isar: the conclusion of a long theorem statement is now either 'shows' (a simultaneous conjunction, as before), or 'obtains' (essentially a disjunction of cases with local parameters and assumptions). The latter allows to express general elimination rules adequately; in this notation common elimination rules look like this:

  lemma exE: -- "EX x. P x ==> (!!x. P x ==> thesis) ==> thesis"
    assumes "EX x. P x"
    obtains x where "P x"

  lemma conjE: -- "A & B ==> (A ==> B ==> thesis) ==> thesis"
    assumes "A & B"
    obtains A and B

  lemma disjE: -- "A | B ==> (A ==> thesis) ==> (B ==> thesis) ==> thesis"
    assumes "A | B"
    obtains A | B

The subsequent classical rules even refer to the formal "thesis" explicitly:

  lemma classical: -- "(~ thesis ==> thesis) ==> thesis"
    obtains "~ thesis"

  lemma Peirce's_Law: -- "((thesis ==> something) ==> thesis) ==> thesis"
    obtains "thesis ==> something"

The actual proof of an 'obtains' statement is analogous to that of the Isar proof element 'obtain', only that there may be several cases. Optional case names may be specified in parentheses; these will be available both in the present proof and as annotations in the resulting rule, for later use with the 'cases' method (cf. attribute case_names).

* Isar: the assumptions of a long theorem statement are available as "assms" fact in the proof context. This is more appropriate than the (historical) "prems", which refers to all assumptions of the current context, including those from the target locale, proof body etc.

* Isar: 'print_statement' prints theorems from the current theory or proof context in long statement form, according to the syntax of a top-level lemma.

* Isar: 'obtain' takes an optional case name for the local context introduction rule (default "that").

* Isar: removed obsolete 'concl is' patterns. INCOMPATIBILITY, use explicit (is "_ ==> ?foo") in the rare cases where this still happens to occur.

* Pure: syntax "CONST name" produces a fully internalized constant according to the current context. This is particularly useful for syntax translations that should refer to internal constant representations independently of name spaces.

* Pure: syntax constant for foo (binder "FOO ") is called "foo_binder" instead of "FOO ".
This allows multiple binder declarations to coexist in the same context. INCOMPATIBILITY.

* Isar/locales: 'notation' provides a robust interface to the 'syntax' primitive that also works in a locale context (both for constants and fixed variables). Type declaration and internal syntactic representation of given constants retrieved from the context. Likewise, the 'no_notation' command allows to remove given syntax annotations from the current context. (A schematic example appears at the end of this list of entries.)

* Isar/locales: new derived specification elements 'axiomatization', 'definition', 'abbreviation', which support type-inference, admit object-level specifications (equality, equivalence). See also the isar-ref manual. Examples:

  axiomatization
    eq (infix "===" 50) where
    eq_refl: "x === x" and eq_subst: "x === y ==> P x ==> P y"

  definition "f x y = x + y + 1"
  definition g where "g x = f x x"

  abbreviation
    neq (infix "=!=" 50) where
    "x =!= y == ~ (x === y)"

These specifications may be also used in a locale context. Then the constants being introduced depend on certain fixed parameters, and the constant name is qualified by the locale base name. An internal abbreviation takes care for convenient input and output, making the parameters implicit and using the original short name. See also src/HOL/ex/Abstract_NAT.thy for an example of deriving polymorphic entities from a monomorphic theory.

Presently, abbreviations are only available 'in' a target locale, but not inherited by general import expressions. Also note that 'abbreviation' may be used as a type-safe replacement for 'syntax' + 'translations' in common applications. The "no_abbrevs" print mode prevents folding of abbreviations in term output.

Concrete syntax is attached to specified constants in internal form, independently of name spaces. The parse tree representation is slightly different -- use 'notation' instead of raw 'syntax', and 'translations' with explicit "CONST" markup to accommodate this.

* Pure/Isar: unified syntax for new-style specification mechanisms (e.g. 'definition', 'abbreviation', or 'inductive' in HOL) admits full type inference and dummy patterns ("_"). For example:

  definition "K x _ = x"

  inductive conj for A B
  where "A ==> B ==> conj A B"

* Pure: command 'print_abbrevs' prints all constant abbreviations of the current context. Print mode "no_abbrevs" prevents inversion of abbreviations on output.

* Isar/locales: improved parameter handling:
- use of locales "var" and "struct" no longer necessary;
- parameter renamings are no longer required to be injective. For example, this allows to define endomorphisms as locale endom = homom mult mult h.

* Isar/locales: changed the way locales with predicates are defined. Instead of accumulating the specification, the imported expression is now an interpretation. INCOMPATIBILITY: different normal form of locale expressions. In particular, in interpretations of locales with predicates, goals representing already interpreted fragments are not removed automatically. Use methods `intro_locales' and `unfold_locales'; see below.

* Isar/locales: new methods `intro_locales' and `unfold_locales' provide backward reasoning on locale predicates. The methods are aware of interpretations and discharge corresponding goals. `intro_locales' is less aggressive than `unfold_locales' and does not unfold predicates to assumptions.

* Isar/locales: the order in which locale fragments are accumulated has changed. This enables to override declarations from fragments due to interpretations -- for example, unwanted simp rules.
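The 'notation' interface mentioned above is used like this (a schematic sketch; the mixfix annotation is hypothetical, applied to the existing constant sup):

  notation sup (infixl "[+]" 65)
  no_notation sup (infixl "[+]" 65)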
* Isar/locales: interpretation in theories and proof contexts has been extended. One may now specify (and prove) equations, which are unfolded in interpreted theorems. This is useful for replacing defined concepts (constants depending on locale parameters) by concepts already existing in the target context. Example:

  interpretation partial_order ["op <= :: [int, int] => bool"]
    where "partial_order.less (op <=) (x::int) y = (x < y)"

Typically, the constant `partial_order.less' is created by a definition specification element in the context of locale partial_order.

* Method "induct": improved internal context management to support local fixes and defines on-the-fly. Thus explicit meta-level connectives !! and ==> are rarely required anymore in inductive goals (using object-logic connectives for this purpose has been long obsolete anyway). Common proof patterns are explained in src/HOL/Induct/Common_Patterns.thy, see also src/HOL/Isar_examples/Puzzle.thy and src/HOL/Lambda for realistic examples.

* Method "induct": improved handling of simultaneous goals. Instead of introducing object-level conjunction, the statement is now split into several conclusions, while the corresponding symbolic cases are nested accordingly. INCOMPATIBILITY, proofs need to be structured explicitly, see src/HOL/Induct/Common_Patterns.thy, for example.

* Method "induct": mutual induction rules are now specified as a list of rules sharing the same induction cases. HOL packages usually provide foo_bar.inducts for mutually defined items foo and bar (e.g. inductive predicates/sets or datatypes). INCOMPATIBILITY, users need to specify mutual induction rules differently, i.e. like this:

  (induct rule: foo_bar.inducts)
  (induct set: foo bar)
  (induct pred: foo bar)
  (induct type: foo bar)

The ML function ProjectRule.projections turns old-style rules into the new format.

* Method "coinduct": dual of induction, see src/HOL/Library/Coinductive_List.thy for various examples.

* Method "cases", "induct", "coinduct": the ``(open)'' option is considered a legacy feature.

* Attribute "symmetric" produces result with standardized schematic variables (index 0). Potential INCOMPATIBILITY.

* Simplifier: by default the simplifier trace only shows top level rewrites now. That is, trace_simp_depth_limit is set to 1 by default. Thus there is less danger of being flooded by the trace. The trace indicates where parts have been suppressed.

* Provers/classical: removed obsolete classical version of elim_format attribute; classical elim/dest rules are now treated uniformly when manipulating the claset.

* Provers/classical: stricter checks to ensure that supplied intro, dest and elim rules are well-formed; dest and elim rules must have at least one premise.

* Provers/classical: attributes dest/elim/intro take an optional weight argument for the rule (just as the Pure versions). Weights are ignored by automated tools, but determine the search order of single rule steps.

* Syntax: input syntax now supports dummy variable binding "%_. b", where the body does not mention the bound variable. Note that dummy patterns implicitly depend on their context of bounds, which makes "{_. _}" match any set comprehension as expected. Potential INCOMPATIBILITY -- parse translations need to cope with syntactic constant "_idtdummy" in the binding position. (See the small example below.)

* Syntax: removed obsolete syntactic constant "_K" and its associated parse translation. INCOMPATIBILITY -- use dummy abstraction instead, for example "A -> B" => "Pi A (%_. B)".
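For instance, the dummy variable binding mentioned above makes the following input parse (a trivial check):

  term "%_. True"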
* Pure: 'class_deps' command visualizes the subclass relation, using the graph browser tool.

* Pure: 'print_theory' now suppresses certain internal declarations by default; use '!' option for full details.


*** HOL ***

* Method "metis" proves goals by applying the Metis general-purpose resolution prover (see also http://gilith.com/software/metis/). Examples are in the directory MetisExamples. (A small example appears at the end of this list of entries.) WARNING: the Isabelle/HOL-Metis integration does not yet work properly with multi-threading.

* Command 'sledgehammer' invokes external automatic theorem provers as background processes. It generates calls to the "metis" method if successful. These can be pasted into the proof. Users do not have to wait for the automatic provers to return. WARNING: does not really work with multi-threading.

* New "auto_quickcheck" feature tests outermost goal statements for potential counter-examples. Controlled by ML references auto_quickcheck (default true) and auto_quickcheck_time_limit (default 5000 milliseconds). Fails silently if the statement is outside the executable fragment, or any other code generator problem occurs.

* New constant "undefined" with axiom "undefined x = undefined".

* Added class "HOL.eq", allowing for code generation with polymorphic equality.

* Some renaming of class constants due to canonical name prefixing in the new 'class' package:

  HOL.abs           ~> HOL.abs_class.abs
  HOL.divide        ~> HOL.divide_class.divide
  0                 ~> HOL.zero_class.zero
  1                 ~> HOL.one_class.one
  op +              ~> HOL.plus_class.plus
  op -              ~> HOL.minus_class.minus
  uminus            ~> HOL.minus_class.uminus
  op *              ~> HOL.times_class.times
  op <              ~> HOL.ord_class.less
  op <=             ~> HOL.ord_class.less_eq
  Nat.power         ~> Power.power_class.power
  Nat.size          ~> Nat.size_class.size
  Numeral.number_of ~> Numeral.number_class.number_of
  FixedPoint.Inf    ~> Lattices.complete_lattice_class.Inf
  FixedPoint.Sup    ~> Lattices.complete_lattice_class.Sup
  Orderings.min     ~> Orderings.ord_class.min
  Orderings.max     ~> Orderings.ord_class.max
  Divides.op div    ~> Divides.div_class.div
  Divides.op mod    ~> Divides.div_class.mod
  Divides.op dvd    ~> Divides.div_class.dvd

INCOMPATIBILITY. Adaptations may be required in the following cases:

a) User-defined constants using any of the names "plus", "minus", "times", "less" or "less_eq". The standard syntax translations for "+", "-" and "*" may go wrong. INCOMPATIBILITY: use more specific names.

b) Variables named "plus", "minus", "times", "less", "less_eq". INCOMPATIBILITY: use more specific names.

c) Permutative equations (e.g. "a + b = b + a"). Since the change of names also changes the order of terms, permutative rewrite rules may get applied in a different order. Experience shows that this is rarely the case (only two adaptations in the whole Isabelle distribution). INCOMPATIBILITY: rewrite proofs.

d) ML code directly referring to constant names. This in general only affects hand-written proof tactics, simprocs and so on. INCOMPATIBILITY: grep your source code and replace names. Consider using @{const_name} antiquotation.

* New class "default" with associated constant "default".

* Function "sgn" is now overloaded and available on int, real, complex (and other numeric types), using class "sgn". Two possible defs of sgn are given as equational assumptions in the classes sgn_if and sgn_div_norm; ordered_idom now also inherits from sgn_if. INCOMPATIBILITY.

* Locale "partial_order" now unified with class "order" (cf. theory Orderings), added parameter "less". INCOMPATIBILITY.
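The "metis" method mentioned above is used like any other proof method, e.g. (a small self-contained example, re-proving an existing fact from itself):

  lemma "EX x. P x ==> P (SOME x. P x)"
    by (metis someI_ex)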
* Renamings in classes "order" and "linorder": facts "refl", "trans" and "cases" to "order_refl", "order_trans" and "linorder_cases", to avoid clashes with HOL "refl" and "trans". INCOMPATIBILITY. * Classes "order" and "linorder": potential INCOMPATIBILITY due to changed order of proof goals in instance proofs. * The transitivity reasoner for partial and linear orders is set up for classes "order" and "linorder". Instances of the reasoner are available in all contexts importing or interpreting the corresponding locales. Method "order" invokes the reasoner separately; the reasoner is also integrated with the Simplifier as a solver. Diagnostic command 'print_orders' shows the available instances of the reasoner in the current context. * Localized monotonicity predicate in theory "Orderings"; integrated lemmas max_of_mono and min_of_mono with this predicate. INCOMPATIBILITY. * Formulation of theorem "dense" changed slightly due to integration with new class dense_linear_order. * Uniform lattice theory development in HOL. constants "meet" and "join" now named "inf" and "sup" constant "Meet" now named "Inf" classes "meet_semilorder" and "join_semilorder" now named "lower_semilattice" and "upper_semilattice" class "lorder" now named "lattice" class "comp_lat" now named "complete_lattice" Instantiation of lattice classes allows explicit definitions for "inf" and "sup" operations (or "Inf" and "Sup" for complete lattices). INCOMPATIBILITY. Theorem renames: meet_left_le ~> inf_le1 meet_right_le ~> inf_le2 join_left_le ~> sup_ge1 join_right_le ~> sup_ge2 meet_join_le ~> inf_sup_ord le_meetI ~> le_infI join_leI ~> le_supI le_meet ~> le_inf_iff le_join ~> ge_sup_conv meet_idempotent ~> inf_idem join_idempotent ~> sup_idem meet_comm ~> inf_commute join_comm ~> sup_commute meet_leI1 ~> le_infI1 meet_leI2 ~> le_infI2 le_joinI1 ~> le_supI1 le_joinI2 ~> le_supI2 meet_assoc ~> inf_assoc join_assoc ~> sup_assoc meet_left_comm ~> inf_left_commute meet_left_idempotent ~> inf_left_idem join_left_comm ~> sup_left_commute join_left_idempotent ~> sup_left_idem meet_aci ~> inf_aci join_aci ~> sup_aci le_def_meet ~> le_iff_inf le_def_join ~> le_iff_sup join_absorp2 ~> sup_absorb2 join_absorp1 ~> sup_absorb1 meet_absorp1 ~> inf_absorb1 meet_absorp2 ~> inf_absorb2 meet_join_absorp ~> inf_sup_absorb join_meet_absorp ~> sup_inf_absorb distrib_join_le ~> distrib_sup_le distrib_meet_le ~> distrib_inf_le add_meet_distrib_left ~> add_inf_distrib_left add_join_distrib_left ~> add_sup_distrib_left is_join_neg_meet ~> is_join_neg_inf is_meet_neg_join ~> is_meet_neg_sup add_meet_distrib_right ~> add_inf_distrib_right add_join_distrib_right ~> add_sup_distrib_right add_meet_join_distribs ~> add_sup_inf_distribs join_eq_neg_meet ~> sup_eq_neg_inf meet_eq_neg_join ~> inf_eq_neg_sup add_eq_meet_join ~> add_eq_inf_sup meet_0_imp_0 ~> inf_0_imp_0 join_0_imp_0 ~> sup_0_imp_0 meet_0_eq_0 ~> inf_0_eq_0 join_0_eq_0 ~> sup_0_eq_0 neg_meet_eq_join ~> neg_inf_eq_sup neg_join_eq_meet ~> neg_sup_eq_inf join_eq_if ~> sup_eq_if mono_meet ~> mono_inf mono_join ~> mono_sup meet_bool_eq ~> inf_bool_eq join_bool_eq ~> sup_bool_eq meet_fun_eq ~> inf_fun_eq join_fun_eq ~> sup_fun_eq meet_set_eq ~> inf_set_eq join_set_eq ~> sup_set_eq meet1_iff ~> inf1_iff meet2_iff ~> inf2_iff meet1I ~> inf1I meet2I ~> inf2I meet1D1 ~> inf1D1 meet2D1 ~> inf2D1 meet1D2 ~> inf1D2 meet2D2 ~> inf2D2 meet1E ~> inf1E meet2E ~> inf2E join1_iff ~> sup1_iff join2_iff ~> sup2_iff join1I1 ~> sup1I1 join2I1 ~> sup2I1 join1I1 ~> sup1I1 join2I2 ~> sup1I2 join1CI ~> sup1CI join2CI 
  join1E                 ~> sup1E
  join2E                 ~> sup2E
  is_meet_Meet           ~> is_meet_Inf
  Meet_bool_def          ~> Inf_bool_def
  Meet_fun_def           ~> Inf_fun_def
  Meet_greatest          ~> Inf_greatest
  Meet_lower             ~> Inf_lower
  Meet_set_def           ~> Inf_set_def
  Sup_def                ~> Sup_Inf
  Sup_bool_eq            ~> Sup_bool_def
  Sup_fun_eq             ~> Sup_fun_def
  Sup_set_eq             ~> Sup_set_def
  listsp_meetI           ~> listsp_infI
  listsp_meet_eq         ~> listsp_inf_eq
  meet_min               ~> inf_min
  join_max               ~> sup_max

* Added syntactic class "size"; overloaded constant "size" now has type "'a::size => nat".

* Internal reorganisation of `size' of datatypes: size theorems "foo.size" are no longer subsumed by "foo.simps" (but are still simplification rules by default!); theorems "prod.size" now named "*.size".

* Class "div" now inherits from class "times" rather than "type". INCOMPATIBILITY.

* HOL/Finite_Set: "name-space" locales Lattice, Distrib_lattice, Linorder etc. have disappeared; operations defined in terms of fold_set now are named Inf_fin, Sup_fin. INCOMPATIBILITY.

* HOL/Nat: neq0_conv no longer declared as iff. INCOMPATIBILITY.

* HOL-Word: New extensive library and type for generic, fixed size machine words, with arithmetic, bit-wise, shifting and rotating operations, reflection into int, nat, and bool lists, automation for linear arithmetic (by automatic reflection into nat or int), including lemmas on overflow and monotonicity. Instantiated to all appropriate arithmetic type classes, supporting automatic simplification of numerals on all operations.

* Library/Boolean_Algebra: locales for abstract boolean algebras.

* Library/Numeral_Type: numbers as types, e.g. TYPE(32).

* Code generator library theories:

- Code_Integer represents HOL integers by big integer literals in target languages.

- Code_Char represents HOL characters by character literals in target languages.

- Code_Char_chr like Code_Char, but also offers treatment of character codes; includes Code_Integer.

- Executable_Set allows to generate code for finite sets using lists.

- Executable_Rat implements rational numbers as triples (sign, numerator, denominator).

- Executable_Real implements a subset of real numbers, namely those representable by rational numbers.

- Efficient_Nat implements natural numbers by integers, which in general will result in higher efficiency; pattern matching with 0/Suc is eliminated; includes Code_Integer.

- Code_Index provides an additional datatype index which is mapped to target-language built-in integers.

- Code_Message provides an additional datatype message_string which is isomorphic to strings; messages are mapped to target-language strings.

* New package for inductive predicates. An n-ary predicate p with m parameters z_1, ..., z_m can now be defined via

  inductive
    p :: "U_1 => ... => U_m => T_1 => ... => T_n => bool"
    for z_1 :: U_1 and ... and z_m :: U_m
  where
    rule_1: "... ==> p z_1 ... z_m t_1_1 ... t_1_n"
  | ...

with full support for type-inference, rather than

  consts s :: "U_1 => ... => U_m => (T_1 * ... * T_n) set"

  abbreviation p :: "U_1 => ... => U_m => T_1 => ... => T_n => bool"
  where "p z_1 ... z_m x_1 ... x_n == (x_1, ..., x_n) : s z_1 ... z_m"

  inductive "s z_1 ... z_m"
  intros
    rule_1: "... ==> (t_1_1, ..., t_1_n) : s z_1 ... z_m"
  ...

For backward compatibility, there is a wrapper allowing inductive sets to be defined with the new package via

  inductive_set
    s :: "U_1 => ... => U_m => (T_1 * ... * T_n) set"
    for z_1 :: U_1 and ... and z_m :: U_m
  where
    rule_1: "... ==> (t_1_1, ..., t_1_n) : s z_1 ... z_m"
  | ...

or

  inductive_set
    s :: "U_1 => ... => U_m => (T_1 * ... * T_n) set"
    and p :: "U_1 => ... => U_m => T_1 => ... => T_n => bool"
* New package for inductive predicates. An n-ary predicate p with m
parameters z_1, ..., z_m can now be defined via

  inductive
    p :: "U_1 => ... => U_m => T_1 => ... => T_n => bool"
    for z_1 :: U_1 and ... and z_m :: U_m
  where
    rule_1: "... ==> p z_1 ... z_m t_1_1 ... t_1_n"
  | ...

with full support for type-inference, rather than

  consts s :: "U_1 => ... => U_m => (T_1 * ... * T_n) set"

  abbreviation p :: "U_1 => ... => U_m => T_1 => ... => T_n => bool"
  where "p z_1 ... z_m x_1 ... x_n == (x_1, ..., x_n) : s z_1 ... z_m"

  inductive "s z_1 ... z_m"
  intros
    rule_1: "... ==> (t_1_1, ..., t_1_n) : s z_1 ... z_m"
    ...

For backward compatibility, there is a wrapper allowing inductive
sets to be defined with the new package via

  inductive_set
    s :: "U_1 => ... => U_m => (T_1 * ... * T_n) set"
    for z_1 :: U_1 and ... and z_m :: U_m
  where
    rule_1: "... ==> (t_1_1, ..., t_1_n) : s z_1 ... z_m"
  | ...

or

  inductive_set
    s :: "U_1 => ... => U_m => (T_1 * ... * T_n) set"
    and p :: "U_1 => ... => U_m => T_1 => ... => T_n => bool"
    for z_1 :: U_1 and ... and z_m :: U_m
  where
    "p z_1 ... z_m x_1 ... x_n == (x_1, ..., x_n) : s z_1 ... z_m"
  | rule_1: "... ==> p z_1 ... z_m t_1_1 ... t_1_n"
  | ...

if the additional syntax "p ..." is required.

Numerous examples can be found in the subdirectories src/HOL/Auth,
src/HOL/Bali, src/HOL/Induct, and src/HOL/MicroJava.

INCOMPATIBILITIES:

- Since declaration and definition of inductive sets or predicates is
  no longer separated, abbreviations involving the newly introduced
  sets or predicates must be specified together with the introduction
  rules after the 'where' keyword (see above), rather than before the
  actual inductive definition.

- The variables in induction and elimination rules are now quantified
  in the order of their occurrence in the introduction rules, rather
  than in alphabetical order. Since this may break some proofs, these
  proofs either have to be repaired, e.g. by reordering the variables
  a_i_1 ... a_i_{k_i} in Isar 'case' statements of the form

    case (rule_i a_i_1 ... a_i_{k_i})

  or the old order of quantification has to be restored by explicitly
  adding meta-level quantifiers in the introduction rules, i.e.

    | rule_i: "!!a_i_1 ... a_i_{k_i}. ... ==> p z_1 ... z_m t_i_1 ... t_i_n"

- The format of the elimination rules is now

    p z_1 ... z_m x_1 ... x_n ==>
      (!!a_1_1 ... a_1_{k_1}. x_1 = t_1_1 ==> ... ==> x_n = t_1_n ==> ... ==> P)
        ==> ... ==> P

  for predicates and

    (x_1, ..., x_n) : s z_1 ... z_m ==>
      (!!a_1_1 ... a_1_{k_1}. x_1 = t_1_1 ==> ... ==> x_n = t_1_n ==> ... ==> P)
        ==> ... ==> P

  for sets rather than

    x : s z_1 ... z_m ==>
      (!!a_1_1 ... a_1_{k_1}. x = (t_1_1, ..., t_1_n) ==> ... ==> P)
        ==> ... ==> P

  This may require terms in goals to be expanded to n-tuples (e.g.
  using case_tac or simplification with the split_paired_all rule)
  before the above elimination rule is applicable.

- The elimination or case analysis rules for (mutually) inductive
  sets or predicates are now called "p_1.cases" ... "p_k.cases". The
  list of rules "p_1_..._p_k.elims" is no longer available.
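For illustration, a minimal instance of the new syntax (a sketch with
hypothetical predicate and rule names, not taken from the
distribution):

  inductive even :: "nat => bool"
  where
    zero: "even 0"
  | step: "even n ==> even (Suc (Suc n))"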
* New package "function"/"fun" for general recursive functions,
supporting mutual and nested recursion, definitions in local
contexts, more general pattern matching and partiality. See
HOL/ex/Fundefs.thy for small examples, and the separate tutorial on
the function package. The old recdef "package" is still available as
before, but users are encouraged to use the new package.

* Method "lexicographic_order" automatically synthesizes termination
relations as lexicographic combinations of size measures.
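For illustration, a small definition with the new "fun" command,
whose termination relation is synthesized automatically (a sketch;
the constant name is hypothetical):

  fun fib :: "nat => nat"
  where
    "fib 0 = 0"
  | "fib (Suc 0) = 1"
  | "fib (Suc (Suc n)) = fib n + fib (Suc n)"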
* Case-expressions allow arbitrary constructor-patterns (including
"_") and take their order into account, like in functional
programming. Internally, this is translated into nested
case-expressions; missing cases are added and mapped to the
predefined constant "undefined". In complicated cases printing may no
longer show the original input but the internal form.
Lambda-abstractions allow the same form of pattern matching:
"% pat1 => e1 | ..." is an abbreviation for
"%x. case x of pat1 => e1 | ..." where x is a new variable.

* IntDef: The constant "int :: nat => int" has been removed; now
"int" is an abbreviation for "of_nat :: nat => int". The
simplification rules for "of_nat" have been changed to work like
"int" did previously. Potential INCOMPATIBILITY:

  - "of_nat (Suc m)" simplifies to "1 + of_nat m" instead of
    "of_nat m + 1"
  - of_nat_diff and of_nat_mult are no longer default simp rules

* Method "algebra" solves polynomial equations over (semi)rings using
Groebner bases. The (semi)ring structure is defined by locales and
the tool setup depends on that generic context. Installing the method
for a specific type involves instantiating the locale and possibly
adding declarations for computation on the coefficients. The method
is already instantiated for natural numbers and for the axiomatic
class of idoms with numerals. See also the paper by Chaieb and Wenzel
at CALCULEMUS 2007 for the general principles underlying this
architecture of context-aware proof-tools.

* Method "ferrack" implements quantifier elimination over
special-purpose dense linear orders using locales (analogous to
"algebra"). The method is already installed for class
{ordered_field,recpower,number_ring} which subsumes real, hyperreal,
rat, etc.

* Former constant "List.op @" now named "List.append". Use ML
antiquotations @{const_name List.append} or @{term " ... @ ... "} to
circumvent possible incompatibilities when working on ML level.

* primrec: missing cases mapped to "undefined" instead of
"arbitrary".

* New function listsum :: 'a list => 'a for arbitrary monoids.
Special syntax: "SUM x <- xs. f x" (and LaTeX variants).

* New syntax for Haskell-like list comprehension (input only), e.g.
[(x,y). x <- xs, y <- ys, x ~= y], see also src/HOL/List.thy.

* The special syntax for function "filter" has changed from
[x : xs. P] to [x <- xs. P] to avoid an ambiguity caused by list
comprehension syntax, and for uniformity. INCOMPATIBILITY.

* [a..b] is now defined for arbitrary linear orders. It used to be
defined on nat only, as an abbreviation for [a..<Suc b].
INCOMPATIBILITY.

* New syntax "A <-> B" for equality on bool (with priority 25 like
-->); output depends on the "iff" print_mode, the default is "A = B"
(with priority 50).

* Relations less (<) and less_eq (<=) are also available on type
bool. Modified syntax to disallow nesting without explicit
parentheses, e.g. "(x < y) < z" or "x < (y < z)", but NOT
"x < y < z". Potential INCOMPATIBILITY.

* "LEAST x:A. P" expands to "LEAST x. x:A & P" (input only).

* Relation composition operator "op O" now has precedence 75 and
binds stronger than union and intersection. INCOMPATIBILITY.

* The old set interval syntax "{m..n(}" (and relatives) has been
removed. Use "{m..<n}" (and relatives) instead.

* Improved methods "sat" and "satx": clauses of the form
"... ==> False" and equivalences (i.e. "=" on type bool) are handled,
variable names of the form "lit_" are no longer reserved, significant
speedup.

* Methods "sat" and "satx" can now replay MiniSat proof traces.
zChaff is still supported as well.

* 'inductive' and 'datatype': provide projections of mutual rules,
bundled as foo_bar.inducts.

* Library: moved theories Parity, GCD, Binomial, Infinite_Set to
Library.

* Library: moved theory Accessible_Part to main HOL.

* Library: added theory Coinductive_List of potentially infinite
lists as greatest fixed-point.

* Library: added theory AssocList which implements (finite) maps as
association lists.

* Method "evaluation" solves goals (i.e. a boolean expression)
efficiently by compiling it to ML. The goal is "proved" (via an
oracle) if it evaluates to True.

* Linear arithmetic now splits certain operators (e.g. min, max, abs)
also when invoked by the simplifier. This results in the Simplifier
being more powerful on arithmetic goals. INCOMPATIBILITY.
Configuration option fast_arith_split_limit=0 recovers the old
behavior.

* Support for hex (0x20) and binary (0b1001) numerals.

* New method: reify eqs (t), where eqs are equations for an
interpretation I :: 'a list => 'b => 'c and t :: 'c is an optional
parameter, computes a term s :: 'b and a list xs :: 'a list and
proves the theorem I xs s = t. This is also known as reification or
quoting. The resulting theorem is applied to the subgoal to
substitute t with I xs s. If t is omitted, the subgoal itself is
reified.

* New method: reflection corr_thm eqs (t). The parameters eqs and (t)
are as explained above. corr_thm is a theorem for
I vs (f t) = I vs t, where f is supposed to be a computable function
(in the sense of code generation). The method uses reify to compute s
and xs as above, then applies corr_thm and uses normalization by
evaluation to "prove" f s = r and finally gets the theorem t = r,
which is again applied to the subgoal. An example is available in
src/HOL/ex/ReflectionEx.thy.

* Reflection: Automatic reification now handles binding; an example
is available in src/HOL/ex/ReflectionEx.thy.

* HOL-Statespace: ``State Spaces: The Locale Way'' introduces a
command 'statespace' that is similar to 'record', but introduces an
abstract specification based on the locale infrastructure instead of
HOL types. This leads to extra flexibility in composing state spaces,
in particular multiple inheritance and renaming of components.


*** HOL-Complex ***

* Hyperreal: Functions root and sqrt are now defined on negative real
inputs so that root n (- x) = - root n x and sqrt (- x) = - sqrt x.
Nonnegativity side conditions have been removed from many lemmas, so
that more subgoals may now be solved by simplification; potential
INCOMPATIBILITY.

* Real: new type classes formalize real normed vector spaces and
algebras, using new overloaded constants scaleR :: real => 'a => 'a
and norm :: 'a => real.

* Real: constant of_real :: real => 'a::real_algebra_1 injects from
reals into other types. The overloaded constant Reals :: 'a set is
now defined as range of_real; potential INCOMPATIBILITY.

* Real: proper support for ML code generation, including
'quickcheck'. Reals are implemented as arbitrary precision rationals.

* Hyperreal: Several constants that previously worked only for the
reals have been generalized, so they now work over arbitrary vector
spaces. Type annotations may need to be added in some cases;
potential INCOMPATIBILITY.

  Infinitesimal  :: ('a::real_normed_vector) star set
  HFinite        :: ('a::real_normed_vector) star set
  HInfinite      :: ('a::real_normed_vector) star set
  approx         :: ('a::real_normed_vector) star => 'a star => bool
  monad          :: ('a::real_normed_vector) star => 'a star set
  galaxy         :: ('a::real_normed_vector) star => 'a star set
  (NS)LIMSEQ     :: [nat => 'a::real_normed_vector, 'a] => bool
  (NS)convergent :: (nat => 'a::real_normed_vector) => bool
  (NS)Bseq       :: (nat => 'a::real_normed_vector) => bool
  (NS)Cauchy     :: (nat => 'a::real_normed_vector) => bool
  (NS)LIM        :: ['a::real_normed_vector => 'b::real_normed_vector, 'a, 'b] => bool
  is(NS)Cont     :: ['a::real_normed_vector => 'b::real_normed_vector, 'a] => bool
  deriv          :: ['a::real_normed_field => 'a, 'a, 'a] => bool
  sgn            :: 'a::real_normed_vector => 'a
  exp            :: 'a::{recpower,real_normed_field,banach} => 'a

* Complex: Some complex-specific constants are now abbreviations for
overloaded ones: complex_of_real = of_real, cmod = norm,
hcmod = hnorm. Other constants have been entirely removed in favor of
the polymorphic versions (INCOMPATIBILITY):

  approx        <-- capprox
  HFinite       <-- CFinite
  HInfinite     <-- CInfinite
  Infinitesimal <-- CInfinitesimal
  monad         <-- cmonad
  galaxy        <-- cgalaxy
  (NS)LIM       <-- (NS)CLIM, (NS)CRLIM
  is(NS)Cont    <-- is(NS)Contc, is(NS)contCR
  (ns)deriv     <-- (ns)cderiv


*** HOL-Algebra ***

* Formalisation of ideals and the quotient construction over rings.
* Order and lattice theory no longer based on records.
INCOMPATIBILITY.

* Renamed lemmas least_carrier -> least_closed and greatest_carrier
-> greatest_closed. INCOMPATIBILITY.

* Method algebra is now set up via an attribute. For examples see
Ring.thy. INCOMPATIBILITY: the method is now weaker on combinations
of algebraic structures.

* Renamed theory CRing to Ring.


*** HOL-Nominal ***

* Substantial, yet incomplete support for nominal datatypes (binding
structures) based on HOL-Nominal logic. See src/HOL/Nominal and
src/HOL/Nominal/Examples. Prospective users should consult
http://isabelle.in.tum.de/nominal/


*** ML ***

* ML basics: just one true type int, which coincides with IntInf.int
(even on SML/NJ).

* ML within Isar: antiquotations allow to embed statically-checked
formal entities in the source, referring to the context available at
compile-time. For example:

  ML {* @{sort "{zero,one}"} *}
  ML {* @{typ "'a => 'b"} *}
  ML {* @{term "%x. x"} *}
  ML {* @{prop "x == y"} *}
  ML {* @{ctyp "'a => 'b"} *}
  ML {* @{cterm "%x. x"} *}
  ML {* @{cprop "x == y"} *}
  ML {* @{thm asm_rl} *}
  ML {* @{thms asm_rl} *}
  ML {* @{type_name c} *}
  ML {* @{type_syntax c} *}
  ML {* @{const_name c} *}
  ML {* @{const_syntax c} *}
  ML {* @{context} *}
  ML {* @{theory} *}
  ML {* @{theory Pure} *}
  ML {* @{theory_ref} *}
  ML {* @{theory_ref Pure} *}
  ML {* @{simpset} *}
  ML {* @{claset} *}
  ML {* @{clasimpset} *}

The same works for sources being ``used'' within an Isar context.

* ML in Isar: improved error reporting; extra verbosity with
ML_Context.trace enabled.

* Pure/General/table.ML: the join operations now work via exceptions
DUP/SAME instead of type option. This is simpler in simple cases, and
admits slightly more efficient complex applications.

* Pure: 'advanced' translation functions (parse_translation etc.) now
use Context.generic instead of just theory.

* Pure: datatype Context.generic joins theory/Proof.context and
provides some facilities for code that works in either kind of
context, notably GenericDataFun for uniform theory and proof data.

* Pure: simplified internal attribute type, which is now always
Context.generic * thm -> Context.generic * thm. Global (theory) vs.
local (Proof.context) attributes have been discontinued, while
minimizing code duplication. Thm.rule_attribute and
Thm.declaration_attribute build canonical attributes; see also
structure Context for further operations on Context.generic, notably
GenericDataFun. INCOMPATIBILITY, need to adapt attribute type
declarations and definitions.

* Context data interfaces (Theory/Proof/GenericDataFun): removed
name/print, uninitialized data defaults to ad-hoc copy of empty
value, init only required for impure data. INCOMPATIBILITY: empty
really needs to be empty (no dependencies on theory content!)

* Pure/kernel: consts certification ignores sort constraints given in
signature declarations. (This information is not relevant to the
logic, but only for type inference.) SIGNIFICANT INTERNAL CHANGE,
potential INCOMPATIBILITY.

* Pure: axiomatic type classes are now purely definitional, with
explicit proofs of class axioms and super class relations performed
internally. See Pure/axclass.ML for the main internal interfaces --
notably AxClass.define_class supersedes AxClass.add_axclass, and
AxClass.axiomatize_class/classrel/arity supersede
Sign.add_classes/classrel/arities.

* Pure/Isar: Args/Attrib parsers operate on Context.generic --
global/local versions on theory vs. Proof.context have been
discontinued; Attrib.syntax and Method.syntax have been adapted
accordingly.
INCOMPATIBILITY, need to adapt parser expressions for attributes, methods, etc. * Pure: several functions of signature "... -> theory -> theory * ..." have been reoriented to "... -> theory -> ... * theory" in order to allow natural usage in combination with the ||>, ||>>, |-> and fold_map combinators. * Pure: official theorem names (closed derivations) and additional comments (tags) are now strictly separate. Name hints -- which are maintained as tags -- may be attached any time without affecting the derivation. * Pure: primitive rule lift_rule now takes goal cterm instead of an actual goal state (thm). Use Thm.lift_rule (Thm.cprem_of st i) to achieve the old behaviour. * Pure: the "Goal" constant is now called "prop", supporting a slightly more general idea of ``protecting'' meta-level rule statements. * Pure: Logic.(un)varify only works in a global context, which is now enforced instead of silently assumed. INCOMPATIBILITY, may use Logic.legacy_(un)varify as temporary workaround. * Pure: structure Name provides scalable operations for generating internal variable names, notably Name.variants etc. This replaces some popular functions from term.ML: Term.variant -> Name.variant Term.variantlist -> Name.variant_list Term.invent_names -> Name.invent_list Note that low-level renaming rarely occurs in new code -- operations from structure Variable are used instead (see below). * Pure: structure Variable provides fundamental operations for proper treatment of fixed/schematic variables in a context. For example, Variable.import introduces fixes for schematics of given facts and Variable.export reverses the effect (up to renaming) -- this replaces various freeze_thaw operations. * Pure: structure Goal provides simple interfaces for init/conclude/finish and tactical prove operations (replacing former Tactic.prove). Goal.prove is the canonical way to prove results within a given context; Goal.prove_global is a degraded version for theory level goals, including a global Drule.standard. Note that OldGoals.prove_goalw_cterm has long been obsolete, since it is ill-behaved in a local proof context (e.g. with local fixes/assumes or in a locale context). * Pure/Syntax: generic interfaces for parsing (Syntax.parse_term etc.) and type checking (Syntax.check_term etc.), with common combinations (Syntax.read_term etc.). These supersede former Sign.read_term etc. which are considered legacy and await removal. * Pure/Syntax: generic interfaces for type unchecking (Syntax.uncheck_terms etc.) and unparsing (Syntax.unparse_term etc.), with common combinations (Syntax.pretty_term, Syntax.string_of_term etc.). Former Sign.pretty_term, Sign.string_of_term etc. are still available for convenience, but refer to the very same operations using a mere theory instead of a full context. * Isar: simplified treatment of user-level errors, using exception ERROR of string uniformly. Function error now merely raises ERROR, without any side effect on output channels. The Isar toplevel takes care of proper display of ERROR exceptions. ML code may use plain handle/can/try; cat_error may be used to concatenate errors like this: ... handle ERROR msg => cat_error msg "..." Toplevel ML code (run directly or through the Isar toplevel) may be embedded into the Isar toplevel with exception display/debug like this: Isar.toplevel (fn () => ...) 
INCOMPATIBILITY, removed special transform_error facilities, removed
obsolete variants of user-level exceptions (ERROR_MESSAGE,
Context.PROOF, ProofContext.CONTEXT, Proof.STATE, ProofHistory.FAIL)
-- use plain ERROR instead.

* Isar: theory setup now has type (theory -> theory), instead of a
list. INCOMPATIBILITY, may use #> to compose setup functions.

* Isar: ML toplevel pretty printer for type Proof.context, subject to
ProofContext.debug/verbose flags.

* Isar: Toplevel.theory_to_proof admits transactions that modify the
theory before entering a proof state. Transactions now always see a
quasi-functional intermediate checkpoint, both in interactive and
batch mode.

* Isar: simplified interfaces for outer syntax. Renamed
OuterSyntax.add_keywords to OuterSyntax.keywords. Removed
OuterSyntax.add_parsers -- this functionality is now included in
OuterSyntax.command etc. INCOMPATIBILITY.

* Simplifier: the simpset of a running simplification process now
contains a proof context (cf. Simplifier.the_context), which is the
very context that the initial simpset has been retrieved from (by
simpset_of/local_simpset_of). Consequently, all plug-in components
(solver, looper etc.) may depend on arbitrary proof data.

* Simplifier.inherit_context inherits the proof context (plus the
local bounds) of the current simplification process; any simproc etc.
that calls the Simplifier recursively should do this! Removed former
Simplifier.inherit_bounds, which is already included here --
INCOMPATIBILITY. Tools based on low-level rewriting may even have to
specify an explicit context using Simplifier.context/theory_context.

* Simplifier/Classical Reasoner: more abstract interfaces
change_simpset/claset for modifying the simpset/claset reference of a
theory; raw versions simpset/claset_ref etc. have been discontinued
-- INCOMPATIBILITY.

* Provers: more generic wrt. syntax of object-logics, avoid hardwired
"Trueprop" etc.


*** System ***

* settings: the default heap location within ISABELLE_HOME_USER now
includes ISABELLE_IDENTIFIER. This simplifies use of multiple
Isabelle installations.

* isabelle-process: option -S (secure mode) disables some critical
operations, notably runtime compilation and evaluation of ML source
code.

* Basic Isabelle mode for jEdit, see Isabelle/lib/jedit/.

* Support for parallel execution, using native multicore support of
Poly/ML 5.1. The theory loader exploits parallelism when processing
independent theories, according to the given theory header
specifications. The maximum number of worker threads is specified via
usedir option -M or the "max-threads" setting in Proof General. A
speedup factor of 1.5--3.5 can be expected on a 4-core machine, and
up to 6 on an 8-core machine. User code needs to observe certain
guidelines for thread-safe programming, see appendix A in the Isar
Implementation manual.



New in Isabelle2005 (October 2005)
----------------------------------

*** General ***

* Theory headers: the new header syntax for Isar theories is

  theory <name>
  imports <theory1> ... <theoryN>
  uses <file1> ... <fileM>
  begin

where the 'uses' part is optional. The previous syntax

  theory <name> = <theory1> + ... + <theoryN>:

will disappear in the next release. Use isatool fixheaders to convert
existing theory files. Note that there is no change in ancient
non-Isar theories now, but these will disappear soon.
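For example, a minimal header in the new format (theory and file
names are hypothetical):

  theory Foo
  imports Bar Baz
  uses "tools.ML"
  begin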
* Theory loader: parent theories can now also be referred to via
relative and absolute paths.

* Command 'find_theorems' searches for a list of criteria instead of
a list of constants. Known criteria are: intro, elim, dest,
name:string, simp:term, and any term. Criteria can be preceded by '-'
to select theorems that do not match. Intro, elim, dest select
theorems that match the current goal, name:s selects theorems whose
fully qualified name contains s, and simp:term selects all
simplification rules whose lhs match term. Any other term is
interpreted as pattern and selects all theorems matching the pattern.
Available in ProofGeneral under 'ProofGeneral -> Find Theorems' or
C-c C-f. Example:

  C-c C-f (100) "(_::nat) + _ + _" intro -name: "HOL."

prints the last 100 theorems matching the pattern "(_::nat) + _ + _",
matching the current goal as introduction rule and not having "HOL."
in their name (i.e. not being defined in theory HOL).

* Command 'thms_containing' has been discontinued in favour of
'find_theorems'; INCOMPATIBILITY.

* Communication with Proof General is now 8bit clean, which means
that Unicode text in UTF-8 encoding may be used within theory texts
(both formal and informal parts). Cf. option -U of the Isabelle Proof
General interface. Here are some simple examples (cf. src/HOL/ex):

  http://isabelle.in.tum.de/library/HOL/ex/Hebrew.html
  http://isabelle.in.tum.de/library/HOL/ex/Chinese.html

* Improved efficiency of the Simplifier and, to a lesser degree, the
Classical Reasoner. Typical big applications run around 2 times
faster.


*** Document preparation ***

* Commands 'display_drafts' and 'print_drafts' perform simple output
of raw sources. Only those symbols that do not require additional
LaTeX packages (depending on comments in isabellesym.sty) are
displayed properly, everything else is left verbatim. isatool display
and isatool print are used as front ends (these are subject to the
DVI/PDF_VIEWER and PRINT_COMMAND settings, respectively).

* Command tags control specific markup of certain regions of text,
notably folding and hiding. Predefined tags include "theory" (for
theory begin and end), "proof" for proof commands, and "ML" for
commands involving ML code; the additional tags "visible" and
"invisible" are unused by default. Users may give explicit tag
specifications in the text, e.g. ''by %invisible (auto)''. The
interpretation of tags is determined by the LaTeX job during document
preparation: see option -V of isatool usedir, or options -n and -t of
isatool document, or even the LaTeX macros \isakeeptag, \isafoldtag,
\isadroptag.

Several document versions may be produced at the same time via
isatool usedir (the generated index.html will link all of them).
Typical specifications include ''-V document=theory,proof,ML'' to
present theory/proof/ML parts faithfully, ''-V outline=/proof,/ML''
to fold proof and ML commands, and ''-V mutilated=-theory,-proof,-ML''
to omit these parts without any formal replacement text. The Isabelle
site default settings produce ''document'' and ''outline'' versions
as specified above.

* Several new antiquotations:

  @{term_type term} prints a term with its type annotated;

  @{typeof term} prints the type of a term;

  @{const const} is the same as @{term const}, but checks that the
  argument is a known logical constant;

  @{term_style style term} and @{thm_style style thm} print a term or
  theorem applying a "style" to it;

  @{ML text}

Predefined styles are 'lhs' and 'rhs' printing the lhs/rhs of
definitions, equations, inequations etc., 'concl' printing only the
conclusion of a meta-logical statement theorem, and 'prem1' ..
'prem19' to print the specified premise. TermStyle.add_style provides
an ML interface for introducing further styles. See also the "LaTeX
Sugar" document for practical applications.
The ML antiquotation prints type-checked ML expressions verbatim. * Markup commands 'chapter', 'section', 'subsection', 'subsubsection', and 'text' support optional locale specification '(in loc)', which specifies the default context for interpreting antiquotations. For example: 'text (in lattice) {* @{thm inf_assoc}*}'. * Option 'locale=NAME' of antiquotations specifies an alternative context interpreting the subsequent argument. For example: @{thm [locale=lattice] inf_assoc}. * Proper output of proof terms (@{prf ...} and @{full_prf ...}) within a proof context. * Proper output of antiquotations for theory commands involving a proof context (such as 'locale' or 'theorem (in loc) ...'). * Delimiters of outer tokens (string etc.) now produce separate LaTeX macros (\isachardoublequoteopen, isachardoublequoteclose etc.). * isatool usedir: new option -C (default true) controls whether option -D should include a copy of the original document directory; -C false prevents unwanted effects such as copying of administrative CVS data. *** Pure *** * Considerably improved version of 'constdefs' command. Now performs automatic type-inference of declared constants; additional support for local structure declarations (cf. locales and HOL records), see also isar-ref manual. Potential INCOMPATIBILITY: need to observe strictly sequential dependencies of definitions within a single 'constdefs' section; moreover, the declared name needs to be an identifier. If all fails, consider to fall back on 'consts' and 'defs' separately. * Improved indexed syntax and implicit structures. First of all, indexed syntax provides a notational device for subscripted application, using the new syntax \<^bsub>term\<^esub> for arbitrary expressions. Secondly, in a local context with structure declarations, number indexes \<^sub>n or the empty index (default number 1) refer to a certain fixed variable implicitly; option show_structs controls printing of implicit structures. Typical applications of these concepts involve record types and locales. * New command 'no_syntax' removes grammar declarations (and translations) resulting from the given syntax specification, which is interpreted in the same manner as for the 'syntax' command. * 'Advanced' translation functions (parse_translation etc.) may depend on the signature of the theory context being presently used for parsing/printing, see also isar-ref manual. * Improved 'oracle' command provides a type-safe interface to turn an ML expression of type theory -> T -> term into a primitive rule of type theory -> T -> thm (i.e. the functionality of Thm.invoke_oracle is already included here); see also FOL/ex/IffExample.thy; INCOMPATIBILITY. * axclass: name space prefix for class "c" is now "c_class" (was "c" before); "cI" is no longer bound, use "c.intro" instead. INCOMPATIBILITY. This change avoids clashes of fact bindings for axclasses vs. locales. * Improved internal renaming of symbolic identifiers -- attach primes instead of base 26 numbers. * New flag show_question_marks controls printing of leading question marks in schematic variable names. * In schematic variable names, *any* symbol following \<^isub> or \<^isup> is now treated as part of the base name. For example, the following works without printing of awkward ".0" indexes: lemma "x\<^isub>1 = x\<^isub>2 ==> x\<^isub>2 = x\<^isub>1" by simp * Inner syntax includes (*(*nested*) comments*). * Pretty printer now supports unbreakable blocks, specified in mixfix annotations as "(00...)". 
* Clear separation of logical types and nonterminals, where the
latter may only occur in 'syntax' specifications or type
abbreviations. Before that distinction was only partially implemented
via type class "logic" vs. "{}". Potential INCOMPATIBILITY in rare
cases of improper use of 'types'/'consts' instead of
'nonterminals'/'syntax'. Some very exotic syntax specifications may
require further adaptation (e.g. Cube/Cube.thy).

* Removed obsolete type class "logic", use the top sort {} instead.
Note that non-logical types should be declared as 'nonterminals'
rather than 'types'. INCOMPATIBILITY for new object-logic
specifications.

* Attributes 'induct' and 'cases': type or set names may now be
locally fixed variables as well.

* Simplifier: can now control the depth to which conditional
rewriting is traced via the PG menu Isabelle -> Settings -> Trace
Simp Depth Limit.

* Simplifier: simplification procedures may now take the current
simpset into account (cf. Simplifier.simproc(_i) / mk_simproc
interface), which is very useful for calling the Simplifier
recursively. Minor INCOMPATIBILITY: the 'prems' argument of simprocs
is gone -- use prems_of_ss on the simpset instead. Moreover, the
low-level mk_simproc no longer applies Logic.varify internally, to
allow for use in a context of fixed variables.

* thin_tac now works even if the assumption being deleted contains !!
or ==>. More generally, erule now works even if the major premise of
the elimination rule contains !! or ==>.

* Method 'rules' has been renamed to 'iprover'. INCOMPATIBILITY.

* Reorganized bootstrapping of the Pure theories; CPure is now
derived from Pure, which contains all common declarations already.
Both theories are defined via plain Isabelle/Isar .thy files.
INCOMPATIBILITY: elements of CPure (such as the CPure.intro /
CPure.elim / CPure.dest attributes) now appear in the Pure name
space; use isatool fixcpure to adapt your theory and ML sources.

* New syntax 'name(i-j, i-, i, ...)' for referring to specific
selections of theorems in named facts via index ranges.

* 'print_theorems': in theory mode, really print the difference wrt.
the last state (works for interactive theory development only), in
proof mode print all local facts (cf. 'print_facts').

* 'hide': option '(open)' hides only base names.

* More efficient treatment of intermediate checkpoints in interactive
theory development.

* Code generator is now invoked via code_module (incremental code
generation) and code_library (modular code generation, ML structures
for each theory). INCOMPATIBILITY: new keywords 'file' and 'contains'
must be quoted when used as identifiers.

* New 'value' command for reading, evaluating and printing terms
using the code generator. INCOMPATIBILITY: command keyword 'value'
must be quoted when used as identifier.


*** Locales ***

* New commands for the interpretation of locale expressions in
theories (1), locales (2) and proof contexts (3). These generate
proof obligations from the expression specification. After the
obligations have been discharged, theorems of the expression are
added to the theory, target locale or proof context. The synopsis of
the commands is as follows:

  (1) interpretation expr inst
  (2) interpretation target < expr
  (3) interpret expr inst

Interpretation in theories and proof contexts requires a parameter
instantiation of terms from the current context. This is applied to
specifications and theorems of the interpreted expression.
Interpretation in locales only permits parameter renaming through the
locale expression. Interpretation is smart in that interpretations
that are active already do not occur in proof obligations, neither
are instantiated theorems stored in duplicate. Use 'print_interps' to
inspect active interpretations of a particular locale. For details,
see the Isar Reference manual. Examples can be found in
HOL/Finite_Set.thy and HOL/Algebra/UnivPoly.thy.

INCOMPATIBILITY: former 'instantiate' has been withdrawn, use
'interpret' instead.
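For illustration, a schematic instance of form (1) (the locale name
and the instantiation term are hypothetical; the command generates
the corresponding locale predicate as proof obligation):

  interpretation nat: partial_order ["op <= :: nat => nat => bool"]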
* New context element 'constrains' for adding type constraints to
parameters.

* Context expressions: renaming of parameters with syntax
redeclaration.

* Locale declaration: 'includes' disallowed.

* Proper static binding of attribute syntax -- i.e. types / terms /
facts mentioned as arguments are always those of the locale
definition context, independently of the context of later
invocations. Moreover, locale operations (renaming and type / term
instantiation) are applied to attribute arguments as expected.

INCOMPATIBILITY of the ML interface: always pass Attrib.src instead
of actual attributes; rare situations may require Attrib.attribute to
embed those attributes into Attrib.src that lack concrete syntax.
Attribute implementations need to cooperate properly with the static
binding mechanism. Basic parsers Args.XXX_typ/term/prop and
Attrib.XXX_thm etc. already do the right thing without further
intervention. Only unusual applications -- such as "where" or "of"
(cf. src/Pure/Isar/attrib.ML), which process arguments depending both
on the context and the facts involved -- may have to assign parsed
values to argument tokens explicitly.

* Changed parameter management in theorem generation for long goal
statements with 'includes'. INCOMPATIBILITY: produces a different
theorem statement in rare situations.

* Locale inspection command 'print_locale' omits notes elements. Use
'print_locale!' to have them included in the output.


*** Provers ***

* Provers/hypsubst.ML: improved version of the subst method, for
single-step rewriting: it now works in bound variable contexts. New
is 'subst (asm)', for rewriting an assumption. INCOMPATIBILITY: may
rewrite a different subterm than the original subst method, which is
still available as 'simplesubst'.

* Provers/quasi.ML: new transitivity reasoners for transitivity only
and quasi orders.

* Provers/trancl.ML: new transitivity reasoner for transitive and
reflexive-transitive closure of relations.

* Provers/blast.ML: new reference depth_limit to make blast's depth
limit (previously hard-coded with a value of 20) user-definable.

* Provers/simplifier.ML has been moved to Pure, where
Simplifier.setup is performed already. Object-logics merely need to
finish their initial simpset configuration as before.
INCOMPATIBILITY.


*** HOL ***

* Symbolic syntax of Hilbert Choice Operator is now as follows:

  syntax (epsilon)
    "_Eps" :: "[pttrn, bool] => 'a"    ("(3\<some>_./ _)" [0, 10] 10)

The symbol \<some> is displayed as the alternative epsilon of LaTeX
and x-symbol; use option '-m epsilon' to get it actually printed.
Moreover, the mathematically important symbolic identifier
\<epsilon> becomes available as variable, constant etc.
INCOMPATIBILITY.

* "x > y" abbreviates "y < x" and "x >= y" abbreviates "y <= x".
Similarly for all quantifiers: "ALL x > y" etc. The x-symbol for >=
is \<ge>. New transitivity rules have been added to HOL/Orderings.thy
to support corresponding Isar calculations.

* "{x:A. P}" abbreviates "{x. x:A & P}", and similarly for "\<in>"
instead of ":".

* theory SetInterval: changed the syntax for open intervals:

  Old        New
  {..n(}     {..<n}
  {)n..}     {n<..}
  {m..n(}    {m..<n}
  {)m..n}    {m<..n}
  {)m..n(}   {m<..<n}

The old syntax is still supported but will disappear in the next
release. For conversion use the following Emacs search and replace
patterns (these are not perfect but work quite well):

  {)\([^\.]*\)\.\.  ->  {\1<\.\.}
  \.\.\([^(}]*\)(}  ->  \.\.<\1}

* Theory Commutative_Ring (in Library): method comm_ring for proving
equalities in commutative rings; method 'algebra' provides a generic
interface.

* Theory Finite_Set: changed the syntax for 'setsum', summation over
finite sets: "setsum (%x. e) A", which used to be "\<Sum>x:A. e", is
now either "SUM x:A. e" or "\<Sum>x \<in> A. e". The bound variable
can be a tuple pattern.

Some new syntax forms are available:

  "\<Sum>x | P. e"      for  "setsum (%x. e) {x. P}"
  "\<Sum>x = a..b. e"   for  "setsum (%x. e) {a..b}"
  "\<Sum>x = a..<b. e"  for  "setsum (%x. e) {a..<b}"
  "\<Sum>x < k. e"      for  "setsum (%x. e) {..<k}"

The latter form "\<Sum>x < k. e" used to be based on a separate
function "Summation", which has been discontinued.
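For illustration, the new forms as plain terms (a sketch; A, f, n are
arbitrary free variables, and the diagnostic command 'term' merely
checks and prints them):

  term "\<Sum>x \<in> A. f x"
  term "\<Sum>x | x < (5::nat). x"
  term "\<Sum>x = (0::nat)..n. f x"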
* theory Finite_Set: in structured induction proofs, the insert case
is now 'case (insert x F)' instead of the old counterintuitive
'case (insert F x)'.

* The 'refute' command has been extended to support a much larger
fragment of HOL, including axiomatic type classes, constdefs and
typedefs, inductive datatypes and recursion.

* New tactics 'sat' and 'satx' to prove propositional tautologies.
Requires zChaff with proof generation to be installed. See
HOL/ex/SAT_Examples.thy for examples.

* Datatype induction via method 'induct' now preserves the name of
the induction variable. For example, when proving P(xs::'a list) by
induction on xs, the induction step is now P(xs) ==> P(a#xs) rather
than P(list) ==> P(a#list) as previously. Potential INCOMPATIBILITY
in unstructured proof scripts.

* Reworked implementation of records. Improved scalability for
records with many fields, avoiding performance problems for type
inference. Records are no longer composed of nested field types, but
of nested extension types. Therefore the record type only grows
linearly in the number of extensions and not in the number of fields.
The top-level (user's) view on records is preserved. Potential
INCOMPATIBILITY only in strange cases, where the theory depends on
the old record representation. The type generated for a record is
called <record_name>_ext_type.

Flag record_quick_and_dirty_sensitive can be enabled to skip the
proofs triggered by a record definition or a simproc (if
quick_and_dirty is enabled). Definitions of large records can take
quite long.

New simproc record_upd_simproc for simplification of multiple record
updates enabled by default. Moreover, trivial updates are also
removed: r(|x := x r|) = r. INCOMPATIBILITY: old proofs break
occasionally, since simplification is more powerful by default.

* typedef: proper support for polymorphic sets, which contain extra
type-variables in the term.

* Simplifier: automatically reasons about transitivity chains
involving "trancl" (r^+) and "rtrancl" (r^*) by setting up tactics
provided by Provers/trancl.ML as additional solvers. INCOMPATIBILITY:
old proofs break occasionally as simplification may now solve more
goals than previously.

* Simplifier: converts x <= y into x = y if assumption y <= x is
present. Works for all partial orders (class "order"), in particular
numbers and sets. For linear orders (e.g. numbers) it treats
~ x < y just like y <= x.

* Simplifier: new simproc for "let x = a in f x". If a is a free or
bound variable or a constant then the let is unfolded. Otherwise
first a is simplified to b, and then f b is simplified to g. If
possible we abstract b from g arriving at "let x = b in h x",
otherwise we unfold the let and arrive at g. The simproc can be
enabled/disabled by the reference use_let_simproc. Potential
INCOMPATIBILITY since simplification is more powerful by default.
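For illustration (a sketch; c and f are arbitrary free variables, so
the let is unfolded by the first case of the simproc):

  lemma "(let x = c in f x) = f c"
    by simp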
* Classical reasoning: the meson method now accepts theorems as
arguments.

* Prover support: pre-release of the Isabelle-ATP linkup, which runs
background jobs to provide advice on the provability of subgoals.

* Theory OrderedGroup and Ring_and_Field: various additions and
improvements to facilitate calculations involving equalities and
inequalities. The following theorems have been eliminated or modified
(INCOMPATIBILITY):

  abs_eq            now named abs_of_nonneg
  abs_of_ge_0       now named abs_of_nonneg
  abs_minus_eq      now named abs_of_nonpos
  imp_abs_id        now named abs_of_nonneg
  imp_abs_neg_id    now named abs_of_nonpos
  mult_pos          now named mult_pos_pos
  mult_pos_le       now named mult_nonneg_nonneg
  mult_pos_neg_le   now named mult_nonneg_nonpos
  mult_pos_neg2_le  now named mult_nonneg_nonpos2
  mult_neg          now named mult_neg_neg
  mult_neg_le       now named mult_nonpos_nonpos

* The following lemmas in Ring_and_Field have been added to the
simplifier:

  zero_le_square
  not_square_less_zero

The following lemmas have been deleted from Real/RealPow:

  realpow_zero_zero
  realpow_two
  realpow_less
  zero_le_power
  realpow_two_le
  abs_realpow_two
  realpow_two_abs

* Theory Parity: added rules for simplifying exponents.

* Theory List: The following theorems have been eliminated or
modified (INCOMPATIBILITY):

  list_all_Nil   now named list_all.simps(1)
  list_all_Cons  now named list_all.simps(2)
  list_all_conv  now named list_all_iff
  set_mem_eq     now named mem_iff

* Theories SetsAndFunctions and BigO (see HOL/Library) support
asymptotic "big O" calculations. See the notes in BigO.thy.


*** HOL-Complex ***

* Theory RealDef: better support for embedding natural numbers and
integers in the reals. The following theorems have been eliminated or
modified (INCOMPATIBILITY):

  exp_ge_add_one_self  now requires no hypotheses
  real_of_int_add      reversed direction of equality (use [symmetric])
  real_of_int_minus    reversed direction of equality (use [symmetric])
  real_of_int_diff     reversed direction of equality (use [symmetric])
  real_of_int_mult     reversed direction of equality (use [symmetric])

* Theory RComplete: expanded support for floor and ceiling functions.

* Theory Ln is new, with properties of the natural logarithm.

* Hyperreal: There is a new type constructor "star" for making
nonstandard types.
The old type names are now type synonyms: hypreal = real star hypnat = nat star hcomplex = complex star * Hyperreal: Many groups of similarly-defined constants have been replaced by polymorphic versions (INCOMPATIBILITY): star_of <-- hypreal_of_real, hypnat_of_nat, hcomplex_of_complex starset <-- starsetNat, starsetC *s* <-- *sNat*, *sc* starset_n <-- starsetNat_n, starsetC_n *sn* <-- *sNatn*, *scn* InternalSets <-- InternalNatSets, InternalCSets starfun <-- starfun{Nat,Nat2,C,RC,CR} *f* <-- *fNat*, *fNat2*, *fc*, *fRc*, *fcR* starfun_n <-- starfun{Nat,Nat2,C,RC,CR}_n *fn* <-- *fNatn*, *fNat2n*, *fcn*, *fRcn*, *fcRn* InternalFuns <-- InternalNatFuns, InternalNatFuns2, Internal{C,RC,CR}Funs * Hyperreal: Many type-specific theorems have been removed in favor of theorems specific to various axiomatic type classes (INCOMPATIBILITY): add_commute <-- {hypreal,hypnat,hcomplex}_add_commute add_assoc <-- {hypreal,hypnat,hcomplex}_add_assocs OrderedGroup.add_0 <-- {hypreal,hypnat,hcomplex}_add_zero_left OrderedGroup.add_0_right <-- {hypreal,hcomplex}_add_zero_right right_minus <-- hypreal_add_minus left_minus <-- {hypreal,hcomplex}_add_minus_left mult_commute <-- {hypreal,hypnat,hcomplex}_mult_commute mult_assoc <-- {hypreal,hypnat,hcomplex}_mult_assoc mult_1_left <-- {hypreal,hypnat}_mult_1, hcomplex_mult_one_left mult_1_right <-- hcomplex_mult_one_right mult_zero_left <-- hcomplex_mult_zero_left left_distrib <-- {hypreal,hypnat,hcomplex}_add_mult_distrib right_distrib <-- hypnat_add_mult_distrib2 zero_neq_one <-- {hypreal,hypnat,hcomplex}_zero_not_eq_one right_inverse <-- hypreal_mult_inverse left_inverse <-- hypreal_mult_inverse_left, hcomplex_mult_inv_left order_refl <-- {hypreal,hypnat}_le_refl order_trans <-- {hypreal,hypnat}_le_trans order_antisym <-- {hypreal,hypnat}_le_anti_sym order_less_le <-- {hypreal,hypnat}_less_le linorder_linear <-- {hypreal,hypnat}_le_linear add_left_mono <-- {hypreal,hypnat}_add_left_mono mult_strict_left_mono <-- {hypreal,hypnat}_mult_less_mono2 add_nonneg_nonneg <-- hypreal_le_add_order * Hyperreal: Separate theorems having to do with type-specific versions of constants have been merged into theorems that apply to the new polymorphic constants (INCOMPATIBILITY): STAR_UNIV_set <-- {STAR_real,NatStar_real,STARC_complex}_set STAR_empty_set <-- {STAR,NatStar,STARC}_empty_set STAR_Un <-- {STAR,NatStar,STARC}_Un STAR_Int <-- {STAR,NatStar,STARC}_Int STAR_Compl <-- {STAR,NatStar,STARC}_Compl STAR_subset <-- {STAR,NatStar,STARC}_subset STAR_mem <-- {STAR,NatStar,STARC}_mem STAR_mem_Compl <-- {STAR,STARC}_mem_Compl STAR_diff <-- {STAR,STARC}_diff STAR_star_of_image_subset <-- {STAR_hypreal_of_real, NatStar_hypreal_of_real, STARC_hcomplex_of_complex}_image_subset starset_n_Un <-- starset{Nat,C}_n_Un starset_n_Int <-- starset{Nat,C}_n_Int starset_n_Compl <-- starset{Nat,C}_n_Compl starset_n_diff <-- starset{Nat,C}_n_diff InternalSets_Un <-- Internal{Nat,C}Sets_Un InternalSets_Int <-- Internal{Nat,C}Sets_Int InternalSets_Compl <-- Internal{Nat,C}Sets_Compl InternalSets_diff <-- Internal{Nat,C}Sets_diff InternalSets_UNIV_diff <-- Internal{Nat,C}Sets_UNIV_diff InternalSets_starset_n <-- Internal{Nat,C}Sets_starset{Nat,C}_n starset_starset_n_eq <-- starset{Nat,C}_starset{Nat,C}_n_eq starset_n_starset <-- starset{Nat,C}_n_starset{Nat,C} starfun_n_starfun <-- starfun{Nat,Nat2,C,RC,CR}_n_starfun{Nat,Nat2,C,RC,CR} starfun <-- starfun{Nat,Nat2,C,RC,CR} starfun_mult <-- starfun{Nat,Nat2,C,RC,CR}_mult starfun_add <-- starfun{Nat,Nat2,C,RC,CR}_add starfun_minus <-- 
starfun{Nat,Nat2,C,RC,CR}_minus starfun_diff <-- starfun{C,RC,CR}_diff starfun_o <-- starfun{NatNat2,Nat2,_stafunNat,C,C_starfunRC,_starfunCR}_o starfun_o2 <-- starfun{NatNat2,_stafunNat,C,C_starfunRC,_starfunCR}_o2 starfun_const_fun <-- starfun{Nat,Nat2,C,RC,CR}_const_fun starfun_inverse <-- starfun{Nat,C,RC,CR}_inverse starfun_eq <-- starfun{Nat,Nat2,C,RC,CR}_eq starfun_eq_iff <-- starfun{C,RC,CR}_eq_iff starfun_Id <-- starfunC_Id starfun_approx <-- starfun{Nat,CR}_approx starfun_capprox <-- starfun{C,RC}_capprox starfun_abs <-- starfunNat_rabs starfun_lambda_cancel <-- starfun{C,CR,RC}_lambda_cancel starfun_lambda_cancel2 <-- starfun{C,CR,RC}_lambda_cancel2 starfun_mult_HFinite_approx <-- starfunCR_mult_HFinite_capprox starfun_mult_CFinite_capprox <-- starfun{C,RC}_mult_CFinite_capprox starfun_add_capprox <-- starfun{C,RC}_add_capprox starfun_add_approx <-- starfunCR_add_approx starfun_inverse_inverse <-- starfunC_inverse_inverse starfun_divide <-- starfun{C,CR,RC}_divide starfun_n <-- starfun{Nat,C}_n starfun_n_mult <-- starfun{Nat,C}_n_mult starfun_n_add <-- starfun{Nat,C}_n_add starfun_n_add_minus <-- starfunNat_n_add_minus starfun_n_const_fun <-- starfun{Nat,C}_n_const_fun starfun_n_minus <-- starfun{Nat,C}_n_minus starfun_n_eq <-- starfun{Nat,C}_n_eq star_n_add <-- {hypreal,hypnat,hcomplex}_add star_n_minus <-- {hypreal,hcomplex}_minus star_n_diff <-- {hypreal,hcomplex}_diff star_n_mult <-- {hypreal,hcomplex}_mult star_n_inverse <-- {hypreal,hcomplex}_inverse star_n_le <-- {hypreal,hypnat}_le star_n_less <-- {hypreal,hypnat}_less star_n_zero_num <-- {hypreal,hypnat,hcomplex}_zero_num star_n_one_num <-- {hypreal,hypnat,hcomplex}_one_num star_n_abs <-- hypreal_hrabs star_n_divide <-- hcomplex_divide star_of_add <-- {hypreal_of_real,hypnat_of_nat,hcomplex_of_complex}_add star_of_minus <-- {hypreal_of_real,hcomplex_of_complex}_minus star_of_diff <-- hypreal_of_real_diff star_of_mult <-- {hypreal_of_real,hypnat_of_nat,hcomplex_of_complex}_mult star_of_one <-- {hypreal_of_real,hcomplex_of_complex}_one star_of_zero <-- {hypreal_of_real,hypnat_of_nat,hcomplex_of_complex}_zero star_of_le <-- {hypreal_of_real,hypnat_of_nat}_le_iff star_of_less <-- {hypreal_of_real,hypnat_of_nat}_less_iff star_of_eq <-- {hypreal_of_real,hypnat_of_nat,hcomplex_of_complex}_eq_iff star_of_inverse <-- {hypreal_of_real,hcomplex_of_complex}_inverse star_of_divide <-- {hypreal_of_real,hcomplex_of_complex}_divide star_of_of_nat <-- {hypreal_of_real,hcomplex_of_complex}_of_nat star_of_of_int <-- {hypreal_of_real,hcomplex_of_complex}_of_int star_of_number_of <-- {hypreal,hcomplex}_number_of star_of_number_less <-- number_of_less_hypreal_of_real_iff star_of_number_le <-- number_of_le_hypreal_of_real_iff star_of_eq_number <-- hypreal_of_real_eq_number_of_iff star_of_less_number <-- hypreal_of_real_less_number_of_iff star_of_le_number <-- hypreal_of_real_le_number_of_iff star_of_power <-- hypreal_of_real_power star_of_eq_0 <-- hcomplex_of_complex_zero_iff * Hyperreal: new method "transfer" that implements the transfer principle of nonstandard analysis. With a subgoal that mentions nonstandard types like "'a star", the command "apply transfer" replaces it with an equivalent one that mentions only standard types. To be successful, all free variables must have standard types; non- standard variables must have explicit universal quantifiers. * Hyperreal: A theory of Taylor series. 
*** HOLCF ***

* Discontinued special version of 'constdefs' (which used to support
continuous functions) in favor of the general Pure one with full
type-inference.

* New simplification procedure for solving continuity conditions; it
is much faster on terms with many nested lambda abstractions (cubic
instead of exponential time).

* New syntax for domain package: selector names are now optional.
Parentheses should be omitted unless argument is lazy, for example:

  domain 'a stream = cons "'a" (lazy "'a stream")

* New command 'fixrec' for defining recursive functions with pattern
matching; defining multiple functions with mutual recursion is also
supported. Patterns may include the constants cpair, spair, up, sinl,
sinr, or any data constructor defined by the domain package. The
given equations are proven as rewrite rules. See
HOLCF/ex/Fixrec_ex.thy for syntax and examples.

* New commands 'cpodef' and 'pcpodef' for defining predicate subtypes
of cpo and pcpo types. Syntax is exactly like the 'typedef' command,
but the proof obligation additionally includes an admissibility
requirement. The packages generate instances of class cpo or pcpo,
with continuity and strictness theorems for Rep and Abs.

* HOLCF: Many theorems have been renamed according to a more standard
naming scheme (INCOMPATIBILITY):

  foo_inject:      "foo$x = foo$y ==> x = y"
  foo_eq:          "(foo$x = foo$y) = (x = y)"
  foo_less:        "(foo$x << foo$y) = (x << y)"
  foo_strict:      "foo$UU = UU"
  foo_defined:     "... ==> foo$x ~= UU"
  foo_defined_iff: "(foo$x = UU) = (x = UU)"


*** ZF ***

* ZF/ex: theories Group and Ring provide examples in abstract
algebra, including the First Isomorphism Theorem (on quotienting by
the kernel of a homomorphism).

* ZF/Simplifier: install second copy of type solver that actually
makes use of TC rules declared to Isar proof contexts (or locales);
the old version is still required for ML proof scripts.


*** Cube ***

* Converted to Isar theory format; use locales instead of axiomatic
theories.


*** ML ***

* Pure/library.ML: added ##>, ##>>, #>> -- higher-order counterparts
for ||>, ||>>, |>>.

* Pure/library.ML no longer defines its own option datatype, but uses
that of the SML basis, which has constructors NONE and SOME instead
of None and Some, as well as exception Option.Option instead of
OPTION. The functions the, if_none, is_some, is_none have been
adapted accordingly, while Option.map replaces apsome.

* Pure/library.ML: the exception LIST has been given up in favour of
the standard exceptions Empty and Subscript, as well as
Library.UnequalLengths. Functions like Library.hd and Library.tl are
superseded by the standard hd and tl functions etc.

A number of basic list functions are no longer exported to the ML
toplevel, as they are variants of predefined functions. The following
suggests how one can translate existing code:

  rev_append xs ys  =  List.revAppend (xs, ys)
  nth_elem (i, xs)  =  List.nth (xs, i)
  last_elem xs      =  List.last xs
  flat xss          =  List.concat xss
  seq fs            =  List.app fs
  partition P xs    =  List.partition P xs
  mapfilter f xs    =  List.mapPartial f xs

* Pure/library.ML: several combinators for linear functional
transformations, notably reverse application and composition:

  x |> f
  f #> g
  (x, y) |-> f
  f #-> g

* Pure/library.ML: introduced/changed precedence of infix operators:

  infix 1 |> |-> ||> ||>> |>> |>>> #> #->;
  infix 2 ?;
  infix 3 o oo ooo oooo;
  infix 4 ~~ upto downto;

Maybe INCOMPATIBILITY when any of those is used in conjunction with
other infix operators.

* Pure/library.ML: natural list combinators fold, fold_rev, and
fold_map support linear functional transformations and nesting. For
example:

  fold f [x1, ..., xN] y = y |> f x1 |> ... |> f xN
  (fold o fold) f [xs1, ..., xsN] y = y |> fold f xs1 |> ... |> fold f xsN

  fold f [x1, ..., xN] = f x1 #> ... #> f xN
  (fold o fold) f [xs1, ..., xsN] = fold f xs1 #> ... #> fold f xsN
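For illustration, a minimal use at the ML toplevel (the value and its
name are hypothetical):

  ML {*
    (* sum a list of integers by threading an accumulator through fold *)
    val sum = fold (fn x => fn acc => x + acc) [1, 2, 3] 0;
    (* sum = 6, i.e. 0 |> (fn acc => 1 + acc) |> ... *)
  *}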
* Pure/library.ML: the following selectors on type 'a option are
available:

  the:         'a option -> 'a  (*partial*)
  these:       'a option -> 'a  where 'a = 'b list
  the_default: 'a -> 'a option -> 'a
  the_list:    'a option -> 'a list

* Pure/General: structure AList (cf. Pure/General/alist.ML) provides
basic operations for association lists, following natural argument
order; moreover the explicit equality predicate passed here avoids
potentially expensive polymorphic runtime equality checks. The old
functions may be expressed as follows:

  assoc      =  uncurry (AList.lookup (op =))
  assocs     =  these oo AList.lookup (op =)
  overwrite  =  uncurry (AList.update (op =)) o swap

* Pure/General: structure AList (cf. Pure/General/alist.ML) provides

  val make: ('a -> 'b) -> 'a list -> ('a * 'b) list
  val find: ('a * 'b -> bool) -> ('c * 'b) list -> 'a -> 'c list

replacing make_keylist and keyfilter (occasionally used). Naive
rewrites:

  make_keylist  =  AList.make
  keyfilter     =  AList.find (op =)

* eq_fst and eq_snd now take an explicit equality parameter, thus
avoiding eqtypes. Naive rewrites:

  eq_fst  =  eq_fst (op =)
  eq_snd  =  eq_snd (op =)

* Removed deprecated apl and apr (rarely used). Naive rewrites:

  apl (n, op)  =>>=  curry op n
  apr (op, m)  =>>=  fn n => op (n, m)

* Pure/General: structure OrdList (cf. Pure/General/ord_list.ML)
provides a reasonably efficient light-weight implementation of sets
as lists.

* Pure/General: generic tables (cf. Pure/General/table.ML) provide a
few new operations; existing lookup and update are now curried to
follow natural argument order (for use with fold etc.);
INCOMPATIBILITY, use (uncurry Symtab.lookup) etc. as last resort.

* Pure/General: output via the Isabelle channels of
writeln/warning/error etc. is now passed through Output.output, with
a hook for arbitrary transformations depending on the print_mode
(cf. Output.add_mode -- the first active mode that provides an output
function wins). Already formatted output may be embedded into further
text via Output.raw; the result of Pretty.string_of/str_of and
derived functions (string_of_term/cterm/thm etc.) is already marked
raw to accommodate easy composition of diagnostic messages etc.
Programmers rarely need to care about Output.output or Output.raw at
all, with some notable exceptions: Output.output is required when
bypassing the standard channels (writeln etc.), or in token
translations to produce properly formatted results; Output.raw is
required when capturing already output material that will eventually
be presented to the user a second time. For the default print mode,
both Output.output and Output.raw have no effect.

* Pure/General: Output.time_accumulator NAME creates an operator
('a -> 'b) -> 'a -> 'b to measure runtime and count invocations; the
cumulative results are displayed at the end of a batch session.

* Pure/General: File.sysify_path and File.quote_sysify_path have been
replaced by File.platform_path and File.shell_path (with appropriate
hooks). This provides a clean interface for unusual systems where the
internal and external process view of file names are different.
* Pure: more efficient orders for basic syntactic entities: added fast_string_ord, fast_indexname_ord, fast_term_ord; changed sort_ord and typ_ord to use fast_string_ord and fast_indexname_ord (term_ord is NOT affected); structures Symtab, Vartab, Typtab, Termtab use the fast orders now -- potential INCOMPATIBILITY for code that depends on a particular order for Symtab.keys, Symtab.dest, etc. (consider using Library.sort_strings on result). * Pure/term.ML: combinators fold_atyps, fold_aterms, fold_term_types, fold_types traverse types/terms from left to right, observing natural argument order. Supercedes previous foldl_XXX versions, add_frees, add_vars etc. have been adapted as well: INCOMPATIBILITY. * Pure: name spaces have been refined, with significant changes of the internal interfaces -- INCOMPATIBILITY. Renamed cond_extern(_table) to extern(_table). The plain name entry path is superceded by a general 'naming' context, which also includes the 'policy' to produce a fully qualified name and external accesses of a fully qualified name; NameSpace.extend is superceded by context dependent Sign.declare_name. Several theory and proof context operations modify the naming context. Especially note Theory.restore_naming and ProofContext.restore_naming to get back to a sane state; note that Theory.add_path is no longer sufficient to recover from Theory.absolute_path in particular. * Pure: new flags short_names (default false) and unique_names (default true) for controlling output of qualified names. If short_names is set, names are printed unqualified. If unique_names is reset, the name prefix is reduced to the minimum required to achieve the original result when interning again, even if there is an overlap with earlier declarations. * Pure/TheoryDataFun: change of the argument structure; 'prep_ext' is now 'extend', and 'merge' gets an additional Pretty.pp argument (useful for printing error messages). INCOMPATIBILITY. * Pure: major reorganization of the theory context. Type Sign.sg and Theory.theory are now identified, referring to the universal Context.theory (see Pure/context.ML). Actual signature and theory content is managed as theory data. The old code and interfaces were spread over many files and structures; the new arrangement introduces considerable INCOMPATIBILITY to gain more clarity: Context -- theory management operations (name, identity, inclusion, parents, ancestors, merge, etc.), plus generic theory data; Sign -- logical signature and syntax operations (declaring consts, types, etc.), plus certify/read for common entities; Theory -- logical theory operations (stating axioms, definitions, oracles), plus a copy of logical signature operations (consts, types, etc.); also a few basic management operations (Theory.copy, Theory.merge, etc.) The most basic sign_of operations (Theory.sign_of, Thm.sign_of_thm etc.) as well as the sign field in Thm.rep_thm etc. have been retained for convenience -- they merely return the theory. * Pure: type Type.tsig is superceded by theory in most interfaces. * Pure: the Isar proof context type is already defined early in Pure as Context.proof (note that ProofContext.context and Proof.context are aliases, where the latter is the preferred name). This enables other Isabelle components to refer to that type even before Isar is present. * Pure/sign/theory: discontinued named name spaces (i.e. classK, typeK, constK, axiomK, oracleK), but provide explicit operations for any of these kinds. 
For example, Sign.intern typeK is now Sign.intern_type, and
Theory.hide_space Sign.typeK is now Theory.hide_types. Also note that
former Theory.hide_classes/types/consts are now
Theory.hide_classes_i/types_i/consts_i, while the non '_i' versions
internalize their arguments! INCOMPATIBILITY.

* Pure: get_thm interface (of PureThy and ProofContext) expects
datatype thmref (with constructors Name and NameSelection) instead of
plain string -- INCOMPATIBILITY.

* Pure: cases produced by proof methods specify options, where NONE
means to remove case bindings -- INCOMPATIBILITY in
(RAW_)METHOD_CASES.

* Pure: the following operations retrieve axioms or theorems from a
theory node or theory hierarchy, respectively:

  Theory.axioms_of: theory -> (string * term) list
  Theory.all_axioms_of: theory -> (string * term) list
  PureThy.thms_of: theory -> (string * thm) list
  PureThy.all_thms_of: theory -> (string * thm) list

* Pure: print_tac now outputs the goal through the trace channel.

* Isar toplevel: improved diagnostics, mostly for Poly/ML only.
Reference Toplevel.debug (default false) controls detailed printing and
tracing of low-level exceptions; Toplevel.profiling (default 0)
controls execution profiling -- set to 1 for time and 2 for space (both
increase the runtime).

* Isar session: The initial use of ROOT.ML is now always timed, i.e.
the log will show the actual process times, in contrast to the elapsed
wall-clock time that the outer shell wrapper produces.

* Simplifier: improved handling of bound variables (nameless
representation, avoid allocating new strings). Simprocs that invoke the
Simplifier recursively should use Simplifier.inherit_bounds to avoid
local name clashes. Failure to do so produces warnings "Simplifier:
renamed bound variable ..."; set Simplifier.debug_bounds for further
details.

* ML functions legacy_bindings and use_legacy_bindings produce ML fact
bindings for all theorems stored within a given theory; this may help
in porting non-Isar theories to Isar ones, while keeping ML proof
scripts for the time being.

* ML operator HTML.with_charset specifies the charset being used for
generated HTML files. For example:

  HTML.with_charset "utf-8" use_thy "Hebrew";
  HTML.with_charset "utf-8" use_thy "Chinese";


*** System ***

* Allow symlinks to all proper Isabelle executables (Isabelle,
isabelle, isatool etc.).

* ISABELLE_DOC_FORMAT setting specifies preferred document format (for
isatool doc, isatool mkdir, display_drafts etc.).

* isatool usedir: option -f allows specification of the ML file to be
used by Isabelle; default is ROOT.ML.

* New isatool version outputs the version identifier of the Isabelle
distribution being used.

* HOL: new isatool dimacs2hol converts files in DIMACS CNF format
(containing Boolean satisfiability problems) into Isabelle/HOL
theories.



New in Isabelle2004 (April 2004)
--------------------------------

*** General ***

* Provers/order.ML: new efficient reasoner for partial and linear
orders. Replaces linorder.ML.

* Pure: Greek letters (except small lambda, \), as well as Gothic
(\...\\...\), calligraphic (\...\), and Euler (\...\), are now
considered normal letters, and can therefore be used anywhere where an
ASCII letter (a...zA...Z) has been allowed until now. INCOMPATIBILITY:
this obviously changes the parsing of some terms, especially where a
symbol has been used as a binder, say '\x. ...', which is now a type
error since \x will be parsed as an identifier. Fix it by inserting a
space around former symbols.
Call 'isatool fixgreek' to try to fix parsing errors in existing theory
and ML files.

* Pure: Macintosh and Windows line-breaks are now allowed in theory
files.

* Pure: single letter sub/superscripts (\<^isub> and \<^isup>) are now
allowed in identifiers. Similar to Greek letters, \<^isub> is now
considered a normal (but invisible) letter. For multiple letter
subscripts repeat \<^isub> like this: x\<^isub>1\<^isub>2.

* Pure: There are now sub-/superscripts that can span more than one
character. Text between \<^bsub> and \<^esub> is set in subscript in
ProofGeneral and LaTeX, text between \<^bsup> and \<^esup> in
superscript. The new control characters are not identifier parts.

* Pure: Control-symbols of the form \<^raw:...> will literally print
the content of "..." to the LaTeX file instead of \isacntrl... . The
"..." may consist of any printable characters excluding the end
bracket >.

* Pure: Using the new Isar command "finalconsts" (or the ML functions
Theory.add_finals or Theory.add_finals_i) it is now possible to declare
constants "final", which prevents their being given a definition later.
It is useful for constants whose behaviour is fixed axiomatically
rather than definitionally, such as the meta-logic connectives.

* Pure: 'instance' now handles general arities with general sorts
(i.e. intersections of classes).

* Presentation: generated HTML now uses a CSS style sheet to make
layout (somewhat) independent of content. It is copied from
lib/html/isabelle.css. It can be changed to alter the colors/layout of
generated pages.


*** Isar ***

* Tactic emulation methods rule_tac, erule_tac, drule_tac, frule_tac,
cut_tac, subgoal_tac and thin_tac:

  - Now understand static (Isar) contexts. As a consequence, users of
    Isar locales are no longer forced to write Isar proof scripts. For
    details see Isar Reference Manual, paragraph 4.3.2: Further tactic
    emulations.

  - INCOMPATIBILITY: names of variables to be instantiated may no
    longer be enclosed in quotes. Instead, precede the variable name
    with `?'. This is consistent with the instantiation attribute
    "where".

* Attributes "where" and "of":

  - Now take type variables of the instantiated theorem into account
    when reading the instantiation string. This fixes a bug that caused
    instantiated theorems to have too special types in some
    circumstances.

  - "where" permits explicit instantiations of type variables.

* Calculation commands "moreover" and "also" no longer interfere with
current facts ("this"), admitting arbitrary combinations with "then"
and derived forms.

* Locales:

  - Goal statements involving the context element "includes" no longer
    generate theorems with internal delta predicates (those ending on
    "_axioms") in the premise. Resolve the particular premise with
    .intro to obtain the old form.

  - Fixed bug in type inference ("unify_frozen") that prevented a mix
    of target specification and "includes" elements in a goal
    statement.

  - Rule sets .intro and .axioms are no longer declared as [intro?]
    and [elim?] (respectively) by default.

  - Experimental command for instantiation of locales in proof
    contexts: instantiate "; \<^assert> (length (Symbol.explode s) = 1); \<^assert> (size s = 4); \

text \
  Note that in Unicode renderings of the symbol \\\, variations of
  encodings like UTF-8 or UTF-16 pose delicate questions about the
  multi-byte representations of its codepoint, which is outside the
  16-bit address space of the original Unicode standard from the 1990s.
  In Isabelle/ML it is just ``\<^verbatim>\\\'' literally, using plain
  ASCII characters beyond any doubt.
\ subsection \Integers\ text %mlref \ \begin{mldecls} - @{index_ML_type int} \\ + @{define_ML_type int} \\ \end{mldecls} \<^descr> Type \<^ML_type>\int\ represents regular mathematical integers, which are \<^emph>\unbounded\. Overflow is treated properly, but should never happen in practice.\<^footnote>\The size limit for integer bit patterns in memory is 64\,MB for 32-bit Poly/ML, and much higher for 64-bit systems.\ Structure \<^ML_structure>\IntInf\ of SML97 is obsolete and superseded by \<^ML_structure>\Int\. Structure \<^ML_structure>\Integer\ in \<^file>\~~/src/Pure/General/integer.ML\ provides some additional operations. \ subsection \Rational numbers\ text %mlref \ \begin{mldecls} - @{index_ML_type Rat.rat} \\ + @{define_ML_type Rat.rat} \\ \end{mldecls} \<^descr> Type \<^ML_type>\Rat.rat\ represents rational numbers, based on the unbounded integers of Poly/ML. Literal rationals may be written with special antiquotation syntax \<^verbatim>\@\\int\\<^verbatim>\/\\nat\ or \<^verbatim>\@\\int\ (without any white space). For example \<^verbatim>\@~1/4\ or \<^verbatim>\@10\. The ML toplevel pretty printer uses the same format. Standard operations are provided via ad-hoc overloading of \<^verbatim>\+\, \<^verbatim>\-\, \<^verbatim>\*\, \<^verbatim>\/\, etc. \ subsection \Time\ text %mlref \ \begin{mldecls} - @{index_ML_type Time.time} \\ - @{index_ML seconds: "real -> Time.time"} \\ + @{define_ML_type Time.time} \\ + @{define_ML seconds: "real -> Time.time"} \\ \end{mldecls} \<^descr> Type \<^ML_type>\Time.time\ represents time abstractly according to the SML97 basis library definition. This is adequate for internal ML operations, but awkward in concrete time specifications. \<^descr> \<^ML>\seconds\~\s\ turns the concrete scalar \s\ (measured in seconds) into an abstract time value. Floating point numbers are easy to use as configuration options in the context (see \secref{sec:config-options}) or system options that are maintained externally. \ subsection \Options\ text %mlref \ \begin{mldecls} - @{index_ML Option.map: "('a -> 'b) -> 'a option -> 'b option"} \\ - @{index_ML is_some: "'a option -> bool"} \\ - @{index_ML is_none: "'a option -> bool"} \\ - @{index_ML the: "'a option -> 'a"} \\ - @{index_ML these: "'a list option -> 'a list"} \\ - @{index_ML the_list: "'a option -> 'a list"} \\ - @{index_ML the_default: "'a -> 'a option -> 'a"} \\ + @{define_ML Option.map: "('a -> 'b) -> 'a option -> 'b option"} \\ + @{define_ML is_some: "'a option -> bool"} \\ + @{define_ML is_none: "'a option -> bool"} \\ + @{define_ML the: "'a option -> 'a"} \\ + @{define_ML these: "'a list option -> 'a list"} \\ + @{define_ML the_list: "'a option -> 'a list"} \\ + @{define_ML the_default: "'a -> 'a option -> 'a"} \\ \end{mldecls} \ text \ Apart from \<^ML>\Option.map\ most other operations defined in structure \<^ML_structure>\Option\ are alien to Isabelle/ML and never used. The operations shown above are defined in \<^file>\~~/src/Pure/General/basics.ML\. \ subsection \Lists\ text \ Lists are ubiquitous in ML as simple and light-weight ``collections'' for many everyday programming tasks. Isabelle/ML provides important additions and improvements over operations that are predefined in the SML97 library. 
\

text %mlref \
  \begin{mldecls}
- @{index_ML cons: "'a -> 'a list -> 'a list"} \\
- @{index_ML member: "('b * 'a -> bool) -> 'a list -> 'b -> bool"} \\
- @{index_ML insert: "('a * 'a -> bool) -> 'a -> 'a list -> 'a list"} \\
- @{index_ML remove: "('b * 'a -> bool) -> 'b -> 'a list -> 'a list"} \\
- @{index_ML update: "('a * 'a -> bool) -> 'a -> 'a list -> 'a list"} \\
+ @{define_ML cons: "'a -> 'a list -> 'a list"} \\
+ @{define_ML member: "('b * 'a -> bool) -> 'a list -> 'b -> bool"} \\
+ @{define_ML insert: "('a * 'a -> bool) -> 'a -> 'a list -> 'a list"} \\
+ @{define_ML remove: "('b * 'a -> bool) -> 'b -> 'a list -> 'a list"} \\
+ @{define_ML update: "('a * 'a -> bool) -> 'a -> 'a list -> 'a list"} \\
  \end{mldecls}

  \<^descr> \<^ML>\cons\~\x xs\ evaluates to \x :: xs\. Tupled infix
  operators are a historical accident in Standard ML. The curried
  \<^ML>\cons\ amends this, but it should only be used when partial
  application is required.

  \<^descr> \<^ML>\member\, \<^ML>\insert\, \<^ML>\remove\, \<^ML>\update\
  treat lists as a set-like container that maintains the order of
  elements. See \<^file>\~~/src/Pure/library.ML\ for the full
  specifications (written in ML). There are some further derived
  operations like \<^ML>\union\ or \<^ML>\inter\.

  Note that \<^ML>\insert\ is conservative about elements that are
  already a \<^ML>\member\ of the list, while \<^ML>\update\ ensures that
  the latest entry is always put in front. The latter discipline is
  often more appropriate in declarations of context data
  (\secref{sec:context-data}) that are issued by the user in Isar
  source: later declarations take precedence over earlier ones.
\

text %mlex \
  Using canonical \<^ML>\fold\ together with \<^ML>\cons\ (or similar
  standard operations) alternates the orientation of data. This is
  quite natural and should not be altered forcibly by inserting extra
  applications of \<^ML>\rev\. The alternative \<^ML>\fold_rev\ can be
  used in the few situations where alternation should be prevented.
\

ML_val \
  val items = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];

  val list1 = fold cons items [];
  \<^assert> (list1 = rev items);

  val list2 = fold_rev cons items [];
  \<^assert> (list2 = items);
\

text \
  The subsequent example demonstrates how to \<^emph>\merge\ two lists in
  a natural way.
\

ML_val \
  fun merge_lists eq (xs, ys) = fold_rev (insert eq) ys xs;
\

text \
  Here the first list is treated conservatively: only the new elements
  from the second list are inserted. The inside-out order of insertion
  via \<^ML>\fold_rev\ attempts to preserve the order of elements in the
  result.

  This way of merging lists is typical for context data
  (\secref{sec:context-data}). See also \<^ML>\merge\ as defined in
  \<^file>\~~/src/Pure/library.ML\.
\


subsection \Association lists\

text \
  The operations for association lists interpret a concrete list of
  pairs as a finite function from keys to values. Redundant
  representations with multiple occurrences of the same key are
  implicitly normalized: lookup and update only take the first
  occurrence into account.
\ text \ \begin{mldecls} - @{index_ML AList.lookup: "('a * 'b -> bool) -> ('b * 'c) list -> 'a -> 'c option"} \\ - @{index_ML AList.defined: "('a * 'b -> bool) -> ('b * 'c) list -> 'a -> bool"} \\ - @{index_ML AList.update: "('a * 'a -> bool) -> 'a * 'b -> ('a * 'b) list -> ('a * 'b) list"} \\ + @{define_ML AList.lookup: "('a * 'b -> bool) -> ('b * 'c) list -> 'a -> 'c option"} \\ + @{define_ML AList.defined: "('a * 'b -> bool) -> ('b * 'c) list -> 'a -> bool"} \\ + @{define_ML AList.update: "('a * 'a -> bool) -> 'a * 'b -> ('a * 'b) list -> ('a * 'b) list"} \\ \end{mldecls} \<^descr> \<^ML>\AList.lookup\, \<^ML>\AList.defined\, \<^ML>\AList.update\ implement the main ``framework operations'' for mappings in Isabelle/ML, following standard conventions for their names and types. Note that a function called \<^verbatim>\lookup\ is obliged to express its partiality via an explicit option element. There is no choice to raise an exception, without changing the name to something like \the_element\ or \get\. The \defined\ operation is essentially a contraction of \<^ML>\is_some\ and \<^verbatim>\lookup\, but this is sufficiently frequent to justify its independent existence. This also gives the implementation some opportunity for peep-hole optimization. Association lists are adequate as simple implementation of finite mappings in many practical situations. A more advanced table structure is defined in \<^file>\~~/src/Pure/General/table.ML\; that version scales easily to thousands or millions of elements. \ subsection \Unsynchronized references\ text %mlref \ \begin{mldecls} - @{index_ML_type "'a Unsynchronized.ref"} \\ - @{index_ML Unsynchronized.ref: "'a -> 'a Unsynchronized.ref"} \\ - @{index_ML "!": "'a Unsynchronized.ref -> 'a"} \\ - @{index_ML_op ":=": "'a Unsynchronized.ref * 'a -> unit"} \\ + @{define_ML_type 'a "Unsynchronized.ref"} \\ + @{define_ML Unsynchronized.ref: "'a -> 'a Unsynchronized.ref"} \\ + @{define_ML "!": "'a Unsynchronized.ref -> 'a"} \\ + @{define_ML_infix ":=" : "'a Unsynchronized.ref * 'a -> unit"} \\ \end{mldecls} \ text \ Due to ubiquitous parallelism in Isabelle/ML (see also \secref{sec:multi-threading}), the mutable reference cells of Standard ML are notorious for causing problems. In a highly parallel system, both correctness \<^emph>\and\ performance are easily degraded when using mutable data. The unwieldy name of \<^ML>\Unsynchronized.ref\ for the constructor for references in Isabelle/ML emphasizes the inconveniences caused by - mutability. Existing operations \<^ML>\!\ and \<^ML_op>\:=\ are unchanged, + mutability. Existing operations \<^ML>\!\ and \<^ML_infix>\:=\ are unchanged, but should be used with special precautions, say in a strictly local situation that is guaranteed to be restricted to sequential evaluation --- now and in the future. \begin{warn} Never \<^ML_text>\open Unsynchronized\, not even in a local scope! Pretending that mutable state is no problem is a very bad idea. \end{warn} \ section \Thread-safe programming \label{sec:multi-threading}\ text \ Multi-threaded execution has become an everyday reality in Isabelle since Poly/ML 5.2.1 and Isabelle2008. Isabelle/ML provides implicit and explicit parallelism by default, and there is no way for user-space tools to ``opt out''. ML programs that are purely functional, output messages only via the official channels (\secref{sec:message-channels}), and do not intercept interrupts (\secref{sec:exceptions}) can participate in the multi-threaded environment immediately without further ado. 
More ambitious tools with more fine-grained interaction with the
environment need to observe the principles explained below.
\


subsection \Multi-threading with shared memory\

text \
  Multiple threads help to organize advanced operations of the system,
  such as real-time conditions on command transactions, sub-components
  with explicit communication, general asynchronous interaction etc.
  Moreover, parallel evaluation is a prerequisite to make adequate use
  of the CPU resources that are available on multi-core
  systems.\<^footnote>\Multi-core computing does not mean that there are
  ``spare cycles'' to be wasted. It means that the continued exponential
  speedup of CPU performance due to ``Moore's Law'' follows different
  rules: clock frequency has reached its peak around 2005, and
  applications need to be parallelized in order to avoid a perceived
  loss of performance. See also @{cite "Sutter:2005"}.\

  Isabelle/Isar exploits the inherent structure of theories and proofs
  to support \<^emph>\implicit parallelism\ to a large extent. LCF-style
  theorem proving provides almost ideal conditions for that, see also
  @{cite "Wenzel:2009"}. This means that significant parts of theory and
  proof checking are parallelized by default. In Isabelle2013, a maximum
  speedup-factor of 3.5 on 4 cores and 6.5 on 8 cores can be expected
  @{cite "Wenzel:2013:ITP"}.

  \<^medskip>
  ML threads lack the memory protection of separate processes, and
  operate concurrently on shared heap memory. This has the advantage
  that results of independent computations are directly available to
  other threads: abstract values can be passed without copying or
  awkward serialization that is typically required for separate
  processes.

  To make shared-memory multi-threading work robustly and efficiently,
  some programming guidelines need to be observed. While the ML system
  is responsible for maintaining basic integrity of the representation
  of ML values in memory, the application programmer needs to ensure
  that multi-threaded execution does not break the intended semantics.

  \begin{warn}
  To participate in implicit parallelism, tools need to be thread-safe.
  A single ill-behaved tool can affect the stability and performance of
  the whole system.
  \end{warn}

  Apart from observing the principles of thread-safeness passively,
  advanced tools may also exploit parallelism actively, e.g.\ by using
  library functions for parallel list operations (\secref{sec:parlist}).

  \begin{warn}
  Parallel computing resources are managed centrally by the Isabelle/ML
  infrastructure. User programs should not fork their own ML threads to
  perform heavy computations.
  \end{warn}
\


subsection \Critical shared resources\

text \
  Thread-safeness is mainly concerned with concurrent read/write access
  to shared resources, which are outside the purely functional world of
  ML. This covers the following in particular.

    \<^item> Global references (or arrays), i.e.\ mutable memory cells
    that persist over several invocations of associated
    operations.\<^footnote>\This is independent of the visibility of
    such mutable values in the toplevel scope.\

    \<^item> Global state of the running Isabelle/ML process, i.e.\ raw
    I/O channels, environment variables, current working directory.

    \<^item> Writable resources in the file-system that are shared among
    different threads or external processes.

  Isabelle/ML provides various mechanisms to avoid critical shared
  resources in most situations. As a last resort there are some
  mechanisms for explicit synchronization.
The following guidelines help to make Isabelle/ML programs work
smoothly in a concurrent environment.

  \<^item> Avoid global references altogether. Isabelle/Isar maintains a
  uniform context that incorporates arbitrary data declared by user
  programs (\secref{sec:context-data}). This context is passed as plain
  value and user tools can get/map their own data in a purely
  functional manner. Configuration options within the context
  (\secref{sec:config-options}) provide simple drop-in replacements for
  historic reference variables.

  \<^item> Keep components with local state information re-entrant.
  Instead of poking initial values into (private) global references, a
  new state record can be created on each invocation, and passed
  through any auxiliary functions of the component. The state record
  may contain mutable references in special situations, without
  requiring any synchronization, as long as each invocation gets its
  own copy and the tool itself is single-threaded.

  \<^item> Avoid raw output on \stdout\ or \stderr\. The Poly/ML library
  is thread-safe for each individual output operation, but the ordering
  of parallel invocations is arbitrary. This means raw output will
  appear on some system console with unpredictable interleaving of
  atomic chunks.

  Note that this does not affect regular message output channels
  (\secref{sec:message-channels}). An official message id is associated
  with the command transaction from where it originates, independently
  of other transactions. This means each running Isar command has
  effectively its own set of message channels, and interleaving can
  only happen when commands use parallelism internally (and only at
  message boundaries).

  \<^item> Treat environment variables and the current working directory
  of the running process as read-only.

  \<^item> Restrict writing to the file-system to unique temporary
  files. Isabelle already provides a temporary directory that is unique
  for the running process, and there is a centralized source of unique
  serial numbers in Isabelle/ML. Thus temporary files that are passed
  to some external process will always be disjoint, and thus
  thread-safe.
\

text %mlref \
  \begin{mldecls}
- @{index_ML File.tmp_path: "Path.T -> Path.T"} \\
- @{index_ML serial_string: "unit -> string"} \\
+ @{define_ML File.tmp_path: "Path.T -> Path.T"} \\
+ @{define_ML serial_string: "unit -> string"} \\
  \end{mldecls}

  \<^descr> \<^ML>\File.tmp_path\~\path\ relocates the base component of
  \path\ into the unique temporary directory of the running Isabelle/ML
  process.

  \<^descr> \<^ML>\serial_string\~\()\ creates a new serial number that is
  unique over the runtime of the Isabelle/ML process.
\

text %mlex \
  The following example shows how to create unique temporary file
  names.
\

ML_val \
  val tmp1 = File.tmp_path (Path.basic ("foo" ^ serial_string ()));
  val tmp2 = File.tmp_path (Path.basic ("foo" ^ serial_string ()));
  \<^assert> (tmp1 <> tmp2);
\


subsection \Explicit synchronization\

text \
  Isabelle/ML provides explicit synchronization for mutable variables
  over immutable data, which may be updated atomically and exclusively.
  This addresses the rare situations where mutable shared resources are
  really required. Synchronization in Isabelle/ML is based on
  primitives of Poly/ML, which have been adapted to the specific
  assumptions of the concurrent Isabelle environment. User code should
  not break this abstraction, but stay within the confines of
  concurrent Isabelle/ML.
A \<^emph>\synchronized variable\ is an explicit state component
associated with mechanisms for locking and signaling. There are
operations to await a condition, change the state, and signal the
change to all other waiting threads. Synchronized access to the state
variable is \<^emph>\not\ re-entrant: direct or indirect nesting within
the same thread will cause a deadlock!
\

text %mlref \
  \begin{mldecls}
- @{index_ML_type "'a Synchronized.var"} \\
- @{index_ML Synchronized.var: "string -> 'a -> 'a Synchronized.var"} \\
- @{index_ML Synchronized.guarded_access: "'a Synchronized.var ->
+ @{define_ML_type 'a "Synchronized.var"} \\
+ @{define_ML Synchronized.var: "string -> 'a -> 'a Synchronized.var"} \\
+ @{define_ML Synchronized.guarded_access: "'a Synchronized.var ->
  ('a -> ('b * 'a) option) -> 'b"} \\
  \end{mldecls}

  \<^descr> Type \<^ML_type>\'a Synchronized.var\ represents synchronized
  variables with state of type \<^ML_type>\'a\.

  \<^descr> \<^ML>\Synchronized.var\~\name x\ creates a synchronized
  variable that is initialized with value \x\. The \name\ is used for
  tracing.

  \<^descr> \<^ML>\Synchronized.guarded_access\~\var f\ lets the function
  \f\ operate within a critical section on the state \x\ as follows: if
  \f x\ produces \<^ML>\NONE\, it continues to wait on the internal
  condition variable, expecting that some other thread will eventually
  change the content in a suitable manner; if \f x\ produces
  \<^ML>\SOME\~\(y, x')\ it is satisfied and assigns the new state value
  \x'\, broadcasts a signal to all waiting threads on the associated
  condition variable, and returns the result \y\.

  There are some further variants of the
  \<^ML>\Synchronized.guarded_access\ combinator, see
  \<^file>\~~/src/Pure/Concurrent/synchronized.ML\ for details.
\

text %mlex \
  The following example implements a counter that produces positive
  integers that are unique over the runtime of the Isabelle process:
\

ML_val \
  local
    val counter = Synchronized.var "counter" 0;
  in
    fun next () =
      Synchronized.guarded_access counter
        (fn i =>
          let val j = i + 1
          in SOME (j, j) end);
  end;

  val a = next ();
  val b = next ();
  \<^assert> (a <> b);
\

text \
  \<^medskip> See \<^file>\~~/src/Pure/Concurrent/mailbox.ML\ for how to
  implement a mailbox as a synchronized variable over a purely
  functional list.
\


section \Managed evaluation\

text \
  Execution of Standard ML follows the model of strict functional
  evaluation with optional exceptions. Evaluation happens whenever some
  function is applied to (sufficiently many) arguments. The result is
  either an explicit value or an implicit exception.

  \<^emph>\Managed evaluation\ in Isabelle/ML organizes expressions and
  results to control certain physical side-conditions, to say more
  specifically when and how evaluation happens. For example, the
  Isabelle/ML library supports lazy evaluation with memoing, parallel
  evaluation via futures, asynchronous evaluation via promises,
  evaluation with time limit etc.

  \<^medskip>
  An \<^emph>\unevaluated expression\ is represented either as unit
  abstraction \<^verbatim>\fn () => a\ of type \<^verbatim>\unit -> 'a\
  or as regular function \<^verbatim>\fn a => b\ of type
  \<^verbatim>\'a -> 'b\. Both forms occur routinely, and special care
  is required to tell them apart --- the static type-system of SML is
  only of limited help here.

  The first form is more intuitive: some combinator
  \<^verbatim>\(unit -> 'a) -> 'a\ applies the given function to
  \<^verbatim>\()\ to initiate the postponed evaluation process.
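\

text %mlex \
  A minimal ad-hoc illustration of the first form: the unit abstraction
  postpones the computation until the function is applied to
  \<^verbatim>\()\.
\

ML_val \
  val e = fn () => 1 + 2;   (*unevaluated expression: nothing is computed yet*)
  val x = e ();             (*the postponed evaluation is initiated here*)
  \<^assert> (x = 3);
\

text \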
The second form is more flexible: some combinator
\<^verbatim>\('a -> 'b) -> 'a -> 'b\ acts like a modified form of
function application; several such combinators may be cascaded to
modify a given function, before it is ultimately applied to some
argument.

  \<^medskip>
  \<^emph>\Reified results\ make the disjoint sum of regular values
  versus exceptional situations explicit as ML datatype: \'a result =
  Res of 'a | Exn of exn\. This is typically used for administrative
  purposes, to store the overall outcome of an evaluation process.

  \<^emph>\Parallel exceptions\ aggregate reified results, such that
  multiple exceptions are digested as a collection in canonical form
  that identifies exceptions according to their original occurrence.
  This is particularly important for parallel evaluation via futures
  (\secref{sec:futures}), which are organized as an acyclic graph of
  evaluations that depend on other evaluations: exceptions stemming
  from shared sub-graphs are exposed exactly once and in the order of
  their original occurrence (e.g.\ when printed at the toplevel).
  Interrupt counts as a neutral element here: it is treated as minimal
  information about some canceled evaluation process, and is absorbed
  by the presence of regular program exceptions.
\

text %mlref \
  \begin{mldecls}
- @{index_ML_type "'a Exn.result"} \\
- @{index_ML Exn.capture: "('a -> 'b) -> 'a -> 'b Exn.result"} \\
- @{index_ML Exn.interruptible_capture: "('a -> 'b) -> 'a -> 'b Exn.result"} \\
- @{index_ML Exn.release: "'a Exn.result -> 'a"} \\
- @{index_ML Par_Exn.release_all: "'a Exn.result list -> 'a list"} \\
- @{index_ML Par_Exn.release_first: "'a Exn.result list -> 'a list"} \\
+ @{define_ML_type 'a "Exn.result"} \\
+ @{define_ML Exn.capture: "('a -> 'b) -> 'a -> 'b Exn.result"} \\
+ @{define_ML Exn.interruptible_capture: "('a -> 'b) -> 'a -> 'b Exn.result"} \\
+ @{define_ML Exn.release: "'a Exn.result -> 'a"} \\
+ @{define_ML Par_Exn.release_all: "'a Exn.result list -> 'a list"} \\
+ @{define_ML Par_Exn.release_first: "'a Exn.result list -> 'a list"} \\
  \end{mldecls}

  \<^descr> Type \<^ML_type>\'a Exn.result\ represents the disjoint sum of
  ML results explicitly, with constructor \<^ML>\Exn.Res\ for regular
  values and \<^ML>\Exn.Exn\ for exceptions.

  \<^descr> \<^ML>\Exn.capture\~\f x\ manages the evaluation of \f x\ such
  that exceptions are made explicit as \<^ML>\Exn.Exn\. Note that this
  includes physical interrupts (see also \secref{sec:exceptions}), so
  the same precautions apply to user code: interrupts must not be
  absorbed accidentally!

  \<^descr> \<^ML>\Exn.interruptible_capture\ is similar to
  \<^ML>\Exn.capture\, but interrupts are immediately re-raised as
  required for user code.

  \<^descr> \<^ML>\Exn.release\~\result\ releases the original runtime
  result, exposing its regular value or raising the reified exception.

  \<^descr> \<^ML>\Par_Exn.release_all\~\results\ combines results that
  were produced independently (e.g.\ by parallel evaluation). If all
  results are regular values, that list is returned. Otherwise, the
  collection of all exceptions is raised, wrapped-up as collective
  parallel exception. Note that the latter prevents access to
  individual exceptions by conventional \<^verbatim>\handle\ of ML.

  \<^descr> \<^ML>\Par_Exn.release_first\ is similar to
  \<^ML>\Par_Exn.release_all\, but only the first (meaningful)
  exception that has occurred in the original evaluation process is
  raised again, the others are ignored. That single exception may get
  handled by conventional means in ML.
\ subsection \Parallel skeletons \label{sec:parlist}\ text \ Algorithmic skeletons are combinators that operate on lists in parallel, in the manner of well-known \map\, \exists\, \forall\ etc. Management of futures (\secref{sec:futures}) and their results as reified exceptions is wrapped up into simple programming interfaces that resemble the sequential versions. What remains is the application-specific problem to present expressions with suitable \<^emph>\granularity\: each list element corresponds to one evaluation task. If the granularity is too coarse, the available CPUs are not saturated. If it is too fine-grained, CPU cycles are wasted due to the overhead of organizing parallel processing. In the worst case, parallel performance will be less than the sequential counterpart! \ text %mlref \ \begin{mldecls} - @{index_ML Par_List.map: "('a -> 'b) -> 'a list -> 'b list"} \\ - @{index_ML Par_List.get_some: "('a -> 'b option) -> 'a list -> 'b option"} \\ + @{define_ML Par_List.map: "('a -> 'b) -> 'a list -> 'b list"} \\ + @{define_ML Par_List.get_some: "('a -> 'b option) -> 'a list -> 'b option"} \\ \end{mldecls} \<^descr> \<^ML>\Par_List.map\~\f [x\<^sub>1, \, x\<^sub>n]\ is like \<^ML>\map\~\f [x\<^sub>1, \, x\<^sub>n]\, but the evaluation of \f x\<^sub>i\ for \i = 1, \, n\ is performed in parallel. An exception in any \f x\<^sub>i\ cancels the overall evaluation process. The final result is produced via \<^ML>\Par_Exn.release_first\ as explained above, which means the first program exception that happened to occur in the parallel evaluation is propagated, and all other failures are ignored. \<^descr> \<^ML>\Par_List.get_some\~\f [x\<^sub>1, \, x\<^sub>n]\ produces some \f x\<^sub>i\ that is of the form \SOME y\<^sub>i\, if that exists, otherwise \NONE\. Thus it is similar to \<^ML>\Library.get_first\, but subject to a non-deterministic parallel choice process. The first successful result cancels the overall evaluation process; other exceptions are propagated as for \<^ML>\Par_List.map\. This generic parallel choice combinator is the basis for derived forms, such as \<^ML>\Par_List.find_some\, \<^ML>\Par_List.exists\, \<^ML>\Par_List.forall\. \ text %mlex \ Subsequently, the Ackermann function is evaluated in parallel for some ranges of arguments. \ ML_val \ fun ackermann 0 n = n + 1 | ackermann m 0 = ackermann (m - 1) 1 | ackermann m n = ackermann (m - 1) (ackermann m (n - 1)); Par_List.map (ackermann 2) (500 upto 1000); Par_List.map (ackermann 3) (5 upto 10); \ subsection \Lazy evaluation\ text \ Classic lazy evaluation works via the \lazy\~/ \force\ pair of operations: \lazy\ to wrap an unevaluated expression, and \force\ to evaluate it once and store its result persistently. Later invocations of \force\ retrieve the stored result without another evaluation. Isabelle/ML refines this idea to accommodate the aspects of multi-threading, synchronous program exceptions and asynchronous interrupts. The first thread that invokes \force\ on an unfinished lazy value changes its state into a \<^emph>\promise\ of the eventual result and starts evaluating it. Any other threads that \force\ the same lazy value in the meantime need to wait for it to finish, by producing a regular result or program exception. If the evaluation attempt is interrupted, this event is propagated to all waiting threads and the lazy value is reset to its original state. This means a lazy value is completely evaluated at most once, in a thread-safe manner. 
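\

text %mlex \
  A minimal ad-hoc sketch of this discipline, using the operations
  documented below: forcing the same lazy value twice yields the same
  result, while the evaluation happens at most once.
\

ML_val \
  val x = Lazy.lazy (fn () => 6 * 7);   (*unfinished lazy value*)
  val a = Lazy.force x;                 (*evaluation happens here*)
  val b = Lazy.force x;                 (*the stored result is reused*)
  \<^assert> (a = 42 andalso b = 42);
\

text \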
There might be multiple interrupted evaluation attempts, and multiple
receivers of intermediate interrupt events. Interrupts are \<^emph>\not\
made persistent: later evaluation attempts start again from the
original expression.
\

text %mlref \
  \begin{mldecls}
- @{index_ML_type "'a lazy"} \\
- @{index_ML Lazy.lazy: "(unit -> 'a) -> 'a lazy"} \\
- @{index_ML Lazy.value: "'a -> 'a lazy"} \\
- @{index_ML Lazy.force: "'a lazy -> 'a"} \\
+ @{define_ML_type 'a "lazy"} \\
+ @{define_ML Lazy.lazy: "(unit -> 'a) -> 'a lazy"} \\
+ @{define_ML Lazy.value: "'a -> 'a lazy"} \\
+ @{define_ML Lazy.force: "'a lazy -> 'a"} \\
  \end{mldecls}

  \<^descr> Type \<^ML_type>\'a lazy\ represents lazy values over type
  \<^verbatim>\'a\.

  \<^descr> \<^ML>\Lazy.lazy\~\(fn () => e)\ wraps the unevaluated
  expression \e\ as unfinished lazy value.

  \<^descr> \<^ML>\Lazy.value\~\a\ wraps the value \a\ as finished lazy
  value. When forced, it returns \a\ without any further evaluation.

  There is very low overhead for this proforma wrapping of strict
  values as lazy values.

  \<^descr> \<^ML>\Lazy.force\~\x\ produces the result of the lazy value
  in a thread-safe manner as explained above. Thus it may cause the
  current thread to wait on a pending evaluation attempt by another
  thread.
\


subsection \Futures \label{sec:futures}\

text \
  Futures help to organize parallel execution in a value-oriented
  manner, with \fork\~/ \join\ as the main pair of operations, and some
  further variants; see also @{cite "Wenzel:2009" and
  "Wenzel:2013:ITP"}. Unlike lazy values, futures are evaluated
  strictly and spontaneously on separate worker threads. Futures may be
  canceled, which leads to interrupts on running evaluation attempts,
  and forces structurally related futures to fail for all time; already
  finished futures remain unchanged. Exceptions between related futures
  are propagated as well, and turned into parallel exceptions (see
  above).

  Technically, a future is a single-assignment variable together with a
  \<^emph>\task\ that serves administrative purposes, notably within the
  \<^emph>\task queue\ where new futures are registered for eventual
  evaluation and the worker threads retrieve their work.

  The pool of worker threads is limited, in correlation with the number
  of physical cores on the machine. Note that allocation of runtime
  resources may be distorted either if workers yield CPU time (e.g.\
  via system sleep or wait operations), or if non-worker threads
  contend for significant runtime resources independently. There is a
  limited number of replacement worker threads that get activated in
  certain explicit wait conditions, after a timeout.

  \<^medskip>
  Each future task belongs to some \<^emph>\task group\, which
  represents the hierarchic structure of related tasks, together with
  the exception status at that point. By default, the task group of a
  newly created future is a new sub-group of the presently running one,
  but it is also possible to indicate different group layouts under
  program control.

  Cancellation of futures actually refers to the corresponding task
  group and all its sub-groups. Thus interrupts are propagated down the
  group hierarchy. Regular program exceptions are treated likewise:
  failure of the evaluation of some future task affects its own group
  and all sub-groups. Given a particular task group, its \<^emph>\group
  status\ cumulates all relevant exceptions according to its position
  within the group hierarchy. Interrupted tasks that lack regular
  result information will pick up parallel exceptions from the
  cumulative group status.
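\

text %mlex \
  A minimal ad-hoc sketch of the fork~/ join discipline, using the
  interface summarized below: two independent computations are
  evaluated in parallel and their results are joined afterwards.
\

ML_val \
  val x = Future.fork (fn () => 6 * 7);   (*evaluated on some worker thread*)
  val y = Future.fork (fn () => 6 + 7);
  \<^assert> (Future.join x = 42);
  \<^assert> (Future.join y = 13);
\

text \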
  \<^medskip>
  A \<^emph>\passive future\ or \<^emph>\promise\ is a future with
  slightly different evaluation policies: there is only a
  single-assignment variable and some expression to evaluate for the
  \<^emph>\failed\ case (e.g.\ to clean up resources when canceled). A
  regular result is produced by external means, using a separate
  \<^emph>\fulfill\ operation.

  Promises are managed in the same task queue, so regular futures may
  depend on them. This allows a form of reactive programming, where
  some promises are used as minimal elements (or guards) within the
  future dependency graph: when these promises are fulfilled the
  evaluation of subsequent futures starts spontaneously, according to
  their own inter-dependencies.
\

text %mlref \
  \begin{mldecls}
- @{index_ML_type "'a future"} \\
- @{index_ML Future.fork: "(unit -> 'a) -> 'a future"} \\
- @{index_ML Future.forks: "Future.params -> (unit -> 'a) list -> 'a future list"} \\
- @{index_ML Future.join: "'a future -> 'a"} \\
- @{index_ML Future.joins: "'a future list -> 'a list"} \\
- @{index_ML Future.value: "'a -> 'a future"} \\
- @{index_ML Future.map: "('a -> 'b) -> 'a future -> 'b future"} \\
- @{index_ML Future.cancel: "'a future -> unit"} \\
- @{index_ML Future.cancel_group: "Future.group -> unit"} \\[0.5ex]
- @{index_ML Future.promise: "(unit -> unit) -> 'a future"} \\
- @{index_ML Future.fulfill: "'a future -> 'a -> unit"} \\
+ @{define_ML_type 'a "future"} \\
+ @{define_ML Future.fork: "(unit -> 'a) -> 'a future"} \\
+ @{define_ML Future.forks: "Future.params -> (unit -> 'a) list -> 'a future list"} \\
+ @{define_ML Future.join: "'a future -> 'a"} \\
+ @{define_ML Future.joins: "'a future list -> 'a list"} \\
+ @{define_ML Future.value: "'a -> 'a future"} \\
+ @{define_ML Future.map: "('a -> 'b) -> 'a future -> 'b future"} \\
+ @{define_ML Future.cancel: "'a future -> unit"} \\
+ @{define_ML Future.cancel_group: "Future.group -> unit"} \\[0.5ex]
+ @{define_ML Future.promise: "(unit -> unit) -> 'a future"} \\
+ @{define_ML Future.fulfill: "'a future -> 'a -> unit"} \\
  \end{mldecls}

  \<^descr> Type \<^ML_type>\'a future\ represents future values over
  type \<^verbatim>\'a\.

  \<^descr> \<^ML>\Future.fork\~\(fn () => e)\ registers the unevaluated
  expression \e\ as unfinished future value, to be evaluated eventually
  on the parallel worker-thread farm. This is a shorthand for
  \<^ML>\Future.forks\ below, with default parameters and a single
  expression.

  \<^descr> \<^ML>\Future.forks\~\params exprs\ is the general interface
  to fork several futures simultaneously. The \params\ consist of the
  following fields:

    \<^item> \name : string\ (default \<^ML>\""\) specifies a common name
    for the tasks of the forked futures, which serves diagnostic
    purposes.

    \<^item> \group : Future.group option\ (default \<^ML>\NONE\)
    specifies an optional task group for the forked futures.
    \<^ML>\NONE\ means that a new sub-group of the current worker-thread
    task context is created. If this is not a worker thread, the group
    will be a new root in the group hierarchy.

    \<^item> \deps : Future.task list\ (default \<^ML>\[]\) specifies
    dependencies on other future tasks, i.e.\ the adjacency relation in
    the global task queue. Dependencies on already finished tasks are
    ignored.

    \<^item> \pri : int\ (default \<^ML>\0\) specifies a priority within
    the task queue.

    Typically there is only little deviation from the default priority
    \<^ML>\0\. As a rule of thumb, \<^ML>\~1\ means ``low priority'' and
    \<^ML>\1\ means ``high priority''.
    Note that the task priority only affects the position in the queue,
    not the thread priority. When a worker thread picks up a task for
    processing, it runs with the normal thread priority to the end (or
    until canceled). Higher priority tasks that are queued later need
    to wait until this (or another) worker thread becomes free again.

    \<^item> \interrupts : bool\ (default \<^ML>\true\) tells whether the
    worker thread that processes the corresponding task is initially
    put into interruptible state. This state may change again while
    running, by modifying the thread attributes.

    With interrupts disabled, a running future task cannot be canceled.
    It is the responsibility of the programmer that this special state
    is retained only briefly.

  \<^descr> \<^ML>\Future.join\~\x\ retrieves the value of an already
  finished future, which may lead to an exception, according to the
  result of its previous evaluation.

  For an unfinished future there are several cases depending on the
  role of the current thread and the status of the future. A non-worker
  thread waits passively until the future is eventually evaluated. A
  worker thread temporarily changes its task context and takes over the
  responsibility to evaluate the future expression on the spot. The
  latter is done in a thread-safe manner: other threads that intend to
  join the same future need to wait until the ongoing evaluation is
  finished.

  Note that excessive use of dynamic dependencies of futures by adhoc
  joining may lead to bad utilization of CPU cores, due to threads
  waiting on other threads to finish required futures. The future task
  farm has a limited number of replacement threads that continue
  working on unrelated tasks after some timeout.

  Whenever possible, static dependencies of futures should be specified
  explicitly when forked (see \deps\ above). Thus the evaluation can
  work from the bottom up, without join conflicts and wait states.

  \<^descr> \<^ML>\Future.joins\~\xs\ joins the given list of futures
  simultaneously, which is more efficient than
  \<^ML>\map Future.join\~\xs\.

  Based on the dependency graph of tasks, the current thread takes over
  the responsibility to evaluate future expressions that are required
  for the main result, working from the bottom up. Waiting on future
  results that are presently evaluated on other threads only happens as
  a last resort, when no other unfinished futures are left over.

  \<^descr> \<^ML>\Future.value\~\a\ wraps the value \a\ as finished
  future value, bypassing the worker-thread farm. When joined, it
  returns \a\ without any further evaluation.

  There is very low overhead for this proforma wrapping of strict
  values as futures.

  \<^descr> \<^ML>\Future.map\~\f x\ is a fast-path implementation of
  \<^ML>\Future.fork\~\(fn () => f (\\<^ML>\Future.join\~\x))\, which
  avoids the full overhead of the task queue and worker-thread farm as
  far as possible. The function \f\ is supposed to be some trivial
  post-processing or projection of the future result.

  \<^descr> \<^ML>\Future.cancel\~\x\ cancels the task group of the given
  future, using \<^ML>\Future.cancel_group\ below.

  \<^descr> \<^ML>\Future.cancel_group\~\group\ cancels all tasks of the
  given task group for all time. Threads that are presently processing
  a task of the given group are interrupted: it may take some time
  until they are actually terminated. Tasks that are queued but not yet
  processed are dequeued and forced into interrupted state.
Since the task group is itself invalidated, any further attempt to fork a future that belongs to it will yield a canceled result as well. \<^descr> \<^ML>\Future.promise\~\abort\ registers a passive future with the given \abort\ operation: it is invoked when the future task group is canceled. \<^descr> \<^ML>\Future.fulfill\~\x a\ finishes the passive future \x\ by the given value \a\. If the promise has already been canceled, the attempt to fulfill it causes an exception. \ end diff --git a/src/Doc/Implementation/Prelim.thy b/src/Doc/Implementation/Prelim.thy --- a/src/Doc/Implementation/Prelim.thy +++ b/src/Doc/Implementation/Prelim.thy @@ -1,974 +1,974 @@ (*:maxLineLen=78:*) theory Prelim imports Base begin chapter \Preliminaries\ section \Contexts \label{sec:context}\ text \ A logical context represents the background that is required for formulating statements and composing proofs. It acts as a medium to produce formal content, depending on earlier material (declarations, results etc.). For example, derivations within the Isabelle/Pure logic can be described as a judgment \\ \\<^sub>\ \\, which means that a proposition \\\ is derivable from hypotheses \\\ within the theory \\\. There are logical reasons for keeping \\\ and \\\ separate: theories can be liberal about supporting type constructors and schematic polymorphism of constants and axioms, while the inner calculus of \\ \ \\ is strictly limited to Simple Type Theory (with fixed type variables in the assumptions). \<^medskip> Contexts and derivations are linked by the following key principles: \<^item> Transfer: monotonicity of derivations admits results to be transferred into a \<^emph>\larger\ context, i.e.\ \\ \\<^sub>\ \\ implies \\' \\<^sub>\\<^sub>' \\ for contexts \\' \ \\ and \\' \ \\. \<^item> Export: discharge of hypotheses admits results to be exported into a \<^emph>\smaller\ context, i.e.\ \\' \\<^sub>\ \\ implies \\ \\<^sub>\ \ \ \\ where \\' \ \\ and \\ = \' - \\. Note that \\\ remains unchanged here, only the \\\ part is affected. \<^medskip> By modeling the main characteristics of the primitive \\\ and \\\ above, and abstracting over any particular logical content, we arrive at the fundamental notions of \<^emph>\theory context\ and \<^emph>\proof context\ in Isabelle/Isar. These implement a certain policy to manage arbitrary \<^emph>\context data\. There is a strongly-typed mechanism to declare new kinds of data at compile time. The internal bootstrap process of Isabelle/Pure eventually reaches a stage where certain data slots provide the logical content of \\\ and \\\ sketched above, but this does not stop there! Various additional data slots support all kinds of mechanisms that are not necessarily part of the core logic. For example, there would be data for canonical introduction and elimination rules for arbitrary operators (depending on the object-logic and application), which enables users to perform standard proof steps implicitly (cf.\ the \rule\ method @{cite "isabelle-isar-ref"}). \<^medskip> Thus Isabelle/Isar is able to bring forth more and more concepts successively. In particular, an object-logic like Isabelle/HOL continues the Isabelle/Pure setup by adding specific components for automated reasoning (classical reasoner, tableau prover, structured induction etc.) and derived specification mechanisms (inductive predicates, recursive functions etc.). All of this is ultimately based on the generic data management by theory and proof contexts introduced here. 
\


subsection \Theory context \label{sec:context-theory}\

text \
  A \<^emph>\theory\ is a data container with explicit name and unique
  identifier. Theories are related by a (nominal) sub-theory relation,
  which corresponds to the dependency graph of the original
  construction; each theory is derived from a certain sub-graph of
  ancestor theories. To this end, the system maintains a set of
  symbolic ``identification stamps'' within each theory.

  The \begin\ operation starts a new theory by importing several parent
  theories (with merged contents) and entering a special mode of
  nameless incremental updates, until the final \end\ operation is
  performed.

  \<^medskip>
  The example in \figref{fig:ex-theory} below shows a theory graph
  derived from \Pure\, with theory \Length\ importing \Nat\ and \List\.
  The body of \Length\ consists of a sequence of updates, resulting
  locally in a linear sub-theory relation for each intermediate step.

  \begin{figure}[htb]
  \begin{center}
  \begin{tabular}{rcccl}
    & & \Pure\ \\
    & & \\\ \\
    & & \FOL\ \\
    & $\swarrow$ & & $\searrow$ & \\
    \Nat\ & & & & \List\ \\
    & $\searrow$ & & $\swarrow$ \\
    & & \Length\ \\
    & & \multicolumn{3}{l}{~~@{keyword "begin"}} \\
    & & $\vdots$~~ \\
    & & \multicolumn{3}{l}{~~@{command "end"}} \\
  \end{tabular}
  \caption{A theory definition depending on ancestors}\label{fig:ex-theory}
  \end{center}
  \end{figure}

  \<^medskip>
  Derived formal entities may retain a reference to the background
  theory in order to indicate the formal context from which they were
  produced. This provides an immutable certificate of the background
  theory.
\

text %mlref \
  \begin{mldecls}
- @{index_ML_type theory} \\
- @{index_ML Context.eq_thy: "theory * theory -> bool"} \\
- @{index_ML Context.subthy: "theory * theory -> bool"} \\
- @{index_ML Theory.begin_theory: "string * Position.T -> theory list -> theory"} \\
- @{index_ML Theory.parents_of: "theory -> theory list"} \\
- @{index_ML Theory.ancestors_of: "theory -> theory list"} \\
+ @{define_ML_type theory} \\
+ @{define_ML Context.eq_thy: "theory * theory -> bool"} \\
+ @{define_ML Context.subthy: "theory * theory -> bool"} \\
+ @{define_ML Theory.begin_theory: "string * Position.T -> theory list -> theory"} \\
+ @{define_ML Theory.parents_of: "theory -> theory list"} \\
+ @{define_ML Theory.ancestors_of: "theory -> theory list"} \\
  \end{mldecls}

  \<^descr> Type \<^ML_type>\theory\ represents theory contexts.

  \<^descr> \<^ML>\Context.eq_thy\~\(thy\<^sub>1, thy\<^sub>2)\ checks
  strict identity of two theories.

  \<^descr> \<^ML>\Context.subthy\~\(thy\<^sub>1, thy\<^sub>2)\ compares
  theories according to the intrinsic graph structure of the
  construction. This sub-theory relation is a nominal approximation of
  inclusion (\\\) of the corresponding content (according to the
  semantics of the ML modules that implement the data).

  \<^descr> \<^ML>\Theory.begin_theory\~\name parents\ constructs a new
  theory based on the given parents. This ML function is normally not
  invoked directly.

  \<^descr> \<^ML>\Theory.parents_of\~\thy\ returns the direct ancestors
  of \thy\.

  \<^descr> \<^ML>\Theory.ancestors_of\~\thy\ returns all ancestors of
  \thy\ (not including \thy\ itself).
\

text %mlantiq \
  \begin{matharray}{rcl}
  @{ML_antiquotation_def "theory"} & : & \ML_antiquotation\ \\
  @{ML_antiquotation_def "theory_context"} & : & \ML_antiquotation\ \\
  \end{matharray}

  \<^rail>\
    @@{ML_antiquotation theory} embedded?
    ;
    @@{ML_antiquotation theory_context} embedded
  \

  \<^descr> \@{theory}\ refers to the background theory of the current
  context --- as abstract value.
\<^descr> \@{theory A}\ refers to an explicitly named ancestor theory \A\ of the background theory of the current context --- as abstract value. \<^descr> \@{theory_context A}\ is similar to \@{theory A}\, but presents the result as initial \<^ML_type>\Proof.context\ (see also \<^ML>\Proof_Context.init_global\). \ subsection \Proof context \label{sec:context-proof}\ text \ A proof context is a container for pure data that refers to the theory from which it is derived. The \init\ operation creates a proof context from a given theory. There is an explicit \transfer\ operation to force resynchronization with updates to the background theory -- this is rarely required in practice. Entities derived in a proof context need to record logical requirements explicitly, since there is no separate context identification or symbolic inclusion as for theories. For example, hypotheses used in primitive derivations (cf.\ \secref{sec:thms}) are recorded separately within the sequent \\ \ \\, just to make double sure. Results could still leak into an alien proof context due to programming errors, but Isabelle/Isar includes some extra validity checks in critical positions, notably at the end of a sub-proof. Proof contexts may be manipulated arbitrarily, although the common discipline is to follow block structure as a mental model: a given context is extended consecutively, and results are exported back into the original context. Note that an Isar proof state models block-structured reasoning explicitly, using a stack of proof contexts internally. For various technical reasons, the background theory of an Isar proof state must not be changed while the proof is still under construction! \ text %mlref \ \begin{mldecls} - @{index_ML_type Proof.context} \\ - @{index_ML Proof_Context.init_global: "theory -> Proof.context"} \\ - @{index_ML Proof_Context.theory_of: "Proof.context -> theory"} \\ - @{index_ML Proof_Context.transfer: "theory -> Proof.context -> Proof.context"} \\ + @{define_ML_type Proof.context} \\ + @{define_ML Proof_Context.init_global: "theory -> Proof.context"} \\ + @{define_ML Proof_Context.theory_of: "Proof.context -> theory"} \\ + @{define_ML Proof_Context.transfer: "theory -> Proof.context -> Proof.context"} \\ \end{mldecls} \<^descr> Type \<^ML_type>\Proof.context\ represents proof contexts. \<^descr> \<^ML>\Proof_Context.init_global\~\thy\ produces a proof context derived from \thy\, initializing all data. \<^descr> \<^ML>\Proof_Context.theory_of\~\ctxt\ selects the background theory from \ctxt\. \<^descr> \<^ML>\Proof_Context.transfer\~\thy ctxt\ promotes the background theory of \ctxt\ to the super theory \thy\. \ text %mlantiq \ \begin{matharray}{rcl} @{ML_antiquotation_def "context"} & : & \ML_antiquotation\ \\ \end{matharray} \<^descr> \@{context}\ refers to \<^emph>\the\ context at compile-time --- as abstract value. Independently of (local) theory or proof mode, this always produces a meaningful result. This is probably the most common antiquotation in interactive experimentation with ML inside Isar. \ subsection \Generic contexts \label{sec:generic-context}\ text \ A generic context is the disjoint sum of either a theory or proof context. Occasionally, this enables uniform treatment of generic context data, typically extra-logical information. Operations on generic contexts include the usual injections, partial selections, and combinators for lifting operations on either component of the disjoint sum. 
Moreover, there are total operations \theory_of\ and \proof_of\ to
convert a generic context into either kind: a theory can always be
selected from the sum, while a proof context might have to be
constructed by an ad-hoc \init\ operation, which incurs a small runtime
overhead.
\

text %mlref \
  \begin{mldecls}
- @{index_ML_type Context.generic} \\
- @{index_ML Context.theory_of: "Context.generic -> theory"} \\
- @{index_ML Context.proof_of: "Context.generic -> Proof.context"} \\
+ @{define_ML_type Context.generic} \\
+ @{define_ML Context.theory_of: "Context.generic -> theory"} \\
+ @{define_ML Context.proof_of: "Context.generic -> Proof.context"} \\
  \end{mldecls}

  \<^descr> Type \<^ML_type>\Context.generic\ is the direct sum of
  \<^ML_type>\theory\ and \<^ML_type>\Proof.context\, with the datatype
  constructors \<^ML>\Context.Theory\ and \<^ML>\Context.Proof\.

  \<^descr> \<^ML>\Context.theory_of\~\context\ always produces a theory
  from the generic \context\, using \<^ML>\Proof_Context.theory_of\ as
  required.

  \<^descr> \<^ML>\Context.proof_of\~\context\ always produces a proof
  context from the generic \context\, using
  \<^ML>\Proof_Context.init_global\ as required (note that this
  re-initializes the context data with each invocation).
\


subsection \Context data \label{sec:context-data}\

text \
  The main purpose of theory and proof contexts is to manage arbitrary
  (pure) data. New data types can be declared incrementally at compile
  time. There are separate declaration mechanisms for any of the three
  kinds of contexts: theory, proof, generic.
\

paragraph \Theory data\
text \declarations need to implement the following ML signature:

  \<^medskip>
  \begin{tabular}{ll}
  \\ T\ & representing type \\
  \\ empty: T\ & empty default value \\
  \\ extend: T \ T\ & obsolete (identity function) \\
  \\ merge: T \ T \ T\ & merge data \\
  \end{tabular}
  \<^medskip>

  The \empty\ value acts as initial default for \<^emph>\any\ theory
  that does not declare actual data content; \extend\ is obsolete: it
  needs to be the identity function.

  The \merge\ operation needs to join the data from two theories in a
  conservative manner. The standard scheme for
  \merge (data\<^sub>1, data\<^sub>2)\ inserts those parts of
  \data\<^sub>2\ into \data\<^sub>1\ that are not yet present, while
  keeping the general order of things. The \<^ML>\Library.merge\
  function on plain lists may serve as a canonical template.

  Particularly note that shared parts of the data must not be
  duplicated by naive concatenation, or a theory graph that resembles a
  chain of diamonds would cause an exponential blowup!

  Sometimes, the data consists of a single item that cannot be
  ``merged'' in a sensible manner. Then the standard scheme degenerates
  to the projection to \data\<^sub>1\, ignoring \data\<^sub>2\ outright.
\

paragraph \Proof context data\
text \declarations need to implement the following ML signature:

  \<^medskip>
  \begin{tabular}{ll}
  \\ T\ & representing type \\
  \\ init: theory \ T\ & produce initial value \\
  \end{tabular}
  \<^medskip>

  The \init\ operation is supposed to produce a pure value from the
  given background theory and should be somehow ``immediate''. Whenever
  a proof context is initialized, which happens frequently, the system
  invokes the \init\ operation of \<^emph>\all\ theory data slots ever
  declared. This also means that one needs to be economic about the
  total number of proof data declarations in the system, i.e.\ each ML
  module should declare at most one, sometimes two data slots for its
  internal use.
Repeated data declarations to simulate a record type should be avoided! \ paragraph \Generic data\ text \ provides a hybrid interface for both theory and proof data. The \init\ operation for proof contexts is predefined to select the current data value from the background theory. \<^bigskip> Any of the above data declarations over type \T\ result in an ML structure with the following signature: \<^medskip> \begin{tabular}{ll} \get: context \ T\ \\ \put: T \ context \ context\ \\ \map: (T \ T) \ context \ context\ \\ \end{tabular} \<^medskip> These other operations provide exclusive access for the particular kind of context (theory, proof, or generic context). This interface observes the ML discipline for types and scopes: there is no other way to access the corresponding data slot of a context. By keeping these operations private, an Isabelle/ML module may maintain abstract values authentically. \ text %mlref \ \begin{mldecls} - @{index_ML_functor Theory_Data} \\ - @{index_ML_functor Proof_Data} \\ - @{index_ML_functor Generic_Data} \\ + @{define_ML_functor Theory_Data} \\ + @{define_ML_functor Proof_Data} \\ + @{define_ML_functor Generic_Data} \\ \end{mldecls} \<^descr> \<^ML_functor>\Theory_Data\\(spec)\ declares data for type \<^ML_type>\theory\ according to the specification provided as argument structure. The resulting structure provides data init and access operations as described above. \<^descr> \<^ML_functor>\Proof_Data\\(spec)\ is analogous to \<^ML_functor>\Theory_Data\ for type \<^ML_type>\Proof.context\. \<^descr> \<^ML_functor>\Generic_Data\\(spec)\ is analogous to \<^ML_functor>\Theory_Data\ for type \<^ML_type>\Context.generic\. \ text %mlex \ The following artificial example demonstrates theory data: we maintain a set of terms that are supposed to be wellformed wrt.\ the enclosing theory. The public interface is as follows: \ ML \ signature WELLFORMED_TERMS = sig val get: theory -> term list val add: term -> theory -> theory end; \ text \ The implementation uses private theory data internally, and only exposes an operation that involves explicit argument checking wrt.\ the given theory. \ ML \ structure Wellformed_Terms: WELLFORMED_TERMS = struct structure Terms = Theory_Data ( type T = term Ord_List.T; val empty = []; val extend = I; fun merge (ts1, ts2) = Ord_List.union Term_Ord.fast_term_ord ts1 ts2; ); val get = Terms.get; fun add raw_t thy = let val t = Sign.cert_term thy raw_t; in Terms.map (Ord_List.insert Term_Ord.fast_term_ord t) thy end; end; \ text \ Type \<^ML_type>\term Ord_List.T\ is used for reasonably efficient representation of a set of terms: all operations are linear in the number of stored elements. Here we assume that users of this module do not care about the declaration order, since that data structure forces its own arrangement of elements. Observe how the \<^ML_text>\merge\ operation joins the data slots of the two constituents: \<^ML>\Ord_List.union\ prevents duplication of common data from different branches, thus avoiding the danger of exponential blowup. Plain list append etc.\ must never be used for theory data merges! \<^medskip> Our intended invariant is achieved as follows: \<^enum> \<^ML>\Wellformed_Terms.add\ only admits terms that have passed the \<^ML>\Sign.cert_term\ check of the given theory at that point. \<^enum> Wellformedness in the sense of \<^ML>\Sign.cert_term\ is monotonic wrt.\ the sub-theory relation. So our data can move upwards in the hierarchy (via extension or merges), and maintain wellformedness without further checks. 
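\<^medskip>
A usage sketch over the static compile-time theory (the concrete term is
arbitrary):
\

ML_val \
  (*record a certified term in a derived theory and retrieve the data*)
  val thy' = Wellformed_Terms.add \<^term>\0::nat\ \<^theory>;
  val ts = Wellformed_Terms.get thy';
\

text \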
Note that all basic operations of the inference kernel (which includes \<^ML>\Sign.cert_term\) observe this monotonicity principle, but other user-space tools don't. For example, fully-featured type-inference via \<^ML>\Syntax.check_term\ (cf.\ \secref{sec:term-check}) is not necessarily monotonic wrt.\ the background theory, since constraints of term constants can be modified by later declarations, for example. In most cases, user-space context data does not have to take such invariants too seriously. The situation is different in the implementation of the inference kernel itself, which uses the very same data mechanisms for types, constants, axioms etc. \ subsection \Configuration options \label{sec:config-options}\ text \ A \<^emph>\configuration option\ is a named optional value of some basic type (Boolean, integer, string) that is stored in the context. It is a simple application of general context data (\secref{sec:context-data}) that is sufficiently common to justify customized setup, which includes some concrete declarations for end-users using existing notation for attributes (cf.\ \secref{sec:attributes}). For example, the predefined configuration option @{attribute show_types} controls output of explicit type constraints for variables in printed terms (cf.\ \secref{sec:read-print}). Its value can be modified within Isar text like this: \ experiment begin declare [[show_types = false]] \ \declaration within (local) theory context\ notepad begin note [[show_types = true]] \ \declaration within proof (forward mode)\ term x have "x = x" using [[show_types = false]] \ \declaration within proof (backward mode)\ .. end end text \ Configuration options that are not set explicitly hold a default value that can depend on the application context. This allows to retrieve the value from another slot within the context, or fall back on a global preference mechanism, for example. The operations to declare configuration options and get/map their values are modeled as direct replacements for historic global references, only that the context is made explicit. This allows easy configuration of tools, without relying on the execution order as required for old-style mutable references. \ text %mlref \ \begin{mldecls} - @{index_ML Config.get: "Proof.context -> 'a Config.T -> 'a"} \\ - @{index_ML Config.map: "'a Config.T -> ('a -> 'a) -> Proof.context -> Proof.context"} \\ - @{index_ML Attrib.setup_config_bool: "binding -> (Context.generic -> bool) -> + @{define_ML Config.get: "Proof.context -> 'a Config.T -> 'a"} \\ + @{define_ML Config.map: "'a Config.T -> ('a -> 'a) -> Proof.context -> Proof.context"} \\ + @{define_ML Attrib.setup_config_bool: "binding -> (Context.generic -> bool) -> bool Config.T"} \\ - @{index_ML Attrib.setup_config_int: "binding -> (Context.generic -> int) -> + @{define_ML Attrib.setup_config_int: "binding -> (Context.generic -> int) -> int Config.T"} \\ - @{index_ML Attrib.setup_config_real: "binding -> (Context.generic -> real) -> + @{define_ML Attrib.setup_config_real: "binding -> (Context.generic -> real) -> real Config.T"} \\ - @{index_ML Attrib.setup_config_string: "binding -> (Context.generic -> string) -> + @{define_ML Attrib.setup_config_string: "binding -> (Context.generic -> string) -> string Config.T"} \\ \end{mldecls} \<^descr> \<^ML>\Config.get\~\ctxt config\ gets the value of \config\ in the given context. \<^descr> \<^ML>\Config.map\~\config f ctxt\ updates the context by updating the value of \config\. 
\<^descr> \config =\~\<^ML>\Attrib.setup_config_bool\~\name default\ creates a named configuration option of type \<^ML_type>\bool\, with the given \default\ depending on the application context. The resulting \config\ can be used to get/map its value in a given context. There is an implicit update of the background theory that registers the option as attribute with some concrete syntax. \<^descr> \<^ML>\Attrib.setup_config_int\, \<^ML>\Attrib.setup_config_real\, and \<^ML>\Attrib.setup_config_string\ work like \<^ML>\Attrib.setup_config_bool\, but for types \<^ML_type>\int\, \<^ML_type>\real\, and \<^ML_type>\string\, respectively. \ text %mlex \ The following example shows how to declare and use a Boolean configuration option called \my_flag\ with constant default value \<^ML>\false\. \ ML \ val my_flag = Attrib.setup_config_bool \<^binding>\my_flag\ (K false) \ text \ Now the user can refer to @{attribute my_flag} in declarations, while ML tools can retrieve the current value from the context via \<^ML>\Config.get\. \ ML_val \\<^assert> (Config.get \<^context> my_flag = false)\ declare [[my_flag = true]] ML_val \\<^assert> (Config.get \<^context> my_flag = true)\ notepad begin { note [[my_flag = false]] ML_val \\<^assert> (Config.get \<^context> my_flag = false)\ } ML_val \\<^assert> (Config.get \<^context> my_flag = true)\ end text \ Here is another example involving ML type \<^ML_type>\real\ (floating-point numbers). \ ML \ val airspeed_velocity = Attrib.setup_config_real \<^binding>\airspeed_velocity\ (K 0.0) \ declare [[airspeed_velocity = 10]] declare [[airspeed_velocity = 9.9]] section \Names \label{sec:names}\ text \ In principle, a name is just a string, but there are various conventions for representing additional structure. For example, ``\Foo.bar.baz\'' is considered as a long name consisting of qualifier \Foo.bar\ and base name \baz\. The individual constituents of a name may have further substructure, e.g.\ the string ``\<^verbatim>\\\'' encodes as a single symbol (\secref{sec:symbols}). \<^medskip> Subsequently, we shall introduce specific categories of names. Roughly speaking these correspond to logical entities as follows: \<^item> Basic names (\secref{sec:basic-name}): free and bound variables. \<^item> Indexed names (\secref{sec:indexname}): schematic variables. \<^item> Long names (\secref{sec:long-name}): constants of any kind (type constructors, term constants, other concepts defined in user space). Such entities are typically managed via name spaces (\secref{sec:name-space}). \ subsection \Basic names \label{sec:basic-name}\ text \ A \<^emph>\basic name\ essentially consists of a single Isabelle identifier. There are conventions to mark separate classes of basic names, by attaching a suffix of underscores: one underscore means \<^emph>\internal name\, two underscores means \<^emph>\Skolem name\, three underscores means \<^emph>\internal Skolem name\. For example, the basic name \foo\ has the internal version \foo_\, with Skolem versions \foo__\ and \foo___\, respectively. These special versions provide copies of the basic name space, apart from anything that normally appears in the user text. For example, system-generated variables in Isar proof contexts are usually marked as internal, which prevents mysterious names like \xaa\ from appearing in human-readable text. \<^medskip> Manipulating binding scopes often requires on-the-fly renamings. A \<^emph>\name context\ contains a collection of already used names. The \declare\ operation adds names to the context.
The \invents\ operation derives a number of fresh names from a given starting point. For example, the first three names derived from \a\ are \a\, \b\, \c\. The \variants\ operation produces fresh names by incrementing tentative names as base-26 numbers (with digits \a..z\) until all clashes are resolved. For example, name \foo\ results in variants \fooa\, \foob\, \fooc\, \dots, \fooaa\, \fooab\ etc.; each renaming step picks the next unused variant from this sequence. \ text %mlref \ \begin{mldecls} - @{index_ML Name.internal: "string -> string"} \\ - @{index_ML Name.skolem: "string -> string"} \\ + @{define_ML Name.internal: "string -> string"} \\ + @{define_ML Name.skolem: "string -> string"} \\ \end{mldecls} \begin{mldecls} - @{index_ML_type Name.context} \\ - @{index_ML Name.context: Name.context} \\ - @{index_ML Name.declare: "string -> Name.context -> Name.context"} \\ - @{index_ML Name.invent: "Name.context -> string -> int -> string list"} \\ - @{index_ML Name.variant: "string -> Name.context -> string * Name.context"} \\ + @{define_ML_type Name.context} \\ + @{define_ML Name.context: Name.context} \\ + @{define_ML Name.declare: "string -> Name.context -> Name.context"} \\ + @{define_ML Name.invent: "Name.context -> string -> int -> string list"} \\ + @{define_ML Name.variant: "string -> Name.context -> string * Name.context"} \\ \end{mldecls} \begin{mldecls} - @{index_ML Variable.names_of: "Proof.context -> Name.context"} \\ + @{define_ML Variable.names_of: "Proof.context -> Name.context"} \\ \end{mldecls} \<^descr> \<^ML>\Name.internal\~\name\ produces an internal name by adding one underscore. \<^descr> \<^ML>\Name.skolem\~\name\ produces a Skolem name by adding two underscores. \<^descr> Type \<^ML_type>\Name.context\ represents the context of already used names; the initial value is \<^ML>\Name.context\. \<^descr> \<^ML>\Name.declare\~\name\ enters a used name into the context. \<^descr> \<^ML>\Name.invent\~\context name n\ produces \n\ fresh names derived from \name\. \<^descr> \<^ML>\Name.variant\~\name context\ produces a fresh variant of \name\; the result is declared to the context. \<^descr> \<^ML>\Variable.names_of\~\ctxt\ retrieves the context of declared type and term variable names. Projecting a proof context down to a primitive name context is occasionally useful when invoking lower-level operations. Regular management of ``fresh variables'' is done by suitable operations of structure \<^ML_structure>\Variable\, which is also able to provide an official status of ``locally fixed variable'' within the logical environment (cf.\ \secref{sec:variables}). \ text %mlex \ The following simple examples demonstrate how to produce fresh names from the initial \<^ML>\Name.context\. \ ML_val \ val list1 = Name.invent Name.context "a" 5; \<^assert> (list1 = ["a", "b", "c", "d", "e"]); val list2 = #1 (fold_map Name.variant ["x", "x", "a", "a", "'a", "'a"] Name.context); \<^assert> (list2 = ["x", "xa", "a", "aa", "'a", "'aa"]); \ text \ \<^medskip> The same works relative to the formal context as follows.\ experiment fixes a b c :: 'a begin ML_val \ val names = Variable.names_of \<^context>; val list1 = Name.invent names "a" 5; \<^assert> (list1 = ["d", "e", "f", "g", "h"]); val list2 = #1 (fold_map Name.variant ["x", "x", "a", "a", "'a", "'a"] names); \<^assert> (list2 = ["x", "xa", "aa", "ab", "'aa", "'ab"]); \ end subsection \Indexed names \label{sec:indexname}\ text \ An \<^emph>\indexed name\ (or \indexname\) is a pair of a basic name and a natural number.
This representation allows efficient renaming by incrementing the second component only. The canonical way to rename two collections of indexnames apart from each other is this: determine the maximum index \maxidx\ of the first collection, then increment all indexes of the second collection by \maxidx + 1\; the maximum index of an empty collection is \-1\. Occasionally, basic names are injected into the same pair type of indexed names: then \(x, -1)\ is used to encode the basic name \x\. \<^medskip> Isabelle syntax observes the following rules for representing an indexname \(x, i)\ as a packed string: \<^item> \?x\ if \x\ does not end with a digit and \i = 0\, \<^item> \?xi\ if \x\ does not end with a digit, \<^item> \?x.i\ otherwise. Indexnames may acquire large index numbers after several maxidx shifts have been applied. Results are usually normalized towards \0\ at certain checkpoints, notably at the end of a proof. This works by producing variants of the corresponding basic name components. For example, the collection \?x1, ?x7, ?x42\ becomes \?x, ?xa, ?xb\. \ text %mlref \ \begin{mldecls} - @{index_ML_type indexname: "string * int"} \\ + @{define_ML_type indexname = "string * int"} \\ \end{mldecls} \<^descr> Type \<^ML_type>\indexname\ represents indexed names. This is an abbreviation for \<^ML_type>\string * int\. The second component is usually non-negative, except for situations where \(x, -1)\ is used to inject basic names into this type. Other negative indexes should not be used. \ subsection \Long names \label{sec:long-name}\ text \ A \<^emph>\long name\ consists of a sequence of non-empty name components. The packed representation uses a dot as separator, as in ``\A.b.c\''. The last component is called \<^emph>\base name\, the remaining prefix is called \<^emph>\qualifier\ (which may be empty). The qualifier can be understood as the access path to the named entity while passing through some nested block-structure, although our free-form long names do not really enforce any strict discipline. For example, an item named ``\A.b.c\'' may be understood as a local entity \c\, within a local structure \b\, within a global structure \A\. In practice, long names usually represent 1--3 levels of qualification. User ML code should not make any assumptions about the particular structure of long names! The empty name is commonly used as an indication of unnamed entities, or entities that are not entered into the corresponding name space, whenever this makes any sense. The basic operations on long names map empty names again to empty names. \ text %mlref \ \begin{mldecls} - @{index_ML Long_Name.base_name: "string -> string"} \\ - @{index_ML Long_Name.qualifier: "string -> string"} \\ - @{index_ML Long_Name.append: "string -> string -> string"} \\ - @{index_ML Long_Name.implode: "string list -> string"} \\ - @{index_ML Long_Name.explode: "string -> string list"} \\ + @{define_ML Long_Name.base_name: "string -> string"} \\ + @{define_ML Long_Name.qualifier: "string -> string"} \\ + @{define_ML Long_Name.append: "string -> string -> string"} \\ + @{define_ML Long_Name.implode: "string list -> string"} \\ + @{define_ML Long_Name.explode: "string -> string list"} \\ \end{mldecls} \<^descr> \<^ML>\Long_Name.base_name\~\name\ returns the base name of a long name. \<^descr> \<^ML>\Long_Name.qualifier\~\name\ returns the qualifier of a long name. \<^descr> \<^ML>\Long_Name.append\~\name\<^sub>1 name\<^sub>2\ appends two long names. 
\<^descr> \<^ML>\Long_Name.implode\~\names\ and \<^ML>\Long_Name.explode\~\name\ convert between the packed string representation and the explicit list form of long names. \ subsection \Name spaces \label{sec:name-space}\ text \ A \name space\ manages a collection of long names, together with a mapping between partially qualified external names and fully qualified internal names (in both directions). Note that the corresponding \intern\ and \extern\ operations are mostly used for parsing and printing only! The \declare\ operation augments a name space according to the accesses determined by a given binding, and a naming policy from the context. \<^medskip> A \binding\ specifies details about the prospective long name of a newly introduced formal entity. It consists of a base name, prefixes for qualification (separate ones for system infrastructure and user-space mechanisms), a slot for the original source position, and some additional flags. \<^medskip> A \naming\ provides some additional details for producing a long name from a binding. Normally, the naming is implicit in the theory or proof context. The \full\ operation (and its variants for different context types) produces a fully qualified internal name to be entered into a name space. The main equation of this ``chemical reaction'' when binding new entities in a context is as follows: \<^medskip> \begin{tabular}{l} \binding + naming \ long name + name space accesses\ \end{tabular} \<^bigskip> As a general principle, there is a separate name space for each kind of formal entity, e.g.\ fact, logical constant, type constructor, type class. It is usually clear from the occurrence in concrete syntax (or from the scope) which kind of entity a name refers to. For example, the very same name \c\ may be used uniformly for a constant, type constructor, and type class. There are common schemes to name derived entities systematically according to the name of the main logical entity involved, e.g.\ fact \c.intro\ for a canonical introduction rule related to constant \c\. This technique of mapping names from one space into another requires some care in order to avoid conflicts. In particular, theorem names derived from a type constructor or type class should get an extra suffix in addition to the usual qualification.
This leads to the following conventions for derived names: \<^medskip> \begin{tabular}{ll} logical entity & fact name \\\hline constant \c\ & \c.intro\ \\ type \c\ & \c_type.intro\ \\ class \c\ & \c_class.intro\ \\ \end{tabular} \ text %mlref \ \begin{mldecls} - @{index_ML_type binding} \\ - @{index_ML Binding.empty: binding} \\ - @{index_ML Binding.name: "string -> binding"} \\ - @{index_ML Binding.qualify: "bool -> string -> binding -> binding"} \\ - @{index_ML Binding.prefix: "bool -> string -> binding -> binding"} \\ - @{index_ML Binding.concealed: "binding -> binding"} \\ - @{index_ML Binding.print: "binding -> string"} \\ + @{define_ML_type binding} \\ + @{define_ML Binding.empty: binding} \\ + @{define_ML Binding.name: "string -> binding"} \\ + @{define_ML Binding.qualify: "bool -> string -> binding -> binding"} \\ + @{define_ML Binding.prefix: "bool -> string -> binding -> binding"} \\ + @{define_ML Binding.concealed: "binding -> binding"} \\ + @{define_ML Binding.print: "binding -> string"} \\ \end{mldecls} \begin{mldecls} - @{index_ML_type Name_Space.naming} \\ - @{index_ML Name_Space.global_naming: Name_Space.naming} \\ - @{index_ML Name_Space.add_path: "string -> Name_Space.naming -> Name_Space.naming"} \\ - @{index_ML Name_Space.full_name: "Name_Space.naming -> binding -> string"} \\ + @{define_ML_type Name_Space.naming} \\ + @{define_ML Name_Space.global_naming: Name_Space.naming} \\ + @{define_ML Name_Space.add_path: "string -> Name_Space.naming -> Name_Space.naming"} \\ + @{define_ML Name_Space.full_name: "Name_Space.naming -> binding -> string"} \\ \end{mldecls} \begin{mldecls} - @{index_ML_type Name_Space.T} \\ - @{index_ML Name_Space.empty: "string -> Name_Space.T"} \\ - @{index_ML Name_Space.merge: "Name_Space.T * Name_Space.T -> Name_Space.T"} \\ - @{index_ML Name_Space.declare: "Context.generic -> bool -> + @{define_ML_type Name_Space.T} \\ + @{define_ML Name_Space.empty: "string -> Name_Space.T"} \\ + @{define_ML Name_Space.merge: "Name_Space.T * Name_Space.T -> Name_Space.T"} \\ + @{define_ML Name_Space.declare: "Context.generic -> bool -> binding -> Name_Space.T -> string * Name_Space.T"} \\ - @{index_ML Name_Space.intern: "Name_Space.T -> string -> string"} \\ - @{index_ML Name_Space.extern: "Proof.context -> Name_Space.T -> string -> string"} \\ - @{index_ML Name_Space.is_concealed: "Name_Space.T -> string -> bool"} + @{define_ML Name_Space.intern: "Name_Space.T -> string -> string"} \\ + @{define_ML Name_Space.extern: "Proof.context -> Name_Space.T -> string -> string"} \\ + @{define_ML Name_Space.is_concealed: "Name_Space.T -> string -> bool"} \end{mldecls} \<^descr> Type \<^ML_type>\binding\ represents the abstract concept of name bindings. \<^descr> \<^ML>\Binding.empty\ is the empty binding. \<^descr> \<^ML>\Binding.name\~\name\ produces a binding with base name \name\. Note that this lacks proper source position information; see also the ML antiquotation @{ML_antiquotation binding}. \<^descr> \<^ML>\Binding.qualify\~\mandatory name binding\ prefixes qualifier \name\ to \binding\. The \mandatory\ flag tells if this name component always needs to be given in name space accesses --- this is mostly \false\ in practice. Note that this part of qualification is typically used in derived specification mechanisms. \<^descr> \<^ML>\Binding.prefix\ is similar to \<^ML>\Binding.qualify\, but affects the system prefix. 
This part of extra qualification is typically used in the infrastructure for modular specifications, notably ``local theory targets'' (see also \chref{ch:local-theory}). \<^descr> \<^ML>\Binding.concealed\~\binding\ indicates that the binding shall refer to an entity that serves foundational purposes only. This flag helps to mark implementation details of specification mechanisms etc. Other tools should not depend on the particulars of concealed entities (cf.\ \<^ML>\Name_Space.is_concealed\). \<^descr> \<^ML>\Binding.print\~\binding\ produces a string representation for human-readable output, together with some formal markup that might get used in GUI front-ends, for example. \<^descr> Type \<^ML_type>\Name_Space.naming\ represents the abstract concept of a naming policy. \<^descr> \<^ML>\Name_Space.global_naming\ is the default naming policy: it is global and lacks any path prefix. In a regular theory context this is augmented by a path prefix consisting of the theory name. \<^descr> \<^ML>\Name_Space.add_path\~\path naming\ augments the naming policy by extending its path component. \<^descr> \<^ML>\Name_Space.full_name\~\naming binding\ turns a name binding (usually a basic name) into the fully qualified internal name, according to the given naming policy. \<^descr> Type \<^ML_type>\Name_Space.T\ represents name spaces. \<^descr> \<^ML>\Name_Space.empty\~\kind\ and \<^ML>\Name_Space.merge\~\(space\<^sub>1, space\<^sub>2)\ are the canonical operations for maintaining name spaces according to theory data management (\secref{sec:context-data}); \kind\ is a formal comment to characterize the purpose of a name space. \<^descr> \<^ML>\Name_Space.declare\~\context strict binding space\ enters a name binding as fully qualified internal name into the name space, using the naming of the context. \<^descr> \<^ML>\Name_Space.intern\~\space name\ internalizes a (partially qualified) external name. This operation is mostly for parsing! Note that fully qualified names stemming from declarations are produced via \<^ML>\Name_Space.full_name\ and \<^ML>\Name_Space.declare\ (or their derivatives for \<^ML_type>\theory\ and \<^ML_type>\Proof.context\). \<^descr> \<^ML>\Name_Space.extern\~\ctxt space name\ externalizes a (fully qualified) internal name. This operation is mostly for printing! User code should not rely on the precise result too much. \<^descr> \<^ML>\Name_Space.is_concealed\~\space name\ indicates whether \name\ refers to a strictly private entity that other tools are supposed to ignore! \ text %mlantiq \ \begin{matharray}{rcl} @{ML_antiquotation_def "binding"} & : & \ML_antiquotation\ \\ \end{matharray} \<^rail>\ @@{ML_antiquotation binding} embedded \ \<^descr> \@{binding name}\ produces a binding with base name \name\ and the source position taken from the concrete syntax of this antiquotation. In many situations this is more appropriate than the more basic \<^ML>\Binding.name\ function. \ text %mlex \ The following example yields the source position of some concrete binding inlined into the text: \ ML_val \Binding.pos_of \<^binding>\here\\ text \ \<^medskip> That position can also be printed in a message as follows: \ ML_command \writeln ("Look here" ^ Position.here (Binding.pos_of \<^binding>\here\))\ text \ This illustrates a key virtue of formalized bindings as opposed to raw specifications of base names: the system can use this additional information for feedback given to the user (error messages etc.).
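\<^medskip>
A further sketch shows how a naming policy turns a binding into a fully
qualified name; the path \Foo\ is hypothetical:
\

ML_val \
  (*hypothetical naming policy with path prefix "Foo"*)
  val naming = Name_Space.add_path "Foo" Name_Space.global_naming;
  val name = Name_Space.full_name naming \<^binding>\bar\;  (*"Foo.bar"*)
\

text \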
\<^medskip> The following example refers to its source position directly, which is occasionally useful for experimentation and diagnostic purposes: \ ML_command \warning ("Look here" ^ Position.here \<^here>)\ end diff --git a/src/Doc/Implementation/Proof.thy b/src/Doc/Implementation/Proof.thy --- a/src/Doc/Implementation/Proof.thy +++ b/src/Doc/Implementation/Proof.thy @@ -1,473 +1,473 @@ (*:maxLineLen=78:*) theory Proof imports Base begin chapter \Structured proofs\ section \Variables \label{sec:variables}\ text \ Any variable that is not explicitly bound by \\\-abstraction is considered as ``free''. Logically, free variables act like outermost universal quantification at the sequent level: \A\<^sub>1(x), \, A\<^sub>n(x) \ B(x)\ means that the result holds \<^emph>\for all\ values of \x\. Free variables for terms (not types) can be fully internalized into the logic: \\ B(x)\ and \\ \x. B(x)\ are interchangeable, provided that \x\ does not occur elsewhere in the context. Inspecting \\ \x. B(x)\ more closely, we see that inside the quantifier, \x\ is essentially ``arbitrary, but fixed'', while from outside it appears as a place-holder for instantiation (thanks to \\\ elimination). The Pure logic represents the idea of variables being either inside or outside the current scope by providing separate syntactic categories for \<^emph>\fixed variables\ (e.g.\ \x\) vs.\ \<^emph>\schematic variables\ (e.g.\ \?x\). Incidentally, a universal result \\ \x. B(x)\ has the HHF normal form \\ B(?x)\, which represents its generality without requiring an explicit quantifier. The same principle works for type variables: \\ B(?\)\ represents the idea of ``\\ \\. B(\)\'' without demanding a truly polymorphic framework. \<^medskip> Additional care is required to treat type variables in a way that facilitates type-inference. In principle, term variables depend on type variables, which means that type variables would have to be declared first. For example, a raw type-theoretic framework would demand the context to be constructed in stages as follows: \\ = \: type, x: \, a: A(x\<^sub>\)\. We allow a slightly less formalistic mode of operation: term variables \x\ are fixed without specifying a type yet (essentially \<^emph>\all\ potential occurrences of some instance \x\<^sub>\\ are fixed); the first occurrence of \x\ within a specific term assigns its most general type, which is then maintained consistently in the context. The above example becomes \\ = x: term, \: type, A(x\<^sub>\)\, where type \\\ is fixed \<^emph>\after\ term \x\, and the constraint \x :: \\ is an implicit consequence of the occurrence of \x\<^sub>\\ in the subsequent proposition. This twist of dependencies is also accommodated by the reverse operation of exporting results from a context: a type variable \\\ is considered fixed as long as it occurs in some fixed term variable of the context. For example, exporting \x: term, \: type \ x\<^sub>\ \ x\<^sub>\\ produces in the first step \x: term \ x\<^sub>\ \ x\<^sub>\\ for fixed \\\, and only in the second step \\ ?x\<^sub>?\<^sub>\ \ ?x\<^sub>?\<^sub>\\ for schematic \?x\ and \?\\. The following Isar source text illustrates this scenario.
\ notepad begin { fix x \ \all potential occurrences of some \x::\\ are fixed\ { have "x::'a \ x" \ \implicit type assignment by concrete occurrence\ by (rule reflexive) } thm this \ \result still with fixed type \'a\\ } thm this \ \fully general result for arbitrary \?x::?'a\\ end text \ The Isabelle/Isar proof context manages the details of term vs.\ type variables, with high-level principles for moving the frontier between fixed and schematic variables. The \add_fixes\ operation explicitly declares fixed variables; the \declare_term\ operation absorbs a term into a context by fixing new type variables and adding syntactic constraints. The \export\ operation is able to perform the main work of generalizing term and type variables as sketched above, assuming that fixing variables and terms have been declared properly. The \import\ operation makes a generalized fact a genuine part of the context, by inventing fixed variables for the schematic ones. The effect can be reversed by using \export\ later, potentially with an extended context; the result is equivalent to the original modulo renaming of schematic variables. The \focus\ operation provides a variant of \import\ for nested propositions (with explicit quantification): \\x\<^sub>1 \ x\<^sub>n. B(x\<^sub>1, \, x\<^sub>n)\ is decomposed by inventing fixed variables \x\<^sub>1, \, x\<^sub>n\ for the body. \ text %mlref \ \begin{mldecls} - @{index_ML Variable.add_fixes: " - string list -> Proof.context -> string list * Proof.context"} \\ - @{index_ML Variable.variant_fixes: " + @{define_ML Variable.add_fixes: " string list -> Proof.context -> string list * Proof.context"} \\ - @{index_ML Variable.declare_term: "term -> Proof.context -> Proof.context"} \\ - @{index_ML Variable.declare_constraints: "term -> Proof.context -> Proof.context"} \\ - @{index_ML Variable.export: "Proof.context -> Proof.context -> thm list -> thm list"} \\ - @{index_ML Variable.polymorphic: "Proof.context -> term list -> term list"} \\ - @{index_ML Variable.import: "bool -> thm list -> Proof.context -> + @{define_ML Variable.variant_fixes: " + string list -> Proof.context -> string list * Proof.context"} \\ + @{define_ML Variable.declare_term: "term -> Proof.context -> Proof.context"} \\ + @{define_ML Variable.declare_constraints: "term -> Proof.context -> Proof.context"} \\ + @{define_ML Variable.export: "Proof.context -> Proof.context -> thm list -> thm list"} \\ + @{define_ML Variable.polymorphic: "Proof.context -> term list -> term list"} \\ + @{define_ML Variable.import: "bool -> thm list -> Proof.context -> ((((indexname * sort) * ctyp) list * ((indexname * typ) * cterm) list) * thm list) * Proof.context"} \\ - @{index_ML Variable.focus: "binding list option -> term -> Proof.context -> + @{define_ML Variable.focus: "binding list option -> term -> Proof.context -> ((string * (string * typ)) list * term) * Proof.context"} \\ \end{mldecls} \<^descr> \<^ML>\Variable.add_fixes\~\xs ctxt\ fixes term variables \xs\, returning the resulting internal names. By default, the internal representation coincides with the external one, which also means that the given variables must not be fixed already. There is a different policy within a local proof body: the given names are just hints for newly invented Skolem variables. \<^descr> \<^ML>\Variable.variant_fixes\ is similar to \<^ML>\Variable.add_fixes\, but always produces fresh variants of the given names. \<^descr> \<^ML>\Variable.declare_term\~\t ctxt\ declares term \t\ to belong to the context.
This automatically fixes new type variables, but not term variables. Syntactic constraints for type and term variables are declared uniformly, though. \<^descr> \<^ML>\Variable.declare_constraints\~\t ctxt\ declares syntactic constraints from term \t\, without making it part of the context yet. \<^descr> \<^ML>\Variable.export\~\inner outer thms\ generalizes fixed type and term variables in \thms\ according to the difference of the \inner\ and \outer\ context, following the principles sketched above. \<^descr> \<^ML>\Variable.polymorphic\~\ctxt ts\ generalizes type variables in \ts\ as far as possible, even those occurring in fixed term variables. The default policy of type-inference is to fix newly introduced type variables, which is essentially reversed with \<^ML>\Variable.polymorphic\: here the given terms are detached from the context as far as possible. \<^descr> \<^ML>\Variable.import\~\open thms ctxt\ invents fixed type and term variables for the schematic ones occurring in \thms\. The \open\ flag indicates whether the fixed names should be accessible to the user, otherwise newly introduced names are marked as ``internal'' (\secref{sec:names}). \<^descr> \<^ML>\Variable.focus\~\bindings B\ decomposes the outermost \\\ prefix of proposition \B\, using the given name bindings. \ text %mlex \ The following example shows how to work with fixed term and type parameters and with type-inference. \ ML_val \ (*static compile-time context -- for testing only*) val ctxt0 = \<^context>; (*locally fixed parameters -- no type assignment yet*) val ([x, y], ctxt1) = ctxt0 |> Variable.add_fixes ["x", "y"]; (*t1: most general fixed type; t1': most general arbitrary type*) val t1 = Syntax.read_term ctxt1 "x"; val t1' = singleton (Variable.polymorphic ctxt1) t1; (*term u enforces specific type assignment*) val u = Syntax.read_term ctxt1 "(x::nat) \ y"; (*official declaration of u -- propagates constraints etc.*) val ctxt2 = ctxt1 |> Variable.declare_term u; val t2 = Syntax.read_term ctxt2 "x"; (*x::nat is enforced*) \ text \ In the above example, the starting context is derived from the toplevel theory, which means that fixed variables are internalized literally: \x\ is mapped again to \x\, and attempting to fix it again in the subsequent context is an error. Alternatively, fixed parameters can be renamed explicitly as follows: \ ML_val \ val ctxt0 = \<^context>; val ([x1, x2, x3], ctxt1) = ctxt0 |> Variable.variant_fixes ["x", "x", "x"]; \ text \ The following ML code can now work with the invented names of \x1\, \x2\, \x3\, without depending on the details of the system policy for introducing these variants. Recall that within a proof body the system always invents fresh ``Skolem constants'', e.g.\ as follows: \ notepad begin ML_prf %"ML" \val ctxt0 = \<^context>; val ([x1], ctxt1) = ctxt0 |> Variable.add_fixes ["x"]; val ([x2], ctxt2) = ctxt1 |> Variable.add_fixes ["x"]; val ([x3], ctxt3) = ctxt2 |> Variable.add_fixes ["x"]; val ([y1, y2], ctxt4) = ctxt3 |> Variable.variant_fixes ["y", "y"];\ end text \ In this situation \<^ML>\Variable.add_fixes\ and \<^ML>\Variable.variant_fixes\ are very similar, but identical name proposals given in a row are only accepted by the second version. \ section \Assumptions \label{sec:assumptions}\ text \ An \<^emph>\assumption\ is a proposition that is postulated in the current context. Local conclusions may use assumptions as additional facts, but this imposes implicit hypotheses that weaken the overall statement.
Assumptions are restricted to fixed non-schematic statements, i.e.\ all generality needs to be expressed by explicit quantifiers. Nevertheless, the result will be in HHF normal form with outermost quantifiers stripped. For example, by assuming \\x :: \. P x\ we get \\x :: \. P x \ P ?x\ for schematic \?x\ of fixed type \\\. Local derivations accumulate more and more explicit references to hypotheses: \A\<^sub>1, \, A\<^sub>n \ B\ where \A\<^sub>1, \, A\<^sub>n\ needs to be covered by the assumptions of the current context. \<^medskip> The \add_assms\ operation augments the context by local assumptions, which are parameterized by an arbitrary \export\ rule (see below). The \export\ operation moves facts from a (larger) inner context into a (smaller) outer context, by discharging the difference of the assumptions as specified by the associated export rules. Note that the discharged portion is determined by the difference of contexts, not the facts being exported! There is a separate flag to indicate a goal context, where the result is meant to refine an enclosing sub-goal of a structured proof state. \<^medskip> The most basic export rule discharges assumptions directly by means of the \\\ introduction rule: \[ \infer[(\\\intro\)]{\\ - A \ A \ B\}{\\ \ B\} \] The variant for goal refinements marks the newly introduced premises, which causes the canonical Isar goal refinement scheme to enforce unification with local premises within the goal: \[ \infer[(\#\\intro\)]{\\ - A \ #A \ B\}{\\ \ B\} \] \<^medskip> Alternative versions of assumptions may perform arbitrary transformations on export, as long as the corresponding portion of hypotheses is removed from the given facts. For example, a local definition works by fixing \x\ and assuming \x \ t\, with the following export rule to reverse the effect: \[ \infer[(\\\expand\)]{\\ - (x \ t) \ B t\}{\\ \ B x\} \] This works, because the assumption \x \ t\ was introduced in a context with \x\ being fresh, so \x\ does not occur in \\\ here. \ text %mlref \ \begin{mldecls} - @{index_ML_type Assumption.export} \\ - @{index_ML Assumption.assume: "Proof.context -> cterm -> thm"} \\ - @{index_ML Assumption.add_assms: + @{define_ML_type Assumption.export} \\ + @{define_ML Assumption.assume: "Proof.context -> cterm -> thm"} \\ + @{define_ML Assumption.add_assms: "Assumption.export -> cterm list -> Proof.context -> thm list * Proof.context"} \\ - @{index_ML Assumption.add_assumes: " + @{define_ML Assumption.add_assumes: " cterm list -> Proof.context -> thm list * Proof.context"} \\ - @{index_ML Assumption.export: "bool -> Proof.context -> Proof.context -> thm -> thm"} \\ + @{define_ML Assumption.export: "bool -> Proof.context -> Proof.context -> thm -> thm"} \\ \end{mldecls} \<^descr> Type \<^ML_type>\Assumption.export\ represents arbitrary export rules, which is any function of type \<^ML_type>\bool -> cterm list -> thm -> thm\, where the \<^ML_type>\bool\ indicates goal mode, and the \<^ML_type>\cterm list\ the collection of assumptions to be discharged simultaneously. \<^descr> \<^ML>\Assumption.assume\~\ctxt A\ turns proposition \A\ into a primitive assumption \A \ A'\, where the conclusion \A'\ is in HHF normal form. \<^descr> \<^ML>\Assumption.add_assms\~\r As\ augments the context by assumptions \As\ with export rule \r\. The resulting facts are hypothetical theorems as produced by the raw \<^ML>\Assumption.assume\. 
\<^descr> \<^ML>\Assumption.add_assumes\~\As\ is a special case of \<^ML>\Assumption.add_assms\ where the export rule performs \\\intro\ or \#\\intro\, depending on goal mode. \<^descr> \<^ML>\Assumption.export\~\is_goal inner outer thm\ exports result \thm\ from the \inner\ context back into the \outer\ one; \is_goal = true\ means this is a goal context. The result is in HHF normal form. Note that \<^ML>\Proof_Context.export\ combines \<^ML>\Variable.export\ and \<^ML>\Assumption.export\ in the canonical way. \ text %mlex \ The following example demonstrates how rules can be derived by building up a context of assumptions first, and exporting some local fact afterwards. We refer to \<^theory>\Pure\ equality here for testing purposes. \ ML_val \ (*static compile-time context -- for testing only*) val ctxt0 = \<^context>; val ([eq], ctxt1) = ctxt0 |> Assumption.add_assumes [\<^cprop>\x \ y\]; val eq' = Thm.symmetric eq; (*back to original context -- discharges assumption*) val r = Assumption.export false ctxt1 ctxt0 eq'; \ text \ Note that the variables of the resulting rule are not generalized. This would have required to fix them properly in the context beforehand, and export wrt.\ variables afterwards (cf.\ \<^ML>\Variable.export\ or the combined \<^ML>\Proof_Context.export\). \ section \Structured goals and results \label{sec:struct-goals}\ text \ Local results are established by monotonic reasoning from facts within a context. This allows common combinations of theorems, e.g.\ via \\/\\ elimination, resolution rules, or equational reasoning, see \secref{sec:thms}. Unaccounted context manipulations should be avoided, notably raw \\/\\ introduction or ad-hoc references to free variables or assumptions not present in the proof context. \<^medskip> The \SUBPROOF\ combinator allows to structure a tactical proof recursively by decomposing a selected sub-goal: \(\x. A(x) \ B(x)) \ \\ is turned into \B(x) \ \\ after fixing \x\ and assuming \A(x)\. This means the tactic needs to solve the conclusion, but may use the premise as a local fact, for locally fixed variables. The family of \FOCUS\ combinators is similar to \SUBPROOF\, but allows to retain schematic variables and pending subgoals in the resulting goal state. The \prove\ operation provides an interface for structured backwards reasoning under program control, with some explicit sanity checks of the result. The goal context can be augmented by additional fixed variables (cf.\ \secref{sec:variables}) and assumptions (cf.\ \secref{sec:assumptions}), which will be available as local facts during the proof and discharged into implications in the result. Type and term variables are generalized as usual, according to the context. The \obtain\ operation produces results by eliminating existing facts by means of a given tactic. This acts like a dual conclusion: the proof demonstrates that the context may be augmented by parameters and assumptions, without affecting any conclusions that do not mention these parameters. See also @{cite "isabelle-isar-ref"} for the user-level @{command obtain} and @{command guess} elements.
Final results, which may not refer to the parameters in the conclusion, need to be exported explicitly into the original context.\ text %mlref \ \begin{mldecls} - @{index_ML SUBPROOF: "(Subgoal.focus -> tactic) -> - Proof.context -> int -> tactic"} \\ - @{index_ML Subgoal.FOCUS: "(Subgoal.focus -> tactic) -> - Proof.context -> int -> tactic"} \\ - @{index_ML Subgoal.FOCUS_PREMS: "(Subgoal.focus -> tactic) -> + @{define_ML SUBPROOF: "(Subgoal.focus -> tactic) -> Proof.context -> int -> tactic"} \\ - @{index_ML Subgoal.FOCUS_PARAMS: "(Subgoal.focus -> tactic) -> + @{define_ML Subgoal.FOCUS: "(Subgoal.focus -> tactic) -> Proof.context -> int -> tactic"} \\ - @{index_ML Subgoal.focus: "Proof.context -> int -> binding list option -> + @{define_ML Subgoal.FOCUS_PREMS: "(Subgoal.focus -> tactic) -> + Proof.context -> int -> tactic"} \\ + @{define_ML Subgoal.FOCUS_PARAMS: "(Subgoal.focus -> tactic) -> + Proof.context -> int -> tactic"} \\ + @{define_ML Subgoal.focus: "Proof.context -> int -> binding list option -> thm -> Subgoal.focus * thm"} \\ - @{index_ML Subgoal.focus_prems: "Proof.context -> int -> binding list option -> + @{define_ML Subgoal.focus_prems: "Proof.context -> int -> binding list option -> thm -> Subgoal.focus * thm"} \\ - @{index_ML Subgoal.focus_params: "Proof.context -> int -> binding list option -> + @{define_ML Subgoal.focus_params: "Proof.context -> int -> binding list option -> thm -> Subgoal.focus * thm"} \\ \end{mldecls} \begin{mldecls} - @{index_ML Goal.prove: "Proof.context -> string list -> term list -> term -> + @{define_ML Goal.prove: "Proof.context -> string list -> term list -> term -> ({prems: thm list, context: Proof.context} -> tactic) -> thm"} \\ - @{index_ML Goal.prove_common: "Proof.context -> int option -> + @{define_ML Goal.prove_common: "Proof.context -> int option -> string list -> term list -> term list -> ({prems: thm list, context: Proof.context} -> tactic) -> thm list"} \\ \end{mldecls} \begin{mldecls} - @{index_ML Obtain.result: "(Proof.context -> tactic) -> thm list -> + @{define_ML Obtain.result: "(Proof.context -> tactic) -> thm list -> Proof.context -> ((string * cterm) list * thm list) * Proof.context"} \\ \end{mldecls} \<^descr> \<^ML>\SUBPROOF\~\tac ctxt i\ decomposes the structure of the specified sub-goal, producing an extended context and a reduced goal, which needs to be solved by the given tactic. All schematic parameters of the goal are imported into the context as fixed ones, which may not be instantiated in the sub-proof. \<^descr> \<^ML>\Subgoal.FOCUS\, \<^ML>\Subgoal.FOCUS_PREMS\, and \<^ML>\Subgoal.FOCUS_PARAMS\ are similar to \<^ML>\SUBPROOF\, but are slightly more flexible: only the specified parts of the subgoal are imported into the context, and the body tactic may introduce new subgoals and schematic variables. \<^descr> \<^ML>\Subgoal.focus\, \<^ML>\Subgoal.focus_prems\, \<^ML>\Subgoal.focus_params\ extract the focus information from a goal state in the same way as the corresponding tacticals above. This is occasionally useful to experiment without writing actual tactics yet. \<^descr> \<^ML>\Goal.prove\~\ctxt xs As C tac\ states goal \C\ in the context augmented by fixed variables \xs\ and assumptions \As\, and applies tactic \tac\ to solve it. The latter may depend on the local assumptions being presented as facts. The result is in HHF normal form.
\<^descr> \<^ML>\Goal.prove_common\~\ctxt fork_pri\ is the common form to state and prove a simultaneous goal statement, where \<^ML>\Goal.prove\ is a convenient shorthand that is most frequently used in applications. The given list of simultaneous conclusions is encoded in the goal state by means of Pure conjunction: \<^ML>\Goal.conjunction_tac\ will turn this into a collection of individual subgoals, but note that the original multi-goal state is usually required for advanced induction. It is possible to provide an optional priority for a forked proof, typically \<^ML>\SOME ~1\, while \<^ML>\NONE\ means the proof is immediate (sequential) as for \<^ML>\Goal.prove\. Note that a forked proof does not exhibit any failures in the usual way via exceptions in ML, but accumulates error situations under the execution id of the running transaction. Thus the system is able to expose error messages ultimately to the end-user, even though the subsequent ML code misses them. \<^descr> \<^ML>\Obtain.result\~\tac thms ctxt\ eliminates the given facts using a tactic, which results in additional fixed variables and assumptions in the context. Final results need to be exported explicitly. \ text %mlex \ The following minimal example illustrates how to access the focus information of a structured goal state. \ notepad begin fix A B C :: "'a \ bool" have "\x. A x \ B x \ C x" ML_val \val {goal, context = goal_ctxt, ...} = @{Isar.goal}; val (focus as {params, asms, concl, ...}, goal') = Subgoal.focus goal_ctxt 1 (SOME [\<^binding>\x\]) goal; val [A, B] = #prems focus; val [(_, x)] = #params focus;\ sorry end text \ \<^medskip> The next example demonstrates forward-elimination in a local context, using \<^ML>\Obtain.result\. \ notepad begin assume ex: "\x. B x" ML_prf %"ML" \val ctxt0 = \<^context>; val (([(_, x)], [B]), ctxt1) = ctxt0 |> Obtain.result (fn _ => eresolve_tac ctxt0 @{thms exE} 1) [@{thm ex}];\ ML_prf %"ML" \singleton (Proof_Context.export ctxt1 ctxt0) @{thm refl};\ ML_prf %"ML" \Proof_Context.export ctxt1 ctxt0 [Thm.reflexive x] handle ERROR msg => (warning msg; []);\ end end diff --git a/src/Doc/Implementation/Syntax.thy b/src/Doc/Implementation/Syntax.thy --- a/src/Doc/Implementation/Syntax.thy +++ b/src/Doc/Implementation/Syntax.thy @@ -1,259 +1,259 @@ (*:maxLineLen=78:*) theory Syntax imports Base begin chapter \Concrete syntax and type-checking\ text \ Pure \\\-calculus as introduced in \chref{ch:logic} is an adequate foundation for logical languages --- in the tradition of \<^emph>\higher-order abstract syntax\ --- but end-users require additional means for reading and printing of terms and types. This important add-on outside the logical core is called \<^emph>\inner syntax\ in Isabelle jargon, as opposed to the \<^emph>\outer syntax\ of the theory and proof language @{cite "isabelle-isar-ref"}. For example, according to @{cite church40} quantifiers are represented as higher-order constants \All :: ('a \ bool) \ bool\ such that \All (\x::'a. B x)\ faithfully represents the idea that is displayed in Isabelle as \\x::'a. B x\ via @{keyword "binder"} notation. Moreover, type-inference in the style of Hindley-Milner @{cite hindleymilner} (and extensions) enables users to write \\x. B x\ concisely, when the type \'a\ is already clear from the context.\<^footnote>\Type-inference taken to the extreme can easily confuse users. 
Beginners often stumble over unexpectedly general types inferred by the system.\ \<^medskip> The main inner syntax operations are \<^emph>\read\ for parsing together with type-checking, and \<^emph>\pretty\ for formatted output. See also \secref{sec:read-print}. Furthermore, the input and output syntax layers are sub-divided into separate phases for \<^emph>\concrete syntax\ versus \<^emph>\abstract syntax\, see also \secref{sec:parse-unparse} and \secref{sec:term-check}, respectively. This results in the following decomposition of the main operations: \<^item> \read = parse; check\ \<^item> \pretty = uncheck; unparse\ For example, some specification package might thus intercept syntax processing at a well-defined stage after \parse\, to augment the resulting pre-term before full type-reconstruction is performed by \check\. Note that the formal status of bound variables, versus free variables, versus constants must not be changed between these phases. \<^medskip> In general, \check\ and \uncheck\ operate simultaneously on a list of terms. This is particularly important for type-checking, to reconstruct types for several terms of the same context and scope. In contrast, \parse\ and \unparse\ operate separately on single terms. There are analogous operations to read and print types, with the same sub-division into phases. \ section \Reading and pretty printing \label{sec:read-print}\ text \ Read and print operations are roughly dual to each other, such that for the user \s' = pretty (read s)\ looks similar to the original source text \s\, but the details depend on many side-conditions. There are also explicit options to control the removal of type information in the output. The default configuration routinely loses information, so \t' = read (pretty t)\ might fail, or produce a differently typed term, or a completely different term in the face of syntactic overloading.
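For example, a simple round trip can be sketched as follows (a minimal sketch,
using the static compile-time context and an arbitrary free variable):
\

ML_val \
  val ctxt = \<^context>;
  val t = Syntax.read_term ctxt "x";      (*parse and check*)
  val s = Syntax.string_of_term ctxt t;   (*uncheck and unparse*)
\

text \
The relevant operations are as follows.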
\ text %mlref \ \begin{mldecls} - @{index_ML Syntax.read_typs: "Proof.context -> string list -> typ list"} \\ - @{index_ML Syntax.read_terms: "Proof.context -> string list -> term list"} \\ - @{index_ML Syntax.read_props: "Proof.context -> string list -> term list"} \\[0.5ex] - @{index_ML Syntax.read_typ: "Proof.context -> string -> typ"} \\ - @{index_ML Syntax.read_term: "Proof.context -> string -> term"} \\ - @{index_ML Syntax.read_prop: "Proof.context -> string -> term"} \\[0.5ex] - @{index_ML Syntax.pretty_typ: "Proof.context -> typ -> Pretty.T"} \\ - @{index_ML Syntax.pretty_term: "Proof.context -> term -> Pretty.T"} \\ - @{index_ML Syntax.string_of_typ: "Proof.context -> typ -> string"} \\ - @{index_ML Syntax.string_of_term: "Proof.context -> term -> string"} \\ + @{define_ML Syntax.read_typs: "Proof.context -> string list -> typ list"} \\ + @{define_ML Syntax.read_terms: "Proof.context -> string list -> term list"} \\ + @{define_ML Syntax.read_props: "Proof.context -> string list -> term list"} \\[0.5ex] + @{define_ML Syntax.read_typ: "Proof.context -> string -> typ"} \\ + @{define_ML Syntax.read_term: "Proof.context -> string -> term"} \\ + @{define_ML Syntax.read_prop: "Proof.context -> string -> term"} \\[0.5ex] + @{define_ML Syntax.pretty_typ: "Proof.context -> typ -> Pretty.T"} \\ + @{define_ML Syntax.pretty_term: "Proof.context -> term -> Pretty.T"} \\ + @{define_ML Syntax.string_of_typ: "Proof.context -> typ -> string"} \\ + @{define_ML Syntax.string_of_term: "Proof.context -> term -> string"} \\ \end{mldecls} \<^descr> \<^ML>\Syntax.read_typs\~\ctxt strs\ parses and checks a simultaneous list of source strings as types of the logic. \<^descr> \<^ML>\Syntax.read_terms\~\ctxt strs\ parses and checks a simultaneous list of source strings as terms of the logic. Type-reconstruction puts all parsed terms into the same scope: types of free variables ultimately need to coincide. If particular type-constraints are required for some of the arguments, the read operation needs to be split into its parse and check phases. Then it is possible to use \<^ML>\Type.constraint\ on the intermediate pre-terms (\secref{sec:term-check}). \<^descr> \<^ML>\Syntax.read_props\~\ctxt strs\ parses and checks a simultaneous list of source strings as terms of the logic, with an implicit type-constraint for each argument to enforce type \<^typ>\prop\; this also affects the inner syntax for parsing. The remaining type-reconstruction works as for \<^ML>\Syntax.read_terms\. \<^descr> \<^ML>\Syntax.read_typ\, \<^ML>\Syntax.read_term\, \<^ML>\Syntax.read_prop\ are like the simultaneous versions, but operate on a single argument only. This convenient shorthand is adequate in situations where a single item in its own scope is processed. Do not use \<^ML>\map o Syntax.read_term\ where \<^ML>\Syntax.read_terms\ is actually intended! \<^descr> \<^ML>\Syntax.pretty_typ\~\ctxt T\ and \<^ML>\Syntax.pretty_term\~\ctxt t\ uncheck and pretty-print the given type or term, respectively. Although the uncheck phase acts on a simultaneous list as well, this is rarely used in practice, so only the singleton case is provided as combined pretty operation. There is no distinction of term vs.\ proposition. \<^descr> \<^ML>\Syntax.string_of_typ\ and \<^ML>\Syntax.string_of_term\ are convenient compositions of \<^ML>\Syntax.pretty_typ\ and \<^ML>\Syntax.pretty_term\ with \<^ML>\Pretty.string_of\ for output. The result may be concatenated with other strings, as long as there is no further formatting and line-breaking involved.
\<^ML>\Syntax.read_term\, \<^ML>\Syntax.read_prop\, and \<^ML>\Syntax.string_of_term\ are the most important operations in practice. \<^medskip> Note that the string values that are passed in and out are annotated by the system, to carry further markup that is relevant for the Prover IDE @{cite "isabelle-jedit"}. User code should neither compose its own input strings, nor try to analyze the output strings. Conceptually this is an abstract datatype, encoded as concrete string for historical reasons. The standard way to provide the required position markup for input works via the outer syntax parser wrapper \<^ML>\Parse.inner_syntax\, which is already part of \<^ML>\Parse.typ\, \<^ML>\Parse.term\, \<^ML>\Parse.prop\. So a string obtained from one of the latter may be directly passed to the corresponding read operation: this yields PIDE markup of the input and precise positions for warning and error messages. \ section \Parsing and unparsing \label{sec:parse-unparse}\ text \ Parsing and unparsing convert between actual source text and a certain \<^emph>\pre-term\ format, where all bindings and scopes are already resolved faithfully. Thus the names of free variables or constants are determined in the sense of the logical context, but type information might still be missing. Pre-terms support an explicit language of \<^emph>\type constraints\ that may be augmented by user code to guide the later \<^emph>\check\ phase. Actual parsing is based on traditional lexical analysis and Earley parsing for arbitrary context-free grammars. The user can specify the grammar declaratively via mixfix annotations. Moreover, there are \<^emph>\syntax translations\ that can be augmented by the user, either declaratively via @{command translations} or programmatically via @{command parse_translation}, @{command print_translation} @{cite "isabelle-isar-ref"}. The final scope-resolution is performed by the system, according to name spaces for types, term variables and constants determined by the context. \ text %mlref \ \begin{mldecls} - @{index_ML Syntax.parse_typ: "Proof.context -> string -> typ"} \\ - @{index_ML Syntax.parse_term: "Proof.context -> string -> term"} \\ - @{index_ML Syntax.parse_prop: "Proof.context -> string -> term"} \\[0.5ex] - @{index_ML Syntax.unparse_typ: "Proof.context -> typ -> Pretty.T"} \\ - @{index_ML Syntax.unparse_term: "Proof.context -> term -> Pretty.T"} \\ + @{define_ML Syntax.parse_typ: "Proof.context -> string -> typ"} \\ + @{define_ML Syntax.parse_term: "Proof.context -> string -> term"} \\ + @{define_ML Syntax.parse_prop: "Proof.context -> string -> term"} \\[0.5ex] + @{define_ML Syntax.unparse_typ: "Proof.context -> typ -> Pretty.T"} \\ + @{define_ML Syntax.unparse_term: "Proof.context -> term -> Pretty.T"} \\ \end{mldecls} \<^descr> \<^ML>\Syntax.parse_typ\~\ctxt str\ parses a source string as pre-type that is ready to be used with subsequent check operations. \<^descr> \<^ML>\Syntax.parse_term\~\ctxt str\ parses a source string as pre-term that is ready to be used with subsequent check operations. \<^descr> \<^ML>\Syntax.parse_prop\~\ctxt str\ parses a source string as pre-term that is ready to be used with subsequent check operations. The inner syntax category is \<^typ>\prop\ and a suitable type-constraint is included to ensure that this information is observed in subsequent type reconstruction. \<^descr> \<^ML>\Syntax.unparse_typ\~\ctxt T\ unparses a type after uncheck operations, to turn it into a pretty tree.
\<^descr> \<^ML>\Syntax.unparse_term\~\ctxt t\ unparses a term after uncheck operations, to turn it into a pretty tree. There is no distinction for propositions here.

These operations always operate on a single item; use the combinator \<^ML>\map\ to apply them to a list. \

section \Checking and unchecking \label{sec:term-check}\

text \ These operations define the transition from pre-terms to fully-annotated terms in the sense of the logical core (\chref{ch:logic}).

The \<^emph>\check\ phase is meant to subsume a variety of mechanisms in the manner of ``type-inference'' or ``type-reconstruction'' or ``type-improvement'', not just type-checking in the narrow sense. The \<^emph>\uncheck\ phase is roughly dual: it prunes type-information before pretty printing.

A typical add-on for the check/uncheck syntax layer is the @{command abbreviation} mechanism @{cite "isabelle-isar-ref"}. Here the user specifies syntactic definitions that are managed by the system as polymorphic \let\ bindings. These are expanded during the \check\ phase, and contracted during the \uncheck\ phase, without affecting the type-assignment of the given terms.

\<^medskip> The precise meaning of type checking depends on the context --- additional check/uncheck modules might be defined in user space.

For example, the @{command class} command defines a context where \check\ treats certain type instances of overloaded constants according to the ``dictionary construction'' of its logical foundation. This involves ``type improvement'' (specialization of slightly too general types) and replacement by certain locale parameters. See also @{cite "Haftmann-Wenzel:2009"}. \

text %mlref \ \begin{mldecls} - @{index_ML Syntax.check_typs: "Proof.context -> typ list -> typ list"} \\ - @{index_ML Syntax.check_terms: "Proof.context -> term list -> term list"} \\ - @{index_ML Syntax.check_props: "Proof.context -> term list -> term list"} \\[0.5ex] - @{index_ML Syntax.uncheck_typs: "Proof.context -> typ list -> typ list"} \\ - @{index_ML Syntax.uncheck_terms: "Proof.context -> term list -> term list"} \\ + @{define_ML Syntax.check_typs: "Proof.context -> typ list -> typ list"} \\ + @{define_ML Syntax.check_terms: "Proof.context -> term list -> term list"} \\ + @{define_ML Syntax.check_props: "Proof.context -> term list -> term list"} \\[0.5ex] + @{define_ML Syntax.uncheck_typs: "Proof.context -> typ list -> typ list"} \\ + @{define_ML Syntax.uncheck_terms: "Proof.context -> term list -> term list"} \\ \end{mldecls}

\<^descr> \<^ML>\Syntax.check_typs\~\ctxt Ts\ checks a simultaneous list of pre-types as types of the logic. Typically, this involves normalization of type synonyms.

\<^descr> \<^ML>\Syntax.check_terms\~\ctxt ts\ checks a simultaneous list of pre-terms as terms of the logic. Typically, this involves type-inference and normalization of term abbreviations. The types within the given terms are treated in the same way as for \<^ML>\Syntax.check_typs\.

Applications sometimes need to check several types and terms together. The standard approach uses \<^ML>\Logic.mk_type\ to embed the language of types into that of terms; all arguments are appended into one list of terms that is checked; afterwards the type arguments are recovered with \<^ML>\Logic.dest_type\.

\<^descr> \<^ML>\Syntax.check_props\~\ctxt ts\ checks a simultaneous list of pre-terms as terms of the logic, such that all terms are constrained by type \<^typ>\prop\. The remaining check operation works as \<^ML>\Syntax.check_terms\ above.
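The embedding trick for mixed lists of types and terms could be sketched like this (the concrete source strings are arbitrary):

ML_val \
  (*sketch: check a type and a term together via Logic.mk_type / Logic.dest_type*)
  let
    val ctxt = \<^context>;
    val T = Syntax.parse_typ ctxt "'a list";
    val t = Syntax.parse_term ctxt "rev xs";
    val ts = Syntax.check_terms ctxt [Logic.mk_type T, t];
  in (Logic.dest_type (nth ts 0), nth ts 1) end
\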
\<^descr> \<^ML>\Syntax.uncheck_typs\~\ctxt Ts\ unchecks a simultaneous list of types of the logic, in preparation for pretty printing.

\<^descr> \<^ML>\Syntax.uncheck_terms\~\ctxt ts\ unchecks a simultaneous list of terms of the logic, in preparation for pretty printing. There is no distinction for propositions here.

These operations always operate simultaneously on a list; use the combinator \<^ML>\singleton\ to apply them to a single item. \

end diff --git a/src/Doc/Implementation/Tactic.thy b/src/Doc/Implementation/Tactic.thy --- a/src/Doc/Implementation/Tactic.thy +++ b/src/Doc/Implementation/Tactic.thy @@ -1,817 +1,817 @@ (*:maxLineLen=78:*) theory Tactic imports Base begin

chapter \Tactical reasoning\

text \ Tactical reasoning works by refining an initial claim in a backwards fashion, until a solved form is reached. A \goal\ consists of several subgoals that need to be solved in order to achieve the main statement; zero subgoals means that the proof may be finished. A \tactic\ is a refinement operation that maps a goal to a lazy sequence of potential successors. A \tactical\ is a combinator for composing tactics. \

section \Goals \label{sec:tactical-goals}\

text \ Isabelle/Pure represents a goal as a theorem stating that the subgoals imply the main goal: \A\<^sub>1 \ \ \ A\<^sub>n \ C\. The outermost goal structure is that of a Horn Clause: i.e.\ an iterated implication without any quantifiers\<^footnote>\Recall that outermost \\x. \[x]\ is always represented via schematic variables in the body: \\[?x]\. These variables may get instantiated during the course of reasoning.\. For \n = 0\ a goal is called ``solved''.

The structure of each subgoal \A\<^sub>i\ is that of a general Hereditary Harrop Formula \\x\<^sub>1 \ \x\<^sub>k. H\<^sub>1 \ \ \ H\<^sub>m \ B\. Here \x\<^sub>1, \, x\<^sub>k\ are goal parameters, i.e.\ arbitrary-but-fixed entities of certain types, and \H\<^sub>1, \, H\<^sub>m\ are goal hypotheses, i.e.\ facts that may be assumed locally. Together, this forms the goal context of the conclusion \B\ to be established. The goal hypotheses may again be arbitrary Hereditary Harrop Formulas, although the level of nesting rarely exceeds 1--2 in practice.

The main conclusion \C\ is internally marked as a protected proposition, which is represented explicitly by the notation \#C\ here. This ensures that the decomposition into subgoals and main conclusion is well-defined for arbitrarily structured claims.

\<^medskip> Basic goal management is performed via the following Isabelle/Pure rules:

\[ \infer[\(init)\]{\C \ #C\}{} \qquad \infer[\(finish)\]{\C\}{\#C\} \]

\<^medskip> The following low-level variants admit general reasoning with protected propositions:

\[ \infer[\(protect n)\]{\A\<^sub>1 \ \ \ A\<^sub>n \ #C\}{\A\<^sub>1 \ \ \ A\<^sub>n \ C\} \] \[ \infer[\(conclude)\]{\A \ \ \ C\}{\A \ \ \ #C\} \] \

text %mlref \ \begin{mldecls} - @{index_ML Goal.init: "cterm -> thm"} \\ - @{index_ML Goal.finish: "Proof.context -> thm -> thm"} \\ - @{index_ML Goal.protect: "int -> thm -> thm"} \\ - @{index_ML Goal.conclude: "thm -> thm"} \\ + @{define_ML Goal.init: "cterm -> thm"} \\ + @{define_ML Goal.finish: "Proof.context -> thm -> thm"} \\ + @{define_ML Goal.protect: "int -> thm -> thm"} \\ + @{define_ML Goal.conclude: "thm -> thm"} \\ \end{mldecls}

\<^descr> \<^ML>\Goal.init\~\C\ initializes a tactical goal from the well-formed proposition \C\.
\<^descr> \<^ML>\Goal.finish\~\ctxt thm\ checks whether theorem \thm\ is a solved goal (no subgoals), and concludes the result by removing the goal protection. The context is only required for printing error messages. \<^descr> \<^ML>\Goal.protect\~\n thm\ protects the statement of theorem \thm\. The parameter \n\ indicates the number of premises to be retained. \<^descr> \<^ML>\Goal.conclude\~\thm\ removes the goal protection, even if there are pending subgoals. \ section \Tactics\label{sec:tactics}\ text \ A \tactic\ is a function \goal \ goal\<^sup>*\<^sup>*\ that maps a given goal state (represented as a theorem, cf.\ \secref{sec:tactical-goals}) to a lazy sequence of potential successor states. The underlying sequence implementation is lazy both in head and tail, and is purely functional in \<^emph>\not\ supporting memoing.\<^footnote>\The lack of memoing and the strict nature of ML requires some care when working with low-level sequence operations, to avoid duplicate or premature evaluation of results. It also means that modified runtime behavior, such as timeout, is very hard to achieve for general tactics.\ An \<^emph>\empty result sequence\ means that the tactic has failed: in a compound tactic expression other tactics might be tried instead, or the whole refinement step might fail outright, producing a toplevel error message in the end. When implementing tactics from scratch, one should take care to observe the basic protocol of mapping regular error conditions to an empty result; only serious faults should emerge as exceptions. By enumerating \<^emph>\multiple results\, a tactic can easily express the potential outcome of an internal search process. There are also combinators for building proof tools that involve search systematically, see also \secref{sec:tacticals}. \<^medskip> As explained before, a goal state essentially consists of a list of subgoals that imply the main goal (conclusion). Tactics may operate on all subgoals or on a particularly specified subgoal, but must not change the main conclusion (apart from instantiating schematic goal variables). Tactics with explicit \<^emph>\subgoal addressing\ are of the form \int \ tactic\ and may be applied to a particular subgoal (counting from 1). If the subgoal number is out of range, the tactic should fail with an empty result sequence, but must not raise an exception! Operating on a particular subgoal means to replace it by an interval of zero or more subgoals in the same place; other subgoals must not be affected, apart from instantiating schematic variables ranging over the whole goal state. A common pattern of composing tactics with subgoal addressing is to try the first one, and then the second one only if the subgoal has not been solved yet. Special care is required here to avoid bumping into unrelated subgoals that happen to come after the original subgoal. Assuming that there is only a single initial subgoal is a very common error when implementing tactics! Tactics with internal subgoal addressing should expose the subgoal index as \int\ argument in full generality; a hardwired subgoal 1 is not acceptable. \<^medskip> The main well-formedness conditions for proper tactics are summarized as follows. \<^item> General tactic failure is indicated by an empty result, only serious faults may produce an exception. \<^item> The main conclusion must not be changed, apart from instantiating schematic variables. 
\<^item> A tactic operates either uniformly on all subgoals, or specifically on a selected subgoal (without bumping into unrelated subgoals).

\<^item> Range errors in subgoal addressing produce an empty result.

Some of these conditions are checked by higher-level goal infrastructure (\secref{sec:struct-goals}); others are not checked explicitly, and violating them merely results in ill-behaved tactics experienced by the user (e.g.\ tactics that insist on being applicable only to singleton goals, or prevent composition via standard tacticals such as \<^ML>\REPEAT\). \

text %mlref \ \begin{mldecls} - @{index_ML_type tactic: "thm -> thm Seq.seq"} \\ - @{index_ML no_tac: tactic} \\ - @{index_ML all_tac: tactic} \\ - @{index_ML print_tac: "Proof.context -> string -> tactic"} \\[1ex] - @{index_ML PRIMITIVE: "(thm -> thm) -> tactic"} \\[1ex] - @{index_ML SUBGOAL: "(term * int -> tactic) -> int -> tactic"} \\ - @{index_ML CSUBGOAL: "(cterm * int -> tactic) -> int -> tactic"} \\ - @{index_ML SELECT_GOAL: "tactic -> int -> tactic"} \\ - @{index_ML PREFER_GOAL: "tactic -> int -> tactic"} \\ + @{define_ML_type tactic = "thm -> thm Seq.seq"} \\ + @{define_ML no_tac: tactic} \\ + @{define_ML all_tac: tactic} \\ + @{define_ML print_tac: "Proof.context -> string -> tactic"} \\[1ex] + @{define_ML PRIMITIVE: "(thm -> thm) -> tactic"} \\[1ex] + @{define_ML SUBGOAL: "(term * int -> tactic) -> int -> tactic"} \\ + @{define_ML CSUBGOAL: "(cterm * int -> tactic) -> int -> tactic"} \\ + @{define_ML SELECT_GOAL: "tactic -> int -> tactic"} \\ + @{define_ML PREFER_GOAL: "tactic -> int -> tactic"} \\ \end{mldecls}

\<^descr> Type \<^ML_type>\tactic\ represents tactics. The well-formedness conditions described above need to be observed. See also \<^file>\~~/src/Pure/General/seq.ML\ for the underlying implementation of lazy sequences.

\<^descr> Type \<^ML_type>\int -> tactic\ represents tactics with explicit subgoal addressing, with well-formedness conditions as described above.

\<^descr> \<^ML>\no_tac\ is a tactic that always fails, returning the empty sequence.

\<^descr> \<^ML>\all_tac\ is a tactic that always succeeds, returning a singleton sequence with unchanged goal state.

\<^descr> \<^ML>\print_tac\~\ctxt message\ is like \<^ML>\all_tac\, but prints a message together with the goal state on the tracing channel.

\<^descr> \<^ML>\PRIMITIVE\~\rule\ turns a primitive inference rule into a tactic with unique result. Exception \<^ML>\THM\ is considered a regular tactic failure and produces an empty result; other exceptions are passed through.

\<^descr> \<^ML>\SUBGOAL\~\(fn (subgoal, i) => tactic)\ is the most basic form to produce a tactic with subgoal addressing. The given abstraction over the subgoal term and subgoal number allows to peek at the relevant information of the full goal state. The subgoal range is checked as required above.

\<^descr> \<^ML>\CSUBGOAL\ is similar to \<^ML>\SUBGOAL\, but passes the subgoal as \<^ML_type>\cterm\ instead of raw \<^ML_type>\term\. This avoids expensive re-certification in situations where the subgoal is used directly for primitive inferences.

\<^descr> \<^ML>\SELECT_GOAL\~\tac i\ confines a tactic to the specified subgoal \i\. This rearranges subgoals and the main goal protection (\secref{sec:tactical-goals}), while retaining the syntactic context of the overall goal state (concerning schematic variables etc.).

\<^descr> \<^ML>\PREFER_GOAL\~\tac i\ rearranges subgoals to put \i\ in front. This is similar to \<^ML>\SELECT_GOAL\, but without changing the main goal protection.
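As a minimal illustration of these interfaces, a (hypothetical) tactic that merely traces the addressed subgoal before succeeding could be written like this:

ML \
  (*sketch: a subgoal-addressed tactic that traces the subgoal it is applied to*)
  fun trace_subgoal_tac ctxt =
    SUBGOAL (fn (goal, i) =>
      (tracing ("subgoal " ^ string_of_int i ^ ": " ^ Syntax.string_of_term ctxt goal);
        all_tac));
\

Out-of-range subgoal numbers are already handled by \<^ML>\SUBGOAL\, so the body only needs to deal with well-formed arguments.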
\ subsection \Resolution and assumption tactics \label{sec:resolve-assume-tac}\ text \ \<^emph>\Resolution\ is the most basic mechanism for refining a subgoal using a theorem as object-level rule. \<^emph>\Elim-resolution\ is particularly suited for elimination rules: it resolves with a rule, proves its first premise by assumption, and finally deletes that assumption from any new subgoals. \<^emph>\Destruct-resolution\ is like elim-resolution, but the given destruction rules are first turned into canonical elimination format. \<^emph>\Forward-resolution\ is like destruct-resolution, but without deleting the selected assumption. The \r/e/d/f\ naming convention is maintained for several different kinds of resolution rules and tactics. Assumption tactics close a subgoal by unifying some of its premises against its conclusion. \<^medskip> All the tactics in this section operate on a subgoal designated by a positive integer. Other subgoals might be affected indirectly, due to instantiation of schematic variables. There are various sources of non-determinism, the tactic result sequence enumerates all possibilities of the following choices (if applicable): \<^enum> selecting one of the rules given as argument to the tactic; \<^enum> selecting a subgoal premise to eliminate, unifying it against the first premise of the rule; \<^enum> unifying the conclusion of the subgoal to the conclusion of the rule. Recall that higher-order unification may produce multiple results that are enumerated here. \ text %mlref \ \begin{mldecls} - @{index_ML resolve_tac: "Proof.context -> thm list -> int -> tactic"} \\ - @{index_ML eresolve_tac: "Proof.context -> thm list -> int -> tactic"} \\ - @{index_ML dresolve_tac: "Proof.context -> thm list -> int -> tactic"} \\ - @{index_ML forward_tac: "Proof.context -> thm list -> int -> tactic"} \\ - @{index_ML biresolve_tac: "Proof.context -> (bool * thm) list -> int -> tactic"} \\[1ex] - @{index_ML assume_tac: "Proof.context -> int -> tactic"} \\ - @{index_ML eq_assume_tac: "int -> tactic"} \\[1ex] - @{index_ML match_tac: "Proof.context -> thm list -> int -> tactic"} \\ - @{index_ML ematch_tac: "Proof.context -> thm list -> int -> tactic"} \\ - @{index_ML dmatch_tac: "Proof.context -> thm list -> int -> tactic"} \\ - @{index_ML bimatch_tac: "Proof.context -> (bool * thm) list -> int -> tactic"} \\ + @{define_ML resolve_tac: "Proof.context -> thm list -> int -> tactic"} \\ + @{define_ML eresolve_tac: "Proof.context -> thm list -> int -> tactic"} \\ + @{define_ML dresolve_tac: "Proof.context -> thm list -> int -> tactic"} \\ + @{define_ML forward_tac: "Proof.context -> thm list -> int -> tactic"} \\ + @{define_ML biresolve_tac: "Proof.context -> (bool * thm) list -> int -> tactic"} \\[1ex] + @{define_ML assume_tac: "Proof.context -> int -> tactic"} \\ + @{define_ML eq_assume_tac: "int -> tactic"} \\[1ex] + @{define_ML match_tac: "Proof.context -> thm list -> int -> tactic"} \\ + @{define_ML ematch_tac: "Proof.context -> thm list -> int -> tactic"} \\ + @{define_ML dmatch_tac: "Proof.context -> thm list -> int -> tactic"} \\ + @{define_ML bimatch_tac: "Proof.context -> (bool * thm) list -> int -> tactic"} \\ \end{mldecls} \<^descr> \<^ML>\resolve_tac\~\ctxt thms i\ refines the goal state using the given theorems, which should normally be introduction rules. The tactic resolves a rule's conclusion with subgoal \i\, replacing it by the corresponding versions of the rule's premises. 
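For example, a single introduction step that also attempts to close the first new subgoal by assumption could be sketched as follows, using \<^ML>\assume_tac\ and the tacticals \<^ML_infix>\THEN\ and \<^ML>\TRY\ described later in this chapter (the function name and rule list are hypothetical):

ML \
  (*sketch: resolve with one of the given rules, then try to close the
    first new subgoal by assumption*)
  fun intro_step_tac ctxt rules i =
    resolve_tac ctxt rules i THEN TRY (assume_tac ctxt i);
\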
\<^descr> \<^ML>\eresolve_tac\~\ctxt thms i\ performs elim-resolution with the given theorems, which should normally be elimination rules.

Note that \<^ML_text>\eresolve_tac ctxt [asm_rl]\ is equivalent to \<^ML_text>\assume_tac ctxt\, which facilitates mixing of assumption steps with genuine eliminations.

\<^descr> \<^ML>\dresolve_tac\~\ctxt thms i\ performs destruct-resolution with the given theorems, which should normally be destruction rules. This replaces an assumption by the result of applying one of the rules.

\<^descr> \<^ML>\forward_tac\ is like \<^ML>\dresolve_tac\ except that the selected assumption is not deleted. It applies a rule to an assumption, adding the result as a new assumption.

\<^descr> \<^ML>\biresolve_tac\~\ctxt brls i\ refines the proof state by resolution or elim-resolution on each rule, as indicated by its flag. It affects subgoal \i\ of the proof state.

For each pair \(flag, rule)\, it applies resolution if the flag is \false\ and elim-resolution if the flag is \true\. A single tactic call handles a mixture of introduction and elimination rules, which is useful to organize the search process systematically in proof tools.

\<^descr> \<^ML>\assume_tac\~\ctxt i\ attempts to solve subgoal \i\ by assumption (modulo higher-order unification).

\<^descr> \<^ML>\eq_assume_tac\ is similar to \<^ML>\assume_tac\, but checks only for immediate \\\-convertibility instead of using unification. It succeeds (with a unique next state) if one of the assumptions is equal to the subgoal's conclusion. Since it does not instantiate variables, it cannot make other subgoals unprovable.

\<^descr> \<^ML>\match_tac\, \<^ML>\ematch_tac\, \<^ML>\dmatch_tac\, and \<^ML>\bimatch_tac\ are similar to \<^ML>\resolve_tac\, \<^ML>\eresolve_tac\, \<^ML>\dresolve_tac\, and \<^ML>\biresolve_tac\, respectively, but do not instantiate schematic variables in the goal state.\<^footnote>\Strictly speaking, matching means to treat the unknowns in the goal state as constants, but these tactics merely discard unifiers that would update the goal state. In rare situations (where the conclusion and goal state have flexible terms at the same position), the tactic will fail even though an acceptable unifier exists.\ These tactics were written for a specific application within the classical reasoner.

Flexible subgoals are not updated at will, but are left alone.
An instantiation consists of a list of pairs of the form \(?x, t)\, where \?x\ is a schematic variable occurring in the given rule, and \t\ is a term from the current proof context, augmented by the local goal parameters of the selected subgoal; cf.\ the \focus\ operation described in \secref{sec:variables}.

Entering the syntactic context of a subgoal is a brittle operation, because its exact form is somewhat accidental, and the choice of bound variable names depends on the presence of other local and global names. Explicit renaming of subgoal parameters prior to explicit instantiation might help to achieve a bit more robustness.

Type instantiations may be given as well, via pairs like \(?'a, \)\. Type instantiations are distinguished from term instantiations by the syntactic form of the schematic variable. Types are instantiated before terms are. Since term instantiation already performs simple type-inference, explicit type instantiations are seldom necessary. \

text %mlref \ \begin{mldecls} - @{index_ML Rule_Insts.res_inst_tac: "Proof.context -> - ((indexname * Position.T) * string) list -> (binding * string option * mixfix) list -> - thm -> int -> tactic"} \\ - @{index_ML Rule_Insts.eres_inst_tac: "Proof.context -> + @{define_ML Rule_Insts.res_inst_tac: "Proof.context -> ((indexname * Position.T) * string) list -> (binding * string option * mixfix) list -> thm -> int -> tactic"} \\ - @{index_ML Rule_Insts.dres_inst_tac: "Proof.context -> + @{define_ML Rule_Insts.eres_inst_tac: "Proof.context -> ((indexname * Position.T) * string) list -> (binding * string option * mixfix) list -> thm -> int -> tactic"} \\ - @{index_ML Rule_Insts.forw_inst_tac: "Proof.context -> + @{define_ML Rule_Insts.dres_inst_tac: "Proof.context -> ((indexname * Position.T) * string) list -> (binding * string option * mixfix) list -> thm -> int -> tactic"} \\ - @{index_ML Rule_Insts.subgoal_tac: "Proof.context -> string -> + @{define_ML Rule_Insts.forw_inst_tac: "Proof.context -> + ((indexname * Position.T) * string) list -> (binding * string option * mixfix) list -> + thm -> int -> tactic"} \\ + @{define_ML Rule_Insts.subgoal_tac: "Proof.context -> string -> (binding * string option * mixfix) list -> int -> tactic"} \\ - @{index_ML Rule_Insts.thin_tac: "Proof.context -> string -> + @{define_ML Rule_Insts.thin_tac: "Proof.context -> string -> (binding * string option * mixfix) list -> int -> tactic"} \\ - @{index_ML rename_tac: "string list -> int -> tactic"} \\ + @{define_ML rename_tac: "string list -> int -> tactic"} \\ \end{mldecls}

\<^descr> \<^ML>\Rule_Insts.res_inst_tac\~\ctxt insts thm i\ instantiates the rule \thm\ with the instantiations \insts\, as described above, and then performs resolution on subgoal \i\.

\<^descr> \<^ML>\Rule_Insts.eres_inst_tac\ is like \<^ML>\Rule_Insts.res_inst_tac\, but performs elim-resolution.

\<^descr> \<^ML>\Rule_Insts.dres_inst_tac\ is like \<^ML>\Rule_Insts.res_inst_tac\, but performs destruct-resolution.

\<^descr> \<^ML>\Rule_Insts.forw_inst_tac\ is like \<^ML>\Rule_Insts.dres_inst_tac\ except that the selected assumption is not deleted.

\<^descr> \<^ML>\Rule_Insts.subgoal_tac\~\ctxt \ i\ adds the proposition \\\ as local premise to subgoal \i\, and poses the same as a new subgoal \i + 1\ (in the original context).

\<^descr> \<^ML>\Rule_Insts.thin_tac\~\ctxt \ i\ deletes the specified premise from subgoal \i\. Note that \\\ may contain schematic variables, to abbreviate the intended proposition; the first matching subgoal premise will be deleted.
Removing useless premises from a subgoal increases its readability and can make search tactics run faster.

\<^descr> \<^ML>\rename_tac\~\names i\ renames the innermost parameters of subgoal \i\ according to the provided \names\ (which need to be distinct identifiers).

For historical reasons, the above instantiation tactics take unparsed string arguments, which makes them hard to use in general ML code. The slightly more advanced \<^ML>\Subgoal.FOCUS\ combinator of \secref{sec:struct-goals} allows to refer to internal goal structure with explicit context management. \

subsection \Rearranging goal states\

text \ In rare situations there is a need to rearrange goal states: either the overall collection of subgoals, or the local structure of a subgoal. Various administrative tactics allow to operate on the concrete presentation of these conceptual sets of formulae. \

text %mlref \ \begin{mldecls} - @{index_ML rotate_tac: "int -> int -> tactic"} \\ - @{index_ML distinct_subgoals_tac: tactic} \\ - @{index_ML flexflex_tac: "Proof.context -> tactic"} \\ + @{define_ML rotate_tac: "int -> int -> tactic"} \\ + @{define_ML distinct_subgoals_tac: tactic} \\ + @{define_ML flexflex_tac: "Proof.context -> tactic"} \\ \end{mldecls}

\<^descr> \<^ML>\rotate_tac\~\n i\ rotates the premises of subgoal \i\ by \n\ positions: from right to left if \n\ is positive, and from left to right if \n\ is negative.

\<^descr> \<^ML>\distinct_subgoals_tac\ removes duplicate subgoals from a proof state. This is potentially inefficient.

\<^descr> \<^ML>\flexflex_tac\ removes all flex-flex pairs from the proof state by applying the trivial unifier. This drastic step loses information. It is already part of the Isar infrastructure for facts resulting from goals, and rarely needs to be invoked manually.

Flex-flex constraints arise from difficult cases of higher-order unification. To prevent this, use \<^ML>\Rule_Insts.res_inst_tac\ to instantiate some variables in a rule. Normally flex-flex constraints can be ignored; they often disappear as unknowns get instantiated. \

subsection \Raw composition: resolution without lifting\

text \ Raw composition of two rules means resolving them without prior lifting or renaming of unknowns. This low-level operation, which underlies the resolution tactics, may occasionally be useful for special effects. Schematic variables are not renamed by default, so beware of clashes! \

text %mlref \ \begin{mldecls} - @{index_ML compose_tac: "Proof.context -> (bool * thm * int) -> int -> tactic"} \\ - @{index_ML Drule.compose: "thm * int * thm -> thm"} \\ - @{index_ML_op COMP: "thm * thm -> thm"} \\ + @{define_ML compose_tac: "Proof.context -> (bool * thm * int) -> int -> tactic"} \\ + @{define_ML Drule.compose: "thm * int * thm -> thm"} \\ + @{define_ML_infix COMP: "thm * thm -> thm"} \\ \end{mldecls}

\<^descr> \<^ML>\compose_tac\~\ctxt (flag, rule, m) i\ refines subgoal \i\ using \rule\, without lifting. The \rule\ is taken to have the form \\\<^sub>1 \ \ \\<^sub>m \ \\, where \\\ need not be atomic; thus \m\ determines the number of new subgoals. If \flag\ is \true\ then it performs elim-resolution --- it solves the first premise of \rule\ by assumption and deletes that assumption.

\<^descr> \<^ML>\Drule.compose\~\(thm\<^sub>1, i, thm\<^sub>2)\ uses \thm\<^sub>1\, regarded as an atomic formula, to solve premise \i\ of \thm\<^sub>2\. Let \thm\<^sub>1\ and \thm\<^sub>2\ be \\\ and \\\<^sub>1 \ \ \\<^sub>n \ \\.
The unique \s\ that unifies \\\ and \\\<^sub>i\ yields the theorem \(\\<^sub>1 \ \ \\<^sub>i\<^sub>-\<^sub>1 \ \\<^sub>i\<^sub>+\<^sub>1 \ \ \\<^sub>n \ \)s\. Multiple results are considered an error (exception \<^ML>\THM\).

\<^descr> \thm\<^sub>1 COMP thm\<^sub>2\ is the same as \Drule.compose (thm\<^sub>1, 1, thm\<^sub>2)\.

\begin{warn} These low-level operations are stepping outside the structure imposed by regular rule resolution. Used without understanding of the consequences, they may produce results that cause problems with standard rules and tactics later on. \end{warn} \

section \Tacticals \label{sec:tacticals}\

text \ A \<^emph>\tactical\ is a functional combinator for building up complex tactics from simpler ones. Common tacticals perform sequential composition, disjunctive choice, iteration, or goal addressing. Various search strategies may be expressed via tacticals. \

subsection \Combining tactics\

text \ Sequential composition and alternative choices are the most basic ways to combine tactics, similarly to ``\<^verbatim>\,\'' and ``\<^verbatim>\|\'' in Isar method notation. - This corresponds to \<^ML_op>\THEN\ and \<^ML_op>\ORELSE\ in ML, but there + This corresponds to \<^ML_infix>\THEN\ and \<^ML_infix>\ORELSE\ in ML, but there are further possibilities for fine-tuning alternation of tactics such as - \<^ML_op>\APPEND\. Further details become visible in ML due to explicit + \<^ML_infix>\APPEND\. Further details become visible in ML due to explicit subgoal addressing. \

text %mlref \ \begin{mldecls} - @{index_ML_op "THEN": "tactic * tactic -> tactic"} \\ - @{index_ML_op "ORELSE": "tactic * tactic -> tactic"} \\ - @{index_ML_op "APPEND": "tactic * tactic -> tactic"} \\ - @{index_ML "EVERY": "tactic list -> tactic"} \\ - @{index_ML "FIRST": "tactic list -> tactic"} \\[0.5ex] + @{define_ML_infix "THEN": "tactic * tactic -> tactic"} \\ + @{define_ML_infix "ORELSE": "tactic * tactic -> tactic"} \\ + @{define_ML_infix "APPEND": "tactic * tactic -> tactic"} \\ + @{define_ML "EVERY": "tactic list -> tactic"} \\ + @{define_ML "FIRST": "tactic list -> tactic"} \\[0.5ex] - @{index_ML_op "THEN'": "('a -> tactic) * ('a -> tactic) -> 'a -> tactic"} \\ - @{index_ML_op "ORELSE'": "('a -> tactic) * ('a -> tactic) -> 'a -> tactic"} \\ - @{index_ML_op "APPEND'": "('a -> tactic) * ('a -> tactic) -> 'a -> tactic"} \\ - @{index_ML "EVERY'": "('a -> tactic) list -> 'a -> tactic"} \\ - @{index_ML "FIRST'": "('a -> tactic) list -> 'a -> tactic"} \\ + @{define_ML_infix "THEN'": "('a -> tactic) * ('a -> tactic) -> 'a -> tactic"} \\ + @{define_ML_infix "ORELSE'": "('a -> tactic) * ('a -> tactic) -> 'a -> tactic"} \\ + @{define_ML_infix "APPEND'": "('a -> tactic) * ('a -> tactic) -> 'a -> tactic"} \\ + @{define_ML "EVERY'": "('a -> tactic) list -> 'a -> tactic"} \\ + @{define_ML "FIRST'": "('a -> tactic) list -> 'a -> tactic"} \\ \end{mldecls}

- \<^descr> \tac\<^sub>1\~\<^ML_op>\THEN\~\tac\<^sub>2\ is the sequential composition of \tac\<^sub>1\ and + \<^descr> \tac\<^sub>1\~\<^ML_infix>\THEN\~\tac\<^sub>2\ is the sequential composition of \tac\<^sub>1\ and \tac\<^sub>2\. Applied to a goal state, it returns all states reachable in two steps by applying \tac\<^sub>1\ followed by \tac\<^sub>2\. First, it applies \tac\<^sub>1\ to the goal state, getting a sequence of possible next states; then, it applies \tac\<^sub>2\ to each of these and concatenates the results to produce again one flat sequence of states.
- \<^descr> \tac\<^sub>1\~\<^ML_op>\ORELSE\~\tac\<^sub>2\ makes a choice between \tac\<^sub>1\ and + \<^descr> \tac\<^sub>1\~\<^ML_infix>\ORELSE\~\tac\<^sub>2\ makes a choice between \tac\<^sub>1\ and \tac\<^sub>2\. Applied to a state, it tries \tac\<^sub>1\ and returns the result if non-empty; if \tac\<^sub>1\ fails then it uses \tac\<^sub>2\. This is a deterministic choice: if \tac\<^sub>1\ succeeds then \tac\<^sub>2\ is excluded from the result. - \<^descr> \tac\<^sub>1\~\<^ML_op>\APPEND\~\tac\<^sub>2\ concatenates the possible results of - \tac\<^sub>1\ and \tac\<^sub>2\. Unlike \<^ML_op>\ORELSE\ there is \<^emph>\no commitment\ to - either tactic, so \<^ML_op>\APPEND\ helps to avoid incompleteness during + \<^descr> \tac\<^sub>1\~\<^ML_infix>\APPEND\~\tac\<^sub>2\ concatenates the possible results of + \tac\<^sub>1\ and \tac\<^sub>2\. Unlike \<^ML_infix>\ORELSE\ there is \<^emph>\no commitment\ to + either tactic, so \<^ML_infix>\APPEND\ helps to avoid incompleteness during search, at the cost of potential inefficiencies. - \<^descr> \<^ML>\EVERY\~\[tac\<^sub>1, \, tac\<^sub>n]\ abbreviates \tac\<^sub>1\~\<^ML_op>\THEN\~\\\~\<^ML_op>\THEN\~\tac\<^sub>n\. Note that \<^ML>\EVERY []\ is the same as + \<^descr> \<^ML>\EVERY\~\[tac\<^sub>1, \, tac\<^sub>n]\ abbreviates \tac\<^sub>1\~\<^ML_infix>\THEN\~\\\~\<^ML_infix>\THEN\~\tac\<^sub>n\. Note that \<^ML>\EVERY []\ is the same as \<^ML>\all_tac\: it always succeeds. - \<^descr> \<^ML>\FIRST\~\[tac\<^sub>1, \, tac\<^sub>n]\ abbreviates \tac\<^sub>1\~\<^ML_op>\ORELSE\~\\\~\<^ML_op>\ORELSE\~\tac\<^sub>n\. Note that \<^ML>\FIRST []\ is the + \<^descr> \<^ML>\FIRST\~\[tac\<^sub>1, \, tac\<^sub>n]\ abbreviates \tac\<^sub>1\~\<^ML_infix>\ORELSE\~\\\~\<^ML_infix>\ORELSE\~\tac\<^sub>n\. Note that \<^ML>\FIRST []\ is the same as \<^ML>\no_tac\: it always fails. - \<^descr> \<^ML_op>\THEN'\ is the lifted version of \<^ML_op>\THEN\, for tactics - with explicit subgoal addressing. So \(tac\<^sub>1\~\<^ML_op>\THEN'\~\tac\<^sub>2) i\ is - the same as \(tac\<^sub>1 i\~\<^ML_op>\THEN\~\tac\<^sub>2 i)\. + \<^descr> \<^ML_infix>\THEN'\ is the lifted version of \<^ML_infix>\THEN\, for tactics + with explicit subgoal addressing. So \(tac\<^sub>1\~\<^ML_infix>\THEN'\~\tac\<^sub>2) i\ is + the same as \(tac\<^sub>1 i\~\<^ML_infix>\THEN\~\tac\<^sub>2 i)\. The other primed tacticals work analogously. \ subsection \Repetition tacticals\ text \ These tacticals provide further control over repetition of tactics, beyond the stylized forms of ``\<^verbatim>\?\'' and ``\<^verbatim>\+\'' in Isar method expressions. \ text %mlref \ \begin{mldecls} - @{index_ML "TRY": "tactic -> tactic"} \\ - @{index_ML "REPEAT": "tactic -> tactic"} \\ - @{index_ML "REPEAT1": "tactic -> tactic"} \\ - @{index_ML "REPEAT_DETERM": "tactic -> tactic"} \\ - @{index_ML "REPEAT_DETERM_N": "int -> tactic -> tactic"} \\ + @{define_ML "TRY": "tactic -> tactic"} \\ + @{define_ML "REPEAT": "tactic -> tactic"} \\ + @{define_ML "REPEAT1": "tactic -> tactic"} \\ + @{define_ML "REPEAT_DETERM": "tactic -> tactic"} \\ + @{define_ML "REPEAT_DETERM_N": "int -> tactic -> tactic"} \\ \end{mldecls} \<^descr> \<^ML>\TRY\~\tac\ applies \tac\ to the goal state and returns the resulting sequence, if non-empty; otherwise it returns the original state. Thus, it applies \tac\ at most once. Note that for tactics with subgoal addressing, the combinator can be applied - via functional composition: \<^ML>\TRY\~\<^ML_op>\o\~\tac\. There is no need + via functional composition: \<^ML>\TRY\~\<^ML_infix>\o\~\tac\. 
There is no need for \<^verbatim>\TRY'\.

\<^descr> \<^ML>\REPEAT\~\tac\ applies \tac\ to the goal state and, recursively, to each element of the resulting sequence. The resulting sequence consists of those states that make \tac\ fail. Thus, it applies \tac\ as many times as possible (including zero times), and allows backtracking over each invocation of \tac\. \<^ML>\REPEAT\ is more general than \<^ML>\REPEAT_DETERM\, but requires more space.

\<^descr> \<^ML>\REPEAT1\~\tac\ is like \<^ML>\REPEAT\~\tac\ but it always applies \tac\ at least once, failing if this is impossible.

\<^descr> \<^ML>\REPEAT_DETERM\~\tac\ applies \tac\ to the goal state and, recursively, to the head of the resulting sequence. It returns the first state to make \tac\ fail. It is deterministic, discarding alternative outcomes.

\<^descr> \<^ML>\REPEAT_DETERM_N\~\n tac\ is like \<^ML>\REPEAT_DETERM\~\tac\ but the number of repetitions is bound by \n\ (where \<^ML>\~1\ means \\\). \

text %mlex \ The basic tactics and tacticals considered above follow some algebraic laws:

- \<^item> \<^ML>\all_tac\ is the identity element of the tactical \<^ML_op>\THEN\. + \<^item> \<^ML>\all_tac\ is the identity element of the tactical \<^ML_infix>\THEN\.

- \<^item> \<^ML>\no_tac\ is the identity element of \<^ML_op>\ORELSE\ and \<^ML_op>\APPEND\. Also, it is a zero element for \<^ML_op>\THEN\, which means that - \tac\~\<^ML_op>\THEN\~\<^ML>\no_tac\ is equivalent to \<^ML>\no_tac\. + \<^item> \<^ML>\no_tac\ is the identity element of \<^ML_infix>\ORELSE\ and \<^ML_infix>\APPEND\. Also, it is a zero element for \<^ML_infix>\THEN\, which means that + \tac\~\<^ML_infix>\THEN\~\<^ML>\no_tac\ is equivalent to \<^ML>\no_tac\.

\<^item> \<^ML>\TRY\ and \<^ML>\REPEAT\ can be expressed as (recursive) functions over more basic combinators (ignoring some internal implementation tricks): \

ML \ fun TRY tac = tac ORELSE all_tac; fun REPEAT tac st = ((tac THEN REPEAT tac) ORELSE all_tac) st; \

text \ - If \tac\ can return multiple outcomes then so can \<^ML>\REPEAT\~\tac\. \<^ML>\REPEAT\ uses \<^ML_op>\ORELSE\ and not \<^ML_op>\APPEND\, it applies \tac\ + If \tac\ can return multiple outcomes then so can \<^ML>\REPEAT\~\tac\. \<^ML>\REPEAT\ uses \<^ML_infix>\ORELSE\ and not \<^ML_infix>\APPEND\: it applies \tac\ as many times as possible in each outcome.

\begin{warn} Note the explicit abstraction over the goal state in the ML definition of \<^ML>\REPEAT\. Recursive tacticals must be coded in this awkward fashion to avoid infinite recursion of eager functional evaluation in Standard ML. The following attempt would make \<^ML>\REPEAT\~\tac\ loop: \end{warn} \

ML_val \ (*BAD -- does not terminate!*) fun REPEAT tac = (tac THEN REPEAT tac) ORELSE all_tac; \

subsection \Applying tactics to subgoal ranges\

text \ Tactics with explicit subgoal addressing \<^ML_type>\int -> tactic\ can be used together with tacticals that act like ``subgoal quantifiers'': guided by success of the body tactic, a certain range of subgoals is covered. Thus the body tactic is applied to \<^emph>\all\ subgoals, \<^emph>\some\ subgoal etc.

Suppose that the goal state has \n \ 0\ subgoals. Many of these tacticals address subgoal ranges counting downwards from \n\ towards \1\. This has the fortunate effect that newly emerging subgoals are concatenated in the result, without interfering with each other. Nonetheless, there might be situations where a different order is desired.
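For example, a blanket sweep over all subgoals could be sketched as follows, using \<^ML>\ALLGOALS\ described below together with the tacticals from above (the function name and rule list are hypothetical):

ML \
  (*sketch: one assumption-or-introduction step on every subgoal, counting downwards*)
  fun close_all_tac ctxt rules =
    ALLGOALS (assume_tac ctxt ORELSE' resolve_tac ctxt rules);
\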
\ text %mlref \ \begin{mldecls} - @{index_ML ALLGOALS: "(int -> tactic) -> tactic"} \\ - @{index_ML SOMEGOAL: "(int -> tactic) -> tactic"} \\ - @{index_ML FIRSTGOAL: "(int -> tactic) -> tactic"} \\ - @{index_ML HEADGOAL: "(int -> tactic) -> tactic"} \\ - @{index_ML REPEAT_SOME: "(int -> tactic) -> tactic"} \\ - @{index_ML REPEAT_FIRST: "(int -> tactic) -> tactic"} \\ - @{index_ML RANGE: "(int -> tactic) list -> int -> tactic"} \\ + @{define_ML ALLGOALS: "(int -> tactic) -> tactic"} \\ + @{define_ML SOMEGOAL: "(int -> tactic) -> tactic"} \\ + @{define_ML FIRSTGOAL: "(int -> tactic) -> tactic"} \\ + @{define_ML HEADGOAL: "(int -> tactic) -> tactic"} \\ + @{define_ML REPEAT_SOME: "(int -> tactic) -> tactic"} \\ + @{define_ML REPEAT_FIRST: "(int -> tactic) -> tactic"} \\ + @{define_ML RANGE: "(int -> tactic) list -> int -> tactic"} \\ \end{mldecls} - \<^descr> \<^ML>\ALLGOALS\~\tac\ is equivalent to \tac n\~\<^ML_op>\THEN\~\\\~\<^ML_op>\THEN\~\tac 1\. It applies the \tac\ to all the subgoals, counting downwards. + \<^descr> \<^ML>\ALLGOALS\~\tac\ is equivalent to \tac n\~\<^ML_infix>\THEN\~\\\~\<^ML_infix>\THEN\~\tac 1\. It applies the \tac\ to all the subgoals, counting downwards. - \<^descr> \<^ML>\SOMEGOAL\~\tac\ is equivalent to \tac n\~\<^ML_op>\ORELSE\~\\\~\<^ML_op>\ORELSE\~\tac 1\. It applies \tac\ to one subgoal, counting downwards. + \<^descr> \<^ML>\SOMEGOAL\~\tac\ is equivalent to \tac n\~\<^ML_infix>\ORELSE\~\\\~\<^ML_infix>\ORELSE\~\tac 1\. It applies \tac\ to one subgoal, counting downwards. - \<^descr> \<^ML>\FIRSTGOAL\~\tac\ is equivalent to \tac 1\~\<^ML_op>\ORELSE\~\\\~\<^ML_op>\ORELSE\~\tac n\. It applies \tac\ to one subgoal, counting upwards. + \<^descr> \<^ML>\FIRSTGOAL\~\tac\ is equivalent to \tac 1\~\<^ML_infix>\ORELSE\~\\\~\<^ML_infix>\ORELSE\~\tac n\. It applies \tac\ to one subgoal, counting upwards. \<^descr> \<^ML>\HEADGOAL\~\tac\ is equivalent to \tac 1\. It applies \tac\ unconditionally to the first subgoal. \<^descr> \<^ML>\REPEAT_SOME\~\tac\ applies \tac\ once or more to a subgoal, counting downwards. \<^descr> \<^ML>\REPEAT_FIRST\~\tac\ applies \tac\ once or more to a subgoal, counting upwards. \<^descr> \<^ML>\RANGE\~\[tac\<^sub>1, \, tac\<^sub>k] i\ is equivalent to \tac\<^sub>k (i + k - - 1)\~\<^ML_op>\THEN\~\\\~\<^ML_op>\THEN\~\tac\<^sub>1 i\. It applies the given list of + 1)\~\<^ML_infix>\THEN\~\\\~\<^ML_infix>\THEN\~\tac\<^sub>1 i\. It applies the given list of tactics to the corresponding range of subgoals, counting downwards. \ subsection \Control and search tacticals\ text \ A predicate on theorems \<^ML_type>\thm -> bool\ can test whether a goal state enjoys some desirable property --- such as having no subgoals. Tactics that search for satisfactory goal states are easy to express. The main search procedures, depth-first, breadth-first and best-first, are provided as tacticals. They generate the search tree by repeatedly applying a given tactic. \ text %mlref "" subsubsection \Filtering a tactic's results\ text \ \begin{mldecls} - @{index_ML FILTER: "(thm -> bool) -> tactic -> tactic"} \\ - @{index_ML CHANGED: "tactic -> tactic"} \\ + @{define_ML FILTER: "(thm -> bool) -> tactic -> tactic"} \\ + @{define_ML CHANGED: "tactic -> tactic"} \\ \end{mldecls} \<^descr> \<^ML>\FILTER\~\sat tac\ applies \tac\ to the goal state and returns a sequence consisting of those result goal states that are satisfactory in the sense of \sat\. 
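For example, a (hypothetical) combinator that keeps only outcomes making numeric progress could be sketched with \<^ML>\FILTER\ and the standard subgoal counter \<^ML>\Thm.nprems_of\ (which is not part of the interface above):

ML \
  (*sketch: keep only those outcomes of tac that reduce the number of subgoals*)
  fun progress_tac tac st =
    FILTER (fn st' => Thm.nprems_of st' < Thm.nprems_of st) tac st;
\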
\<^descr> \<^ML>\CHANGED\~\tac\ applies \tac\ to the goal state and returns precisely those states that differ from the original state (according to \<^ML>\Thm.eq_thm\). Thus \<^ML>\CHANGED\~\tac\ always has some effect on the state. \

subsubsection \Depth-first search\

text \ \begin{mldecls} - @{index_ML DEPTH_FIRST: "(thm -> bool) -> tactic -> tactic"} \\ - @{index_ML DEPTH_SOLVE: "tactic -> tactic"} \\ - @{index_ML DEPTH_SOLVE_1: "tactic -> tactic"} \\ + @{define_ML DEPTH_FIRST: "(thm -> bool) -> tactic -> tactic"} \\ + @{define_ML DEPTH_SOLVE: "tactic -> tactic"} \\ + @{define_ML DEPTH_SOLVE_1: "tactic -> tactic"} \\ \end{mldecls}

\<^descr> \<^ML>\DEPTH_FIRST\~\sat tac\ returns the goal state if \sat\ returns true. Otherwise it applies \tac\, then recursively searches from each element of the resulting sequence. The code uses a stack for efficiency, in effect - applying \tac\~\<^ML_op>\THEN\~\<^ML>\DEPTH_FIRST\~\sat tac\ to the state. + applying \tac\~\<^ML_infix>\THEN\~\<^ML>\DEPTH_FIRST\~\sat tac\ to the state.

\<^descr> \<^ML>\DEPTH_SOLVE\~\tac\ uses \<^ML>\DEPTH_FIRST\ to search for states having no subgoals.

\<^descr> \<^ML>\DEPTH_SOLVE_1\~\tac\ uses \<^ML>\DEPTH_FIRST\ to search for states having fewer subgoals than the given state. Thus, it insists upon solving at least one subgoal. \

subsubsection \Other search strategies\

text \ \begin{mldecls} - @{index_ML BREADTH_FIRST: "(thm -> bool) -> tactic -> tactic"} \\ - @{index_ML BEST_FIRST: "(thm -> bool) * (thm -> int) -> tactic -> tactic"} \\ - @{index_ML THEN_BEST_FIRST: "tactic -> (thm -> bool) * (thm -> int) -> tactic -> tactic"} \\ + @{define_ML BREADTH_FIRST: "(thm -> bool) -> tactic -> tactic"} \\ + @{define_ML BEST_FIRST: "(thm -> bool) * (thm -> int) -> tactic -> tactic"} \\ + @{define_ML THEN_BEST_FIRST: "tactic -> (thm -> bool) * (thm -> int) -> tactic -> tactic"} \\ \end{mldecls}

These search strategies will find a solution if one exists. However, they do not enumerate all solutions; they terminate after the first satisfactory result from \tac\.

\<^descr> \<^ML>\BREADTH_FIRST\~\sat tac\ uses breadth-first search to find states for which \sat\ is true. For most applications, it is too slow.

\<^descr> \<^ML>\BEST_FIRST\~\(sat, dist) tac\ does a heuristic search, using \dist\ to estimate the distance from a satisfactory state (in the sense of \sat\). It maintains a list of states ordered by distance. It applies \tac\ to the head of this list; if the result contains any satisfactory states, then it returns them. Otherwise, \<^ML>\BEST_FIRST\ adds the new states to the list, and continues.

The distance function is typically \<^ML>\size_of_thm\, which computes the size of the state. The smaller the state, the fewer and simpler subgoals it has.

\<^descr> \<^ML>\THEN_BEST_FIRST\~\tac\<^sub>0 (sat, dist) tac\ is like \<^ML>\BEST_FIRST\, except that the priority queue initially contains the result of applying \tac\<^sub>0\ to the goal state. This tactical permits separate tactics for starting the search and continuing the search.
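A typical instantiation could be sketched as follows, using \<^ML>\has_fewer_prems\ and \<^ML>\size_of_thm\ described below (the function name and rule list are hypothetical):

ML \
  (*sketch: best-first search for a fully solved state, preferring smaller goal states*)
  fun best_solve_tac ctxt rules =
    BEST_FIRST (has_fewer_prems 1, size_of_thm)
      (assume_tac ctxt 1 ORELSE resolve_tac ctxt rules 1);
\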
\ subsubsection \Auxiliary tacticals for searching\

text \ \begin{mldecls} - @{index_ML COND: "(thm -> bool) -> tactic -> tactic -> tactic"} \\ - @{index_ML IF_UNSOLVED: "tactic -> tactic"} \\ - @{index_ML SOLVE: "tactic -> tactic"} \\ - @{index_ML DETERM: "tactic -> tactic"} \\ + @{define_ML COND: "(thm -> bool) -> tactic -> tactic -> tactic"} \\ + @{define_ML IF_UNSOLVED: "tactic -> tactic"} \\ + @{define_ML SOLVE: "tactic -> tactic"} \\ + @{define_ML DETERM: "tactic -> tactic"} \\ \end{mldecls}

\<^descr> \<^ML>\COND\~\sat tac\<^sub>1 tac\<^sub>2\ applies \tac\<^sub>1\ to the goal state if it satisfies predicate \sat\, and applies \tac\<^sub>2\ otherwise. It is a conditional tactical in that only one of \tac\<^sub>1\ and \tac\<^sub>2\ is applied to a goal state. However, both \tac\<^sub>1\ and \tac\<^sub>2\ are evaluated because ML uses eager evaluation.

\<^descr> \<^ML>\IF_UNSOLVED\~\tac\ applies \tac\ to the goal state if it has any subgoals, and simply returns the goal state otherwise. Many common tactics, such as \<^ML>\resolve_tac\, fail if applied to a goal state that has no subgoals.

\<^descr> \<^ML>\SOLVE\~\tac\ applies \tac\ to the goal state and then fails iff there are subgoals left.

\<^descr> \<^ML>\DETERM\~\tac\ applies \tac\ to the goal state and returns the head of the resulting sequence. \<^ML>\DETERM\ limits the search space by making its argument deterministic. \

subsubsection \Predicates and functions useful for searching\

text \ \begin{mldecls} - @{index_ML has_fewer_prems: "int -> thm -> bool"} \\ - @{index_ML Thm.eq_thm: "thm * thm -> bool"} \\ - @{index_ML Thm.eq_thm_prop: "thm * thm -> bool"} \\ - @{index_ML size_of_thm: "thm -> int"} \\ + @{define_ML has_fewer_prems: "int -> thm -> bool"} \\ + @{define_ML Thm.eq_thm: "thm * thm -> bool"} \\ + @{define_ML Thm.eq_thm_prop: "thm * thm -> bool"} \\ + @{define_ML size_of_thm: "thm -> int"} \\ \end{mldecls}

\<^descr> \<^ML>\has_fewer_prems\~\n thm\ reports whether \thm\ has fewer than \n\ premises.

\<^descr> \<^ML>\Thm.eq_thm\~\(thm\<^sub>1, thm\<^sub>2)\ reports whether \thm\<^sub>1\ and \thm\<^sub>2\ are equal. Both theorems must have the same conclusions, the same set of hypotheses, and the same set of sort hypotheses. Names of bound variables are ignored as usual.

\<^descr> \<^ML>\Thm.eq_thm_prop\~\(thm\<^sub>1, thm\<^sub>2)\ reports whether the propositions of \thm\<^sub>1\ and \thm\<^sub>2\ are equal. Names of bound variables are ignored.

\<^descr> \<^ML>\size_of_thm\~\thm\ computes the size of \thm\, namely the number of variables, constants and abstractions in its conclusion. It may serve as a distance function for \<^ML>\BEST_FIRST\.
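These combinators compose naturally with the tacticals described earlier; for example (a hypothetical sketch):

ML \
  (*sketch: commit to the first result of tac, but only if it solves all subgoals;
    otherwise keep the goal state unchanged*)
  fun try_solve_tac tac = TRY (SOLVE (DETERM tac));
\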
\ end diff --git a/src/Doc/Implementation/document/build b/src/Doc/Implementation/document/build deleted file mode 100755 --- a/src/Doc/Implementation/document/build +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/env bash - -set -e - -FORMAT="$1" -VARIANT="$2" - -isabelle logo Isar -"$ISABELLE_HOME/src/Doc/prepare_document" "$FORMAT" - diff --git a/src/Doc/Implementation/document/root.tex b/src/Doc/Implementation/document/root.tex --- a/src/Doc/Implementation/document/root.tex +++ b/src/Doc/Implementation/document/root.tex @@ -1,124 +1,124 @@ \documentclass[12pt,a4paper,fleqn]{report} \usepackage[T1]{fontenc} \usepackage{graphicx} \usepackage[refpage]{nomencl} \usepackage{iman,extra,isar,proof} \usepackage[nohyphen,strings]{underscore} \usepackage{isabelle} \usepackage{isabellesym} \usepackage{railsetup} \usepackage{supertabular} \usepackage{style} \usepackage{pdfsetup} \hyphenation{Isabelle} \hyphenation{Isar} \isadroptag{theory} -\title{\includegraphics[scale=0.5]{isabelle_isar} +\title{\includegraphics[scale=0.5]{isabelle_logo} \\[4ex] The Isabelle/Isar Implementation} \author{\emph{Makarius Wenzel} \\[3ex] With Contributions by Stefan Berghofer, \\ Florian Haftmann and Larry Paulson } \makeindex \begin{document} \maketitle \begin{abstract} We describe the key concepts underlying the Isabelle/Isar implementation, including ML references for the most important functions. The aim is to give some insight into the overall system architecture, and provide clues on implementing applications within this framework. \end{abstract} \vspace*{2.5cm} \begin{quote} {\small\em Isabelle was not designed; it evolved. Not everyone likes this idea. Specification experts rightly abhor trial-and-error programming. They suggest that no one should write a program without first writing a complete formal specification. But university departments are not software houses. Programs like Isabelle are not products: when they have served their purpose, they are discarded.} Lawrence C. Paulson, ``Isabelle: The Next 700 Theorem Provers'' \vspace*{1cm} {\small\em As I did 20 years ago, I still fervently believe that the only way to make software secure, reliable, and fast is to make it small. Fight features.} Andrew S. Tanenbaum \vspace*{1cm} {\small\em One thing that UNIX does not need is more features. It is successful in part because it has a small number of good ideas that work well together. Merely adding features does not make it easier for users to do things --- it just makes the manual thicker. The right solution in the right place is always more effective than haphazard hacking.} Rob Pike and Brian W. Kernighan \vspace*{1cm} {\small\em If you look at software today, through the lens of the history of engineering, it's certainly engineering of a sort--but it's the kind of engineering that people without the concept of the arch did. 
Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves.} Alan Kay \end{quote} \thispagestyle{empty}\clearpage \pagenumbering{roman} \tableofcontents \listoffigures \clearfirst \setcounter{chapter}{-1} \input{ML.tex} \input{Prelim.tex} \input{Logic.tex} \input{Syntax.tex} \input{Tactic.tex} \input{Eq.tex} \input{Proof.tex} \input{Isar.tex} \input{Local_Theory.tex} \input{Integration.tex} \begingroup \tocentry{\bibname} \bibliographystyle{abbrv} \small\raggedright\frenchspacing \bibliography{manual} \endgroup \tocentry{\indexname} \printindex \end{document} %%% Local Variables: %%% mode: latex %%% TeX-master: t %%% End: diff --git a/src/Doc/Intro/document/build b/src/Doc/Intro/document/build --- a/src/Doc/Intro/document/build +++ b/src/Doc/Intro/document/build @@ -1,10 +1,19 @@ #!/usr/bin/env bash set -e -FORMAT="$1" -VARIANT="$2" +$ISABELLE_LUALATEX root -isabelle logo -"$ISABELLE_HOME/src/Doc/prepare_document" "$FORMAT" +if [ -f manual.bib -o -f root.bib ] +then + $ISABELLE_BIBTEX root + $ISABELLE_LUALATEX root +fi +$ISABELLE_LUALATEX root + +if [ -f root.idx ] +then + "$ISABELLE_HOME/src/Doc/sedindex" root + $ISABELLE_LUALATEX root +fi diff --git a/src/Doc/Intro/document/root.tex b/src/Doc/Intro/document/root.tex --- a/src/Doc/Intro/document/root.tex +++ b/src/Doc/Intro/document/root.tex @@ -1,154 +1,154 @@ \documentclass[12pt,a4paper]{article} \usepackage{graphicx,iman,extra,ttbox,proof,pdfsetup} %% run bibtex intro to prepare bibliography %% run ../sedindex intro to prepare index file %prth *(\(.*\)); \1; %{\\out \(.*\)} {\\out val it = "\1" : thm} -\title{\includegraphics[scale=0.5]{isabelle} \\[4ex] Old Introduction to Isabelle} +\title{\includegraphics[scale=0.5]{isabelle_logo} \\[4ex] Old Introduction to Isabelle} \author{{\em Lawrence C. Paulson}\\ Computer Laboratory \\ University of Cambridge \\ \texttt{lcp@cl.cam.ac.uk}\\[3ex] With Contributions by Tobias Nipkow and Markus Wenzel } \makeindex \underscoreoff \setcounter{secnumdepth}{2} \setcounter{tocdepth}{2} \sloppy \binperiod %%%treat . like a binary operator \newcommand\qeq{\stackrel{?}{\equiv}} %for disagreement pairs in unification \newcommand{\nand}{\mathbin{\lnot\&}} \newcommand{\xor}{\mathbin{\#}} \pagenumbering{roman} \begin{document} \pagestyle{empty} \begin{titlepage} \maketitle \emph{Note}: this document is part of the earlier Isabelle documentation, which is largely superseded by the Isabelle/HOL \emph{Tutorial}~\cite{isa-tutorial}. It describes the old-style theory syntax and shows how to conduct proofs using the ML top level. This style of interaction is largely obsolete: most Isabelle proofs are now written using the Isar language and the Proof General interface. However, this paper contains valuable information that is not available elsewhere. Its examples are based on first-order logic rather than higher-order logic. \thispagestyle{empty} \vfill {\small Copyright \copyright{} \number\year{} by Lawrence C. Paulson} \end{titlepage} \pagestyle{headings} \part*{Preface} \index{Isabelle!overview} \index{Isabelle!object-logics supported} Isabelle~\cite{paulson-natural,paulson-found,paulson700} is a generic theorem prover. 
It has been instantiated to support reasoning in several object-logics: \begin{itemize} \item first-order logic, constructive and classical versions \item higher-order logic, similar to that of Gordon's {\sc hol}~\cite{mgordon-hol} \item Zermelo-Fraenkel set theory~\cite{suppes72} \item an extensional version of Martin-L\"of's Type Theory~\cite{nordstrom90} \item the classical first-order sequent calculus, {\sc lk} \item the modal logics $T$, $S4$, and $S43$ \item the Logic for Computable Functions~\cite{paulson87} \end{itemize} A logic's syntax and inference rules are specified declaratively; this allows single-step proof construction. Isabelle provides control structures for expressing search procedures. Isabelle also provides several generic tools, such as simplifiers and classical theorem provers, which can be applied to object-logics. Isabelle is a large system, but beginners can get by with a small repertoire of commands and a basic knowledge of how Isabelle works. The Isabelle/HOL \emph{Tutorial}~\cite{isa-tutorial} describes how to get started. Advanced Isabelle users will benefit from some knowledge of Standard~\ML{}, because Isabelle is written in \ML{}; \index{ML} if you are prepared to write \ML{} code, you can get Isabelle to do almost anything. My book on~\ML{}~\cite{paulson-ml2} covers much material connected with Isabelle, including a simple theorem prover. Users must be familiar with logic as used in computer science; there are many good texts~\cite{galton90,reeves90}. \index{LCF} {\sc lcf}, developed by Robin Milner and colleagues~\cite{mgordon79}, is an ancestor of {\sc hol}, Nuprl, and several other systems. Isabelle borrows ideas from {\sc lcf}: formulae are~\ML{} values; theorems belong to an abstract type; tactics and tacticals support backward proof. But {\sc lcf} represents object-level rules by functions, while Isabelle represents them by terms. You may find my other writings~\cite{paulson87,paulson-handbook} helpful in understanding the relationship between {\sc lcf} and Isabelle. \index{Isabelle!release history} Isabelle was first distributed in 1986. The 1987 version introduced a higher-order meta-logic with an improved treatment of quantifiers. The 1988 version added limited polymorphism and support for natural deduction. The 1989 version included a parser and pretty printer generator. The 1992 version introduced type classes, to support many-sorted and higher-order logics. The 1994 version introduced greater support for theories. The most important recent change is the introduction of the Isar proof language, thanks to Markus Wenzel. Isabelle is still under development and will continue to change. \subsubsection*{Overview} This manual consists of three parts. Part~I discusses Isabelle's foundations. Part~II presents simple on-line sessions, starting with forward proof. It also covers basic tactics and tacticals, and some commands for invoking them. Part~III contains further examples for users with a bit of experience. It explains how to derive rules and define theories, and concludes with an extended example: a Prolog interpreter. Isabelle's Reference Manual and Object-Logics manual contain more details. They assume familiarity with the concepts presented here. \subsubsection*{Acknowledgements} Tobias Nipkow contributed most of the section on defining theories. Stefan Berghofer, Sara Kalvala and Viktor Kuncak suggested improvements. Tobias Nipkow has made immense contributions to Isabelle, including the parser generator, type classes, and the simplifier.
Carsten Clasohm and Markus Wenzel made major contributions; Sonia Mahjoub and Karin Nimmermann also helped. Isabelle was developed using Dave Matthews's Standard~{\sc ml} compiler, Poly/{\sc ml}. Many people have contributed to Isabelle's standard object-logics, including Martin Coen, Philippe de Groote, and Philippe No\"el. The research has been funded by the EPSRC (grants GR/G53279, GR/H40570, GR/K57381, GR/K77051, GR/M75440) and by ESPRIT (projects 3245: Logical Frameworks, and 6453: Types), and by the DFG Schwerpunktprogramm \emph{Deduktion}. \newpage \pagestyle{plain} \tableofcontents \newpage \newfont{\sanssi}{cmssi12} \vspace*{2.5cm} \begin{quote} \raggedleft {\sanssi You can only find truth with logic\\ if you have already found truth without it.}\\ \bigskip G.K. Chesterton, {\em The Man who was Orthodox} \end{quote} \clearfirst \pagestyle{headings} \input{foundations} \input{getting} \input{advanced} \bibliographystyle{plain} \small\raggedright\frenchspacing \bibliography{manual} \printindex \end{document} diff --git a/src/Doc/Isar_Ref/Document_Preparation.thy b/src/Doc/Isar_Ref/Document_Preparation.thy --- a/src/Doc/Isar_Ref/Document_Preparation.thy +++ b/src/Doc/Isar_Ref/Document_Preparation.thy @@ -1,700 +1,724 @@ (*:maxLineLen=78:*) theory Document_Preparation imports Main Base begin chapter \Document preparation \label{ch:document-prep}\ text \ Isabelle/Isar provides a simple document preparation system based on {PDF-\LaTeX}, with support for hyperlinks and bookmarks within that format. This allows to produce papers, books, theses etc.\ from Isabelle theory sources. {\LaTeX} output is generated while processing a \<^emph>\session\ in batch mode, as explained in \<^emph>\The Isabelle System Manual\ @{cite "isabelle-system"}. The main Isabelle tools to get started with document preparation are @{tool_ref mkroot} and @{tool_ref build}. The classic Isabelle/HOL tutorial @{cite "isabelle-hol-book"} also explains some aspects of theory presentation. \ section \Markup commands \label{sec:markup}\ text \ \begin{matharray}{rcl} @{command_def "chapter"} & : & \any \ any\ \\ @{command_def "section"} & : & \any \ any\ \\ @{command_def "subsection"} & : & \any \ any\ \\ @{command_def "subsubsection"} & : & \any \ any\ \\ @{command_def "paragraph"} & : & \any \ any\ \\ @{command_def "subparagraph"} & : & \any \ any\ \\ @{command_def "text"} & : & \any \ any\ \\ @{command_def "txt"} & : & \any \ any\ \\ @{command_def "text_raw"} & : & \any \ any\ \\ \end{matharray} Markup commands provide a structured way to insert text into the document generated from a theory. Each markup command takes a single @{syntax text} argument, which is passed as argument to a corresponding {\LaTeX} macro. The default macros provided by \<^file>\~~/lib/texinputs/isabelle.sty\ can be redefined according to the needs of the underlying document and {\LaTeX} styles. Note that formal comments (\secref{sec:comments}) are similar to markup commands, but have a different status within Isabelle/Isar syntax. \<^rail>\ (@@{command chapter} | @@{command section} | @@{command subsection} | @@{command subsubsection} | @@{command paragraph} | @@{command subparagraph}) @{syntax text} ';'? | (@@{command text} | @@{command txt} | @@{command text_raw}) @{syntax text} \ \<^descr> @{command chapter}, @{command section}, @{command subsection} etc.\ mark section headings within the theory source. This works in any context, even before the initial @{command theory} command.
The corresponding {\LaTeX} macros are \<^verbatim>\\isamarkupchapter\, \<^verbatim>\\isamarkupsection\, \<^verbatim>\\isamarkupsubsection\ etc.\ \<^descr> @{command text} and @{command txt} specify paragraphs of plain text. This corresponds to a {\LaTeX} environment \<^verbatim>\\begin{isamarkuptext}\ \\\ \<^verbatim>\\end{isamarkuptext}\ etc. \<^descr> @{command text_raw} is similar to @{command text}, but without any surrounding markup environment. This allows to inject arbitrary {\LaTeX} source into the generated document. All text passed to any of the above markup commands may refer to formal entities via \<^emph>\document antiquotations\, see also \secref{sec:antiq}. These are interpreted in the present theory or proof context. \<^medskip> The proof markup commands closely resemble those for theory specifications, but have a different formal status and produce different {\LaTeX} macros. \ section \Document antiquotations \label{sec:antiq}\ text \ \begin{matharray}{rcl} @{antiquotation_def "theory"} & : & \antiquotation\ \\ @{antiquotation_def "thm"} & : & \antiquotation\ \\ @{antiquotation_def "lemma"} & : & \antiquotation\ \\ @{antiquotation_def "prop"} & : & \antiquotation\ \\ @{antiquotation_def "term"} & : & \antiquotation\ \\ @{antiquotation_def term_type} & : & \antiquotation\ \\ @{antiquotation_def typeof} & : & \antiquotation\ \\ @{antiquotation_def const} & : & \antiquotation\ \\ @{antiquotation_def abbrev} & : & \antiquotation\ \\ @{antiquotation_def typ} & : & \antiquotation\ \\ @{antiquotation_def type} & : & \antiquotation\ \\ @{antiquotation_def class} & : & \antiquotation\ \\ @{antiquotation_def locale} & : & \antiquotation\ \\ @{antiquotation_def "text"} & : & \antiquotation\ \\ @{antiquotation_def goals} & : & \antiquotation\ \\ @{antiquotation_def subgoals} & : & \antiquotation\ \\ @{antiquotation_def prf} & : & \antiquotation\ \\ @{antiquotation_def full_prf} & : & \antiquotation\ \\ + @{antiquotation_def ML_text} & : & \antiquotation\ \\ @{antiquotation_def ML} & : & \antiquotation\ \\ - @{antiquotation_def ML_op} & : & \antiquotation\ \\ + @{antiquotation_def ML_def} & : & \antiquotation\ \\ + @{antiquotation_def ML_ref} & : & \antiquotation\ \\ + @{antiquotation_def ML_infix} & : & \antiquotation\ \\ + @{antiquotation_def ML_infix_def} & : & \antiquotation\ \\ + @{antiquotation_def ML_infix_ref} & : & \antiquotation\ \\ @{antiquotation_def ML_type} & : & \antiquotation\ \\ + @{antiquotation_def ML_type_def} & : & \antiquotation\ \\ + @{antiquotation_def ML_type_ref} & : & \antiquotation\ \\ @{antiquotation_def ML_structure} & : & \antiquotation\ \\ + @{antiquotation_def ML_structure_def} & : & \antiquotation\ \\ + @{antiquotation_def ML_structure_ref} & : & \antiquotation\ \\ @{antiquotation_def ML_functor} & : & \antiquotation\ \\ + @{antiquotation_def ML_functor_def} & : & \antiquotation\ \\ + @{antiquotation_def ML_functor_ref} & : & \antiquotation\ \\ @{antiquotation_def emph} & : & \antiquotation\ \\ @{antiquotation_def bold} & : & \antiquotation\ \\ @{antiquotation_def verbatim} & : & \antiquotation\ \\ @{antiquotation_def bash_function} & : & \antiquotation\ \\ @{antiquotation_def system_option} & : & \antiquotation\ \\ @{antiquotation_def session} & : & \antiquotation\ \\ @{antiquotation_def "file"} & : & \antiquotation\ \\ @{antiquotation_def "url"} & : & \antiquotation\ \\ @{antiquotation_def "cite"} & : & \antiquotation\ \\ @{command_def "print_antiquotations"}\\<^sup>*\ & : & \context \\ \\ \end{matharray} The overall content of an Isabelle/Isar 
theory may alternate between formal and informal text. The main body consists of formal specification and proof commands, interspersed with markup commands (\secref{sec:markup}) or document comments (\secref{sec:comments}). The argument of markup commands quotes informal text to be printed in the resulting document, but may again refer to formal entities via \<^emph>\document antiquotations\. For example, embedding \<^verbatim>\@{term [show_types] "f x = a + x"}\ within a text block makes \isa{{\isacharparenleft}f{\isasymColon}{\isacharprime}a\ {\isasymRightarrow}\ {\isacharprime}a{\isacharparenright}\ {\isacharparenleft}x{\isasymColon}{\isacharprime}a{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}a{\isasymColon}{\isacharprime}a{\isacharparenright}\ {\isacharplus}\ x} appear in the final {\LaTeX} document. Antiquotations usually spare the author tedious typing of logical entities in full detail. Even more importantly, some degree of consistency-checking between the main body of formal text and its informal explanation is achieved, since terms and types appearing in antiquotations are checked within the current theory or proof context. \<^medskip> Antiquotations are in general written as \<^verbatim>\@{\\name\~\<^verbatim>\[\\options\\<^verbatim>\]\~\arguments\\<^verbatim>\}\. The short form \<^verbatim>\\\\<^verbatim>\<^\\name\\<^verbatim>\>\\\argument_content\\ (without surrounding \<^verbatim>\@{\\\\\<^verbatim>\}\) works for a single argument that is a cartouche. A cartouche without special decoration is equivalent to \<^verbatim>\\<^cartouche>\\\argument_content\\, which is equivalent to \<^verbatim>\@{cartouche\~\\argument_content\\\<^verbatim>\}\. The special name @{antiquotation_def cartouche} is defined in the context: Isabelle/Pure introduces that as an alias to @{antiquotation_ref text} (see below). Consequently, \\foo_bar + baz \ bazar\\ prints literal quasi-formal text (unchecked). A control symbol \<^verbatim>\\\\<^verbatim>\<^\\name\\<^verbatim>\>\ within the body text, but without a subsequent cartouche, is equivalent to \<^verbatim>\@{\\name\\<^verbatim>\}\. \begingroup \def\isasymcontrolstart{\isatt{\isacharbackslash\isacharless\isacharcircum}} \<^rail>\ @{syntax_def antiquotation}: '@{' antiquotation_body '}' | '\' @{syntax_ref name} '>' @{syntax_ref cartouche} | @{syntax_ref cartouche} ; options: '[' (option * ',') ']' ; option: @{syntax name} | @{syntax name} '=' @{syntax name} ; \ \endgroup Note that the syntax of antiquotations may \<^emph>\not\ include source comments \<^verbatim>\(*\~\\\~\<^verbatim>\*)\ nor verbatim text \<^verbatim>\{*\~\\\~\<^verbatim>\*}\. %% FIXME less monolithic presentation, move to individual sections!? \<^rail>\ @{syntax_def antiquotation_body}: (@@{antiquotation text} | @@{antiquotation cartouche} | @@{antiquotation theory_text}) options @{syntax text} | @@{antiquotation theory} options @{syntax embedded} | @@{antiquotation thm} options styles @{syntax thms} | @@{antiquotation lemma} options @{syntax prop} @'by' @{syntax method} @{syntax method}? 
| @@{antiquotation prop} options styles @{syntax prop} | @@{antiquotation term} options styles @{syntax term} | @@{antiquotation (HOL) value} options styles @{syntax term} | @@{antiquotation term_type} options styles @{syntax term} | @@{antiquotation typeof} options styles @{syntax term} | @@{antiquotation const} options @{syntax term} | @@{antiquotation abbrev} options @{syntax term} | @@{antiquotation typ} options @{syntax type} | @@{antiquotation type} options @{syntax embedded} | @@{antiquotation class} options @{syntax embedded} | @@{antiquotation locale} options @{syntax embedded} | (@@{antiquotation command} | @@{antiquotation method} | @@{antiquotation attribute}) options @{syntax name} ; @{syntax antiquotation}: @@{antiquotation goals} options | @@{antiquotation subgoals} options | @@{antiquotation prf} options @{syntax thms} | @@{antiquotation full_prf} options @{syntax thms} | + @@{antiquotation ML_text} options @{syntax text} | @@{antiquotation ML} options @{syntax text} | - @@{antiquotation ML_op} options @{syntax text} | - @@{antiquotation ML_type} options @{syntax text} | + @@{antiquotation ML_infix} options @{syntax text} | + @@{antiquotation ML_type} options @{syntax typeargs} @{syntax text} | @@{antiquotation ML_structure} options @{syntax text} | @@{antiquotation ML_functor} options @{syntax text} | @@{antiquotation emph} options @{syntax text} | @@{antiquotation bold} options @{syntax text} | @@{antiquotation verbatim} options @{syntax text} | @@{antiquotation bash_function} options @{syntax embedded} | @@{antiquotation system_option} options @{syntax embedded} | @@{antiquotation session} options @{syntax embedded} | @@{antiquotation path} options @{syntax embedded} | @@{antiquotation "file"} options @{syntax name} | @@{antiquotation dir} options @{syntax name} | @@{antiquotation url} options @{syntax embedded} | @@{antiquotation cite} options @{syntax cartouche}? (@{syntax name} + @'and') ; styles: '(' (style + ',') ')' ; style: (@{syntax name} +) ; @@{command print_antiquotations} ('!'?) \ \<^descr> \@{text s}\ prints uninterpreted source text \s\, i.e.\ inner syntax. This is particularly useful to print portions of text according to the Isabelle document style, without demanding well-formedness, e.g.\ small pieces of terms that should not be parsed or type-checked yet. It is also possible to write this in the short form \\s\\ without any further decoration. \<^descr> \@{theory_text s}\ prints uninterpreted theory source text \s\, i.e.\ outer syntax with command keywords and other tokens. \<^descr> \@{theory A}\ prints the session-qualified theory name \A\, which is guaranteed to refer to a valid ancestor theory in the current context. \<^descr> \@{thm a\<^sub>1 \ a\<^sub>n}\ prints theorems \a\<^sub>1 \ a\<^sub>n\. Full fact expressions are allowed here, including attributes (\secref{sec:syn-att}). \<^descr> \@{prop \}\ prints a well-typed proposition \\\. \<^descr> \@{lemma \ by m}\ proves a well-typed proposition \\\ by method \m\ and prints the original \\\. \<^descr> \@{term t}\ prints a well-typed term \t\. \<^descr> \@{value t}\ evaluates a term \t\ and prints its result, see also @{command_ref (HOL) value}. \<^descr> \@{term_type t}\ prints a well-typed term \t\ annotated with its type. \<^descr> \@{typeof t}\ prints the type of a well-typed term \t\. \<^descr> \@{const c}\ prints a logical or syntactic constant \c\. \<^descr> \@{abbrev c x\<^sub>1 \ x\<^sub>n}\ prints a constant abbreviation \c x\<^sub>1 \ x\<^sub>n \ rhs\ as defined in the current context. 
\<^descr> \@{typ \}\ prints a well-formed type \\\. \<^descr> \@{type \}\ prints a (logical or syntactic) type constructor \\\. \<^descr> \@{class c}\ prints a class \c\. \<^descr> \@{locale c}\ prints a locale \c\. \<^descr> \@{command name}\, \@{method name}\, \@{attribute name}\ print checked entities of the Isar language. \<^descr> \@{goals}\ prints the current \<^emph>\dynamic\ goal state. This is mainly for support of tactic-emulation scripts within Isar. Presentation of goal states does not conform to the idea of human-readable proof documents! When explaining proofs in detail it is usually better to spell out the reasoning via proper Isar proof commands, instead of peeking at the internal machine configuration. \<^descr> \@{subgoals}\ is similar to \@{goals}\, but does not print the main goal. \<^descr> \@{prf a\<^sub>1 \ a\<^sub>n}\ prints the (compact) proof terms corresponding to the theorems \a\<^sub>1 \ a\<^sub>n\. Note that this requires proof terms to be switched on for the current logic session. \<^descr> \@{full_prf a\<^sub>1 \ a\<^sub>n}\ is like \@{prf a\<^sub>1 \ a\<^sub>n}\, but prints the full proof terms, i.e.\ also displays information omitted in the compact proof term, which is denoted by ``\_\'' placeholders there. - - \<^descr> \@{ML s}\, \@{ML_op s}\, \@{ML_type s}\, \@{ML_structure s}\, and + + \<^descr> \@{ML_text s}\ prints ML text verbatim: only the token language is + checked. + + \<^descr> \@{ML s}\, \@{ML_infix s}\, \@{ML_type s}\, \@{ML_structure s}\, and \@{ML_functor s}\ check text \s\ as ML value, infix operator, type, - structure, and functor respectively. The source is printed verbatim. + structure, and functor respectively. The source is printed + verbatim. The variants \@{ML_def s}\ and \@{ML_ref s}\ etc. maintain the + document index: ``def'' means to make a bold entry, ``ref'' means to make a + regular entry. + + There are two forms for type constructors, with or without separate type + arguments: this impacts only the index entry. For example, \@{ML_type_ref + \'a list\}\ makes an entry literally for ``\'a list\'' (sorted under the + letter ``a''), but \@{ML_type_ref 'a \list\}\ makes an entry for the + constructor name ``\list\''. \<^descr> \@{emph s}\ prints document source recursively, with {\LaTeX} markup \<^verbatim>\\emph{\\\\\<^verbatim>\}\. \<^descr> \@{bold s}\ prints document source recursively, with {\LaTeX} markup \<^verbatim>\\textbf{\\\\\<^verbatim>\}\. \<^descr> \@{verbatim s}\ prints uninterpreted source text literally as ASCII characters, using some type-writer font style. \<^descr> \@{bash_function name}\ prints the given GNU bash function verbatim. The name is checked wrt.\ the Isabelle system environment @{cite "isabelle-system"}. \<^descr> \@{system_option name}\ prints the given system option verbatim. The name is checked wrt.\ cumulative \<^verbatim>\etc/options\ of all Isabelle components, notably \<^file>\~~/etc/options\. \<^descr> \@{session name}\ prints the given session name verbatim. The name is checked wrt.\ the dependencies of the current session. \<^descr> \@{path name}\ prints the file-system path name verbatim. \<^descr> \@{file name}\ is like \@{path name}\, but ensures that \name\ refers to a plain file. \<^descr> \@{dir name}\ is like \@{path name}\, but ensures that \name\ refers to a directory. \<^descr> \@{url name}\ produces markup for the given URL, which results in an active hyperlink within the text.
\<^descr> \@{cite name}\ produces a citation \<^verbatim>\\cite{name}\ in {\LaTeX}, where the name refers to some Bib{\TeX} database entry. This is only checked in batch-mode session builds. The variant \@{cite \opt\ name}\ produces \<^verbatim>\\cite[opt]{name}\ with some free-form optional argument. Multiple names are output with commas, e.g. \@{cite foo \ bar}\ becomes \<^verbatim>\\cite{foo,bar}\. The {\LaTeX} macro name is determined by the antiquotation option @{antiquotation_option_def cite_macro}, or the configuration option @{attribute cite_macro} in the context. For example, \@{cite [cite_macro = nocite] foobar}\ produces \<^verbatim>\\nocite{foobar}\. \<^descr> @{command "print_antiquotations"} prints all document antiquotations that are defined in the current context; the ``\!\'' option indicates extra verbosity. \ subsection \Styled antiquotations\ text \ The antiquotations \thm\, \prop\ and \term\ admit an extra \<^emph>\style\ specification to modify the printed result. A style is specified by a name with a possibly empty list of arguments; multiple styles can be sequenced with commas. The following standard styles are available: \<^descr> \lhs\ extracts the first argument of any application form with at least two arguments --- typically meta-level or object-level equality, or any other binary relation. \<^descr> \rhs\ is like \lhs\, but extracts the second argument. \<^descr> \concl\ extracts the conclusion \C\ from a rule in Horn-clause normal form \A\<^sub>1 \ \ A\<^sub>n \ C\. \<^descr> \prem\ \n\ extracts premise number \n\ from a rule in Horn-clause normal form \A\<^sub>1 \ \ A\<^sub>n \ C\. \ subsection \General options\ text \ The following options are available to tune the printed output of antiquotations. Note that many of these coincide with system and configuration options of the same names. \<^descr> @{antiquotation_option_def show_types}~\= bool\ and @{antiquotation_option_def show_sorts}~\= bool\ control printing of explicit type and sort constraints. \<^descr> @{antiquotation_option_def show_structs}~\= bool\ controls printing of implicit structures. \<^descr> @{antiquotation_option_def show_abbrevs}~\= bool\ controls folding of abbreviations. \<^descr> @{antiquotation_option_def names_long}~\= bool\ forces names of types and constants etc.\ to be printed in their fully qualified internal form. \<^descr> @{antiquotation_option_def names_short}~\= bool\ forces names of types and constants etc.\ to be printed unqualified. Note that internalizing the output again in the current context may well yield a different result. \<^descr> @{antiquotation_option_def names_unique}~\= bool\ determines whether the printed version of qualified names should be made sufficiently long to avoid overlap with names declared further back. Set to \false\ for more concise output. \<^descr> @{antiquotation_option_def eta_contract}~\= bool\ prints terms in \\\-contracted form. \<^descr> @{antiquotation_option_def display}~\= bool\ indicates if the text is to be output as multi-line ``display material'', rather than a small piece of text without line breaks (which is the default). In this mode the embedded entities are printed in the same style as the main theory text. \<^descr> @{antiquotation_option_def break}~\= bool\ controls line breaks in non-display material. \<^descr> @{antiquotation_option_def cartouche}~\= bool\ indicates if the output should be delimited as cartouche.
\<^descr> @{antiquotation_option_def quotes}~\= bool\ indicates if the output should be delimited via double quotes (option @{antiquotation_option cartouche} takes precedence). Note that the Isabelle {\LaTeX} styles may suppress quotes on their own account. \<^descr> @{antiquotation_option_def mode}~\= name\ adds \name\ to the print mode to be used for presentation. Note that the standard setup for {\LaTeX} output is already present by default, with mode ``\latex\''. \<^descr> @{antiquotation_option_def margin}~\= nat\ and @{antiquotation_option_def indent}~\= nat\ change the margin or indentation for pretty printing of display material. \<^descr> @{antiquotation_option_def goals_limit}~\= nat\ determines the maximum number of subgoals to be printed (for goal-based antiquotation). \<^descr> @{antiquotation_option_def source}~\= bool\ prints the original source text of the antiquotation arguments, rather than their internal representation. Note that formal checking of @{antiquotation "thm"}, @{antiquotation "term"}, etc. is still enabled; use the @{antiquotation "text"} antiquotation for unchecked output. Regular \term\ and \typ\ antiquotations with \source = false\ involve a full round-trip from the original source to an internalized logical entity back to a source form, according to the syntax of the current context. Thus the printed output is not under direct control of the author; it may even fluctuate a bit as the underlying theory is changed later on. In contrast, @{antiquotation_option source}~\= true\ admits direct printing of the given source text, with the desirable well-formedness check in the background, but without modification of the printed text. Cartouche delimiters of the argument are stripped for antiquotations that are internally categorized as ``embedded''. \<^descr> @{antiquotation_option_def source_cartouche} is like @{antiquotation_option source}, but cartouche delimiters are always preserved in the output. For Boolean flags, ``\name = true\'' may be abbreviated as ``\name\''. All of the above flags are disabled by default, unless changed specifically for a logic session in the corresponding \<^verbatim>\ROOT\ file. \ section \Markdown-like text structure\ text \ The markup commands @{command_ref text}, @{command_ref txt}, @{command_ref text_raw} (\secref{sec:markup}) consist of plain text. Its internal structure consists of paragraphs and (nested) lists, using special Isabelle symbols and some rules for indentation and blank lines. This quasi-visual format resembles \<^emph>\Markdown\\<^footnote>\\<^url>\http://commonmark.org\\, but the full complexity of that notation is avoided. This is a summary of the main principles of minimal Markdown in Isabelle: \<^item> List items start with the following markers \<^descr>[itemize:] \<^verbatim>\\<^item>\ \<^descr>[enumerate:] \<^verbatim>\\<^enum>\ \<^descr>[description:] \<^verbatim>\\<^descr>\ \<^item> Adjacent list items with same indentation and same marker are grouped into a single list. \<^item> Singleton blank lines separate paragraphs. \<^item> Multiple blank lines escape from the current list hierarchy. Notable differences to official Markdown: \<^item> Indentation of list items needs to match exactly. \<^item> Indentation is unlimited (official Markdown interprets four spaces as block quote). \<^item> List items always consist of paragraphs --- there is no notion of ``tight'' list. \<^item> Section headings are expressed via Isar document markup commands (\secref{sec:markup}).
\<^item> URLs, font styles, and other special content are expressed via antiquotations (\secref{sec:antiq}), usually with proper nesting of sub-languages via text cartouches. \ section \Document markers and command tags \label{sec:document-markers}\ text \ \emph{Document markers} are formal comments of the form \\<^marker>\marker_body\\ (using the control symbol \<^verbatim>\\<^marker>\) and may occur anywhere within the outer syntax of a command: the inner syntax of a marker body resembles that for attributes (\secref{sec:syn-att}). In contrast, \emph{command tags} may only occur after a command keyword and are treated as special markers as explained below. \<^rail>\ @{syntax_def marker}: '\<^marker>' @{syntax cartouche} ; @{syntax_def marker_body}: (@{syntax name} @{syntax args} * ',') ; @{syntax_def tags}: tag* ; tag: '%' (@{syntax short_ident} | @{syntax string}) \ Document markers are stripped from the document output, but surrounding white-space is preserved: e.g.\ a marker at the end of a line does not affect the subsequent line break. Markers operate within the semantic presentation context of a command, and may modify it to change the overall appearance of a command span (e.g.\ by adding tags). Each document marker has its own syntax defined in the theory context; the following markers are predefined in Isabelle/Pure: \<^rail>\ (@@{document_marker_def title} | @@{document_marker_def creator} | @@{document_marker_def contributor} | @@{document_marker_def date} | @@{document_marker_def license} | @@{document_marker_def description}) @{syntax embedded} ; @@{document_marker_def tag} (scope?) @{syntax name} ; scope: '(' ('proof' | 'command') ')' \ \<^descr> \\<^marker>\title arg\\, \\<^marker>\creator arg\\, \\<^marker>\contributor arg\\, \\<^marker>\date arg\\, \\<^marker>\license arg\\, and \\<^marker>\description arg\\ produce markup in the PIDE document, without any immediate effect on typesetting. This vocabulary is taken from the Dublin Core Metadata Initiative\<^footnote>\\<^url>\https://www.dublincore.org/specifications/dublin-core/dcmi-terms\\. The argument is an uninterpreted string, except for @{document_marker description}, which consists of words that are subject to spell-checking. \<^descr> \\<^marker>\tag name\\ updates the list of command tags in the presentation context: later declarations take precedence, so \\<^marker>\tag a, tag b, tag c\\ produces a reversed list. The default tags are given by the original \<^theory_text>\keywords\ declaration of a command, and the system option @{system_option_ref document_tags}. The optional \scope\ tells how far the tagging is applied to subsequent proof structure: ``\<^theory_text>\("proof")\'' means it applies to the following proof text, and ``\<^theory_text>\(command)\'' means it only applies to the current command. The default within a proof body is ``\<^theory_text>\("proof")\'', but for toplevel goal statements it is ``\<^theory_text>\(command)\''. Thus a \tag\ marker for \<^theory_text>\theorem\, \<^theory_text>\lemma\ etc. does \emph{not} affect its proof by default. An old-style command tag \<^verbatim>\%\\name\ is treated like a document marker \\<^marker>\tag (proof) name\\: the list of command tags precedes the list of document markers. The head of the resulting tags in the presentation context is turned into {\LaTeX} environments to modify the type-setting.
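As a minimal sketch (assuming an Isabelle/HOL context where the proof succeeds), a proof step can be tagged via a document marker as follows; the tagged text is then kept, dropped, or folded according to the document setup for the \invisible\ tag:

    lemma "x = x"
      by auto \<^marker>\tag invisible\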
The following tags are pre-declared for certain classes of commands, and serve as their default markup: \<^medskip> \begin{tabular}{ll} \document\ & document markup commands \\ \theory\ & theory begin/end \\ \proof\ & all proof commands \\ \ML\ & all commands involving ML code \\ \end{tabular} \<^medskip> The Isabelle document preparation system @{cite "isabelle-system"} allows tagged command regions to be presented specifically, e.g.\ to fold proof texts, or drop parts of the text completely. For example, ``\<^theory_text>\by auto\~\\<^marker>\tag invisible\\'' causes that piece of proof to be treated as \invisible\ instead of \proof\ (the default), which may be shown or hidden depending on the document setup. In contrast, ``\<^theory_text>\by auto\~\\<^marker>\tag visible\\'' forces this text to be shown invariably. Explicit tag specifications within a proof apply to all subsequent commands of the same level of nesting. For example, ``\<^theory_text>\proof\~\\<^marker>\tag invisible\ \\~\<^theory_text>\qed\'' forces the whole sub-proof to be typeset as \invisible\ (unless some of its parts are tagged differently). \<^medskip> Command tags merely produce certain markup environments for type-setting. The meaning of these is determined by {\LaTeX} macros, as defined in \<^file>\~~/lib/texinputs/isabelle.sty\ or by the document author. The Isabelle document preparation tools also provide some high-level options to specify the meaning of arbitrary tags to ``keep'', ``drop'', or ``fold'' the corresponding parts of the text. Logic sessions may also specify ``document versions'', where given tags are interpreted in some particular way. Again see @{cite "isabelle-system"} for further details. \ section \Railroad diagrams\ text \ \begin{matharray}{rcl} @{antiquotation_def "rail"} & : & \antiquotation\ \\ \end{matharray} \<^rail>\ 'rail' @{syntax text} \ The @{antiquotation rail} antiquotation allows to include syntax diagrams into Isabelle documents. {\LaTeX} requires the style file \<^file>\~~/lib/texinputs/railsetup.sty\, which can be used via \<^verbatim>\\usepackage{railsetup}\ in \<^verbatim>\root.tex\, for example. The rail specification language is quoted here as Isabelle @{syntax string} or text @{syntax "cartouche"}; it has its own grammar given below. \begingroup \def\isasymnewline{\isatt{\isacharbackslash\isacharless newline\isachargreater}} \<^rail>\ rule? + ';' ; rule: ((identifier | @{syntax antiquotation}) ':')? body ; body: concatenation + '|' ; concatenation: ((atom '?'?) +) (('*' | '+') atom?)? ; atom: '(' body? ')' | identifier | '@'? (string | @{syntax antiquotation}) | '\' \ \endgroup The lexical syntax of \identifier\ coincides with that of @{syntax short_ident} in regular Isabelle syntax, but \string\ uses single quotes instead of double quotes of the standard @{syntax string} category. Each \rule\ defines a formal language (with optional name), using a notation that is similar to EBNF or regular expressions with recursion. The meaning and visual appearance of these rail language elements are illustrated by the following representative examples.
\<^item> Empty \<^verbatim>\()\ \<^rail>\()\ \<^item> Nonterminal \<^verbatim>\A\ \<^rail>\A\ \<^item> Nonterminal via Isabelle antiquotation \<^verbatim>\@{syntax method}\ \<^rail>\@{syntax method}\ \<^item> Terminal \<^verbatim>\'xyz'\ \<^rail>\'xyz'\ \<^item> Terminal in keyword style \<^verbatim>\@'xyz'\ \<^rail>\@'xyz'\ \<^item> Terminal via Isabelle antiquotation \<^verbatim>\@@{method rule}\ \<^rail>\@@{method rule}\ \<^item> Concatenation \<^verbatim>\A B C\ \<^rail>\A B C\ \<^item> Newline inside concatenation \<^verbatim>\A B C \ D E F\ \<^rail>\A B C \ D E F\ \<^item> Variants \<^verbatim>\A | B | C\ \<^rail>\A | B | C\ \<^item> Option \<^verbatim>\A ?\ \<^rail>\A ?\ \<^item> Repetition \<^verbatim>\A *\ \<^rail>\A *\ \<^item> Repetition with separator \<^verbatim>\A * sep\ \<^rail>\A * sep\ \<^item> Strict repetition \<^verbatim>\A +\ \<^rail>\A +\ \<^item> Strict repetition with separator \<^verbatim>\A + sep\ \<^rail>\A + sep\ \ end diff --git a/src/Doc/Isar_Ref/Generic.thy b/src/Doc/Isar_Ref/Generic.thy --- a/src/Doc/Isar_Ref/Generic.thy +++ b/src/Doc/Isar_Ref/Generic.thy @@ -1,1827 +1,1827 @@ (*:maxLineLen=78:*) theory Generic imports Main Base begin chapter \Generic tools and packages \label{ch:gen-tools}\ section \Configuration options \label{sec:config}\ text \ Isabelle/Pure maintains a record of named configuration options within the theory or proof context, with values of type \<^ML_type>\bool\, \<^ML_type>\int\, \<^ML_type>\real\, or \<^ML_type>\string\. Tools may declare options in ML, and then refer to these values (relative to the context). Thus global reference variables are easily avoided. The user may change the value of a configuration option by means of an associated attribute of the same name. This form of context declaration works particularly well with commands such as @{command "declare"} or @{command "using"} like this: \ (*<*)experiment begin(*>*) declare [[show_main_goal = false]] notepad begin note [[show_main_goal = true]] end (*<*)end(*>*) text \ \begin{matharray}{rcll} @{command_def "print_options"} & : & \context \\ \\ \end{matharray} \<^rail>\ @@{command print_options} ('!'?) ; @{syntax name} ('=' ('true' | 'false' | @{syntax int} | @{syntax float} | @{syntax name}))? \ \<^descr> @{command "print_options"} prints the available configuration options, with names, types, and current values; the ``\!\'' option indicates extra verbosity. \<^descr> \name = value\ as an attribute expression modifies the named option, with the syntax of the value depending on the option's type. For \<^ML_type>\bool\ the default value is \true\. Any attempt to change a global option in a local context is ignored. \ section \Basic proof tools\ subsection \Miscellaneous methods and attributes \label{sec:misc-meth-att}\ text \ \begin{matharray}{rcl} @{method_def unfold} & : & \method\ \\ @{method_def fold} & : & \method\ \\ @{method_def insert} & : & \method\ \\[0.5ex] @{method_def erule}\\<^sup>*\ & : & \method\ \\ @{method_def drule}\\<^sup>*\ & : & \method\ \\ @{method_def frule}\\<^sup>*\ & : & \method\ \\ @{method_def intro} & : & \method\ \\ @{method_def elim} & : & \method\ \\ @{method_def fail} & : & \method\ \\ @{method_def succeed} & : & \method\ \\ @{method_def sleep} & : & \method\ \\ \end{matharray} \<^rail>\ (@@{method fold} | @@{method unfold} | @@{method insert}) @{syntax thms} ; (@@{method erule} | @@{method drule} | @@{method frule}) ('(' @{syntax nat} ')')? @{syntax thms} ; (@@{method intro} | @@{method elim}) @{syntax thms}? 
; @@{method sleep} @{syntax real} \ \<^descr> @{method unfold}~\a\<^sub>1 \ a\<^sub>n\ and @{method fold}~\a\<^sub>1 \ a\<^sub>n\ expand (or fold back) the given definitions throughout all goals; any chained facts provided are inserted into the goal and subject to rewriting as well. Unfolding works in two stages: first, the given equations are used directly for rewriting; second, the equations are passed through the attribute @{attribute_ref abs_def} before rewriting --- to ensure that definitions are fully expanded, regardless of the actual parameters that are provided. \<^descr> @{method insert}~\a\<^sub>1 \ a\<^sub>n\ inserts theorems as facts into all goals of the proof state. Note that current facts indicated for forward chaining are ignored. \<^descr> @{method erule}~\a\<^sub>1 \ a\<^sub>n\, @{method drule}~\a\<^sub>1 \ a\<^sub>n\, and @{method frule}~\a\<^sub>1 \ a\<^sub>n\ are similar to the basic @{method rule} method (see \secref{sec:pure-meth-att}), but apply rules by elim-resolution, destruct-resolution, and forward-resolution, respectively @{cite "isabelle-implementation"}. The optional natural number argument (default 0) specifies additional assumption steps to be performed here. Note that these methods are improper ones, mainly serving for experimentation and tactic script emulation. Different modes of basic rule application are usually expressed in Isar at the proof language level, rather than via implicit proof state manipulations. For example, a proper single-step elimination would be done using the plain @{method rule} method, with forward chaining of current facts. \<^descr> @{method intro} and @{method elim} repeatedly refine some goal by intro- or elim-resolution, after having inserted any chained facts. Exactly the rules given as arguments are taken into account; this allows fine-tuned decomposition of a proof problem, in contrast to common automated tools. \<^descr> @{method fail} yields an empty result sequence; it is the identity of the ``\|\'' method combinator (cf.\ \secref{sec:proof-meth}). \<^descr> @{method succeed} yields a single (unchanged) result; it is the identity of the ``\,\'' method combinator (cf.\ \secref{sec:proof-meth}). \<^descr> @{method sleep}~\s\ succeeds after a real-time delay of \s\ seconds. This is occasionally useful for demonstration and testing purposes. \begin{matharray}{rcl} @{attribute_def tagged} & : & \attribute\ \\ @{attribute_def untagged} & : & \attribute\ \\[0.5ex] @{attribute_def THEN} & : & \attribute\ \\ @{attribute_def unfolded} & : & \attribute\ \\ @{attribute_def folded} & : & \attribute\ \\ @{attribute_def abs_def} & : & \attribute\ \\[0.5ex] @{attribute_def rotated} & : & \attribute\ \\ @{attribute_def (Pure) elim_format} & : & \attribute\ \\ @{attribute_def no_vars}\\<^sup>*\ & : & \attribute\ \\ \end{matharray} \<^rail>\ @@{attribute tagged} @{syntax name} @{syntax name} ; @@{attribute untagged} @{syntax name} ; @@{attribute THEN} ('[' @{syntax nat} ']')? @{syntax thm} ; (@@{attribute unfolded} | @@{attribute folded}) @{syntax thms} ; @@{attribute rotated} @{syntax int}? \ \<^descr> @{attribute tagged}~\name value\ and @{attribute untagged}~\name\ add and remove \<^emph>\tags\ of some theorem. Tags may be any list of string pairs that serve as formal comment. The first string is considered the tag name, the second its value. Note that @{attribute untagged} removes any tags of the same name. 
\<^descr> @{attribute THEN}~\a\ composes rules by resolution; it resolves with the first premise of \a\ (an alternative position may also be specified). See - also \<^ML_op>\RS\ in @{cite "isabelle-implementation"}. + also \<^ML_infix>\RS\ in @{cite "isabelle-implementation"}. \<^descr> @{attribute unfolded}~\a\<^sub>1 \ a\<^sub>n\ and @{attribute folded}~\a\<^sub>1 \ a\<^sub>n\ expand and fold back again the given definitions throughout a rule. \<^descr> @{attribute abs_def} turns an equation of the form \<^prop>\f x y \ t\ into \<^prop>\f \ \x y. t\, which ensures that @{method simp} steps always expand it. This also works for object-logic equality. \<^descr> @{attribute rotated}~\n\ rotates the premises of a theorem by \n\ (default 1). \<^descr> @{attribute (Pure) elim_format} turns a destruction rule into elimination rule format, by resolving with the rule \<^prop>\PROP A \ (PROP A \ PROP B) \ PROP B\. Note that the Classical Reasoner (\secref{sec:classical}) provides its own version of this operation. \<^descr> @{attribute no_vars} replaces schematic variables by free ones; this is mainly for tuning output of pretty printed theorems. \ subsection \Low-level equational reasoning\ text \ \begin{matharray}{rcl} @{method_def subst} & : & \method\ \\ @{method_def hypsubst} & : & \method\ \\ @{method_def split} & : & \method\ \\ \end{matharray} \<^rail>\ @@{method subst} ('(' 'asm' ')')? \ ('(' (@{syntax nat}+) ')')? @{syntax thm} ; @@{method split} @{syntax thms} \ These methods provide low-level facilities for equational reasoning that are intended for specialized applications only. Normally, single step calculations would be performed in a structured text (see also \secref{sec:calculation}), while the Simplifier methods provide the canonical way for automated normalization (see \secref{sec:simplifier}). \<^descr> @{method subst}~\eq\ performs a single substitution step using rule \eq\, which may be either a meta or object equality. \<^descr> @{method subst}~\(asm) eq\ substitutes in an assumption. \<^descr> @{method subst}~\(i \ j) eq\ performs several substitutions in the conclusion. The numbers \i\ to \j\ indicate the positions to substitute at. Positions are ordered from the top of the term tree moving down from left to right. For example, in \(a + b) + (c + d)\ there are three positions where commutativity of \+\ is applicable: 1 refers to \a + b\, 2 to the whole term, and 3 to \c + d\. If the positions in the list \(i \ j)\ are non-overlapping (e.g.\ \(2 3)\ in \(a + b) + (c + d)\) you may assume all substitutions are performed simultaneously. Otherwise the behaviour of \subst\ is not specified. \<^descr> @{method subst}~\(asm) (i \ j) eq\ performs the substitutions in the assumptions. The positions refer to the assumptions in order from left to right. For example, given a goal of the form \P (a + b) \ P (c + d) \ \\, position 1 of commutativity of \+\ is the subterm \a + b\ and position 2 is the subterm \c + d\. \<^descr> @{method hypsubst} performs substitution using some assumption; this only works for equations of the form \x = t\ where \x\ is a free or bound variable. \<^descr> @{method split}~\a\<^sub>1 \ a\<^sub>n\ performs single-step case splitting using the given rules. Splitting is performed in the conclusion or some assumption of the subgoal, depending on the structure of the rule. Note that the @{method simp} method already involves repeated application of split rules as declared in the current context, using @{attribute split}, for example.
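For concreteness, here is a small sketch in Isabelle/HOL (assuming the standard commutativity fact \add.commute\): the @{method subst} method rewrites a single position of the conclusion, after which the rewritten goal can be discharged by the assumption:

    notepad
    begin
      fix a b :: nat
      assume *: "a + b = 0"
      have "b + a = 0"
        by (subst add.commute) (fact *)
    end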
\ section \The Simplifier \label{sec:simplifier}\ text \ The Simplifier performs conditional and unconditional rewriting and uses contextual information: rule declarations in the background theory or local proof context are taken into account, as well as chained facts and subgoal premises (``local assumptions''). There are several general hooks that allow to modify the simplification strategy, or incorporate other proof tools that solve sub-problems, produce rewrite rules on demand etc. The rewriting strategy is always strictly bottom up, except for congruence rules, which are applied while descending into a term. Conditions in conditional rewrite rules are solved recursively before the rewrite rule is applied. The default Simplifier setup of major object logics (HOL, HOLCF, FOL, ZF) makes the Simplifier ready for immediate use, without engaging into the internal structures. Thus it serves as a general-purpose proof tool with the main focus on equational reasoning, and a bit more than that. \ subsection \Simplification methods \label{sec:simp-meth}\ text \ \begin{tabular}{rcll} @{method_def simp} & : & \method\ \\ @{method_def simp_all} & : & \method\ \\ \Pure.\@{method_def (Pure) simp} & : & \method\ \\ \Pure.\@{method_def (Pure) simp_all} & : & \method\ \\ @{attribute_def simp_depth_limit} & : & \attribute\ & default \100\ \\ \end{tabular} \<^medskip> \<^rail>\ (@@{method simp} | @@{method simp_all}) opt? (@{syntax simpmod} * ) ; opt: '(' ('no_asm' | 'no_asm_simp' | 'no_asm_use' | 'asm_lr' ) ')' ; @{syntax_def simpmod}: ('add' | 'del' | 'flip' | 'only' | 'split' (() | '!' | 'del') | 'cong' (() | 'add' | 'del')) ':' @{syntax thms} \ \<^descr> @{method simp} invokes the Simplifier on the first subgoal, after inserting chained facts as additional goal premises; further rule declarations may be included via \(simp add: facts)\. The proof method fails if the subgoal remains unchanged after simplification. Note that the original goal premises and chained facts are subject to simplification themselves, while declarations via \add\/\del\ merely follow the policies of the object-logic to extract rewrite rules from theorems, without further simplification. This may lead to slightly different behavior in either case, which might be required precisely like that in some boundary situations to perform the intended simplification step! \<^medskip> Modifier \flip\ deletes the following theorems from the simpset and adds their symmetric version (i.e.\ lhs and rhs exchanged). No warning is shown if the original theorem was not present. \<^medskip> The \only\ modifier first removes all other rewrite rules, looper tactics (including split rules), congruence rules, and then behaves like \add\. Implicit solvers remain, which means that trivial rules like reflexivity or introduction of \True\ are available to solve the simplified subgoals, but also non-trivial tools like linear arithmetic in HOL. The latter may lead to some surprise about the meaning of ``only'' in Isabelle/HOL compared to English! \<^medskip> The \split\ modifiers add or delete rules for the Splitter (see also \secref{sec:simp-strategies} on the looper). This works only if the Simplifier method has been properly set up to include the Splitter (all major object logics such as HOL, HOLCF, FOL, ZF do this already). The \!\ option causes the split rules to be used aggressively: after each application of a split rule in the conclusion, the \safe\ tactic of the classical reasoner (see \secref{sec:classical:partial}) is applied to the new goal.
The net effect is that the goal is split into the different cases. This option can speed up simplification of goals with many nested conditional or case expressions significantly. There is also a separate @{method_ref split} method available for single-step case splitting. The effect of repeatedly applying \(split thms)\ can be imitated by ``\(simp only: split: thms)\''. \<^medskip> The \cong\ modifiers add or delete Simplifier congruence rules (see also \secref{sec:simp-rules}); the default is to add. \<^descr> @{method simp_all} is similar to @{method simp}, but acts on all goals, working backwards from the last to the first one as usual in Isabelle.\<^footnote>\The order is irrelevant for goals without schematic variables, so simplification might actually be performed in parallel here.\ Chained facts are inserted into all subgoals, before the simplification process starts. Further rule declarations are the same as for @{method simp}. The proof method fails if all subgoals remain unchanged after simplification. \<^descr> @{attribute simp_depth_limit} limits the number of recursive invocations of the Simplifier during conditional rewriting. By default the Simplifier methods above take local assumptions fully into account, using equational assumptions in the subsequent normalization process, or simplifying assumptions themselves. Further options allow to fine-tune the behavior of the Simplifier in this respect, corresponding to a variety of ML tactics as follows.\<^footnote>\Unlike the corresponding Isar proof methods, the ML tactics do not insist on changing the goal state.\ \begin{center} \small \begin{tabular}{|l|l|p{0.3\textwidth}|} \hline Isar method & ML tactic & behavior \\\hline \(simp (no_asm))\ & \<^ML>\simp_tac\ & assumptions are ignored completely \\\hline \(simp (no_asm_simp))\ & \<^ML>\asm_simp_tac\ & assumptions are used in the simplification of the conclusion but are not themselves simplified \\\hline \(simp (no_asm_use))\ & \<^ML>\full_simp_tac\ & assumptions are simplified but are not used in the simplification of each other or the conclusion \\\hline \(simp)\ & \<^ML>\asm_full_simp_tac\ & assumptions are used in the simplification of the conclusion and to simplify other assumptions \\\hline \(simp (asm_lr))\ & \<^ML>\asm_lr_simp_tac\ & compatibility mode: an assumption is only used for simplifying assumptions which are to the right of it \\\hline \end{tabular} \end{center} \<^medskip> In Isabelle/Pure, proof methods @{method (Pure) simp} and @{method (Pure) simp_all} only know about meta-equality \\\. Any new object-logic needs to re-define these methods via \<^ML>\Simplifier.method_setup\ in ML: Isabelle/FOL or Isabelle/HOL may serve as blue-prints. \ subsubsection \Examples\ text \ We consider basic algebraic simplifications in Isabelle/HOL. The rather trivial goal \<^prop>\0 + (x + 0) = x + 0 + 0\ looks like a good candidate to be solved by a single call of @{method simp}: \ lemma "0 + (x + 0) = x + 0 + 0" apply simp? oops text \ The above attempt \<^emph>\fails\, because \<^term>\0\ and \<^term>\(+)\ in the HOL library are declared as generic type class operations, without stating any algebraic laws yet.
More specific types are required to get access to certain standard simplifications of the theory context, e.g.\ like this:\ lemma fixes x :: nat shows "0 + (x + 0) = x + 0 + 0" by simp lemma fixes x :: int shows "0 + (x + 0) = x + 0 + 0" by simp lemma fixes x :: "'a :: monoid_add" shows "0 + (x + 0) = x + 0 + 0" by simp text \ \<^medskip> In many cases, assumptions of a subgoal are also needed in the simplification process. For example: \ lemma fixes x :: nat shows "x = 0 \ x + x = 0" by simp lemma fixes x :: nat assumes "x = 0" shows "x + x = 0" apply simp oops lemma fixes x :: nat assumes "x = 0" shows "x + x = 0" using assms by simp text \ As seen above, local assumptions that shall contribute to simplification need to be part of the subgoal already, or indicated explicitly for use by the subsequent method invocation. Either too little or too much information can make simplification fail, for different reasons. In the next example the malicious assumption \<^prop>\\x::nat. f x = g (f (g x))\ does not contribute to solving the problem, but makes the default @{method simp} method loop: the rewrite rule \f ?x \ g (f (g ?x))\ extracted from the assumption does not terminate. The Simplifier notices certain simple forms of nontermination, but not this one. The problem can be solved nonetheless, by ignoring assumptions via special options as explained before: \ lemma "(\x::nat. f x = g (f (g x))) \ f 0 = f 0 + 0" by (simp (no_asm)) text \ The latter form is typical for long unstructured proof scripts, where the control over the goal content is limited. In structured proofs it is usually better to avoid pushing too many facts into the goal state in the first place. Assumptions in the Isar proof context do not intrude on the reasoning if not used explicitly. This is illustrated for a toplevel statement and a local proof body as follows: \ lemma assumes "\x::nat. f x = g (f (g x))" shows "f 0 = f 0 + 0" by simp notepad begin assume "\x::nat. f x = g (f (g x))" have "f 0 = f 0 + 0" by simp end text \ \<^medskip> Because assumptions may simplify each other, there can be very subtle cases of nontermination. For example, the regular @{method simp} method applied to \<^prop>\P (f x) \ y = x \ f x = f y \ Q\ gives rise to the infinite reduction sequence \[ \P (f x)\ \stackrel{\f x \ f y\}{\longmapsto} \P (f y)\ \stackrel{\y \ x\}{\longmapsto} \P (f x)\ \stackrel{\f x \ f y\}{\longmapsto} \cdots \] whereas applying the same to \<^prop>\y = x \ f x = f y \ P (f x) \ Q\ terminates (without solving the goal): \ lemma "y = x \ f x = f y \ P (f x) \ Q" apply simp oops text \ See also \secref{sec:simp-trace} for options to enable Simplifier trace mode, which often helps to diagnose problems with rewrite systems. \ subsection \Declaring rules \label{sec:simp-rules}\ text \ \begin{matharray}{rcl} @{attribute_def simp} & : & \attribute\ \\ @{attribute_def split} & : & \attribute\ \\ @{attribute_def cong} & : & \attribute\ \\ @{command_def "print_simpset"}\\<^sup>*\ & : & \context \\ \\ \end{matharray} \<^rail>\ (@@{attribute simp} | @@{attribute cong}) (() | 'add' | 'del') | @@{attribute split} (() | '!' | 'del') ; @@{command print_simpset} ('!'?) \ \<^descr> @{attribute simp} declares rewrite rules, by adding or deleting them from the simpset within the theory or proof context. Rewrite rules are theorems expressing some form of equality, for example: \Suc ?m + ?n = ?m + Suc ?n\ \\ \?P \ ?P \ ?P\ \\ \?A \ ?B \ {x.
x \ ?A \ x \ ?B}\ \<^medskip> Conditional rewrites such as \?m < ?n \ ?m div ?n = 0\ are also permitted; the conditions can be arbitrary formulas. \<^medskip> Internally, all rewrite rules are translated into Pure equalities, theorems with conclusion \lhs \ rhs\. The simpset contains a function for extracting equalities from arbitrary theorems, which is usually installed when the object-logic is configured initially. For example, \\ ?x \ {}\ could be turned into \?x \ {} \ False\. Theorems that are declared as @{attribute simp} and local assumptions within a goal are treated uniformly in this respect. The Simplifier accepts the following formats for the \lhs\ term: \<^enum> First-order patterns, considering the sublanguage of application of constant operators to variable operands, without \\\-abstractions or functional variables. For example: \(?x + ?y) + ?z \ ?x + (?y + ?z)\ \\ \f (f ?x ?y) ?z \ f ?x (f ?y ?z)\ \<^enum> Higher-order patterns in the sense of @{cite "nipkow-patterns"}. These are terms in \\\-normal form (this will always be the case unless you have done something strange) where each occurrence of an unknown is of the form \?F x\<^sub>1 \ x\<^sub>n\, where the \x\<^sub>i\ are distinct bound variables. For example, \(\x. ?P x \ ?Q x) \ (\x. ?P x) \ (\x. ?Q x)\ or its symmetric form, since the \rhs\ is also a higher-order pattern. \<^enum> Physical first-order patterns over raw \\\-term structure without \\\\\-equality; abstractions and bound variables are treated like quasi-constant term material. For example, the rule \?f ?x \ range ?f = True\ rewrites the term \g a \ range g\ to \True\, but will fail to match \g (h b) \ range (\x. g (h x))\. However, offending subterms (in our case \?f ?x\, which is not a pattern) can be replaced by adding new variables and conditions like this: \?y = ?f ?x \ ?y \ range ?f = True\ is acceptable as a conditional rewrite rule of the second category since conditions can be arbitrary terms. \<^descr> @{attribute split} declares case split rules. \<^descr> @{attribute cong} declares congruence rules to the Simplifier context. Congruence rules are equalities of the form @{text [display] "\ \ f ?x\<^sub>1 \ ?x\<^sub>n = f ?y\<^sub>1 \ ?y\<^sub>n"} This controls the simplification of the arguments of \f\. For example, some arguments can be simplified under additional assumptions: @{text [display] "?P\<^sub>1 \ ?Q\<^sub>1 \ (?Q\<^sub>1 \ ?P\<^sub>2 \ ?Q\<^sub>2) \ (?P\<^sub>1 \ ?P\<^sub>2) \ (?Q\<^sub>1 \ ?Q\<^sub>2)"} Given this rule, the Simplifier assumes \?Q\<^sub>1\ and extracts rewrite rules from it when simplifying \?P\<^sub>2\. Such local assumptions are effective for rewriting formulae such as \x = 0 \ y + x = y\. %FIXME %The local assumptions are also provided as theorems to the solver; %see \secref{sec:simp-solver} below. \<^medskip> The following congruence rule for bounded quantifiers also supplies contextual information --- about the bound variable: @{text [display] "(?A = ?B) \ (\x. x \ ?B \ ?P x \ ?Q x) \ (\x \ ?A. ?P x) \ (\x \ ?B. ?Q x)"} \<^medskip> This congruence rule for conditional expressions can supply contextual information for simplifying the arms: @{text [display] "?p = ?q \ (?q \ ?a = ?c) \ (\ ?q \ ?b = ?d) \ (if ?p then ?a else ?b) = (if ?q then ?c else ?d)"} A congruence rule can also \<^emph>\prevent\ simplification of some arguments. 
Here is an alternative congruence rule for conditional expressions that conforms to non-strict functional evaluation: @{text [display] "?p = ?q \ (if ?p then ?a else ?b) = (if ?q then ?a else ?b)"} Only the first argument is simplified; the others remain unchanged. This can make simplification much faster, but may require an extra case split over the condition \?q\ to prove the goal. \<^descr> @{command "print_simpset"} prints the collection of rules declared to the Simplifier, which is also known as ``simpset'' internally; the ``\!\'' option indicates extra verbosity. The implicit simpset of the theory context is propagated monotonically through the theory hierarchy: forming a new theory, the union of the simpsets of its imports is taken as starting point. Also note that definitional packages like @{command "datatype"}, @{command "primrec"}, @{command "fun"} routinely declare Simplifier rules to the target context, while plain @{command "definition"} is an exception in \<^emph>\not\ declaring anything. \<^medskip> It is up to the user to manipulate the current simpset further by explicitly adding or deleting theorems as simplification rules, or installing other tools via simplification procedures (\secref{sec:simproc}). Good simpsets are hard to design. Rules that obviously simplify, like \?n + 0 \ ?n\ are good candidates for the implicit simpset, unless a special non-normalizing behavior of certain operations is intended. More specific rules (such as distributive laws, which duplicate subterms) should be added only for specific proof steps. Conversely, sometimes a rule needs to be deleted just for some part of a proof. The need for frequent additions or deletions may indicate a poorly designed simpset. \begin{warn} The union of simpsets from theory imports (as described above) is not always a good starting point for the new theory. If some ancestors have deleted simplification rules because they are no longer wanted, while others have left those rules in, then the union will contain the unwanted rules, which have to be deleted again in the theory body. \end{warn} \ subsection \Ordered rewriting with permutative rules\ text \ A rewrite rule is \<^emph>\permutative\ if the left-hand side and right-hand side are equal up to renaming of variables. The most common permutative rule is commutativity: \?x + ?y = ?y + ?x\. Other examples include \(?x - ?y) - ?z = (?x - ?z) - ?y\ in arithmetic and \insert ?x (insert ?y ?A) = insert ?y (insert ?x ?A)\ for sets. Such rules are common enough to merit special attention. Because ordinary rewriting loops given such rules, the Simplifier employs a special strategy, called \<^emph>\ordered rewriting\. Permutative rules are detected and only applied if the rewriting step decreases the redex wrt.\ a given term ordering. For example, commutativity rewrites \b + a\ to \a + b\, but then stops, because the redex cannot be decreased further in the sense of the term ordering. The default is lexicographic ordering of term structure, but this could be - also changed locally for special applications via @{index_ML + also changed locally for special applications via @{define_ML Simplifier.set_term_ord} in Isabelle/ML. \<^medskip> Permutative rewrite rules are declared to the Simplifier just like other rewrite rules. Their special status is recognized automatically, and their application is guarded by the term ordering accordingly.
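As a minimal sketch in Isabelle/HOL (assuming the standard rule \add.commute\), a permutative rule can be supplied to @{method simp} without causing a loop: the commutative step is taken only while it decreases the redex in the term ordering:

    lemma fixes a b :: nat shows "b + a = a + b"
      by (simp add: add.commute)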
\ subsubsection \Rewriting with AC operators\ text \ Ordered rewriting is particularly effective in the case of associative-commutative operators. (Associativity by itself is not permutative.) When dealing with an AC-operator \f\, keep the following points in mind: \<^item> The associative law must always be oriented from left to right, namely \f (f x y) z = f x (f y z)\. The opposite orientation, if used with commutativity, leads to looping in conjunction with the standard term order. \<^item> To complete your set of rewrite rules, you must add not just associativity (A) and commutativity (C) but also a derived rule \<^emph>\left-commutativity\ (LC): \f x (f y z) = f y (f x z)\. Ordered rewriting with the combination of A, C, and LC sorts a term lexicographically --- the rewriting engine imitates bubble-sort. \ experiment fixes f :: "'a \ 'a \ 'a" (infix "\" 60) assumes assoc: "(x \ y) \ z = x \ (y \ z)" assumes commute: "x \ y = y \ x" begin lemma left_commute: "x \ (y \ z) = y \ (x \ z)" proof - have "(x \ y) \ z = (y \ x) \ z" by (simp only: commute) then show ?thesis by (simp only: assoc) qed lemmas AC_rules = assoc commute left_commute text \ Thus the Simplifier is able to establish equalities with arbitrary permutations of subterms, by normalizing to a common standard form. For example: \ lemma "(b \ c) \ a = xxx" apply (simp only: AC_rules) txt \\<^subgoals>\ oops lemma "(b \ c) \ a = a \ (b \ c)" by (simp only: AC_rules) lemma "(b \ c) \ a = c \ (b \ a)" by (simp only: AC_rules) lemma "(b \ c) \ a = (c \ b) \ a" by (simp only: AC_rules) end text \ Martin and Nipkow @{cite "martin-nipkow"} discuss the theory and give many examples; other algebraic structures, such as Boolean rings, are also amenable to ordered rewriting. The Boyer-Moore theorem prover @{cite bm88book} also employs ordered rewriting. \ subsubsection \Re-orienting equalities\ text \Another application of ordered rewriting uses the derived rule @{thm [source] eq_commute}: @{thm [source = false] eq_commute} to reverse equations. This is occasionally useful to re-orient local assumptions according to the term ordering, when other built-in mechanisms of reorientation and mutual simplification fail to apply.\ subsection \Simplifier tracing and debugging \label{sec:simp-trace}\ text \ \begin{tabular}{rcll} @{attribute_def simp_trace} & : & \attribute\ & default \false\ \\ @{attribute_def simp_trace_depth_limit} & : & \attribute\ & default \1\ \\ @{attribute_def simp_debug} & : & \attribute\ & default \false\ \\ @{attribute_def simp_trace_new} & : & \attribute\ \\ @{attribute_def simp_break} & : & \attribute\ \\ \end{tabular} \<^medskip> \<^rail>\ @@{attribute simp_trace_new} ('interactive')? \ ('mode' '=' ('full' | 'normal'))? \ ('depth' '=' @{syntax nat})? ; @@{attribute simp_break} (@{syntax term}*) \ These attributes and configuration options control various aspects of Simplifier tracing and debugging. \<^descr> @{attribute simp_trace} makes the Simplifier output internal operations. This includes rewrite steps, but also bookkeeping like modifications of the simpset. \<^descr> @{attribute simp_trace_depth_limit} limits the effect of @{attribute simp_trace} to the given depth of recursive Simplifier invocations (when solving conditions of rewrite rules). \<^descr> @{attribute simp_debug} makes the Simplifier output some extra information about internal operations. This includes any attempted invocation of simplification procedures.
\<^descr> @{attribute simp_trace_new} controls Simplifier tracing within Isabelle/PIDE applications, notably Isabelle/jEdit @{cite "isabelle-jedit"}. This provides a hierarchical representation of the rewriting steps performed by the Simplifier. Users can configure the behaviour by specifying breakpoints and verbosity, and by enabling or disabling the interactive mode. In normal verbosity (the default), only rule applications matching a breakpoint will be shown to the user. In full verbosity, all rule applications will be logged. Interactive mode interrupts the normal flow of the Simplifier and defers the decision of how to continue to the user via some GUI dialog. \<^descr> @{attribute simp_break} declares term or theorem breakpoints for @{attribute simp_trace_new} as described above. Term breakpoints are patterns which are checked for matches on the redex of a rule application. Theorem breakpoints trigger when the corresponding theorem is applied in a rewrite step. For example: \ (*<*)experiment begin(*>*) declare conjI [simp_break] declare [[simp_break "?x \ ?y"]] (*<*)end(*>*) subsection \Simplification procedures \label{sec:simproc}\ text \ Simplification procedures are ML functions that produce proven rewrite rules on demand. They are associated with higher-order patterns that approximate the left-hand sides of equations. The Simplifier first matches the current redex against one of the LHS patterns; if this succeeds, the corresponding ML function is invoked, passing the Simplifier context and redex term. Thus rules may be specifically fashioned for particular situations, resulting in a more powerful mechanism than term rewriting by a fixed set of rules. Any successful result needs to be a (possibly conditional) rewrite rule \t \ u\ that is applicable to the current redex. The rule will be applied just as any ordinary rewrite rule. It is expected to be already in \<^emph>\internal form\, bypassing the automatic preprocessing of object-level equivalences. \begin{matharray}{rcl} @{command_def "simproc_setup"} & : & \local_theory \ local_theory\ \\ simproc & : & \attribute\ \\ \end{matharray} \<^rail>\ @@{command simproc_setup} @{syntax name} '(' (@{syntax term} + '|') ')' '=' @{syntax text}; @@{attribute simproc} (('add' ':')? | 'del' ':') (@{syntax name}+) \ \<^descr> @{command "simproc_setup"} defines a named simplification procedure that is invoked by the Simplifier whenever any of the given term patterns match the current redex. The implementation, which is provided as ML source text, needs to be of type \<^ML_type>\morphism -> Proof.context -> cterm -> thm option\, where the \<^ML_type>\cterm\ represents the current redex \r\ and the result is supposed to be some proven rewrite rule \r \ r'\ (or a generalized version), or \<^ML>\NONE\ to indicate failure. The \<^ML_type>\Proof.context\ argument holds the full context of the current Simplifier invocation. The \<^ML_type>\morphism\ informs about the difference of the original compilation context wrt.\ the one of the actual application later on. Morphisms are only relevant for simprocs that are defined within a local target context, e.g.\ in a locale. \<^descr> \simproc add: name\ and \simproc del: name\ add or delete named simprocs in the current Simplifier context. The default is to add a simproc. Note that @{command "simproc_setup"} already adds the new simproc to the subsequent context.
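For example, assuming the simproc \unit_eq\ from Isabelle/HOL (a variant of it is shown in the example below), such declarations might look like this:
\

(*<*)experiment begin(*>*)
declare [[simproc del: unit_eq]]
declare [[simproc add: unit_eq]]
(*<*)end(*>*)

text \
  The \add\ declaration is redundant here, since deletion followed by addition restores the original context; it merely illustrates the syntax.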
\ subsubsection \Example\ text \ The following simplification procedure for @{thm [source = false, show_types] unit_eq} in HOL performs fine-grained control over rule application, beyond higher-order pattern matching. Declaring @{thm unit_eq} as @{attribute simp} directly would make the Simplifier loop! Note that a version of this simplification procedure is already active in Isabelle/HOL. \ (*<*)experiment begin(*>*) simproc_setup unit ("x::unit") = \fn _ => fn _ => fn ct => if HOLogic.is_unit (Thm.term_of ct) then NONE else SOME (mk_meta_eq @{thm unit_eq})\ (*<*)end(*>*) text \ Since the Simplifier applies simplification procedures frequently, it is important to make the failure check in ML reasonably fast.\ subsection \Configurable Simplifier strategies \label{sec:simp-strategies}\ text \ The core term-rewriting engine of the Simplifier is normally used in combination with some add-on components that modify the strategy and allow to integrate other non-Simplifier proof tools. These may be reconfigured in ML as explained below. Even if the default strategies of object-logics like Isabelle/HOL are used unchanged, it helps to understand how the standard Simplifier strategies work.\ subsubsection \The subgoaler\ text \ \begin{mldecls} - @{index_ML Simplifier.set_subgoaler: "(Proof.context -> int -> tactic) -> + @{define_ML Simplifier.set_subgoaler: "(Proof.context -> int -> tactic) -> Proof.context -> Proof.context"} \\ - @{index_ML Simplifier.prems_of: "Proof.context -> thm list"} \\ + @{define_ML Simplifier.prems_of: "Proof.context -> thm list"} \\ \end{mldecls} The subgoaler is the tactic used to solve subgoals arising out of conditional rewrite rules or congruence rules. The default should be simplification itself. In rare situations, this strategy may need to be changed. For example, if the premise of a conditional rule is an instance of its conclusion, as in \Suc ?m < ?n \ ?m < ?n\, the default strategy could loop. % FIXME !?? \<^descr> \<^ML>\Simplifier.set_subgoaler\~\tac ctxt\ sets the subgoaler of the context to \tac\. The tactic will be applied to the context of the running Simplifier instance. \<^descr> \<^ML>\Simplifier.prems_of\~\ctxt\ retrieves the current set of premises from the context. This may be non-empty only if the Simplifier has been told to utilize local assumptions in the first place (cf.\ the options in \secref{sec:simp-meth}). 
As an example, consider the following alternative subgoaler: \ ML_val \ fun subgoaler_tac ctxt = assume_tac ctxt ORELSE' resolve_tac ctxt (Simplifier.prems_of ctxt) ORELSE' asm_simp_tac ctxt \ text \ This tactic first tries to solve the subgoal by assumption or by resolving with one of the premises, calling simplification only if that fails.\ subsubsection \The solver\ text \ \begin{mldecls} - @{index_ML_type solver} \\ - @{index_ML Simplifier.mk_solver: "string -> + @{define_ML_type solver} \\ + @{define_ML Simplifier.mk_solver: "string -> (Proof.context -> int -> tactic) -> solver"} \\ - @{index_ML_op setSolver: "Proof.context * solver -> Proof.context"} \\ - @{index_ML_op addSolver: "Proof.context * solver -> Proof.context"} \\ - @{index_ML_op setSSolver: "Proof.context * solver -> Proof.context"} \\ - @{index_ML_op addSSolver: "Proof.context * solver -> Proof.context"} \\ + @{define_ML_infix setSolver: "Proof.context * solver -> Proof.context"} \\ + @{define_ML_infix addSolver: "Proof.context * solver -> Proof.context"} \\ + @{define_ML_infix setSSolver: "Proof.context * solver -> Proof.context"} \\ + @{define_ML_infix addSSolver: "Proof.context * solver -> Proof.context"} \\ \end{mldecls} A solver is a tactic that attempts to solve a subgoal after simplification. Its core functionality is to prove trivial subgoals such as \<^prop>\True\ and \t = t\, but object-logics might be more ambitious. For example, Isabelle/HOL performs a restricted version of linear arithmetic here. Solvers are packaged up in abstract type \<^ML_type>\solver\, with \<^ML>\Simplifier.mk_solver\ as the only operation to create a solver. \<^medskip> Rewriting does not instantiate unknowns. For example, rewriting alone cannot prove \a \ ?A\ since this requires instantiating \?A\. The solver, however, is an arbitrary tactic and may instantiate unknowns as it pleases. This is the only way the Simplifier can handle a conditional rewrite rule whose condition contains extra variables. When a simplification tactic is to be combined with other provers, especially with the Classical Reasoner, it is important whether it can be considered safe or not. For this reason a simpset contains two solvers: safe and unsafe. The standard simplification strategy solely uses the unsafe solver, which is appropriate in most cases. For special applications where the simplification process is not allowed to instantiate unknowns within the goal, simplification starts with the safe solver, but may still apply the ordinary unsafe one in nested simplifications for conditional rules or congruences. Note that in this way the overall tactic is not totally safe: it may instantiate unknowns that appear also in other subgoals. \<^descr> \<^ML>\Simplifier.mk_solver\~\name tac\ turns \tac\ into a solver; the \name\ is only attached as a comment and has no further significance. \<^descr> \ctxt setSSolver solver\ installs \solver\ as the safe solver of \ctxt\. \<^descr> \ctxt addSSolver solver\ adds \solver\ as an additional safe solver; it will be tried after the solvers which had already been present in \ctxt\. \<^descr> \ctxt setSolver solver\ installs \solver\ as the unsafe solver of \ctxt\. \<^descr> \ctxt addSolver solver\ adds \solver\ as an additional unsafe solver; it will be tried after the solvers which had already been present in \ctxt\. \<^medskip> The solver tactic is invoked with the context of the running Simplifier.
Further operations may be used to retrieve relevant information, such as the list of local Simplifier premises via \<^ML>\Simplifier.prems_of\ --- this list may be non-empty only if the Simplifier runs in a mode that utilizes local assumptions (see also \secref{sec:simp-meth}). The solver is also presented the full goal including its assumptions in any case. Thus it can use these (e.g.\ by calling \<^ML>\assume_tac\), even if the Simplifier proper happens to ignore local premises at the moment. \<^medskip> As explained before, the subgoaler is also used to solve the premises of congruence rules. These are usually of the form \s = ?x\, where \s\ needs to be simplified and \?x\ needs to be instantiated with the result. Typically, the subgoaler will invoke the Simplifier at some point, which will eventually call the solver. For this reason, solver tactics must be prepared to solve goals of the form \t = ?x\, usually by reflexivity. In particular, reflexivity should be tried before any of the fancy automated proof tools. It may even happen that due to simplification the subgoal is no longer an equality. For example, \False \ ?Q\ could be rewritten to \\ ?Q\. To cover this case, the solver could try resolving with the theorem \\ False\ of the object-logic. \<^medskip> \begin{warn} If a premise of a congruence rule cannot be proved, then the congruence is ignored. This should only happen if the rule is \<^emph>\conditional\ --- that is, contains premises not of the form \t = ?x\. Otherwise it indicates that some congruence rule, or possibly the subgoaler or solver, is faulty. \end{warn} \ subsubsection \The looper\ text \ \begin{mldecls} - @{index_ML_op setloop: "Proof.context * + @{define_ML_infix setloop: "Proof.context * (Proof.context -> int -> tactic) -> Proof.context"} \\ - @{index_ML_op addloop: "Proof.context * + @{define_ML_infix addloop: "Proof.context * (string * (Proof.context -> int -> tactic)) -> Proof.context"} \\ - @{index_ML_op delloop: "Proof.context * string -> Proof.context"} \\ - @{index_ML Splitter.add_split: "thm -> Proof.context -> Proof.context"} \\ - @{index_ML Splitter.add_split: "thm -> Proof.context -> Proof.context"} \\ - @{index_ML Splitter.add_split_bang: " + @{define_ML_infix delloop: "Proof.context * string -> Proof.context"} \\ + @{define_ML Splitter.add_split: "thm -> Proof.context -> Proof.context"} \\ + @{define_ML Splitter.add_split_bang: " thm -> Proof.context -> Proof.context"} \\ - @{index_ML Splitter.del_split: "thm -> Proof.context -> Proof.context"} \\ + @{define_ML Splitter.del_split: "thm -> Proof.context -> Proof.context"} \\ \end{mldecls} The looper is a list of tactics that are applied after simplification, in case the solver failed to solve the simplified goal. If the looper succeeds, the simplification process is started all over again. Each of the subgoals generated by the looper is attacked in turn, in reverse order. A typical looper is \<^emph>\case splitting\: the expansion of a conditional. Another possibility is to apply an elimination rule on the assumptions. More adventurous loopers could start an induction. \<^descr> \ctxt setloop tac\ installs \tac\ as the only looper tactic of \ctxt\. \<^descr> \ctxt addloop (name, tac)\ adds \tac\ as an additional looper tactic with name \name\, which is significant for managing the collection of loopers. The tactic will be tried after the looper tactics that had already been present in \ctxt\.
\<^descr> \ctxt delloop name\ deletes the looper tactic that was associated with \name\ from \ctxt\. \<^descr> \<^ML>\Splitter.add_split\~\thm ctxt\ adds split tactic for \thm\ as additional looper tactic of \ctxt\ (overwriting previous split tactic for the same constant). \<^descr> \<^ML>\Splitter.add_split_bang\~\thm ctxt\ adds aggressive (see \S\ref{sec:simp-meth}) split tactic for \thm\ as additional looper tactic of \ctxt\ (overwriting previous split tactic for the same constant). \<^descr> \<^ML>\Splitter.del_split\~\thm ctxt\ deletes the split tactic corresponding to \thm\ from the looper tactics of \ctxt\. The splitter replaces applications of a given function; the right-hand side of the replacement can be anything. For example, here is a splitting rule for conditional expressions: @{text [display] "?P (if ?Q ?x ?y) \ (?Q \ ?P ?x) \ (\ ?Q \ ?P ?y)"} Another example is the elimination operator for Cartesian products (which happens to be called \<^const>\case_prod\ in Isabelle/HOL): @{text [display] "?P (case_prod ?f ?p) \ (\a b. ?p = (a, b) \ ?P (?f a b))"} For technical reasons, there is a distinction between case splitting in the conclusion and in the premises of a subgoal. The former is done by \<^ML>\Splitter.split_tac\ with rules like @{thm [source] if_split} or @{thm [source] option.split}, which do not split the subgoal, while the latter is done by \<^ML>\Splitter.split_asm_tac\ with rules like @{thm [source] if_split_asm} or @{thm [source] option.split_asm}, which split the subgoal. The function \<^ML>\Splitter.add_split\ automatically takes care of which tactic to call, analyzing the form of the rules given as argument; it is the same operation behind the \split\ attribute or method modifier syntax in the Isar source language. Case splits should be allowed only when necessary; they are expensive and hard to control. Case-splitting on if-expressions in the conclusion is usually beneficial, so it is enabled by default in Isabelle/HOL and Isabelle/FOL/ZF. \begin{warn} With \<^ML>\Splitter.split_asm_tac\ as looper component, the Simplifier may split subgoals! This might cause unexpected problems in tactic expressions that silently assume 0 or 1 subgoals after simplification. \end{warn} \ subsection \Forward simplification \label{sec:simp-forward}\ text \ \begin{matharray}{rcl} @{attribute_def simplified} & : & \attribute\ \\ \end{matharray} \<^rail>\ @@{attribute simplified} opt? @{syntax thms}? ; opt: '(' ('no_asm' | 'no_asm_simp' | 'no_asm_use') ')' \ \<^descr> @{attribute simplified}~\a\<^sub>1 \ a\<^sub>n\ causes a theorem to be simplified, either by exactly the specified rules \a\<^sub>1, \, a\<^sub>n\, or the implicit Simplifier context if no arguments are given. The result is fully simplified by default, including assumptions and conclusion; the options \no_asm\ etc.\ tune the Simplifier in the same way as for the \simp\ method. Note that forward simplification restricts the Simplifier to its most basic operation of term rewriting; solver and looper tactics (\secref{sec:simp-strategies}) are \<^emph>\not\ involved here. The @{attribute simplified} attribute should be only rarely required under normal circumstances. \ section \The Classical Reasoner \label{sec:classical}\ subsection \Basic concepts\ text \Although Isabelle is generic, many users will be working in some extension of classical first-order logic. Isabelle/ZF is built upon theory FOL, while Isabelle/HOL conceptually contains first-order logic as a fragment.
Theorem-proving in predicate logic is undecidable, but many automated strategies have been developed to assist in this task. Isabelle's classical reasoner is a generic package that accepts certain information about a logic and delivers a suite of automatic proof tools, based on rules that are classified and declared in the context. These proof procedures are slow and simplistic compared with high-end automated theorem provers, but they can save considerable time and effort in practice. They can prove theorems such as Pelletier's @{cite pelletier86} problems 40 and 41 in a few milliseconds (including full proof reconstruction):\ lemma "(\y. \x. F x y \ F x x) \ \ (\x. \y. \z. F z y \ \ F z x)" by blast lemma "(\z. \y. \x. f x y \ f x z \ \ f x x) \ \ (\z. \x. f x z)" by blast text \The proof tools are generic. They are not restricted to first-order logic, and have been heavily used in the development of the Isabelle/HOL library and applications. The tactics can be traced, and their components can be called directly; in this manner, any proof can be viewed interactively.\ subsubsection \The sequent calculus\ text \Isabelle supports natural deduction, which is easy to use for interactive proof. But natural deduction does not easily lend itself to automation, and has a bias towards intuitionism. For certain proofs in classical logic, it cannot be called natural. The \<^emph>\sequent calculus\, a generalization of natural deduction, is easier to automate. A \<^bold>\sequent\ has the form \\ \ \\, where \\\ and \\\ are sets of formulae.\<^footnote>\For first-order logic, sequents can equivalently be made from lists or multisets of formulae.\ The sequent \P\<^sub>1, \, P\<^sub>m \ Q\<^sub>1, \, Q\<^sub>n\ is \<^bold>\valid\ if \P\<^sub>1 \ \ \ P\<^sub>m\ implies \Q\<^sub>1 \ \ \ Q\<^sub>n\. Thus \P\<^sub>1, \, P\<^sub>m\ represent assumptions, each of which is true, while \Q\<^sub>1, \, Q\<^sub>n\ represent alternative goals. A sequent is \<^bold>\basic\ if its left and right sides have a common formula, as in \P, Q \ Q, R\; basic sequents are trivially valid. Sequent rules are classified as \<^bold>\right\ or \<^bold>\left\, indicating which side of the \\\ symbol they operate on. Rules that operate on the right side are analogous to natural deduction's introduction rules, and left rules are analogous to elimination rules. The sequent calculus analogue of \(\I)\ is the rule \[ \infer[\(\R)\]{\\ \ \, P \ Q\}{\P, \ \ \, Q\} \] Applying the rule backwards, this breaks down some implication on the right side of a sequent; \\\ and \\\ stand for the sets of formulae that are unaffected by the inference. The analogue of the pair \(\I1)\ and \(\I2)\ is the single rule \[ \infer[\(\R)\]{\\ \ \, P \ Q\}{\\ \ \, P, Q\} \] This breaks down some disjunction on the right side, replacing it by both disjuncts. Thus, the sequent calculus is a kind of multiple-conclusion logic. To illustrate the use of multiple formulae on the right, let us prove the classical theorem \(P \ Q) \ (Q \ P)\. Working backwards, we reduce this formula to a basic sequent: \[ \infer[\(\R)\]{\\ (P \ Q) \ (Q \ P)\} {\infer[\(\R)\]{\\ (P \ Q), (Q \ P)\} {\infer[\(\R)\]{\P \ Q, (Q \ P)\} {\P, Q \ Q, P\}}} \] This example is typical of the sequent calculus: start with the desired theorem and apply rules backwards in a fairly arbitrary manner. This yields a surprisingly effective proof procedure. Quantifiers add only a few complications, since Isabelle handles parameters and schematic variables.
See @{cite \Chapter 10\ "paulson-ml2"} for further discussion.\ subsubsection \Simulating sequents by natural deduction\ text \Isabelle can represent sequents directly, as in the object-logic LK. But natural deduction is easier to work with, and most object-logics employ it. Fortunately, we can simulate the sequent \P\<^sub>1, \, P\<^sub>m \ Q\<^sub>1, \, Q\<^sub>n\ by the Isabelle formula \P\<^sub>1 \ \ \ P\<^sub>m \ \ Q\<^sub>2 \ ... \ \ Q\<^sub>n \ Q\<^sub>1\ where the order of the assumptions and the choice of \Q\<^sub>1\ are arbitrary. Elim-resolution plays a key role in simulating sequent proofs. We can easily handle reasoning on the left. Elim-resolution with the rules \(\E)\, \(\E)\ and \(\E)\ achieves a similar effect as the corresponding sequent rules. For the other connectives, we use sequent-style elimination rules instead of destruction rules such as \(\E1, 2)\ and \(\E)\. But note that the rule \(\L)\ has no effect under our representation of sequents! \[ \infer[\(\L)\]{\\ P, \ \ \\}{\\ \ \, P\} \] What about reasoning on the right? Introduction rules can only affect the formula in the conclusion, namely \Q\<^sub>1\. The other right-side formulae are represented as negated assumptions, \\ Q\<^sub>2, \, \ Q\<^sub>n\. In order to operate on one of these, it must first be exchanged with \Q\<^sub>1\. Elim-resolution with the \swap\ rule has this effect: \\ P \ (\ R \ P) \ R\ To ensure that swaps occur only when necessary, each introduction rule is converted into a swapped form: it is resolved with the second premise of \(swap)\. The swapped form of \(\I)\, which might be called \(\\E)\, is @{text [display] "\ (P \ Q) \ (\ R \ P) \ (\ R \ Q) \ R"} Similarly, the swapped form of \(\I)\ is @{text [display] "\ (P \ Q) \ (\ R \ P \ Q) \ R"} Swapped introduction rules are applied using elim-resolution, which deletes the negated formula. Our representation of sequents also requires the use of ordinary introduction rules. If we had no regard for readability of intermediate goal states, we could treat the right side more uniformly by representing sequents as @{text [display] "P\<^sub>1 \ \ \ P\<^sub>m \ \ Q\<^sub>1 \ \ \ \ Q\<^sub>n \ \"} \ subsubsection \Extra rules for the sequent calculus\ text \As mentioned, destruction rules such as \(\E1, 2)\ and \(\E)\ must be replaced by sequent-style elimination rules. In addition, we need rules to embody the classical equivalence between \P \ Q\ and \\ P \ Q\. The introduction rules \(\I1, 2)\ are replaced by a rule that simulates \(\R)\: @{text [display] "(\ Q \ P) \ P \ Q"} The destruction rule \(\E)\ is replaced by @{text [display] "(P \ Q) \ (\ P \ R) \ (Q \ R) \ R"} Quantifier replication also requires special rules. In classical logic, \\x. P x\ is equivalent to \\ (\x. \ P x)\; the rules \(\R)\ and \(\L)\ are dual: \[ \infer[\(\R)\]{\\ \ \, \x. P x\}{\\ \ \, \x. P x, P t\} \qquad \infer[\(\L)\]{\\x. P x, \ \ \\}{\P t, \x. P x, \ \ \\} \] Thus both kinds of quantifier may be replicated. Theorems requiring multiple uses of a universal formula are easy to invent; consider @{text [display] "(\x. P x \ P (f x)) \ P a \ P (f\<^sup>n a)"} for any \n > 1\. Natural examples of the multiple use of an existential formula are rare; a standard one is \\x. \y. P x \ P y\. Forgoing quantifier replication loses completeness, but gains decidability, since the search space becomes finite. Many useful theorems can be proved without replication, and the search generally delivers its verdict in a reasonable time. 
To adopt this approach, represent the sequent rules \(\R)\, \(\L)\ and \(\R)\ by \(\I)\, \(\E)\ and \(\I)\, respectively, and put \(\E)\ into elimination form: @{text [display] "\x. P x \ (P t \ Q) \ Q"} Elim-resolution with this rule will delete the universal formula after a single use. To replicate universal quantifiers, replace the rule by @{text [display] "\x. P x \ (P t \ \x. P x \ Q) \ Q"} To replicate existential quantifiers, replace \(\I)\ by @{text [display] "(\ (\x. P x) \ P t) \ \x. P x"} All introduction rules mentioned above are also useful in swapped form. Replication makes the search space infinite; we must apply the rules with care. The classical reasoner distinguishes between safe and unsafe rules, applying the latter only when there is no alternative. Depth-first search may well go down a blind alley; best-first search is better behaved in an infinite search space. However, quantifier replication is too expensive to prove any but the simplest theorems. \ subsection \Rule declarations\ text \The proof tools of the Classical Reasoner depend on collections of rules declared in the context, which are classified as introduction, elimination or destruction and as \<^emph>\safe\ or \<^emph>\unsafe\. In general, safe rules can be attempted blindly, while unsafe rules must be used with care. A safe rule must never reduce a provable goal to an unprovable set of subgoals. The rule \P \ P \ Q\ is unsafe because it reduces \P \ Q\ to \P\, which might turn out as a premature choice of an unprovable subgoal. Any rule whose premises contain new unknowns is unsafe. The elimination rule \\x. P x \ (P t \ Q) \ Q\ is unsafe, since it is applied via elim-resolution, which discards the assumption \\x. P x\ and replaces it by the weaker assumption \P t\. The rule \P t \ \x. P x\ is unsafe for similar reasons. The quantifier duplication rule \\x. P x \ (P t \ \x. P x \ Q) \ Q\ is unsafe in a different sense: since it keeps the assumption \\x. P x\, it is prone to looping. In classical first-order logic, all rules are safe except those mentioned above. The safe~/ unsafe distinction is vague, and may be regarded merely as a way of giving some rules priority over others. One could argue that \(\E)\ is unsafe, because repeated application of it could generate exponentially many subgoals. Induction rules are unsafe because inductive proofs are difficult to set up automatically. Any inference that instantiates an unknown in the proof state is unsafe --- thus matching must be used, rather than unification. Even proof by assumption is unsafe if it instantiates unknowns shared with other subgoals. \begin{matharray}{rcl} @{command_def "print_claset"}\\<^sup>*\ & : & \context \\ \\ @{attribute_def intro} & : & \attribute\ \\ @{attribute_def elim} & : & \attribute\ \\ @{attribute_def dest} & : & \attribute\ \\ @{attribute_def rule} & : & \attribute\ \\ @{attribute_def iff} & : & \attribute\ \\ @{attribute_def swapped} & : & \attribute\ \\ \end{matharray} \<^rail>\ (@@{attribute intro} | @@{attribute elim} | @@{attribute dest}) ('!' | () | '?') @{syntax nat}? ; @@{attribute rule} 'del' ; @@{attribute iff} (((() | 'add') '?'?) | 'del') \ \<^descr> @{command "print_claset"} prints the collection of rules declared to the Classical Reasoner, i.e.\ the \<^ML_type>\claset\ within the context. \<^descr> @{attribute intro}, @{attribute elim}, and @{attribute dest} declare introduction, elimination, and destruction rules, respectively.
By default, rules are considered as \<^emph>\unsafe\ (i.e.\ not applied blindly without backtracking), while ``\!\'' classifies as \<^emph>\safe\. Rule declarations marked by ``\?\'' coincide with those of Isabelle/Pure, cf.\ \secref{sec:pure-meth-att} (i.e.\ are only applied in single steps of the @{method rule} method). The optional natural number specifies an explicit weight argument, which is ignored by the automated reasoning tools, but determines the search order of single rule steps. Introduction rules are those that can be applied using ordinary resolution. Their swapped forms are generated internally, which will be applied using elim-resolution. Elimination rules are applied using elim-resolution. Rules are sorted by the number of new subgoals they will yield; rules that generate the fewest subgoals will be tried first. Otherwise, later declarations take precedence over earlier ones. Rules already present in the context with the same classification are ignored. A warning is printed if the rule has already been added with some other classification, but the rule is added anyway as requested. \<^descr> @{attribute rule}~\del\ deletes all occurrences of a rule from the classical context, regardless of its classification as introduction~/ elimination~/ destruction and safe~/ unsafe. \<^descr> @{attribute iff} declares logical equivalences to the Simplifier and the Classical reasoner at the same time. Non-conditional rules result in a safe introduction and elimination pair; conditional ones are considered unsafe. Rules with negative conclusion are automatically inverted (using \\\-elimination internally). The ``\?\'' version of @{attribute iff} declares rules to the Isabelle/Pure context only, and omits the Simplifier declaration. \<^descr> @{attribute swapped} turns an introduction rule into an elimination, by resolving with the classical swap principle \\ P \ (\ R \ P) \ R\ in the second position. This is mainly for illustrative purposes: the Classical Reasoner already swaps rules internally as explained above. \ subsection \Structured methods\ text \ \begin{matharray}{rcl} @{method_def rule} & : & \method\ \\ @{method_def contradiction} & : & \method\ \\ \end{matharray} \<^rail>\ @@{method rule} @{syntax thms}? \ \<^descr> @{method rule} as offered by the Classical Reasoner is a refinement over the Pure one (see \secref{sec:pure-meth-att}). Both versions work the same, but the classical version observes the classical rule context in addition to that of Isabelle/Pure. Common object logics (HOL, ZF, etc.) declare a rich collection of classical rules (even if these would qualify as intuitionistic ones), but only few declarations to the rule context of Isabelle/Pure (\secref{sec:pure-meth-att}). \<^descr> @{method contradiction} solves some goal by contradiction, deriving any result from both \\ A\ and \A\. Chained facts, which are guaranteed to participate, may appear in either order. \ subsection \Fully automated methods\ text \ \begin{matharray}{rcl} @{method_def blast} & : & \method\ \\ @{method_def auto} & : & \method\ \\ @{method_def force} & : & \method\ \\ @{method_def fast} & : & \method\ \\ @{method_def slow} & : & \method\ \\ @{method_def best} & : & \method\ \\ @{method_def fastforce} & : & \method\ \\ @{method_def slowsimp} & : & \method\ \\ @{method_def bestsimp} & : & \method\ \\ @{method_def deepen} & : & \method\ \\ \end{matharray} \<^rail>\ @@{method blast} @{syntax nat}? (@{syntax clamod} * ) ; @@{method auto} (@{syntax nat} @{syntax nat})? 
(@{syntax clasimpmod} * ) ; @@{method force} (@{syntax clasimpmod} * ) ; (@@{method fast} | @@{method slow} | @@{method best}) (@{syntax clamod} * ) ; (@@{method fastforce} | @@{method slowsimp} | @@{method bestsimp}) (@{syntax clasimpmod} * ) ; @@{method deepen} (@{syntax nat} ?) (@{syntax clamod} * ) ; @{syntax_def clamod}: (('intro' | 'elim' | 'dest') ('!' | () | '?') | 'del') ':' @{syntax thms} ; @{syntax_def clasimpmod}: ('simp' (() | 'add' | 'del' | 'only') | 'cong' (() | 'add' | 'del') | 'split' (() | '!' | 'del') | 'iff' (((() | 'add') '?'?) | 'del') | (('intro' | 'elim' | 'dest') ('!' | () | '?') | 'del')) ':' @{syntax thms} \ \<^descr> @{method blast} is a separate classical tableau prover that uses the same classical rule declarations as explained before. Proof search is coded directly in ML using special data structures. A successful proof is then reconstructed using regular Isabelle inferences. It is faster and more powerful than the other classical reasoning tools, but has major limitations too. \<^item> It does not use the classical wrapper tacticals, such as the integration with the Simplifier of @{method fastforce}. \<^item> It does not perform higher-order unification, as needed by the rule @{thm [source=false] rangeI} in HOL. There are often alternatives to such rules, for example @{thm [source=false] range_eqI}. \<^item> Function variables may only be applied to parameters of the subgoal. (This restriction arises because the prover does not use higher-order unification.) If other function variables are present then the prover will fail with the message @{verbatim [display] \Function unknown's argument not a bound variable\} \<^item> Its proof strategy is more general than @{method fast} but can be slower. If @{method blast} fails or seems to be running forever, try @{method fast} and the other proof tools described below. The optional integer argument specifies a bound for the number of unsafe steps used in a proof. By default, @{method blast} starts with a bound of 0 and increases it successively to 20. In contrast, \(blast lim)\ tries to prove the goal using a search bound of \lim\. Sometimes a slow proof using @{method blast} can be made much faster by supplying the successful search bound to this proof method instead. \<^descr> @{method auto} combines classical reasoning with simplification. It is intended for situations where there are a lot of mostly trivial subgoals; it proves all the easy ones, leaving the ones it cannot prove. Occasionally, attempting to prove the hard ones may take a long time. The optional depth arguments in \(auto m n)\ refer to its builtin classical reasoning procedures: \m\ (default 4) is for @{method blast}, which is tried first, and \n\ (default 2) is for a slower but more general alternative that also takes wrappers into account. \<^descr> @{method force} is intended to prove the first subgoal completely, using many fancy proof tools and performing a rather exhaustive search. As a result, proof attempts may take rather long or diverge easily. \<^descr> @{method fast}, @{method best}, @{method slow} attempt to prove the first subgoal using sequent-style reasoning as explained before. Unlike @{method blast}, they construct proofs directly in Isabelle. There is a difference in search strategy and back-tracking: @{method fast} uses depth-first search and @{method best} uses best-first search (guided by a heuristic function: normally the total size of the proof state). 
Method @{method slow} is like @{method fast}, but conducts a broader search: it may, when backtracking from a failed proof attempt, undo even the step of proving a subgoal by assumption. \<^descr> @{method fastforce}, @{method slowsimp}, @{method bestsimp} are like @{method fast}, @{method slow}, @{method best}, respectively, but use the Simplifier as additional wrapper. The name @{method fastforce} reflects the behaviour of this popular method better without requiring an understanding of its implementation. \<^descr> @{method deepen} works by exhaustive search up to a certain depth. The start depth is 4 (unless specified explicitly), and the depth is increased iteratively up to 10. Unsafe rules are modified to preserve the formula they act on, so that it can be used repeatedly. This method can prove more goals than @{method fast}, but is much slower, for example if the assumptions have many universal quantifiers. All of the above methods support additional modifiers of the context of classical (and simplifier) rules, but the ones related to the Simplifier are explicitly prefixed by \simp\ here. The semantics of these ad-hoc rule declarations is analogous to the attributes given before. Facts provided by forward chaining are inserted into the goal before commencing proof search. \ subsection \Partially automated methods\label{sec:classical:partial}\ text \These proof methods may help in situations when the fully-automated tools fail. The result is a simpler subgoal that can be tackled by other means, such as by manual instantiation of quantifiers. \begin{matharray}{rcl} @{method_def safe} & : & \method\ \\ @{method_def clarify} & : & \method\ \\ @{method_def clarsimp} & : & \method\ \\ \end{matharray} \<^rail>\ (@@{method safe} | @@{method clarify}) (@{syntax clamod} * ) ; @@{method clarsimp} (@{syntax clasimpmod} * ) \ \<^descr> @{method safe} repeatedly performs safe steps on all subgoals. It is deterministic, with at most one outcome. \<^descr> @{method clarify} performs a series of safe steps without splitting subgoals; see also @{method clarify_step}. \<^descr> @{method clarsimp} acts like @{method clarify}, but also does simplification. Note that if the Simplifier context includes a splitter for the premises, the subgoal may still be split. \ subsection \Single-step tactics\ text \ \begin{matharray}{rcl} @{method_def safe_step} & : & \method\ \\ @{method_def inst_step} & : & \method\ \\ @{method_def step} & : & \method\ \\ @{method_def slow_step} & : & \method\ \\ @{method_def clarify_step} & : & \method\ \\ \end{matharray} These are the primitive tactics behind the automated proof methods of the Classical Reasoner. By calling them yourself, you can execute these procedures one step at a time. \<^descr> @{method safe_step} performs a safe step on the first subgoal. The safe wrapper tacticals are applied to a tactic that may include proof by assumption or Modus Ponens (taking care not to instantiate unknowns), or substitution. \<^descr> @{method inst_step} is like @{method safe_step}, but allows unknowns to be instantiated. \<^descr> @{method step} is the basic step of the proof procedure; it operates on the first subgoal. The unsafe wrapper tacticals are applied to a tactic that tries @{method safe}, @{method inst_step}, or applies an unsafe rule from the context. \<^descr> @{method slow_step} resembles @{method step}, but allows backtracking between using safe rules with instantiation (@{method inst_step}) and using unsafe rules. The resulting search space is larger.
\<^descr> @{method clarify_step} performs a safe step on the first subgoal; no splitting step is applied. For example, the subgoal \A \ B\ is left as a conjunction. Proof by assumption, Modus Ponens, etc., may be performed provided they do not instantiate unknowns. Assumptions of the form \x = t\ may be eliminated. The safe wrapper tactical is applied. \ subsection \Modifying the search step\ text \ \begin{mldecls} - @{index_ML_type wrapper: "(int -> tactic) -> (int -> tactic)"} \\[0.5ex] - @{index_ML_op addSWrapper: "Proof.context * + @{define_ML_type wrapper = "(int -> tactic) -> (int -> tactic)"} \\[0.5ex] + @{define_ML_infix addSWrapper: "Proof.context * (string * (Proof.context -> wrapper)) -> Proof.context"} \\ - @{index_ML_op addSbefore: "Proof.context * - (string * (Proof.context -> int -> tactic)) -> Proof.context"} \\ - @{index_ML_op addSafter: "Proof.context * + @{define_ML_infix addSbefore: "Proof.context * (string * (Proof.context -> int -> tactic)) -> Proof.context"} \\ - @{index_ML_op delSWrapper: "Proof.context * string -> Proof.context"} \\[0.5ex] - @{index_ML_op addWrapper: "Proof.context * - (string * (Proof.context -> wrapper)) -> Proof.context"} \\ - @{index_ML_op addbefore: "Proof.context * + @{define_ML_infix addSafter: "Proof.context * (string * (Proof.context -> int -> tactic)) -> Proof.context"} \\ - @{index_ML_op addafter: "Proof.context * + @{define_ML_infix delSWrapper: "Proof.context * string -> Proof.context"} \\[0.5ex] + @{define_ML_infix addWrapper: "Proof.context * + (string * (Proof.context -> wrapper)) -> Proof.context"} \\ + @{define_ML_infix addbefore: "Proof.context * (string * (Proof.context -> int -> tactic)) -> Proof.context"} \\ - @{index_ML_op delWrapper: "Proof.context * string -> Proof.context"} \\[0.5ex] - @{index_ML addSss: "Proof.context -> Proof.context"} \\ - @{index_ML addss: "Proof.context -> Proof.context"} \\ + @{define_ML_infix addafter: "Proof.context * + (string * (Proof.context -> int -> tactic)) -> Proof.context"} \\ + @{define_ML_infix delWrapper: "Proof.context * string -> Proof.context"} \\[0.5ex] + @{define_ML addSss: "Proof.context -> Proof.context"} \\ + @{define_ML addss: "Proof.context -> Proof.context"} \\ \end{mldecls} The proof strategy of the Classical Reasoner is simple. Perform as many safe inferences as possible; or else, apply certain safe rules, allowing instantiation of unknowns; or else, apply an unsafe rule. The tactics also eliminate assumptions of the form \x = t\ by substitution if they have been set up to do so. They may perform a form of Modus Ponens: if there are assumptions \P \ Q\ and \P\, then replace \P \ Q\ by \Q\. The classical reasoning tools --- except @{method blast} --- allow to modify this basic proof strategy by applying two lists of arbitrary \<^emph>\wrapper tacticals\ to it. The first wrapper list, which is considered to contain safe wrappers only, affects @{method safe_step} and all the tactics that call it. The second one, which may contain unsafe wrappers, affects the unsafe parts of @{method step}, @{method slow_step}, and the tactics that call them. A wrapper transforms each step of the search, for example by attempting other tactics before or after the original step tactic. All members of a wrapper list are applied in turn to the respective step tactic. Initially the two wrapper lists are empty, which means no modification of the step tactics. Safe and unsafe wrappers are added to the context with the functions given below, supplying them with wrapper names. 
These names may be used to selectively delete wrappers. \<^descr> \ctxt addSWrapper (name, wrapper)\ adds a new wrapper, which should yield a safe tactic, to modify the existing safe step tactic. \<^descr> \ctxt addSbefore (name, tac)\ adds the given tactic as a safe wrapper, such that it is tried \<^emph>\before\ each safe step of the search. \<^descr> \ctxt addSafter (name, tac)\ adds the given tactic as a safe wrapper, such that it is tried when a safe step of the search would fail. \<^descr> \ctxt delSWrapper name\ deletes the safe wrapper with the given name. \<^descr> \ctxt addWrapper (name, wrapper)\ adds a new wrapper to modify the existing (unsafe) step tactic. \<^descr> \ctxt addbefore (name, tac)\ adds the given tactic as an unsafe wrapper, such that its result is concatenated \<^emph>\before\ the result of each unsafe step. \<^descr> \ctxt addafter (name, tac)\ adds the given tactic as an unsafe wrapper, such that its result is concatenated \<^emph>\after\ the result of each unsafe step. \<^descr> \ctxt delWrapper name\ deletes the unsafe wrapper with the given name. \<^descr> \addSss\ adds the simpset of the context to its classical set. The assumptions and goal will be simplified, in a rather safe way, after each safe step of the search. \<^descr> \addss\ adds the simpset of the context to its classical set. The assumptions and goal will be simplified, before each unsafe step of the search. \ section \Object-logic setup \label{sec:object-logic}\ text \ \begin{matharray}{rcl} @{command_def "judgment"} & : & \theory \ theory\ \\ @{method_def atomize} & : & \method\ \\ @{attribute_def atomize} & : & \attribute\ \\ @{attribute_def rule_format} & : & \attribute\ \\ @{attribute_def rulify} & : & \attribute\ \\ \end{matharray} The very starting point for any Isabelle object-logic is a ``truth judgment'' that links object-level statements to the meta-logic (with its minimal language of \prop\ that covers universal quantification \\\ and implication \\\). Common object-logics are sufficiently expressive to internalize rule statements over \\\ and \\\ within their own language. This is useful in certain situations where a rule needs to be viewed as an atomic statement from the meta-level perspective, e.g.\ \\x. x \ A \ P x\ versus \\x \ A. P x\. From the following language elements, only the @{method atomize} method and @{attribute rule_format} attribute are occasionally required by end-users; the rest is for those who need to set up their own object-logic. In the latter case existing formulations of Isabelle/FOL or Isabelle/HOL may be taken as realistic examples. Generic tools may refer to the information provided by object-logic declarations internally. \<^rail>\ @@{command judgment} @{syntax name} '::' @{syntax type} @{syntax mixfix}? ; @@{attribute atomize} ('(' 'full' ')')? ; @@{attribute rule_format} ('(' 'noasm' ')')? \ \<^descr> @{command "judgment"}~\c :: \ (mx)\ declares constant \c\ as the truth judgment of the current object-logic. Its type \\\ should specify a coercion of the category of object-level propositions to \prop\ of the Pure meta-logic; the mixfix annotation \(mx)\ would typically just link the object language (internally of syntactic category \logic\) with that of \prop\. Only one @{command "judgment"} declaration may be given in any theory development. \<^descr> @{method atomize} (as a method) rewrites any non-atomic premises of a sub-goal, using the meta-level equations declared via @{attribute atomize} (as an attribute) beforehand.
As a result, heavily nested goals become amenable to fundamental operations such as resolution (cf.\ the @{method (Pure) rule} method). Giving the ``\(full)\'' option here means to turn the whole subgoal into an object-statement (if possible), including the outermost parameters and assumptions as well. A typical collection of @{attribute atomize} rules for a particular object-logic would provide an internalization for each of the connectives of \\\, \\\, and \\\. Meta-level conjunction should be covered as well (this is particularly important for locales, see \secref{sec:locale}). \<^descr> @{attribute rule_format} rewrites a theorem by the equalities declared as @{attribute rulify} rules in the current object-logic. By default, the result is fully normalized, including assumptions and conclusions at any depth. The \(noasm)\ option restricts the transformation to the conclusion of a rule. In common object-logics (HOL, FOL, ZF), the effect of @{attribute rule_format} is to replace (bounded) universal quantification (\\\) and implication (\\\) by the corresponding rule statements over \\\ and \\\. \ section \Tracing higher-order unification\ text \ \begin{tabular}{rcll} @{attribute_def unify_trace_simp} & : & \attribute\ & default \false\ \\ @{attribute_def unify_trace_types} & : & \attribute\ & default \false\ \\ @{attribute_def unify_trace_bound} & : & \attribute\ & default \50\ \\ @{attribute_def unify_search_bound} & : & \attribute\ & default \60\ \\ \end{tabular} \<^medskip> Higher-order unification works well in most practical situations, but sometimes needs extra care to identify problems. These tracing options may help. \<^descr> @{attribute unify_trace_simp} controls tracing of the simplification phase of higher-order unification. \<^descr> @{attribute unify_trace_types} controls warnings of incompleteness, when unification is not considering all possible instantiations of schematic type variables. \<^descr> @{attribute unify_trace_bound} determines the depth at which unification starts to print tracing information, once it reaches that depth; 0 means full tracing. At the default value, tracing information is almost never printed in practice. \<^descr> @{attribute unify_search_bound} prevents unification from searching past the given depth. Because of this bound, higher-order unification cannot return an infinite sequence, though it can return an exponentially long one. The search rarely approaches the default value in practice. If the search is cut off, unification prints a warning ``Unification bound exceeded''. \begin{warn} Options for unification cannot be modified in a local context. Only the global theory content is taken into account. \end{warn} \ end diff --git a/src/Doc/Isar_Ref/Inner_Syntax.thy b/src/Doc/Isar_Ref/Inner_Syntax.thy --- a/src/Doc/Isar_Ref/Inner_Syntax.thy +++ b/src/Doc/Isar_Ref/Inner_Syntax.thy @@ -1,1531 +1,1531 @@ (*:maxLineLen=78:*) theory Inner_Syntax imports Main Base begin chapter \Inner syntax --- the term language \label{ch:inner-syntax}\ text \ The inner syntax of Isabelle provides concrete notation for the main entities of the logical framework, notably \\\-terms with types and type classes. Applications may either extend existing syntactic categories by additional notation, or define new sub-languages that are linked to the standard term language via some explicit markers. For example \<^verbatim>\FOO\~\foo\ could embed the syntax corresponding to some user-defined nonterminal \foo\ --- within the bounds of the given lexical syntax of Isabelle/Pure.
The most basic way to specify concrete syntax for logical entities works via mixfix annotations (\secref{sec:mixfix}), which may usually be given as part of the original declaration or via explicit notation commands later on (\secref{sec:notation}). This already covers many needs of concrete syntax without having to understand the full complexity of inner syntax layers. Further details of the syntax engine involve the classical distinction of lexical language versus context-free grammar (see \secref{sec:pure-syntax}), and various mechanisms for \<^emph>\syntax transformations\ (see \secref{sec:syntax-transformations}). \ section \Printing logical entities\ subsection \Diagnostic commands \label{sec:print-diag}\ text \ \begin{matharray}{rcl} @{command_def "typ"}\\<^sup>*\ & : & \context \\ \\ @{command_def "term"}\\<^sup>*\ & : & \context \\ \\ @{command_def "prop"}\\<^sup>*\ & : & \context \\ \\ @{command_def "thm"}\\<^sup>*\ & : & \context \\ \\ @{command_def "prf"}\\<^sup>*\ & : & \context \\ \\ @{command_def "full_prf"}\\<^sup>*\ & : & \context \\ \\ @{command_def "print_state"}\\<^sup>*\ & : & \any \\ \\ \end{matharray} These diagnostic commands assist interactive development by printing internal logical entities in a human-readable fashion. \<^rail>\ @@{command typ} @{syntax modes}? @{syntax type} ('::' @{syntax sort})? ; @@{command term} @{syntax modes}? @{syntax term} ; @@{command prop} @{syntax modes}? @{syntax prop} ; @@{command thm} @{syntax modes}? @{syntax thms} ; ( @@{command prf} | @@{command full_prf} ) @{syntax modes}? @{syntax thms}? ; @@{command print_state} @{syntax modes}? ; @{syntax_def modes}: '(' (@{syntax name} + ) ')' \ \<^descr> @{command "typ"}~\\\ reads and prints a type expression according to the current context. \<^descr> @{command "typ"}~\\ :: s\ uses type-inference to determine the most general way to make \\\ conform to sort \s\. For concrete \\\ this checks if the type belongs to that sort. Dummy type parameters ``\_\'' (underscore) are assigned to fresh type variables with most general sorts, according to the principles of type-inference. \<^descr> @{command "term"}~\t\ and @{command "prop"}~\\\ read, type-check and print terms or propositions according to the current theory or proof context; the inferred type of \t\ is output as well. Note that these commands are also useful in inspecting the current environment of term abbreviations. \<^descr> @{command "thm"}~\a\<^sub>1 \ a\<^sub>n\ retrieves theorems from the current theory or proof context. Note that any attributes included in the theorem specifications are applied to a temporary context derived from the current theory or proof; the result is discarded, i.e.\ attributes involved in \a\<^sub>1, \, a\<^sub>n\ do not have any permanent effect. \<^descr> @{command "prf"} displays the (compact) proof term of the current proof state (if present), or of the given theorems. Note that this requires an underlying logic image with proof terms enabled, e.g. \HOL-Proofs\. \<^descr> @{command "full_prf"} is like @{command "prf"}, but displays the full proof term, i.e.\ also displays information omitted in the compact proof term, which is denoted by ``\_\'' placeholders there. \<^descr> @{command "print_state"} prints the current proof state (if present), including current facts and goals. All of the diagnostic commands above admit a list of \modes\ to be specified, which is appended to the current print mode; see also \secref{sec:print-modes}.
Thus the output behavior may be modified according to particular print mode features. For example, @{command "print_state"}~\(latex)\ prints the current proof state with mathematical symbols and special characters represented in {\LaTeX} source, according to the Isabelle style @{cite "isabelle-system"}. Note that antiquotations (cf.\ \secref{sec:antiq}) provide a more systematic way to include formal items into the printed text document. \ subsection \Details of printed content\ text \ \begin{tabular}{rcll} @{attribute_def show_markup} & : & \attribute\ \\ @{attribute_def show_types} & : & \attribute\ & default \false\ \\ @{attribute_def show_sorts} & : & \attribute\ & default \false\ \\ @{attribute_def show_consts} & : & \attribute\ & default \false\ \\ @{attribute_def show_abbrevs} & : & \attribute\ & default \true\ \\ @{attribute_def show_brackets} & : & \attribute\ & default \false\ \\ @{attribute_def names_long} & : & \attribute\ & default \false\ \\ @{attribute_def names_short} & : & \attribute\ & default \false\ \\ @{attribute_def names_unique} & : & \attribute\ & default \true\ \\ @{attribute_def eta_contract} & : & \attribute\ & default \true\ \\ @{attribute_def goals_limit} & : & \attribute\ & default \10\ \\ @{attribute_def show_main_goal} & : & \attribute\ & default \false\ \\ @{attribute_def show_hyps} & : & \attribute\ & default \false\ \\ @{attribute_def show_tags} & : & \attribute\ & default \false\ \\ @{attribute_def show_question_marks} & : & \attribute\ & default \true\ \\ \end{tabular} \<^medskip> These configuration options control the detail of information that is displayed for types, terms, theorems, goals etc. See also \secref{sec:config}. \<^descr> @{attribute show_markup} controls direct inlining of markup into the printed representation of formal entities --- notably type and sort constraints. This enables Prover IDE users to retrieve that information via tooltips or popups while hovering with the mouse over the output window, for example. Consequently, this option is enabled by default for Isabelle/jEdit. \<^descr> @{attribute show_types} and @{attribute show_sorts} control printing of type constraints for term variables, and sort constraints for type variables. By default, neither of these is shown in output. If @{attribute show_sorts} is enabled, types are always shown as well. In Isabelle/jEdit, manual setting of these options is normally not required thanks to @{attribute show_markup} above. Note that displaying types and sorts may explain why a polymorphic inference rule fails to resolve with some goal, or why a rewrite rule does not apply as expected. \<^descr> @{attribute show_consts} controls printing of types of constants when displaying a goal state. Note that the output can be enormous, because polymorphic constants often occur at several different type instances. \<^descr> @{attribute show_abbrevs} controls folding of constant abbreviations. \<^descr> @{attribute show_brackets} controls bracketing in pretty printed output. If enabled, all sub-expressions of the pretty printing tree will be parenthesized, even if this produces malformed term syntax! This crude way of showing the internal structure of pretty printed entities may occasionally help to diagnose problems with operator priorities, for example. \<^descr> @{attribute names_long}, @{attribute names_short}, and @{attribute names_unique} control the way of printing fully qualified internal names in external form. See also \secref{sec:antiq} for the document antiquotation options of the same names.
\<^descr> @{attribute eta_contract} controls \\\-contracted printing of terms. The \\\-contraction law asserts \<^prop>\(\x. f x) \ f\, provided \x\ is not free in \f\. It asserts \<^emph>\extensionality\ of functions: \<^prop>\f \ g\ if \<^prop>\f x \ g x\ for all \x\. Higher-order unification frequently puts terms into a fully \\\-expanded form. For example, if \F\ has type \(\ \ \) \ \\ then its expanded form is \<^term>\\h. F (\x. h x)\. Enabling @{attribute eta_contract} makes Isabelle perform \\\-contractions before printing, so that \<^term>\\h. F (\x. h x)\ appears simply as \F\. Note that the distinction between a term and its \\\-expanded form occasionally matters. While higher-order resolution and rewriting operate modulo \\\\\-conversion, some other tools might look at terms more discretely. \<^descr> @{attribute goals_limit} controls the maximum number of subgoals to be printed. \<^descr> @{attribute show_main_goal} controls whether the main result to be proven should be displayed. This information might be relevant for schematic goals, to inspect the current claim that has been synthesized so far. \<^descr> @{attribute show_hyps} controls printing of implicit hypotheses of local facts. Normally, only those hypotheses are displayed that are \<^emph>\not\ covered by the assumptions of the current context: this situation indicates a fault in some tool being used. By enabling @{attribute show_hyps}, output of \<^emph>\all\ hypotheses can be enforced, which is occasionally useful for diagnostic purposes. \<^descr> @{attribute show_tags} controls printing of extra annotations within theorems, such as internal position information, or the case names being attached by the attribute @{attribute case_names}. Note that the @{attribute tagged} and @{attribute untagged} attributes provide low-level access to the collection of tags associated with a theorem. \<^descr> @{attribute show_question_marks} controls printing of question marks for schematic variables, such as \?x\. Only the leading question mark is affected, the remaining text is unchanged (including proper markup for schematic variables that might be relevant for user interfaces). \ subsection \Alternative print modes \label{sec:print-modes}\ text \ \begin{mldecls} - @{index_ML print_mode_value: "unit -> string list"} \\ - @{index_ML Print_Mode.with_modes: "string list -> ('a -> 'b) -> 'a -> 'b"} \\ + @{define_ML print_mode_value: "unit -> string list"} \\ + @{define_ML Print_Mode.with_modes: "string list -> ('a -> 'b) -> 'a -> 'b"} \\ \end{mldecls} The \<^emph>\print mode\ facility allows to modify various operations for printing. Commands like @{command typ}, @{command term}, @{command thm} (see \secref{sec:print-diag}) take additional print modes as optional argument. The underlying ML operations are as follows. \<^descr> \<^ML>\print_mode_value ()\ yields the list of currently active print mode names. This should be understood as symbolic representation of certain individual features for printing (with precedence from left to right). \<^descr> \<^ML>\Print_Mode.with_modes\~\modes f x\ evaluates \f x\ in an execution context where the print mode is prepended by the given \modes\. This provides a thread-safe way to augment print modes. It is also monotonic in the set of mode names: it retains the default print mode that certain user-interfaces might have installed for their proper functioning! 
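\<^medskip> For example, the following ML expression (a minimal sketch, assuming it is evaluated within a formal ML context where the \<^verbatim>\<open>\<^context>\<close> and \<^verbatim>\<open>\<^prop>\<close> antiquotations are available) pretty-prints a proposition with mode \<^verbatim>\<open>ASCII\<close> prepended to the currently active print modes; the concrete output depends on which notation has been declared for that mode: @{ML [display] \<open>Print_Mode.with_modes ["ASCII"] (fn t => Pretty.string_of (Syntax.pretty_term \<^context> t)) \<^prop>\<open>PROP A \<Longrightarrow> PROP B\<close>\<close>}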
\<^medskip> The pretty printer for inner syntax maintains alternative mixfix productions for any print mode name invented by the user, say in commands like @{command notation} or @{command abbreviation}. Mode names can be arbitrary, but the following ones have a specific meaning by convention: \<^item> \<^verbatim>\""\ (the empty string): default mode; implicitly active as last element in the list of modes. \<^item> \<^verbatim>\input\: dummy print mode that is never active; may be used to specify notation that is only available for input. \<^item> \<^verbatim>\internal\: dummy print mode that is never active; used internally in Isabelle/Pure. \<^item> \<^verbatim>\ASCII\: prefer ASCII art over mathematical symbols. \<^item> \<^verbatim>\latex\: additional mode that is active in {\LaTeX} document preparation of Isabelle theory sources; allows to provide alternative output notation. \ section \Mixfix annotations \label{sec:mixfix}\ text \ Mixfix annotations specify concrete \<^emph>\inner syntax\ of Isabelle types and terms. Locally fixed parameters in toplevel theorem statements, locale and class specifications also admit mixfix annotations in a fairly uniform manner. A mixfix annotation describes the concrete syntax, the translation to abstract syntax, and the pretty printing. Special case annotations provide a simple means of specifying infix operators and binders. Isabelle mixfix syntax is inspired by {\OBJ} @{cite OBJ}. It allows to specify any context-free priority grammar, which is more general than the fixity declarations of ML and Prolog. \<^rail>\ @{syntax_def mixfix}: '(' (@{syntax template} prios? @{syntax nat}? | (@'infix' | @'infixl' | @'infixr') @{syntax template} @{syntax nat} | @'binder' @{syntax template} prio? @{syntax nat} | @'structure') ')' ; @{syntax template}: (string | cartouche) ; prios: '[' (@{syntax nat} + ',') ']' ; prio: '[' @{syntax nat} ']' \ The mixfix \template\ may include literal text, spacing, blocks, and arguments (denoted by ``\_\''); the special symbol ``\<^verbatim>\\\'' (printed as ``\\\'') represents an index argument that specifies an implicit @{keyword "structure"} reference (see also \secref{sec:locale}). Only locally fixed variables may be declared as @{keyword "structure"}. Infix and binder declarations provide common abbreviations for particular mixfix declarations. So in practice, mixfix templates mostly degenerate to literal text for concrete syntax, such as ``\<^verbatim>\++\'' for an infix symbol. \ subsection \The general mixfix form\ text \ In full generality, mixfix declarations work as follows. Suppose a constant \c :: \\<^sub>1 \ \ \\<^sub>n \ \\ is annotated by \(mixfix [p\<^sub>1, \, p\<^sub>n] p)\, where \mixfix\ is a string \d\<^sub>0 _ d\<^sub>1 _ \ _ d\<^sub>n\ consisting of delimiters that surround argument positions as indicated by underscores. Altogether this determines a production for a context-free priority grammar, where for each argument \i\ the syntactic category is determined by \\\<^sub>i\ (with priority \p\<^sub>i\), and the result category is determined from \\\ (with priority \p\). Priority specifications are optional, with default 0 for arguments and 1000 for the result.\<^footnote>\Omitting priorities is prone to syntactic ambiguities unless the delimiter tokens determine fully bracketed notation, as in \if _ then _ else _ fi\.\ Since \\\ may again be a function type, the constant type scheme may have more argument positions than the mixfix pattern.
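\<^medskip> For example, the following hypothetical HOL-style declaration (the constant \<^verbatim>\<open>xor\<close>, its notation, and the priorities are invented purely for illustration) uses the general mixfix form, with argument priorities 60 and 61 and result priority 60, which makes the notation associate to the left: @{verbatim [display] \<open>definition xor :: "bool => bool => bool"  ("(_ [+]/ _)" [60, 61] 60)
  where "xor A B == (A & ~ B) | (~ A & B)"\<close>}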
Printing a nested application \c t\<^sub>1 \ t\<^sub>m\ for \m > n\ works by attaching concrete notation only to the innermost part, essentially by printing \(c t\<^sub>1 \ t\<^sub>n) \ t\<^sub>m\ instead. If a term has fewer arguments than specified in the mixfix template, the concrete syntax is ignored. \<^medskip> A mixfix template may also contain additional directives for pretty printing, notably spaces, blocks, and breaks. The general template format is a sequence over any of the following entities. \<^descr> \d\ is a delimiter, namely a non-empty sequence of delimiter items of the following form: \<^enum> a control symbol followed by a cartouche \<^enum> a single symbol, excluding the following special characters: \<^medskip> \begin{tabular}{ll} \<^verbatim>\'\ & single quote \\ \<^verbatim>\_\ & underscore \\ \\\ & index symbol \\ \<^verbatim>\(\ & open parenthesis \\ \<^verbatim>\)\ & close parenthesis \\ \<^verbatim>\/\ & slash \\ \\ \\ & cartouche delimiters \\ \end{tabular} \<^medskip> \<^descr> \<^verbatim>\'\ escapes the special meaning of these meta-characters, producing a literal version of the following character, unless that is a blank. A single quote followed by a blank separates delimiters, without affecting printing, but input tokens may have additional white space here. \<^descr> \<^verbatim>\_\ is an argument position, which stands for a certain syntactic category in the underlying grammar. \<^descr> \\\ is an indexed argument position; this is the place where implicit structure arguments can be attached. \<^descr> \s\ is a non-empty sequence of spaces for printing. This and the following specifications do not affect parsing at all. \<^descr> \<^verbatim>\(\\n\ opens a pretty printing block. The optional natural number specifies the block indentation, i.e.\ how many spaces to add when a line break occurs within the block. The default indentation is 0. \<^descr> \<^verbatim>\(\\\properties\\ opens a pretty printing block, with properties specified within the given text cartouche. The syntax and semantics of the category @{syntax_ref mixfix_properties} are described below. \<^descr> \<^verbatim>\)\ closes a pretty printing block. \<^descr> \<^verbatim>\//\ forces a line break. \<^descr> \<^verbatim>\/\\s\ allows a line break. Here \s\ stands for the string of spaces (zero or more) right after the slash. These spaces are printed if the break is \<^emph>\not\ taken. \<^medskip> Block properties allow more control over the details of pretty-printed output. The concrete syntax is defined as follows. \<^rail>\ @{syntax_def "mixfix_properties"}: (entry *) ; entry: atom ('=' atom)? ; atom: @{syntax short_ident} | @{syntax int} | @{syntax float} | @{syntax cartouche} \ Each @{syntax entry} is a name-value pair: if the value is omitted, it defaults to \<^verbatim>\true\ (intended for Boolean properties). The following standard block properties are supported: \<^item> \indent\ (natural number): the block indentation --- the same as for the simple syntax without block properties. \<^item> \consistent\ (Boolean): this block has consistent breaks (if one break is taken, all breaks are taken). \<^item> \unbreakable\ (Boolean): all possible breaks of the block are disabled (turned into spaces). \<^item> \markup\ (string): the optional name of the markup node. If this is provided, all remaining properties are turned into its XML attributes. This allows to specify free-form PIDE markup, e.g.\ for specialized output.
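For example, the following hypothetical template (invented for illustration, with the property cartouche placed directly after the opening parenthesis as described above) opens a block with indentation 2 and consistent breaks around an infix delimiter: @{verbatim [display] \<open>("(\<open>indent=2 consistent\<close>_ ++/ _)" [65, 66] 65)\<close>}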
\<^medskip> Note that the general idea of pretty printing with blocks and breaks is described in @{cite "paulson-ml2"}; it goes back to @{cite "Oppen:1980"}. \ subsection \Infixes\ text \ Infix operators are specified by convenient short forms that abbreviate general mixfix annotations as follows: \begin{center} \begin{tabular}{lll} \<^verbatim>\(\@{keyword_def "infix"}~\<^verbatim>\"\\sy\\<^verbatim>\"\ \p\\<^verbatim>\)\ & \\\ & \<^verbatim>\("(_\~\sy\\<^verbatim>\/ _)" [\\p + 1\\<^verbatim>\,\~\p + 1\\<^verbatim>\]\~\p\\<^verbatim>\)\ \\ \<^verbatim>\(\@{keyword_def "infixl"}~\<^verbatim>\"\\sy\\<^verbatim>\"\ \p\\<^verbatim>\)\ & \\\ & \<^verbatim>\("(_\~\sy\\<^verbatim>\/ _)" [\\p\\<^verbatim>\,\~\p + 1\\<^verbatim>\]\~\p\\<^verbatim>\)\ \\ \<^verbatim>\(\@{keyword_def "infixr"}~\<^verbatim>\"\\sy\\<^verbatim>\"\~\p\\<^verbatim>\)\ & \\\ & \<^verbatim>\("(_\~\sy\\<^verbatim>\/ _)" [\\p + 1\\<^verbatim>\,\~\p\\<^verbatim>\]\~\p\\<^verbatim>\)\ \\ \end{tabular} \end{center} The mixfix template \<^verbatim>\"(_\~\sy\\<^verbatim>\/ _)"\ specifies two argument positions; the delimiter is preceded by a space and followed by a space or line break; the entire phrase is a pretty printing block. The alternative notation \<^verbatim>\(\\sy\\<^verbatim>\)\ is introduced in addition. Thus any infix operator may be written in prefix form (as in Haskell), independently of the number of arguments. \ subsection \Binders\ text \ A \<^emph>\binder\ is a variable-binding construct such as a quantifier. The idea to formalize \\x. b\ as \All (\x. b)\ for \All :: ('a \ bool) \ bool\ already goes back to @{cite church40}. Isabelle declarations of certain higher-order operators may be annotated with @{keyword_def "binder"} annotations as follows: \begin{center} \c ::\~\<^verbatim>\"\\(\\<^sub>1 \ \\<^sub>2) \ \\<^sub>3\\<^verbatim>\" (\@{keyword "binder"}~\<^verbatim>\"\\sy\\<^verbatim>\" [\\p\\<^verbatim>\]\~\q\\<^verbatim>\)\ \end{center} This introduces concrete binder syntax \sy x. b\, where \x\ is a bound variable of type \\\<^sub>1\, the body \b\ has type \\\<^sub>2\ and the whole term has type \\\<^sub>3\. The optional integer \p\ specifies the syntactic priority of the body; the default is \q\, which is also the priority of the whole construct. Internally, the binder syntax is expanded to something like this: \begin{center} \c_binder ::\~\<^verbatim>\"\\idts \ \\<^sub>2 \ \\<^sub>3\\<^verbatim>\" ("(3\\sy\\<^verbatim>\_./ _)" [0,\~\p\\<^verbatim>\]\~\q\\<^verbatim>\)\ \end{center} Here @{syntax (inner) idts} is the nonterminal symbol for a list of identifiers with optional type constraints (see also \secref{sec:pure-grammar}). The mixfix template \<^verbatim>\"(3\\sy\\<^verbatim>\_./ _)"\ defines argument positions for the bound identifiers and the body, separated by a dot with optional line break; the entire phrase is a pretty printing block of indentation level 3. Note that there is no extra space after \sy\, so it needs to be included in the user specification if the binder syntax ends with a token that may be continued by an identifier token at the start of @{syntax (inner) idts}. Furthermore, a syntax translation transforms \c_binder x\<^sub>1 \ x\<^sub>n b\ into iterated application \c (\x\<^sub>1. \ c (\x\<^sub>n. b)\)\. This works in both directions, for parsing and printing.
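\<^medskip> For example, the following hypothetical declaration (the constant \<^verbatim>\<open>SUPREMUM\<close> is invented for illustration) introduces binder notation \<^verbatim>\<open>SUP x. f x\<close> for an operator of the required type scheme; note the trailing space within \<^verbatim>\<open>"SUP "\<close>, which is needed according to the above remark, because the token \<^verbatim>\<open>SUP\<close> could otherwise be continued by the first bound identifier: @{verbatim [display] \<open>consts SUPREMUM :: "('a => 'b) => 'b"  (binder "SUP " 10)\<close>}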
\ section \Explicit notation \label{sec:notation}\ text \ \begin{matharray}{rcll} @{command_def "type_notation"} & : & \local_theory \ local_theory\ \\ @{command_def "no_type_notation"} & : & \local_theory \ local_theory\ \\ @{command_def "notation"} & : & \local_theory \ local_theory\ \\ @{command_def "no_notation"} & : & \local_theory \ local_theory\ \\ @{command_def "write"} & : & \proof(state) \ proof(state)\ \\ \end{matharray} Commands that introduce new logical entities (terms or types) usually allow to provide mixfix annotations on the spot, which is convenient for default notation. Nonetheless, the syntax may be modified later on by declarations for explicit notation. This allows to add or delete mixfix annotations for existing logical entities within the current context. \<^rail>\ (@@{command type_notation} | @@{command no_type_notation}) @{syntax mode}? \ (@{syntax name} @{syntax mixfix} + @'and') ; (@@{command notation} | @@{command no_notation}) @{syntax mode}? \ (@{syntax name} @{syntax mixfix} + @'and') ; @@{command write} @{syntax mode}? (@{syntax name} @{syntax mixfix} + @'and') \ \<^descr> @{command "type_notation"}~\c (mx)\ associates mixfix syntax with an existing type constructor. The arity of the constructor is retrieved from the context. \<^descr> @{command "no_type_notation"} is similar to @{command "type_notation"}, but removes the specified syntax annotation from the present context. \<^descr> @{command "notation"}~\c (mx)\ associates mixfix syntax with an existing constant or fixed variable. The type declaration of the given entity is retrieved from the context. \<^descr> @{command "no_notation"} is similar to @{command "notation"}, but removes the specified syntax annotation from the present context. \<^descr> @{command "write"} is similar to @{command "notation"}, but works within an Isar proof body. \ section \The Pure syntax \label{sec:pure-syntax}\ subsection \Lexical matters \label{sec:inner-lex}\ text \ The inner lexical syntax vaguely resembles the outer one (\secref{sec:outer-lex}), but some details are different. There are two main categories of inner syntax tokens: \<^enum> \<^emph>\delimiters\ --- the literal tokens occurring in productions of the given priority grammar (cf.\ \secref{sec:priority-grammar}); \<^enum> \<^emph>\named tokens\ --- various categories of identifiers etc. Delimiters override named tokens and may thus render certain identifiers inaccessible. Sometimes the logical context admits alternative ways to refer to the same entity, potentially via qualified names. \<^medskip> The categories for named tokens are defined once and for all as follows, reusing some categories of the outer token syntax (\secref{sec:outer-lex}).
\begin{center} \begin{supertabular}{rcl} @{syntax_def (inner) id} & = & @{syntax_ref short_ident} \\ @{syntax_def (inner) longid} & = & @{syntax_ref long_ident} \\ @{syntax_def (inner) var} & = & @{syntax_ref var} \\ @{syntax_def (inner) tid} & = & @{syntax_ref type_ident} \\ @{syntax_def (inner) tvar} & = & @{syntax_ref type_var} \\ @{syntax_def (inner) num_token} & = & @{syntax_ref nat} \\ @{syntax_def (inner) float_token} & = & @{syntax_ref nat}\<^verbatim>\.\@{syntax_ref nat} \\ @{syntax_def (inner) str_token} & = & \<^verbatim>\''\ \\\ \<^verbatim>\''\ \\ @{syntax_def (inner) string_token} & = & \<^verbatim>\"\ \\\ \<^verbatim>\"\ \\ @{syntax_def (inner) cartouche} & = & \<^verbatim>\\\ \\\ \<^verbatim>\\\ \\ \end{supertabular} \end{center} The token categories @{syntax (inner) num_token}, @{syntax (inner) float_token}, @{syntax (inner) str_token}, @{syntax (inner) string_token}, and @{syntax (inner) cartouche} are not used in Pure. Object-logics may implement numerals and string literals by adding appropriate syntax declarations, together with some translation functions (e.g.\ see \<^file>\~~/src/HOL/Tools/string_syntax.ML\). The derived categories @{syntax_def (inner) num_const} and @{syntax_def (inner) float_const} provide robust access to the respective tokens: the syntax tree holds a syntactic constant instead of a free variable. Formal document comments (\secref{sec:comments}) may also be used within the inner syntax. \ subsection \Priority grammars \label{sec:priority-grammar}\ text \ A context-free grammar consists of a set of \<^emph>\terminal symbols\, a set of \<^emph>\nonterminal symbols\ and a set of \<^emph>\productions\. Productions have the form \A = \\, where \A\ is a nonterminal and \\\ is a string of terminals and nonterminals. One designated nonterminal is called the \<^emph>\root symbol\. The language defined by the grammar consists of all strings of terminals that can be derived from the root symbol by applying productions as rewrite rules. The standard Isabelle parser for inner syntax uses a \<^emph>\priority grammar\. Each nonterminal is decorated by an integer priority: \A\<^sup>(\<^sup>p\<^sup>)\. In a derivation, \A\<^sup>(\<^sup>p\<^sup>)\ may be rewritten using a production \A\<^sup>(\<^sup>q\<^sup>) = \\ only if \p \ q\. Any priority grammar can be translated into a normal context-free grammar by introducing new nonterminals and productions. \<^medskip> Formally, a set of context-free productions \G\ induces a derivation relation \\\<^sub>G\ as follows. Let \\\ and \\\ denote strings of terminal or nonterminal symbols. Then \\ A\<^sup>(\<^sup>p\<^sup>) \ \\<^sub>G \ \ \\ holds if and only if \G\ contains some production \A\<^sup>(\<^sup>q\<^sup>) = \\ for \p \ q\. \<^medskip> The following grammar for arithmetic expressions demonstrates how binding power and associativity of operators can be enforced by priorities.
\begin{center} \begin{tabular}{rclr} \A\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>)\ & \=\ & \<^verbatim>\(\ \A\<^sup>(\<^sup>0\<^sup>)\ \<^verbatim>\)\ \\ \A\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>)\ & \=\ & \<^verbatim>\0\ \\ \A\<^sup>(\<^sup>0\<^sup>)\ & \=\ & \A\<^sup>(\<^sup>0\<^sup>)\ \<^verbatim>\+\ \A\<^sup>(\<^sup>1\<^sup>)\ \\ \A\<^sup>(\<^sup>2\<^sup>)\ & \=\ & \A\<^sup>(\<^sup>3\<^sup>)\ \<^verbatim>\*\ \A\<^sup>(\<^sup>2\<^sup>)\ \\ \A\<^sup>(\<^sup>3\<^sup>)\ & \=\ & \<^verbatim>\-\ \A\<^sup>(\<^sup>3\<^sup>)\ \\ \end{tabular} \end{center} The choice of priorities determines that \<^verbatim>\-\ binds tighter than \<^verbatim>\*\, which binds tighter than \<^verbatim>\+\. Furthermore \<^verbatim>\+\ associates to the left and \<^verbatim>\*\ to the right. \<^medskip> For clarity, grammars obey these conventions: \<^item> All priorities must lie between 0 and 1000. \<^item> Priority 0 on the right-hand side and priority 1000 on the left-hand side may be omitted. \<^item> The production \A\<^sup>(\<^sup>p\<^sup>) = \\ is written as \A = \ (p)\, i.e.\ the priority of the left-hand side actually appears in a column on the far right. \<^item> Alternatives are separated by \|\. \<^item> Repetition is indicated by dots \(\)\ in an informal but obvious way. Using these conventions, the example grammar specification above takes the form: \begin{center} \begin{tabular}{rclc} \A\ & \=\ & \<^verbatim>\(\ \A\ \<^verbatim>\)\ \\ & \|\ & \<^verbatim>\0\ & \qquad\qquad \\ & \|\ & \A\ \<^verbatim>\+\ \A\<^sup>(\<^sup>1\<^sup>)\ & \(0)\ \\ & \|\ & \A\<^sup>(\<^sup>3\<^sup>)\ \<^verbatim>\*\ \A\<^sup>(\<^sup>2\<^sup>)\ & \(2)\ \\ & \|\ & \<^verbatim>\-\ \A\<^sup>(\<^sup>3\<^sup>)\ & \(3)\ \\ \end{tabular} \end{center} \ subsection \The Pure grammar \label{sec:pure-grammar}\ text \ The priority grammar of the \Pure\ theory is defined approximately like this: \begin{center} \begin{supertabular}{rclr} @{syntax_def (inner) any} & = & \prop | logic\ \\\\ @{syntax_def (inner) prop} & = & \<^verbatim>\(\ \prop\ \<^verbatim>\)\ \\ & \|\ & \prop\<^sup>(\<^sup>4\<^sup>)\ \<^verbatim>\::\ \type\ & \(3)\ \\ & \|\ & \any\<^sup>(\<^sup>3\<^sup>)\ \<^verbatim>\==\ \any\<^sup>(\<^sup>3\<^sup>)\ & \(2)\ \\ & \|\ & \any\<^sup>(\<^sup>3\<^sup>)\ \\\ \any\<^sup>(\<^sup>3\<^sup>)\ & \(2)\ \\ & \|\ & \prop\<^sup>(\<^sup>3\<^sup>)\ \<^verbatim>\&&&\ \prop\<^sup>(\<^sup>2\<^sup>)\ & \(2)\ \\ & \|\ & \prop\<^sup>(\<^sup>2\<^sup>)\ \<^verbatim>\==>\ \prop\<^sup>(\<^sup>1\<^sup>)\ & \(1)\ \\ & \|\ & \prop\<^sup>(\<^sup>2\<^sup>)\ \\\ \prop\<^sup>(\<^sup>1\<^sup>)\ & \(1)\ \\ & \|\ & \<^verbatim>\[|\ \prop\ \<^verbatim>\;\ \\\ \<^verbatim>\;\ \prop\ \<^verbatim>\|]\ \<^verbatim>\==>\ \prop\<^sup>(\<^sup>1\<^sup>)\ & \(1)\ \\ & \|\ & \\\ \prop\ \<^verbatim>\;\ \\\ \<^verbatim>\;\ \prop\ \\\ \\\ \prop\<^sup>(\<^sup>1\<^sup>)\ & \(1)\ \\ & \|\ & \<^verbatim>\!!\ \idts\ \<^verbatim>\.\ \prop\ & \(0)\ \\ & \|\ & \\\ \idts\ \<^verbatim>\.\ \prop\ & \(0)\ \\ & \|\ & \<^verbatim>\OFCLASS\ \<^verbatim>\(\ \type\ \<^verbatim>\,\ \logic\ \<^verbatim>\)\ \\ & \|\ & \<^verbatim>\SORT_CONSTRAINT\ \<^verbatim>\(\ \type\ \<^verbatim>\)\ \\ & \|\ & \<^verbatim>\TERM\ \logic\ \\ & \|\ & \<^verbatim>\PROP\ \aprop\ \\\\ @{syntax_def (inner) aprop} & = & \<^verbatim>\(\ \aprop\ \<^verbatim>\)\ \\ & \|\ & \id | longid | var |\~~\<^verbatim>\_\~~\|\~~\<^verbatim>\...\ \\ & \|\ & \<^verbatim>\CONST\ \id |\~~\<^verbatim>\CONST\ \longid\ \\ & \|\ & \<^verbatim>\XCONST\ \id |\~~\<^verbatim>\XCONST\ \longid\ \\ & \|\ & 
\logic\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>) any\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>) \ any\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>)\ & \(999)\ \\\\ @{syntax_def (inner) logic} & = & \<^verbatim>\(\ \logic\ \<^verbatim>\)\ \\ & \|\ & \logic\<^sup>(\<^sup>4\<^sup>)\ \<^verbatim>\::\ \type\ & \(3)\ \\ & \|\ & \id | longid | var |\~~\<^verbatim>\_\~~\|\~~\<^verbatim>\...\ \\ & \|\ & \<^verbatim>\CONST\ \id |\~~\<^verbatim>\CONST\ \longid\ \\ & \|\ & \<^verbatim>\XCONST\ \id |\~~\<^verbatim>\XCONST\ \longid\ \\ & \|\ & \logic\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>) any\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>) \ any\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>)\ & \(999)\ \\ & \|\ & \<^verbatim>\%\ \pttrns\ \<^verbatim>\.\ \any\<^sup>(\<^sup>3\<^sup>)\ & \(3)\ \\ & \|\ & \\\ \pttrns\ \<^verbatim>\.\ \any\<^sup>(\<^sup>3\<^sup>)\ & \(3)\ \\ & \|\ & \<^verbatim>\(==)\~~\|\~~\<^verbatim>\(\\\\\<^verbatim>\)\~~\|\~~\<^verbatim>\(&&&)\ \\ & \|\ & \<^verbatim>\(==>)\~~\|\~~\<^verbatim>\(\\\\\<^verbatim>\)\ \\ & \|\ & \<^verbatim>\TYPE\ \<^verbatim>\(\ \type\ \<^verbatim>\)\ \\\\ @{syntax_def (inner) idt} & = & \<^verbatim>\(\ \idt\ \<^verbatim>\)\~~\| id |\~~\<^verbatim>\_\ \\ & \|\ & \id\ \<^verbatim>\::\ \type\ & \(0)\ \\ & \|\ & \<^verbatim>\_\ \<^verbatim>\::\ \type\ & \(0)\ \\\\ @{syntax_def (inner) index} & = & \<^verbatim>\\<^bsub>\ \logic\<^sup>(\<^sup>0\<^sup>)\ \<^verbatim>\\<^esub>\~~\| | \\ \\\\ @{syntax_def (inner) idts} & = & \idt | idt\<^sup>(\<^sup>1\<^sup>) idts\ & \(0)\ \\\\ @{syntax_def (inner) pttrn} & = & \idt\ \\\\ @{syntax_def (inner) pttrns} & = & \pttrn | pttrn\<^sup>(\<^sup>1\<^sup>) pttrns\ & \(0)\ \\\\ @{syntax_def (inner) type} & = & \<^verbatim>\(\ \type\ \<^verbatim>\)\ \\ & \|\ & \tid | tvar |\~~\<^verbatim>\_\ \\ & \|\ & \tid\ \<^verbatim>\::\ \sort | tvar\~~\<^verbatim>\::\ \sort |\~~\<^verbatim>\_\ \<^verbatim>\::\ \sort\ \\ & \|\ & \type_name | type\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>) type_name\ \\ & \|\ & \<^verbatim>\(\ \type\ \<^verbatim>\,\ \\\ \<^verbatim>\,\ \type\ \<^verbatim>\)\ \type_name\ \\ & \|\ & \type\<^sup>(\<^sup>1\<^sup>)\ \<^verbatim>\=>\ \type\ & \(0)\ \\ & \|\ & \type\<^sup>(\<^sup>1\<^sup>)\ \\\ \type\ & \(0)\ \\ & \|\ & \<^verbatim>\[\ \type\ \<^verbatim>\,\ \\\ \<^verbatim>\,\ \type\ \<^verbatim>\]\ \<^verbatim>\=>\ \type\ & \(0)\ \\ & \|\ & \<^verbatim>\[\ \type\ \<^verbatim>\,\ \\\ \<^verbatim>\,\ \type\ \<^verbatim>\]\ \\\ \type\ & \(0)\ \\ @{syntax_def (inner) type_name} & = & \id | longid\ \\\\ @{syntax_def (inner) sort} & = & @{syntax class_name}~~\|\~~\<^verbatim>\_\~~\|\~~\<^verbatim>\{}\ \\ & \|\ & \<^verbatim>\{\ @{syntax class_name} \<^verbatim>\,\ \\\ \<^verbatim>\,\ @{syntax class_name} \<^verbatim>\}\ \\ @{syntax_def (inner) class_name} & = & \id | longid\ \\ \end{supertabular} \end{center} \<^medskip> Here literal terminals are printed \<^verbatim>\verbatim\; see also \secref{sec:inner-lex} for further token categories of the inner syntax. The meaning of the nonterminals defined by the above grammar is as follows: \<^descr> @{syntax_ref (inner) any} denotes any term. \<^descr> @{syntax_ref (inner) prop} denotes meta-level propositions, which are terms of type \<^typ>\prop\. The syntax of such formulae of the meta-logic is carefully distinguished from usual conventions for object-logics. In particular, plain \\\-term notation is \<^emph>\not\ recognized as @{syntax (inner) prop}. 
\<^descr> @{syntax_ref (inner) aprop} denotes atomic propositions, which are embedded into regular @{syntax (inner) prop} by means of an explicit \<^verbatim>\PROP\ token. Terms of type \<^typ>\prop\ with non-constant head, e.g.\ a plain variable, are printed in this form. Constants that yield type \<^typ>\prop\ are expected to provide their own concrete syntax; otherwise the printed version will appear like @{syntax (inner) logic} and cannot be parsed again as @{syntax (inner) prop}. \<^descr> @{syntax_ref (inner) logic} denotes arbitrary terms of a logical type, excluding type \<^typ>\prop\. This is the main syntactic category of object-logic entities, covering plain \\\-term notation (variables, abstraction, application), plus anything defined by the user. When specifying notation for logical entities, all logical types (excluding \<^typ>\prop\) are \<^emph>\collapsed\ to this single category of @{syntax (inner) logic}. \<^descr> @{syntax_ref (inner) index} denotes an optional index term for indexed syntax. If omitted, it refers to the first @{keyword_ref "structure"} variable in the context. The special dummy ``\\\'' serves as pattern variable in mixfix annotations that introduce indexed notation. \<^descr> @{syntax_ref (inner) idt} denotes identifiers, possibly constrained by types. \<^descr> @{syntax_ref (inner) idts} denotes a sequence of @{syntax_ref (inner) idt}. This is the most basic category for variables in iterated binders, such as \\\ or \\\. \<^descr> @{syntax_ref (inner) pttrn} and @{syntax_ref (inner) pttrns} denote patterns for abstraction, cases bindings etc. In Pure, these categories start as mere copies of @{syntax (inner) idt} and @{syntax (inner) idts}, respectively. Object-logics may add additional productions for binding forms. \<^descr> @{syntax_ref (inner) type} denotes types of the meta-logic. \<^descr> @{syntax_ref (inner) sort} denotes meta-level sorts. Here are some further explanations of certain syntax features. \<^item> In @{syntax (inner) idts}, note that \x :: nat y\ is parsed as \x :: (nat y)\, treating \y\ like a type constructor applied to \nat\. To avoid this interpretation, write \(x :: nat) y\ with explicit parentheses. \<^item> Similarly, \x :: nat y :: nat\ is parsed as \x :: (nat y :: nat)\. The correct form is \(x :: nat) (y :: nat)\, or \(x :: nat) y :: nat\ if \y\ is last in the sequence of identifiers. \<^item> Type constraints for terms bind very weakly. For example, \x < y :: nat\ is normally parsed as \(x < y) :: nat\, unless \<\ has a very low priority, in which case the input is likely to be ambiguous. The correct form is \x < (y :: nat)\. \<^item> Dummy variables (written as underscore) may occur in different roles. \<^descr> A sort ``\_\'' refers to a vacuous constraint for type variables, which is effectively ignored in type-inference. \<^descr> A type ``\_\'' or ``\_ :: sort\'' acts like an anonymous inference parameter, which is filled-in according to the most general type produced by the type-checking phase. \<^descr> A bound ``\_\'' refers to a vacuous abstraction, where the body does not refer to the binding introduced here, as in the term \<^term>\\x _. x\, which is \\\-equivalent to \\x y. x\. \<^descr> A free ``\_\'' refers to an implicit outer binding. Higher-level definitional packages usually allow forms like \f x _ = x\. \<^descr> A schematic ``\_\'' (within a term pattern, see \secref{sec:term-decls}) refers to an anonymous variable that is implicitly abstracted over its context of locally bound variables.
For example, this allows pattern matching of \{x. f x = g x}\ against \{x. _ = _}\, or even \{_. _ = _}\ by using both bound and schematic dummies. \<^descr> The three literal dots ``\<^verbatim>\...\'' may also be written as the ellipsis symbol \<^verbatim>\\\. In both cases this refers to a special schematic variable, which is bound in the context. This special term abbreviation works nicely with calculational reasoning (\secref{sec:calculation}). \<^descr> \<^verbatim>\CONST\ ensures that the given identifier is treated as a constant term, and passed through the parse tree in fully internalized form. This is particularly relevant for translation rules (\secref{sec:syn-trans}), notably on the RHS. \<^descr> \<^verbatim>\XCONST\ is similar to \<^verbatim>\CONST\, but retains the constant name as given. This is only relevant to translation rules (\secref{sec:syn-trans}), notably on the LHS. \ subsection \Inspecting the syntax\ text \ \begin{matharray}{rcl} @{command_def "print_syntax"}\\<^sup>*\ & : & \context \\ \\ \end{matharray} \<^descr> @{command "print_syntax"} prints the inner syntax of the current context. The output can be quite large; the most important sections are explained below. \<^descr> \lexicon\ lists the delimiters of the inner token language; see \secref{sec:inner-lex}. \<^descr> \productions\ lists the productions of the underlying priority grammar; see \secref{sec:priority-grammar}. Many productions have an extra \\ \<^bold>\ name\. These names later become the heads of parse trees; they also guide the pretty printer. Productions without such parse tree names are called \<^emph>\copy productions\. Their right-hand side must have exactly one nonterminal symbol (or named token). The parser does not create a new parse tree node for copy productions, but simply returns the parse tree of the right-hand symbol. If the right-hand side of a copy production consists of a single nonterminal without any delimiters, then it is called a \<^emph>\chain production\. Chain productions act as abbreviations: conceptually, they are removed from the grammar by adding new productions. Priority information attached to chain productions is ignored. \<^descr> \print modes\ lists the alternative print modes provided by this grammar; see \secref{sec:print-modes}. \<^descr> \parse_rules\ and \print_rules\ relate to syntax translations (macros); see \secref{sec:syn-trans}. \<^descr> \parse_ast_translation\ and \print_ast_translation\ list sets of constants that invoke translation functions for abstract syntax trees, which are only required in very special situations; see \secref{sec:tr-funs}. \<^descr> \parse_translation\ and \print_translation\ list the sets of constants that invoke regular translation functions; see \secref{sec:tr-funs}. \ subsection \Ambiguity of parsed expressions\ text \ \begin{tabular}{rcll} @{attribute_def syntax_ambiguity_warning} & : & \attribute\ & default \true\ \\ @{attribute_def syntax_ambiguity_limit} & : & \attribute\ & default \10\ \\ \end{tabular} Depending on the grammar and the given input, parsing may be ambiguous. Isabelle lets the Earley parser enumerate all possible parse trees, and then tries to make the best out of the situation. Terms that cannot be type-checked are filtered out, which often leads to a unique result in the end. Unlike regular type reconstruction, which is applied to the whole collection of input terms simultaneously, the filtering stage only treats each given term in isolation.
Filtering is also not attempted for individual types or raw ASTs (as required for @{command translations}). Certain warning or error messages are printed, depending on the situation and the given configuration options. Parsing ultimately fails if multiple results remain after the filtering phase. \<^descr> @{attribute syntax_ambiguity_warning} controls output of explicit warning messages about syntax ambiguity. \<^descr> @{attribute syntax_ambiguity_limit} determines the number of resulting parse trees that are shown as part of the printed message in case of an ambiguity. \ section \Syntax transformations \label{sec:syntax-transformations}\ text \ The inner syntax engine of Isabelle provides separate mechanisms to transform parse trees either via rewrite systems on first-order ASTs (\secref{sec:syn-trans}), or ML functions on ASTs or syntactic \\\-terms (\secref{sec:tr-funs}). This works both for parsing and printing, as outlined in \figref{fig:parse-print}. \begin{figure}[htbp] \begin{center} \begin{tabular}{cl} string & \\ \\\ & lexer + parser \\ parse tree & \\ \\\ & parse AST translation \\ AST & \\ \\\ & AST rewriting (macros) \\ AST & \\ \\\ & parse translation \\ --- pre-term --- & \\ \\\ & print translation \\ AST & \\ \\\ & AST rewriting (macros) \\ AST & \\ \\\ & print AST translation \\ string & \end{tabular} \end{center} \caption{Parsing and printing with translations}\label{fig:parse-print} \end{figure} These intermediate syntax tree formats eventually lead to a pre-term with all names and binding scopes resolved, but most type information still missing. Explicit type constraints might be given by the user, or implicit position information by the system --- both need to be passed through carefully by syntax transformations. Pre-terms are further processed by the so-called \<^emph>\check\ and \<^emph>\uncheck\ phases that are intertwined with type-inference (see also @{cite "isabelle-implementation"}). The latter allows to operate on higher-order abstract syntax with proper binding and type information already available. As a rule of thumb, anything that manipulates bindings of variables or constants needs to be implemented as syntax transformation (see below). Anything else is better done via check/uncheck: a prominent example application is the @{command abbreviation} concept of Isabelle/Pure. \ subsection \Abstract syntax trees \label{sec:ast}\ text \ The ML datatype \<^ML_type>\Ast.ast\ explicitly represents the intermediate AST format that is used for syntax rewriting (\secref{sec:syn-trans}). It is defined in ML as follows: @{verbatim [display] \datatype ast = Constant of string | Variable of string | Appl of ast list\} An AST is either an atom (constant or variable) or a list of (at least two) subtrees. Occasional diagnostic output of ASTs uses notation that resembles the S-expressions of LISP. Constant atoms are shown as quoted strings, variable atoms as non-quoted strings and applications as a parenthesized list of subtrees. For example, the AST @{ML [display] \Ast.Appl [Ast.Constant "_abs", Ast.Variable "x", Ast.Variable "t"]\} is pretty-printed as \<^verbatim>\("_abs" x t)\. Note that \<^verbatim>\()\ and \<^verbatim>\(x)\ are excluded as ASTs, because they have too few subtrees. \<^medskip> AST application is merely a pro-forma mechanism to indicate certain syntactic structures. Thus \<^verbatim>\(c a b)\ could mean either term application or type application, depending on the syntactic context.
Nested application like \<^verbatim>\(("_abs" x t) u)\ is also possible, but ASTs are definitely first-order: the syntax constant \<^verbatim>\"_abs"\ does not bind the \<^verbatim>\x\ in any way. Proper bindings are introduced in later stages of the term syntax, where \<^verbatim>\("_abs" x t)\ becomes an \<^ML>\Abs\ node and occurrences of \<^verbatim>\x\ in \<^verbatim>\t\ are replaced by bound variables (represented as de-Bruijn indices). \ subsubsection \AST constants versus variables\ text \ Depending on the situation --- input syntax, output syntax, translation patterns --- the distinction of atomic ASTs as \<^ML>\Ast.Constant\ versus \<^ML>\Ast.Variable\ serves slightly different purposes. Input syntax of a term such as \f a b = c\ does not yet indicate the scopes of atomic entities \f, a, b, c\: they could be global constants or local variables, even bound ones depending on the context of the term. \<^ML>\Ast.Variable\ leaves this choice still open: later syntax layers (or translation functions) may capture such a variable to determine its role specifically, to make it a constant, bound variable, free variable etc. In contrast, syntax translations that introduce already known constants would rather do it via \<^ML>\Ast.Constant\ to prevent accidental re-interpretation later on. Output syntax turns term constants into \<^ML>\Ast.Constant\ and variables (free or schematic) into \<^ML>\Ast.Variable\. This information is precise when printing fully formal \\\-terms. \<^medskip> AST translation patterns (\secref{sec:syn-trans}) that represent terms cannot distinguish constants and variables syntactically. Explicit indication of \CONST c\ inside the term language is required, unless \c\ is known as a special \<^emph>\syntax constant\ (see also @{command syntax}). It is also possible to use @{command syntax} declarations (without mixfix annotation) to enforce that certain unqualified names are always treated as constant within the syntax machinery. The situation is simpler for ASTs that represent types or sorts, since the concrete syntax already distinguishes type variables from type constants (constructors). So \('a, 'b) foo\ corresponds to an AST application of some constant for \foo\ and variable arguments for \'a\ and \'b\. Note that the postfix application is merely a feature of the concrete syntax, while in the AST the constructor occurs in head position. \ subsubsection \Authentic syntax names\ text \ Naming constant entities within ASTs is another delicate issue. Unqualified names are resolved in the name space tables in the last stage of parsing, after all translations have been applied. Since syntax transformations do not know about this later name resolution, there can be surprises in boundary cases. \<^emph>\Authentic syntax names\ for \<^ML>\Ast.Constant\ avoid this problem: the fully-qualified constant name with a special prefix for its formal category (\class\, \type\, \const\, \fixed\) represents the information faithfully within the untyped AST format. Accidental overlap with free or bound variables is excluded as well. Authentic syntax names work implicitly in the following situations: \<^item> Input of term constants (or fixed variables) that are introduced by concrete syntax via @{command notation}: the correspondence of a particular grammar production to some known term entity is preserved. \<^item> Input of type constants (constructors) and type classes --- thanks to explicit syntactic distinction independently of the context.
\<^item> Output of term constants, type constants, type classes --- this information is already available from the internal term to be printed. In other words, syntax transformations that operate on input terms written as prefix applications are difficult to make robust. Luckily, this case rarely occurs in practice, because syntax forms to be translated usually correspond to some concrete notation. \ subsection \Raw syntax and translations \label{sec:syn-trans}\ text \ \begin{tabular}{rcll} @{command_def "nonterminal"} & : & \theory \ theory\ \\ @{command_def "syntax"} & : & \theory \ theory\ \\ @{command_def "no_syntax"} & : & \theory \ theory\ \\ @{command_def "translations"} & : & \theory \ theory\ \\ @{command_def "no_translations"} & : & \theory \ theory\ \\ @{attribute_def syntax_ast_trace} & : & \attribute\ & default \false\ \\ @{attribute_def syntax_ast_stats} & : & \attribute\ & default \false\ \\ \end{tabular} \<^medskip> Unlike mixfix notation for existing formal entities (\secref{sec:notation}), raw syntax declarations provide full access to the priority grammar of the inner syntax, without any sanity checks. This includes additional syntactic categories (via @{command nonterminal}) and free-form grammar productions (via @{command syntax}). Additional syntax translations (or macros, via @{command translations}) are required to turn resulting parse trees into proper representations of formal entities again. \<^rail>\ @@{command nonterminal} (@{syntax name} + @'and') ; (@@{command syntax} | @@{command no_syntax}) @{syntax mode}? (constdecl +) ; (@@{command translations} | @@{command no_translations}) (transpat ('==' | '=>' | '<=' | '\' | '\' | '\') transpat +) ; constdecl: @{syntax name} '::' @{syntax type} @{syntax mixfix}? ; mode: ('(' ( @{syntax name} | @'output' | @{syntax name} @'output' ) ')') ; transpat: ('(' @{syntax name} ')')? @{syntax string} \ \<^descr> @{command "nonterminal"}~\c\ declares a type constructor \c\ (without arguments) to act as a purely syntactic type: a nonterminal symbol of the inner syntax. \<^descr> @{command "syntax"}~\(mode) c :: \ (mx)\ augments the priority grammar and the pretty printer table for the given print mode (default \<^verbatim>\""\). An optional keyword @{keyword_ref "output"} means that only the pretty printer table is affected. Following \secref{sec:mixfix}, the mixfix annotation \mx = template ps q\ together with type \\ = \\<^sub>1 \ \ \\<^sub>n \ \\ specify a grammar production. The \template\ contains delimiter tokens that surround \n\ argument positions (\<^verbatim>\_\). The latter correspond to nonterminal symbols \A\<^sub>i\ derived from the argument types \\\<^sub>i\ as follows: \<^item> \prop\ if \\\<^sub>i = prop\ \<^item> \logic\ if \\\<^sub>i = (\)\\ for logical type constructor \\ \ prop\ \<^item> \any\ if \\\<^sub>i = \\ for type variables \<^item> \\\ if \\\<^sub>i = \\ for nonterminal \\\ (syntactic type constructor) Each \A\<^sub>i\ is decorated by priority \p\<^sub>i\ from the given list \ps\; missing priorities default to 0. The resulting nonterminal of the production is determined similarly from type \\\, with priority \q\ and default 1000. \<^medskip> Parsing via this production produces parse trees \t\<^sub>1, \, t\<^sub>n\ for the argument slots. The resulting parse tree is composed as \c t\<^sub>1 \ t\<^sub>n\, by using the syntax constant \c\ of the syntax declaration. Such syntactic constants are invented on the spot, without formal check wrt.\ existing declarations.
It is conventional to use plain identifiers prefixed by a single underscore (e.g.\ \_foobar\). Names should be chosen with care, to avoid clashes with other syntax declarations. \<^medskip> The special case of a copy production is specified by \c =\~\<^verbatim>\""\ (empty string). It means that the resulting parse tree \t\ is copied directly, without any further decoration. \<^descr> @{command "no_syntax"}~\(mode) decls\ removes grammar declarations (and translations) resulting from \decls\, which are interpreted in the same manner as for @{command "syntax"} above. \<^descr> @{command "translations"}~\rules\ specifies syntactic translation rules (i.e.\ macros) as first-order rewrite rules on ASTs (\secref{sec:ast}). The theory context maintains two independent lists of translation rules: parse rules (\<^verbatim>\=>\ or \\\) and print rules (\<^verbatim>\<=\ or \\\). For convenience, both can be specified simultaneously as parse~/ print rules (\<^verbatim>\==\ or \\\). Translation patterns may be prefixed by the syntactic category to be used for parsing; the default is \logic\ which means that regular term syntax is used. Both sides of the syntax translation rule undergo parsing and parse AST translations (\secref{sec:tr-funs}), in order to perform some fundamental normalization like \\x y. b \ \x. \y. b\, but other AST translation rules are \<^emph>\not\ applied recursively here. When processing AST patterns, the inner syntax lexer runs in a different mode that allows identifiers to start with underscore. This accommodates the usual naming convention for auxiliary syntax constants --- those that do not have a logical counterpart --- by allowing to specify arbitrary AST applications within the term syntax, independently of the corresponding concrete syntax. Atomic ASTs are distinguished as \<^ML>\Ast.Constant\ versus \<^ML>\Ast.Variable\ as follows: a qualified name or syntax constant declared via @{command syntax}, or parse tree head of concrete notation becomes \<^ML>\Ast.Constant\, anything else \<^ML>\Ast.Variable\. Note that \CONST\ and \XCONST\ within the term language (\secref{sec:pure-grammar}) allow to enforce treatment as constants. AST rewrite rules \(lhs, rhs)\ need to obey the following side-conditions: \<^item> Rules must be left-linear: \lhs\ must not contain repeated variables.\<^footnote>\The deeper reason for this is that AST equality is not well-defined: different occurrences of the ``same'' AST could be decorated differently by accidental type-constraints or source position information, for example.\ \<^item> Every variable in \rhs\ must also occur in \lhs\. \<^descr> @{command "no_translations"}~\rules\ removes syntactic translation rules, which are interpreted in the same manner as for @{command "translations"} above. \<^descr> @{attribute syntax_ast_trace} and @{attribute syntax_ast_stats} control diagnostic output in the AST normalization process, when translation rules are applied to concrete input or output. Raw syntax and translations provide slightly more low-level access to the grammar and the form of resulting parse trees. It is often possible to avoid this untyped macro mechanism, and use type-safe @{command abbreviation} or @{command notation} instead. Some important situations where @{command syntax} and @{command translations} are really needed are as follows: \<^item> Iterated replacement via recursive @{command translations}. For example, consider list enumeration \<^term>\[a, b, c, d]\ as defined in theory \<^theory>\HOL.List\.
\<^item> Change of binding status of variables: anything beyond the built-in @{keyword "binder"} mixfix annotation requires explicit syntax translations. For example, consider the set comprehension syntax \<^term>\{x. P}\ as defined in theory \<^theory>\HOL.Set\. \ subsubsection \Applying translation rules\ text \ As a term is being parsed or printed, an AST is generated as an intermediate form according to \figref{fig:parse-print}. The AST is normalized by applying translation rules in the manner of a first-order term rewriting system. We first examine how a single rule is applied. Let \t\ be the abstract syntax tree to be normalized and \(lhs, rhs)\ some translation rule. A subtree \u\ of \t\ is called a \<^emph>\redex\ if it is an instance of \lhs\; in this case the pattern \lhs\ is said to match the object \u\. A redex matched by \lhs\ may be replaced by the corresponding instance of \rhs\, thus \<^emph>\rewriting\ the AST \t\. Matching requires some notion of \<^emph>\place-holders\ in rule patterns: \<^ML>\Ast.Variable\ serves this purpose. More precisely, the matching of the object \u\ against the pattern \lhs\ is performed as follows: \<^item> Objects of the form \<^ML>\Ast.Variable\~\x\ or \<^ML>\Ast.Constant\~\x\ are matched by pattern \<^ML>\Ast.Constant\~\x\. Thus all atomic ASTs in the object are treated as (potential) constants, and a successful match makes them actual constants even before name space resolution (see also \secref{sec:ast}). \<^item> Object \u\ is matched by pattern \<^ML>\Ast.Variable\~\x\, binding \x\ to \u\. \<^item> Object \<^ML>\Ast.Appl\~\us\ is matched by \<^ML>\Ast.Appl\~\ts\ if \us\ and \ts\ have the same length and each corresponding subtree matches. \<^item> In every other case, matching fails. A successful match yields a substitution that is applied to \rhs\, generating the instance that replaces \u\. Normalizing an AST involves repeatedly applying translation rules until none are applicable. This works yoyo-like: top-down, bottom-up, top-down, etc. At each subtree position, rules are chosen in order of appearance in the theory definitions. The configuration options @{attribute syntax_ast_trace} and @{attribute syntax_ast_stats} might help to understand this process and diagnose problems. \begin{warn} If syntax translation rules work incorrectly, the output of @{command_ref print_syntax} with its \<^emph>\rules\ sections reveals the actual internal forms of AST patterns, without potentially confusing concrete syntax. Recall that AST constants appear as quoted strings and variables without quotes. \end{warn} \begin{warn} If @{attribute_ref eta_contract} is set to \true\, terms will be \\\-contracted \<^emph>\before\ the AST rewriter sees them. Thus some abstraction nodes needed for print rules to match may vanish. For example, \Ball A (\x. P x)\ would contract to \Ball A P\ and the standard print rule would fail to apply. This problem can be avoided by hand-written ML translation functions (see also \secref{sec:tr-funs}), which is in fact the same mechanism used in built-in @{keyword "binder"} declarations.
\end{warn} \ subsection \Syntax translation functions \label{sec:tr-funs}\ text \ \begin{matharray}{rcl} @{command_def "parse_ast_translation"} & : & \theory \ theory\ \\ @{command_def "parse_translation"} & : & \theory \ theory\ \\ @{command_def "print_translation"} & : & \theory \ theory\ \\ @{command_def "typed_print_translation"} & : & \theory \ theory\ \\ @{command_def "print_ast_translation"} & : & \theory \ theory\ \\ @{ML_antiquotation_def "class_syntax"} & : & \ML antiquotation\ \\ @{ML_antiquotation_def "type_syntax"} & : & \ML antiquotation\ \\ @{ML_antiquotation_def "const_syntax"} & : & \ML antiquotation\ \\ @{ML_antiquotation_def "syntax_const"} & : & \ML antiquotation\ \\ \end{matharray} Syntax translation functions written in ML admit almost arbitrary manipulations of inner syntax, at the expense of some complexity and obscurity in the implementation. \<^rail>\ ( @@{command parse_ast_translation} | @@{command parse_translation} | @@{command print_translation} | @@{command typed_print_translation} | @@{command print_ast_translation}) @{syntax text} ; (@@{ML_antiquotation class_syntax} | @@{ML_antiquotation type_syntax} | @@{ML_antiquotation const_syntax} | @@{ML_antiquotation syntax_const}) embedded \ \<^descr> @{command parse_translation} etc. declare syntax translation functions to the theory. Each of these commands has a single @{syntax text} argument that refers to an ML expression of appropriate type as follows: \<^medskip> {\footnotesize \begin{tabular}{l} @{command parse_ast_translation} : \\ \quad \<^ML_type>\(string * (Proof.context -> Ast.ast list -> Ast.ast)) list\ \\ @{command parse_translation} : \\ \quad \<^ML_type>\(string * (Proof.context -> term list -> term)) list\ \\ @{command print_translation} : \\ \quad \<^ML_type>\(string * (Proof.context -> term list -> term)) list\ \\ @{command typed_print_translation} : \\ \quad \<^ML_type>\(string * (Proof.context -> typ -> term list -> term)) list\ \\ @{command print_ast_translation} : \\ \quad \<^ML_type>\(string * (Proof.context -> Ast.ast list -> Ast.ast)) list\ \\ \end{tabular}} \<^medskip> The argument list consists of \(c, tr)\ pairs, where \c\ is the syntax name of the formal entity involved, and \tr\ a function that translates a syntax form \c args\ into \tr ctxt args\ (depending on the context). The Isabelle/ML naming convention for parse translations is \c_tr\ and for print translations \c_tr'\. The @{command_ref print_syntax} command displays the sets of names associated with the translation functions of a theory under \parse_ast_translation\ etc. \<^descr> \@{class_syntax c}\, \@{type_syntax c}\, \@{const_syntax c}\ inline the authentic syntax name of the given formal entities into the ML source. This is the fully-qualified logical name prefixed by a special marker to indicate its kind: thus different logical name spaces are properly distinguished within parse trees. \<^descr> \@{syntax_const c}\ inlines the name \c\ of the given syntax constant, having checked that it has been declared via some @{command syntax} commands within the theory context. Note that the usual naming convention makes syntax constants start with underscore, to reduce the chance of accidental clashes with other names occurring in parse trees (unqualified constants etc.). \ subsubsection \The translation strategy\ text \ The different kinds of translation functions are invoked during the transformations between parse trees, ASTs and syntactic terms (cf.\ \figref{fig:parse-print}).
Whenever a combination of the form \c x\<^sub>1 \ x\<^sub>n\ is encountered, and a translation function \f\ of appropriate kind is declared for \c\, the result is produced by evaluation of \f [x\<^sub>1, \, x\<^sub>n]\ in ML. For AST translations, the arguments \x\<^sub>1, \, x\<^sub>n\ are ASTs. A combination has the form \<^ML>\Ast.Constant\~\c\ or \<^ML>\Ast.Appl\~\[\\<^ML>\Ast.Constant\~\c, x\<^sub>1, \, x\<^sub>n]\. For term translations, the arguments are terms and a combination has the form \<^ML>\Const\~\(c, \)\ or \<^ML>\Const\~\(c, \) $ x\<^sub>1 $ \ $ x\<^sub>n\. Terms allow more sophisticated transformations than ASTs do, typically involving abstractions and bound variables. \<^emph>\Typed\ print translations may even peek at the type \\\ of the constant they are invoked on, although some information might have been suppressed for term output already. Regardless of whether they act on ASTs or terms, translation functions called during the parsing process differ from those for printing in their overall behaviour: \<^descr>[Parse translations] are applied bottom-up. The arguments are already in translated form. The translations must not fail; exceptions trigger an error message. There may be at most one function associated with any syntactic name. \<^descr>[Print translations] are applied top-down. They are supplied with arguments that are partly still in internal form. The result again undergoes translation; therefore a print translation should not introduce as head the very constant that invoked it. The function may raise exception \<^ML>\Match\ to indicate failure; in this event it has no effect. Multiple functions associated with some syntactic name are tried in the order of declaration in the theory. Only constant atoms --- constructor \<^ML>\Ast.Constant\ for ASTs and \<^ML>\Const\ for terms --- can invoke translation functions. This means that parse translations can only be associated with parse tree heads of concrete syntax, or syntactic constants introduced via other translations. For plain identifiers within the term language, the status of constant versus variable is not yet known during parsing. This is in contrast to print translations, where constants are explicitly known from the given term in its fully internal form. \ subsection \Built-in syntax transformations\ text \ Here are some further details of the main syntax transformation phases of \figref{fig:parse-print}. \ subsubsection \Transforming parse trees to ASTs\ text \ The parse tree is the raw output of the parser. It is transformed into an AST according to some basic scheme that may be augmented by AST translation functions as explained in \secref{sec:tr-funs}. The parse tree is constructed by nesting the right-hand sides of the productions used to recognize the input. Such parse trees are simply lists of tokens and constituent parse trees, the latter representing the nonterminals of the productions. Ignoring AST translation functions, parse trees are transformed to ASTs by stripping out delimiters and copy productions, while retaining some source position information from input tokens. The Pure syntax provides predefined AST translations to make the basic \\\-term structure more apparent within the (first-order) AST representation, and thus facilitate the use of @{command translations} (see also \secref{sec:syn-trans}). This covers ordinary term application, type application, nested abstraction, iterated meta implications and function types.
The effect on some representative input strings is illustrated as follows:

\begin{center}
\begin{tabular}{ll}
input source & AST \\
\hline
\f x y z\ & \<^verbatim>\(f x y z)\ \\
\'a ty\ & \<^verbatim>\(ty 'a)\ \\
\('a, 'b)ty\ & \<^verbatim>\(ty 'a 'b)\ \\
\\x y z. t\ & \<^verbatim>\("_abs" x ("_abs" y ("_abs" z t)))\ \\
\\x :: 'a. t\ & \<^verbatim>\("_abs" ("_constrain" x 'a) t)\ \\
\\P; Q; R\ \ S\ & \<^verbatim>\("Pure.imp" P ("Pure.imp" Q ("Pure.imp" R S)))\ \\
\['a, 'b, 'c] \ 'd\ & \<^verbatim>\("fun" 'a ("fun" 'b ("fun" 'c 'd)))\ \\
\end{tabular}
\end{center}

Note that type and sort constraints may occur in further places --- translations need to be ready to cope with them. The built-in syntax transformation from parse trees to ASTs inserts additional constraints that represent source positions.
\

subsubsection \Transforming ASTs to terms\

text \
After application of macros (\secref{sec:syn-trans}), the AST is transformed into a term. This term still lacks proper type information, but it might contain some constraints consisting of applications with head \<^verbatim>\_constrain\, where the second argument is a type encoded as a pre-term within the syntax. Type inference later introduces correct types, or indicates type errors in the input.

Ignoring parse translations, ASTs are transformed to terms by mapping AST constants to term constants, AST variables to term variables or constants (according to the name space), and AST applications to iterated term applications. The outcome is still a first-order term. Proper abstractions and bound variables are introduced by parse translations associated with certain syntax constants. Thus \<^verbatim>\("_abs" x x)\ eventually becomes a de-Bruijn term \<^verbatim>\Abs ("x", _, Bound 0)\.
\

subsubsection \Printing of terms\

text \
The output phase is essentially the inverse of the input phase. Terms are translated via abstract syntax trees into pretty-printed text.

Ignoring print translations, the transformation maps term constants, variables and applications to the corresponding constructs on ASTs. Abstractions are mapped to applications of the special constant \<^verbatim>\_abs\ as seen before. Type constraints are represented via special \<^verbatim>\_constrain\ forms, according to various policies of type annotation determined elsewhere. Sort constraints of type variables are handled in a similar fashion.

After application of macros (\secref{sec:syn-trans}), the AST is finally pretty-printed. The built-in print AST translations reverse the corresponding parse AST translations.

\<^medskip>
For the actual printing process, the priority grammar (\secref{sec:priority-grammar}) plays a vital role: productions are used as templates for pretty printing, with argument slots stemming from nonterminals, and syntactic sugar stemming from literal tokens.

Each AST application with constant head \c\ and arguments \t\<^sub>1\, \dots, \t\<^sub>n\ (for \n = 0\ the AST is just the constant \c\ itself) is printed according to the first grammar production of result name \c\. The required syntax priority of the argument slot is given by its nonterminal \A\<^sup>(\<^sup>p\<^sup>)\. The argument \t\<^sub>i\ that corresponds to the position of \A\<^sup>(\<^sup>p\<^sup>)\ is printed recursively, and then put in parentheses \<^emph>\if\ its priority \p\ requires this. The resulting output is concatenated with the syntactic sugar according to the grammar production.
If an AST application \(c x\<^sub>1 \ x\<^sub>m)\ has more arguments than the corresponding production, it is first split into \((c x\<^sub>1 \ x\<^sub>n) x\<^sub>n\<^sub>+\<^sub>1 \ x\<^sub>m)\ and then printed recursively as above. Applications with too few arguments or with non-constant head or without a corresponding production are printed in prefix-form like \f t\<^sub>1 \ t\<^sub>n\ for terms. Multiple productions associated with some name \c\ are tried in order of appearance within the grammar. An occurrence of some AST variable \x\ is printed as \x\ outright. \<^medskip> White space is \<^emph>\not\ inserted automatically. If blanks (or breaks) are required to separate tokens, they need to be specified in the mixfix declaration (\secref{sec:mixfix}). \ end diff --git a/src/Doc/Isar_Ref/Outer_Syntax.thy b/src/Doc/Isar_Ref/Outer_Syntax.thy --- a/src/Doc/Isar_Ref/Outer_Syntax.thy +++ b/src/Doc/Isar_Ref/Outer_Syntax.thy @@ -1,603 +1,607 @@ (*:maxLineLen=78:*) theory Outer_Syntax imports Main Base begin chapter \Outer syntax --- the theory language \label{ch:outer-syntax}\ text \ The rather generic framework of Isabelle/Isar syntax emerges from three main syntactic categories: \<^emph>\commands\ of the top-level Isar engine (covering theory and proof elements), \<^emph>\methods\ for general goal refinements (analogous to traditional ``tactics''), and \<^emph>\attributes\ for operations on facts (within a certain context). Subsequently we give a reference of basic syntactic entities underlying Isabelle/Isar syntax in a bottom-up manner. Concrete theory and proof language elements will be introduced later on. \<^medskip> In order to get started with writing well-formed Isabelle/Isar documents, the most important aspect to be noted is the difference of \<^emph>\inner\ versus \<^emph>\outer\ syntax. Inner syntax is that of Isabelle types and terms of the logic, while outer syntax is that of Isabelle/Isar theory sources (specifications and proofs). As a general rule, inner syntax entities may occur only as \<^emph>\atomic entities\ within outer syntax. For example, the string \<^verbatim>\"x + y"\ and identifier \<^verbatim>\z\ are legal term specifications within a theory, while \<^verbatim>\x + y\ without quotes is not. Printed theory documents usually omit quotes to gain readability (this is a matter of {\LaTeX} macro setup, say via \<^verbatim>\\isabellestyle\, see also @{cite "isabelle-system"}). Experienced users of Isabelle/Isar may easily reconstruct the lost technical information, while mere readers need not care about quotes at all. \ section \Commands\ text \ \begin{matharray}{rcl} @{command_def "print_commands"}\\<^sup>*\ & : & \any \\ \\ @{command_def "help"}\\<^sup>*\ & : & \any \\ \\ \end{matharray} \<^rail>\ @@{command help} (@{syntax name} * ) \ \<^descr> @{command "print_commands"} prints all outer syntax keywords and commands. \<^descr> @{command "help"}~\pats\ retrieves outer syntax commands according to the specified name patterns. 
\ subsubsection \Examples\ text \ Some common diagnostic commands are retrieved like this (according to usual naming conventions): \ help "print" help "find" section \Lexical matters \label{sec:outer-lex}\ text \ The outer lexical syntax consists of three main categories of syntax tokens: \<^enum> \<^emph>\major keywords\ --- the command names that are available in the present logic session; \<^enum> \<^emph>\minor keywords\ --- additional literal tokens required by the syntax of commands; \<^enum> \<^emph>\named tokens\ --- various categories of identifiers etc. Major keywords and minor keywords are guaranteed to be disjoint. This helps user-interfaces to determine the overall structure of a theory text, without knowing the full details of command syntax. Internally, there is some additional information about the kind of major keywords, which approximates the command type (theory command, proof command etc.). Keywords override named tokens. For example, the presence of a command called \<^verbatim>\term\ inhibits the identifier \<^verbatim>\term\, but the string \<^verbatim>\"term"\ can be used instead. By convention, the outer syntax always allows quoted strings in addition to identifiers, wherever a named entity is expected. When tokenizing a given input sequence, the lexer repeatedly takes the longest prefix of the input that forms a valid token. Spaces, tabs, newlines and formfeeds between tokens serve as explicit separators. \<^medskip> The categories for named tokens are defined once and for all as follows. \begin{center} \begin{supertabular}{rcl} @{syntax_def short_ident} & = & \letter (subscript\<^sup>? quasiletter)\<^sup>*\ \\ @{syntax_def long_ident} & = & \short_ident(\\<^verbatim>\.\\short_ident)\<^sup>+\ \\ @{syntax_def sym_ident} & = & \sym\<^sup>+ |\~~\<^verbatim>\\\\<^verbatim>\<\\short_ident\\<^verbatim>\>\ \\ @{syntax_def nat} & = & \digit\<^sup>+\ \\ @{syntax_def float} & = & @{syntax_ref nat}\<^verbatim>\.\@{syntax_ref nat}~~\|\~~\<^verbatim>\-\@{syntax_ref nat}\<^verbatim>\.\@{syntax_ref nat} \\ @{syntax_def term_var} & = & \<^verbatim>\?\\short_ident |\~~\<^verbatim>\?\\short_ident\\<^verbatim>\.\\nat\ \\ @{syntax_def type_ident} & = & \<^verbatim>\'\\short_ident\ \\ @{syntax_def type_var} & = & \<^verbatim>\?\\type_ident |\~~\<^verbatim>\?\\type_ident\\<^verbatim>\.\\nat\ \\ @{syntax_def string} & = & \<^verbatim>\"\ \\\ \<^verbatim>\"\ \\ @{syntax_def altstring} & = & \<^verbatim>\`\ \\\ \<^verbatim>\`\ \\ @{syntax_def cartouche} & = & \<^verbatim>\\\ \\\ \<^verbatim>\\\ \\ @{syntax_def verbatim} & = & \<^verbatim>\{*\ \\\ \<^verbatim>\*}\ \\[1ex] \letter\ & = & \latin |\~~\<^verbatim>\\\\<^verbatim>\<\\latin\\<^verbatim>\>\~~\|\~~\<^verbatim>\\\\<^verbatim>\<\\latin latin\\<^verbatim>\>\~~\| greek |\ \\ \subscript\ & = & \<^verbatim>\\<^sub>\ \\ \quasiletter\ & = & \letter | digit |\~~\<^verbatim>\_\~~\|\~~\<^verbatim>\'\ \\ \latin\ & = & \<^verbatim>\a\~~\| \ |\~~\<^verbatim>\z\~~\|\~~\<^verbatim>\A\~~\| \ |\~~\<^verbatim>\Z\ \\ \digit\ & = & \<^verbatim>\0\~~\| \ |\~~\<^verbatim>\9\ \\ \sym\ & = & \<^verbatim>\!\~~\|\~~\<^verbatim>\#\~~\|\~~\<^verbatim>\$\~~\|\~~\<^verbatim>\%\~~\|\~~\<^verbatim>\&\~~\|\~~\<^verbatim>\*\~~\|\~~\<^verbatim>\+\~~\|\~~\<^verbatim>\-\~~\|\~~\<^verbatim>\/\~~\|\ \\ & & \<^verbatim>\<\~~\|\~~\<^verbatim>\=\~~\|\~~\<^verbatim>\>\~~\|\~~\<^verbatim>\?\~~\|\~~\<^verbatim>\@\~~\|\~~\<^verbatim>\^\~~\|\~~\<^verbatim>\_\~~\|\~~\<^verbatim>\|\~~\|\~~\<^verbatim>\~\ \\ \greek\ & = & 
\<^verbatim>\\\~~\|\~~\<^verbatim>\\\~~\|\~~\<^verbatim>\\\~~\|\~~\<^verbatim>\\\~~\|\ \\ & & \<^verbatim>\\\~~\|\~~\<^verbatim>\\\~~\|\~~\<^verbatim>\\\~~\|\~~\<^verbatim>\\\~~\|\ \\ & & \<^verbatim>\\\~~\|\~~\<^verbatim>\\\~~\|\~~\<^verbatim>\\\~~\|\~~\<^verbatim>\\\~~\|\ \\ & & \<^verbatim>\\\~~\|\~~\<^verbatim>\\\~~\|\~~\<^verbatim>\\\~~\|\~~\<^verbatim>\\\~~\|\~~\<^verbatim>\\\~~\|\ \\ & & \<^verbatim>\\\~~\|\~~\<^verbatim>\\\~~\|\~~\<^verbatim>\\\~~\|\~~\<^verbatim>\\\~~\|\ \\ & & \<^verbatim>\\\~~\|\~~\<^verbatim>\\\~~\|\~~\<^verbatim>\\\~~\|\~~\<^verbatim>\\\~~\|\ \\ & & \<^verbatim>\\\~~\|\~~\<^verbatim>\\\~~\|\~~\<^verbatim>\\\~~\|\~~\<^verbatim>\\\~~\|\ \\ & & \<^verbatim>\\\~~\|\~~\<^verbatim>\\\~~\|\~~\<^verbatim>\\\~~\|\~~\<^verbatim>\\\ \\ \end{supertabular} \end{center} A @{syntax_ref term_var} or @{syntax_ref type_var} describes an unknown, which is internally a pair of base name and index (ML type \<^ML_type>\indexname\). These components are either separated by a dot as in \?x.1\ or \?x7.3\ or run together as in \?x1\. The latter form is possible if the base name does not end with digits. If the index is 0, it may be dropped altogether: \?x\ and \?x0\ and \?x.0\ all refer to the same unknown, with basename \x\ and index 0. The syntax of @{syntax_ref string} admits any characters, including newlines; ``\<^verbatim>\"\'' (double-quote) and ``\<^verbatim>\\\'' (backslash) need to be escaped by a backslash; arbitrary character codes may be specified as ``\<^verbatim>\\\\ddd\'', with three decimal digits. Alternative strings according to @{syntax_ref altstring} are analogous, using single back-quotes instead. The body of @{syntax_ref verbatim} may consist of any text not containing ``\<^verbatim>\*}\''; this allows to include quotes without further escapes, but there is no way to escape ``\<^verbatim>\*}\''. Cartouches do not have this limitation. A @{syntax_ref cartouche} consists of arbitrary text, with properly balanced blocks of ``@{verbatim "\"}~\\\~@{verbatim "\"}''. Note that the rendering of cartouche delimiters is usually like this: ``\\ \ \\''. Source comments take the form \<^verbatim>\(*\~\\\~\<^verbatim>\*)\ and may be nested: the text is removed after lexical analysis of the input and thus not suitable for documentation. The Isar syntax also provides proper \<^emph>\document comments\ that are considered as part of the text (see \secref{sec:comments}). Common mathematical symbols such as \\\ are represented in Isabelle as \<^verbatim>\\\. There are infinitely many Isabelle symbols like this, although proper presentation is left to front-end tools such as {\LaTeX} or Isabelle/jEdit. A list of predefined Isabelle symbols that work well with these tools is given in \appref{app:symbols}. Note that \<^verbatim>\\\ does not belong to the \letter\ category, since it is already used differently in the Pure term language. \ section \Common syntax entities\ text \ We now introduce several basic syntactic entities, such as names, terms, and theorem specifications, which are factored out of the actual Isar language elements to be described later. \ subsection \Names\ text \ Entity @{syntax name} usually refers to any name of types, constants, theorems etc.\ Quoted strings provide an escape for non-identifier names or those ruled out by outer syntax keywords (e.g.\ quoted \<^verbatim>\"let"\). 
\<^rail>\
@{syntax_def name}: @{syntax short_ident} | @{syntax long_ident} |
  @{syntax sym_ident} | @{syntax nat} | @{syntax string}
;
@{syntax_def par_name}: '(' @{syntax name} ')'
\

A @{syntax_def system_name} is like @{syntax name}, but it excludes white-space characters and needs to conform to file-name notation. Name components that are special on Windows (e.g.\ \<^verbatim>\CON\, \<^verbatim>\PRN\, \<^verbatim>\AUX\) are excluded on all platforms.
\

subsection \Numbers\

text \
The outer lexical syntax (\secref{sec:outer-lex}) admits natural numbers and floating point numbers. These are combined as @{syntax int} and @{syntax real} as follows.

\<^rail>\
@{syntax_def int}: @{syntax nat} | '-' @{syntax nat}
;
@{syntax_def real}: @{syntax float} | @{syntax int}
\

Note that there is an overlap with the category @{syntax name}, which also includes @{syntax nat}.
\

subsection \Embedded content\

text \
Entity @{syntax embedded} refers to content of other languages: cartouches allow arbitrary nesting of sub-languages that respect the recursive balancing of cartouche delimiters. Quoted strings are possible as well, but require escaped quotes when nested. As a shortcut, tokens that appear as plain identifiers in the outer language may be used as inner language content without delimiters.

\<^rail>\
@{syntax_def embedded}: @{syntax cartouche} | @{syntax string} |
  @{syntax short_ident} | @{syntax long_ident} | @{syntax sym_ident} |
  @{syntax term_var} | @{syntax type_ident} | @{syntax type_var} | @{syntax nat}
\
\

subsection \Document text\

text \
A chunk of document @{syntax text} is usually given as @{syntax cartouche} \\\\\ or @{syntax verbatim}, i.e.\ enclosed in \<^verbatim>\{*\~\\\~\<^verbatim>\*}\. For convenience, any smaller text unit that conforms to @{syntax name} is admitted as well.

\<^rail>\
@{syntax_def text}: @{syntax embedded} | @{syntax verbatim}
\

Typical uses are document markup commands, like \<^theory_text>\chapter\, \<^theory_text>\section\ etc. (\secref{sec:markup}).
\

subsection \Document comments \label{sec:comments}\

text \
Formal comments are an integral part of the document, but are logically void and removed from the resulting theory or term content. The output of document preparation (\chref{ch:document-prep}) supports various styles, according to the following kinds of comments.

\<^item> Marginal comment of the form \<^verbatim>\\\~\\text\\ or \\\~\\text\\, usually with a single space between the comment symbol and the argument cartouche. The given argument is typeset as regular text, with formal antiquotations (\secref{sec:antiq}).

\<^item> Canceled text of the form \<^verbatim>\\<^cancel>\\\text\\ (no white space between the control symbol and the argument cartouche). The argument is typeset as formal Isabelle source and overlaid with a ``strike-through'' pattern, e.g. \<^theory_text>\\<^cancel>\bad\\.

\<^item> Raw {\LaTeX} source of the form \<^verbatim>\\<^latex>\\\argument\\ (no white space between the control symbol and the argument cartouche). This allows to augment the generated {\TeX} source arbitrarily, without any sanity checks!

These formal comments work uniformly in outer syntax, inner syntax (term language), Isabelle/ML, and some other embedded languages of Isabelle.
\

subsection \Type classes, sorts and arities\

text \
Classes are specified by plain names. Sorts have a very simple inner syntax, which is either a single class name \c\ or a list \{c\<^sub>1, \, c\<^sub>n}\ referring to the intersection of these classes.
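For example, a local variable may be fixed at a type whose type variable is constrained by a sort intersection (a minimal sketch, using the standard HOL classes \order\ and \finite\):

@{theory_text [display]
\fix x :: "'a::{order, finite}"\}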
The syntax of type arities is given directly at the outer level.

\<^rail>\
@{syntax_def classdecl}: @{syntax name} (('<' | '\') (@{syntax name} + ','))?
;
@{syntax_def sort}: @{syntax embedded}
;
@{syntax_def arity}: ('(' (@{syntax sort} + ',') ')')? @{syntax sort}
\
\

subsection \Types and terms \label{sec:types-terms}\

text \
The actual inner Isabelle syntax, that of types and terms of the logic, is far too sophisticated in order to be modelled explicitly at the outer theory level. Basically, any such entity has to be quoted to turn it into a single token (the parsing and type-checking is performed internally later). For convenience, a slightly more liberal convention is adopted: quotes may be omitted for any type or term that is already atomic at the outer level. For example, one may just write \<^verbatim>\x\ instead of quoted \<^verbatim>\"x"\. Note that symbolic identifiers (e.g.\ \<^verbatim>\++\ or \\\) are available as well, provided these have not been superseded by commands or other keywords already (such as \<^verbatim>\=\ or \<^verbatim>\+\).

\<^rail>\
@{syntax_def type}: @{syntax embedded}
;
@{syntax_def term}: @{syntax embedded}
;
@{syntax_def prop}: @{syntax embedded}
\

Positional instantiations are specified as a sequence of terms, or the placeholder ``\_\'' (underscore), which means to skip a position.

\<^rail>\
@{syntax_def inst}: '_' | @{syntax term}
;
@{syntax_def insts}: (@{syntax inst} *)
\

Named instantiations are specified as pairs of assignments \v = t\, which refer to schematic variables in some theorem that is instantiated. Both type and term instantiations are admitted, and distinguished by the usual syntax of variable names.

\<^rail>\
@{syntax_def named_inst}: variable '=' (type | term)
;
@{syntax_def named_insts}: (named_inst @'and' +)
;
variable: @{syntax name} | @{syntax term_var} | @{syntax type_ident} | @{syntax type_var}
\

Type declarations and definitions usually refer to @{syntax typespec} on the left-hand side. This models basic type constructor application at the outer syntax level. Note that only plain postfix notation is available here, but no infixes.

\<^rail>\
-  @{syntax_def typespec}:
-    (() | @{syntax type_ident} | '(' ( @{syntax type_ident} + ',' ) ')') @{syntax name}
+  @{syntax_def typeargs}:
+    (() | @{syntax type_ident} | '(' ( @{syntax type_ident} + ',' ) ')')
  ;
-  @{syntax_def typespec_sorts}:
+  @{syntax_def typeargs_sorts}:
    (() | (@{syntax type_ident} ('::' @{syntax sort})?) |
-      '(' ( (@{syntax type_ident} ('::' @{syntax sort})?) + ',' ) ')') @{syntax name}
+      '(' ( (@{syntax type_ident} ('::' @{syntax sort})?) + ',' ) ')')
+  ;
+  @{syntax_def typespec}: @{syntax typeargs} @{syntax name}
+  ;
+  @{syntax_def typespec_sorts}: @{syntax typeargs_sorts} @{syntax name}
\
\

subsection \Term patterns and declarations \label{sec:term-decls}\

text \
Wherever explicit propositions (or term fragments) occur in a proof text, casual binding of schematic term variables may be specified via patterns of the form ``\<^theory_text>\(is p\<^sub>1 \ p\<^sub>n)\''. This works both for @{syntax term} and @{syntax prop}.

\<^rail>\
@{syntax_def term_pat}: '(' (@'is' @{syntax term} +) ')'
;
@{syntax_def prop_pat}: '(' (@'is' @{syntax prop} +) ')'
\

\<^medskip>
Declarations of local variables \x :: \\ and logical propositions \a : \\ represent different views on the same principle of introducing a local scope.
In practice, one may usually omit the typing of @{syntax vars} (due to type-inference), and the naming of propositions (due to implicit references of current facts). In any case, Isar proof elements usually admit to introduce multiple such items simultaneously. \<^rail>\ @{syntax_def vars}: (((@{syntax name} +) ('::' @{syntax type})? | @{syntax name} ('::' @{syntax type})? @{syntax mixfix}) + @'and') ; @{syntax_def props}: @{syntax thmdecl}? (@{syntax prop} @{syntax prop_pat}? +) ; @{syntax_def props'}: (@{syntax prop} @{syntax prop_pat}? +) \ The treatment of multiple declarations corresponds to the complementary focus of @{syntax vars} versus @{syntax props}. In ``\x\<^sub>1 \ x\<^sub>n :: \\'' the typing refers to all variables, while in \a: \\<^sub>1 \ \\<^sub>n\ the naming refers to all propositions collectively. Isar language elements that refer to @{syntax vars} or @{syntax props} typically admit separate typings or namings via another level of iteration, with explicit @{keyword_ref "and"} separators; e.g.\ see @{command "fix"} and @{command "assume"} in \secref{sec:proof-context}. \ subsection \Attributes and theorems \label{sec:syn-att}\ text \ Attributes have their own ``semi-inner'' syntax, in the sense that input conforming to @{syntax args} below is parsed by the attribute a second time. The attribute argument specifications may be any sequence of atomic entities (identifiers, strings etc.), or properly bracketed argument lists. Below @{syntax atom} refers to any atomic entity, including any @{syntax keyword} conforming to @{syntax sym_ident}. \<^rail>\ @{syntax_def atom}: @{syntax name} | @{syntax type_ident} | @{syntax type_var} | @{syntax term_var} | @{syntax nat} | @{syntax float} | @{syntax keyword} | @{syntax cartouche} ; arg: @{syntax atom} | '(' @{syntax args} ')' | '[' @{syntax args} ']' ; @{syntax_def args}: arg * ; @{syntax_def attributes}: '[' (@{syntax name} @{syntax args} * ',') ']' \ Theorem specifications come in several flavors: @{syntax axmdecl} and @{syntax thmdecl} usually refer to axioms, assumptions or results of goal statements, while @{syntax thmdef} collects lists of existing theorems. Existing theorems are given by @{syntax thm} and @{syntax thms}, the former requires an actual singleton result. There are three forms of theorem references: \<^enum> named facts \a\, \<^enum> selections from named facts \a(i)\ or \a(j - k)\, \<^enum> literal fact propositions using token syntax @{syntax_ref altstring} \<^verbatim>\`\\\\\<^verbatim>\`\ or @{syntax_ref cartouche} \\\\\ (see also method @{method_ref fact}). Any kind of theorem specification may include lists of attributes both on the left and right hand sides; attributes are applied to any immediately preceding fact. If names are omitted, the theorems are not stored within the theorem database of the theory or proof context, but any given attributes are applied nonetheless. An extra pair of brackets around attributes (like ``\[[simproc a]]\'') abbreviates a theorem reference involving an internal dummy fact, which will be ignored later on. So only the effect of the attribute on the background context will persist. This form of in-place declarations is particularly useful with commands like @{command "declare"} and @{command "using"}. \<^rail>\ @{syntax_def axmdecl}: @{syntax name} @{syntax attributes}? 
':' ; @{syntax_def thmbind}: @{syntax name} @{syntax attributes} | @{syntax name} | @{syntax attributes} ; @{syntax_def thmdecl}: thmbind ':' ; @{syntax_def thmdef}: thmbind '=' ; @{syntax_def thm}: (@{syntax name} selection? | @{syntax altstring} | @{syntax cartouche}) @{syntax attributes}? | '[' @{syntax attributes} ']' ; @{syntax_def thms}: @{syntax thm} + ; selection: '(' ((@{syntax nat} | @{syntax nat} '-' @{syntax nat}?) + ',') ')' \ \ subsection \Structured specifications\ text \ Structured specifications use propositions with explicit notation for the ``eigen-context'' to describe rule structure: \\x. A x \ \ \ B x\ is expressed as \<^theory_text>\B x if A x and \ for x\. It is also possible to use dummy terms ``\_\'' (underscore) to refer to locally fixed variables anonymously. Multiple specifications are delimited by ``\|\'' to emphasize separate cases: each with its own scope of inferred types for free variables. \<^rail>\ @{syntax_def for_fixes}: (@'for' @{syntax vars})? ; @{syntax_def multi_specs}: (@{syntax structured_spec} + '|') ; @{syntax_def structured_spec}: @{syntax thmdecl}? @{syntax prop} @{syntax spec_prems} @{syntax for_fixes} ; @{syntax_def spec_prems}: (@'if' ((@{syntax prop}+) + @'and'))? ; @{syntax_def specification}: @{syntax vars} @'where' @{syntax multi_specs} \ \ section \Diagnostic commands\ text \ \begin{matharray}{rcl} @{command_def "print_theory"}\\<^sup>*\ & : & \context \\ \\ @{command_def "print_definitions"}\\<^sup>*\ & : & \context \\ \\ @{command_def "print_methods"}\\<^sup>*\ & : & \context \\ \\ @{command_def "print_attributes"}\\<^sup>*\ & : & \context \\ \\ @{command_def "print_theorems"}\\<^sup>*\ & : & \context \\ \\ @{command_def "find_theorems"}\\<^sup>*\ & : & \context \\ \\ @{command_def "find_consts"}\\<^sup>*\ & : & \context \\ \\ @{command_def "thm_deps"}\\<^sup>*\ & : & \context \\ \\ @{command_def "unused_thms"}\\<^sup>*\ & : & \context \\ \\ @{command_def "print_facts"}\\<^sup>*\ & : & \context \\ \\ @{command_def "print_term_bindings"}\\<^sup>*\ & : & \context \\ \\ \end{matharray} \<^rail>\ (@@{command print_theory} | @@{command print_definitions} | @@{command print_methods} | @@{command print_attributes} | @@{command print_theorems} | @@{command print_facts}) ('!'?) ; @@{command find_theorems} ('(' @{syntax nat}? 'with_dups'? ')')? \ (thm_criterion*) ; thm_criterion: ('-'?) ('name' ':' @{syntax name} | 'intro' | 'elim' | 'dest' | 'solves' | 'simp' ':' @{syntax term} | @{syntax term}) ; @@{command find_consts} (const_criterion*) ; const_criterion: ('-'?) ('name' ':' @{syntax name} | 'strict' ':' @{syntax type} | @{syntax type}) ; @@{command thm_deps} @{syntax thmrefs} ; @@{command unused_thms} ((@{syntax name} +) '-' (@{syntax name} * ))? \ These commands print certain parts of the theory and proof context. Note that there are some further ones available, such as for the set of rules declared for simplifications. \<^descr> @{command "print_theory"} prints the main logical content of the background theory; the ``\!\'' option indicates extra verbosity. \<^descr> @{command "print_definitions"} prints dependencies of definitional specifications within the background theory, which may be constants (\secref{sec:term-definitions}, \secref{sec:overloading}) or types (\secref{sec:types-pure}, \secref{sec:hol-typedef}); the ``\!\'' option indicates extra verbosity. \<^descr> @{command "print_methods"} prints all proof methods available in the current theory context; the ``\!\'' option indicates extra verbosity. 
\<^descr> @{command "print_attributes"} prints all attributes available in the current theory context; the ``\!\'' option indicates extra verbosity. \<^descr> @{command "print_theorems"} prints theorems of the background theory resulting from the last command; the ``\!\'' option indicates extra verbosity. \<^descr> @{command "print_facts"} prints all local facts of the current context, both named and unnamed ones; the ``\!\'' option indicates extra verbosity. \<^descr> @{command "print_term_bindings"} prints all term bindings that are present in the context. \<^descr> @{command "find_theorems"}~\criteria\ retrieves facts from the theory or proof context matching all of given search criteria. The criterion \name: p\ selects all theorems whose fully qualified name matches pattern \p\, which may contain ``\*\'' wildcards. The criteria \intro\, \elim\, and \dest\ select theorems that match the current goal as introduction, elimination or destruction rules, respectively. The criterion \solves\ returns all rules that would directly solve the current goal. The criterion \simp: t\ selects all rewrite rules whose left-hand side matches the given term. The criterion term \t\ selects all theorems that contain the pattern \t\ -- as usual, patterns may contain occurrences of the dummy ``\_\'', schematic variables, and type constraints. Criteria can be preceded by ``\-\'' to select theorems that do \<^emph>\not\ match. Note that giving the empty list of criteria yields \<^emph>\all\ currently known facts. An optional limit for the number of printed facts may be given; the default is 40. By default, duplicates are removed from the search result. Use \with_dups\ to display duplicates. \<^descr> @{command "find_consts"}~\criteria\ prints all constants whose type meets all of the given criteria. The criterion \strict: ty\ is met by any type that matches the type pattern \ty\. Patterns may contain both the dummy type ``\_\'' and sort constraints. The criterion \ty\ is similar, but it also matches against subtypes. The criterion \name: p\ and the prefix ``\-\'' function as described for @{command "find_theorems"}. \<^descr> @{command "thm_deps"}~\thms\ prints immediate theorem dependencies, i.e.\ the union of all theorems that are used directly to prove the argument facts, without going deeper into the dependency graph. \<^descr> @{command "unused_thms"}~\A\<^sub>1 \ A\<^sub>m - B\<^sub>1 \ B\<^sub>n\ displays all theorems that are proved in theories \B\<^sub>1 \ B\<^sub>n\ or their parents but not in \A\<^sub>1 \ A\<^sub>m\ or their parents and that are never used. If \n\ is \0\, the end of the range of theories defaults to the current theory. If no range is specified, only the unused theorems in the current theory are displayed. \ end diff --git a/src/Doc/Isar_Ref/Proof.thy b/src/Doc/Isar_Ref/Proof.thy --- a/src/Doc/Isar_Ref/Proof.thy +++ b/src/Doc/Isar_Ref/Proof.thy @@ -1,1440 +1,1440 @@ (*:maxLineLen=78:*) theory Proof imports Main Base begin chapter \Proofs \label{ch:proofs}\ text \ Proof commands perform transitions of Isar/VM machine configurations, which are block-structured, consisting of a stack of nodes with three main components: logical proof context, current facts, and open goals. Isar/VM transitions are typed according to the following three different modes of operation: \<^descr> \proof(prove)\ means that a new goal has just been stated that is now to be \<^emph>\proven\; the next command may refine it by some proof method, and enter a sub-proof to establish the actual result. 
\<^descr> \proof(state)\ is like a nested theory mode: the context may be augmented by \<^emph>\stating\ additional assumptions, intermediate results etc.

\<^descr> \proof(chain)\ is intermediate between \proof(state)\ and \proof(prove)\: existing facts (i.e.\ the contents of the special @{fact_ref this} register) have just been picked up in order to be used when refining the goal claimed next.

The proof mode indicator may be understood as an instruction to the writer, telling what kind of operation may be performed next. The corresponding typings of proof commands restrict the shape of well-formed proof texts to particular command sequences. So dynamic arrangements of commands eventually turn out as static texts of a certain structure.

\Appref{ap:refcard} gives a simplified grammar of the (extensible) language emerging that way from the different types of proof commands. The main ideas of the overall Isar framework are explained in \chref{ch:isar-framework}.
\

section \Proof structure\

subsection \Formal notepad\

text \
\begin{matharray}{rcl}
@{command_def "notepad"} & : & \local_theory \ proof(state)\ \\
\end{matharray}

\<^rail>\
@@{command notepad} @'begin'
;
@@{command end}
\

\<^descr> @{command "notepad"}~@{keyword "begin"} opens a proof state without any goal statement. This allows to experiment with Isar, without producing any persistent result. The notepad is closed by @{command "end"}.
\

subsection \Blocks\

text \
\begin{matharray}{rcl}
@{command_def "next"} & : & \proof(state) \ proof(state)\ \\
@{command_def "{"} & : & \proof(state) \ proof(state)\ \\
@{command_def "}"} & : & \proof(state) \ proof(state)\ \\
\end{matharray}

While Isar is inherently block-structured, opening and closing blocks is mostly handled rather casually, with little explicit user-intervention. Any local goal statement automatically opens \<^emph>\two\ internal blocks, which are closed again when concluding the sub-proof (by @{command "qed"} etc.). Sections of different context within a sub-proof may be switched via @{command "next"}, which is just a single block-close followed by block-open again. The effect of @{command "next"} is to reset the local proof context; there is no goal focus involved here!

For slightly more advanced applications, there are explicit block parentheses as well. These typically achieve a stronger forward style of reasoning.

\<^descr> @{command "next"} switches to a fresh block within a sub-proof, resetting the local context to the initial one.

\<^descr> @{command "{"} and @{command "}"} explicitly open and close blocks. Any current facts pass through ``@{command "{"}'' unchanged, while ``@{command "}"}'' causes any result to be \<^emph>\exported\ into the enclosing context. Thus fixed variables are generalized, assumptions discharged, and local definitions unfolded (cf.\ \secref{sec:proof-context}). There is no difference of @{command "assume"} and @{command "presume"} in this mode of forward reasoning --- in contrast to plain backward reasoning with the result exported at @{command "show"} time.
\

subsection \Omitting proofs\

text \
\begin{matharray}{rcl}
@{command_def "oops"} & : & \proof \ local_theory | theory\ \\
\end{matharray}

The @{command "oops"} command discontinues the current proof attempt, while considering the partial proof text as properly processed.
This is conceptually quite different from ``faking'' actual proofs via @{command_ref "sorry"} (see \secref{sec:proof-steps}): @{command "oops"} does not observe the proof structure at all, but goes back right to the theory level. Furthermore, @{command "oops"} does not produce any result theorem --- there is no intended claim to be able to complete the proof in any way. A typical application of @{command "oops"} is to explain Isar proofs \<^emph>\within\ the system itself, in conjunction with the document preparation tools of Isabelle described in \chref{ch:document-prep}. Thus partial or even wrong proof attempts can be discussed in a logically sound manner. Note that the Isabelle {\LaTeX} macros can be easily adapted to print something like ``\\\'' instead of the keyword ``@{command "oops"}''. \ section \Statements\ subsection \Context elements \label{sec:proof-context}\ text \ \begin{matharray}{rcl} @{command_def "fix"} & : & \proof(state) \ proof(state)\ \\ @{command_def "assume"} & : & \proof(state) \ proof(state)\ \\ @{command_def "presume"} & : & \proof(state) \ proof(state)\ \\ @{command_def "define"} & : & \proof(state) \ proof(state)\ \\ \end{matharray} The logical proof context consists of fixed variables and assumptions. The former closely correspond to Skolem constants, or meta-level universal quantification as provided by the Isabelle/Pure logical framework. Introducing some \<^emph>\arbitrary, but fixed\ variable via ``@{command "fix"}~\x\'' results in a local value that may be used in the subsequent proof as any other variable or constant. Furthermore, any result \\ \[x]\ exported from the context will be universally closed wrt.\ \x\ at the outermost level: \\ \x. \[x]\ (this is expressed in normal form using Isabelle's meta-variables). Similarly, introducing some assumption \\\ has two effects. On the one hand, a local theorem is created that may be used as a fact in subsequent proof steps. On the other hand, any result \\ \ \\ exported from the context becomes conditional wrt.\ the assumption: \\ \ \ \\. Thus, solving an enclosing goal using such a result would basically introduce a new subgoal stemming from the assumption. How this situation is handled depends on the version of assumption command used: while @{command "assume"} insists on solving the subgoal by unification with some premise of the goal, @{command "presume"} leaves the subgoal unchanged in order to be proved later by the user. Local definitions, introduced by ``\<^theory_text>\define x where x = t\'', are achieved by combining ``@{command "fix"}~\x\'' with another version of assumption that causes any hypothetical equation \x \ t\ to be eliminated by the reflexivity rule. Thus, exporting some result \x \ t \ \[x]\ yields \\ \[t]\. \<^rail>\ @@{command fix} @{syntax vars} ; (@@{command assume} | @@{command presume}) concl prems @{syntax for_fixes} ; concl: (@{syntax props} + @'and') ; prems: (@'if' (@{syntax props'} + @'and'))? ; @@{command define} @{syntax vars} @'where' (@{syntax props} + @'and') @{syntax for_fixes} \ \<^descr> @{command "fix"}~\x\ introduces a local variable \x\ that is \<^emph>\arbitrary, but fixed\. \<^descr> @{command "assume"}~\a: \\ and @{command "presume"}~\a: \\ introduce a local fact \\ \ \\ by assumption. Subsequent results applied to an enclosing goal (e.g.\ by @{command_ref "show"}) are handled as follows: @{command "assume"} expects to be able to unify with existing premises in the goal, while @{command "presume"} leaves \\\ as new subgoals. 
Several lists of assumptions may be given (separated by @{keyword_ref "and"}); the resulting list of current facts consists of all of these concatenated.

A structured assumption like \<^theory_text>\assume "B x" if "A x" for x\ is equivalent to \<^theory_text>\assume "\x. A x \ B x"\, but vacuous quantification is avoided: a for-context only affects propositions according to actual use of variables.

\<^descr> \<^theory_text>\define x where "x = t"\ introduces a local (non-polymorphic) definition. In results that are exported from the context, \x\ is replaced by \t\.

Internally, equational assumptions are added to the context in Pure form, using \x \ t\ instead of \x = t\ or \x \ t\ from the object-logic. When exporting results from the context, \x\ is generalized and the assumption discharged by reflexivity, causing the replacement by \t\.

The default name for the definitional fact is \x_def\. Several simultaneous definitions may be given as well, with a collective default name.

\<^medskip>
It is also possible to abstract over local parameters as follows: \<^theory_text>\define f :: "'a \ 'b" where "f x = t" for x :: 'a\.
\

subsection \Term abbreviations \label{sec:term-abbrev}\

text \
\begin{matharray}{rcl}
@{command_def "let"} & : & \proof(state) \ proof(state)\ \\
@{keyword_def "is"} & : & syntax \\
\end{matharray}

Abbreviations may be either bound by explicit @{command "let"}~\p \ t\ statements, or by annotating assumptions or goal statements with a list of patterns ``\<^theory_text>\(is p\<^sub>1 \ p\<^sub>n)\''. In both cases, higher-order matching is invoked to bind extra-logical term variables, which may be either named schematic variables of the form \?x\, or nameless dummies ``@{variable _}'' (underscore). Note that in the @{command "let"} form the patterns occur on the left-hand side, while the @{keyword "is"} patterns are in postfix position.

Polymorphism of term bindings is handled in Hindley-Milner style, similar to ML. Type variables referring to local assumptions or open goal statements are \<^emph>\fixed\, while those of finished results or bound by @{command "let"} may occur in \<^emph>\arbitrary\ instances later. Even though actual polymorphism should be rarely used in practice, this mechanism is essential to achieve proper incremental type-inference, as the user proceeds to build up the Isar proof text from left to right.

\<^medskip>
Term abbreviations are quite different from local definitions as introduced via @{command "define"} (see \secref{sec:proof-context}). The latter are visible within the logic as actual equations, while abbreviations disappear during the input process just after type checking. Also note that @{command "define"} does not support polymorphism.

\<^rail>\
@@{command let} ((@{syntax term} + @'and') '=' @{syntax term} + @'and')
\

The syntax of @{keyword "is"} patterns follows @{syntax term_pat} or @{syntax prop_pat} (see \secref{sec:term-decls}).

\<^descr> \<^theory_text>\let p\<^sub>1 = t\<^sub>1 and \ p\<^sub>n = t\<^sub>n\ binds any text variables in patterns \p\<^sub>1, \, p\<^sub>n\ by simultaneous higher-order matching against terms \t\<^sub>1, \, t\<^sub>n\.

\<^descr> \<^theory_text>\(is p\<^sub>1 \ p\<^sub>n)\ resembles @{command "let"}, but matches \p\<^sub>1, \, p\<^sub>n\ against the preceding statement. Also note that @{keyword "is"} is not a separate command, but part of others (such as @{command "assume"}, @{command "have"} etc.).
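For example (a minimal sketch; the terms and names are arbitrary):

@{theory_text [display]
\notepad
begin
  let ?t = "x + y"
  assume "?t = y + x" (is "_ = ?rhs")
  then have "x + y = ?rhs" .
end\}

Here @{command "let"} binds \?t\ by matching, the @{keyword "is"} pattern binds \?rhs\ to \y + x\, and the final claim coincides with the current facts, so it is solved by the trivial proof ``@{command "."}''.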
Some \<^emph>\implicit\ term abbreviations\index{term abbreviations} for goals and facts are available as well. For any open goal, @{variable_ref thesis} refers to its object-level statement, abstracted over any meta-level parameters (if present). Likewise, @{variable_ref this} is bound for fact statements resulting from assumptions or finished goals. In case @{variable this} refers to an object-logic statement that is an application \f t\, then \t\ is bound to the special text variable ``@{variable "\"}'' (three dots). The canonical application of this convenience is calculational proofs (see \secref{sec:calculation}).
\

subsection \Facts and forward chaining \label{sec:proof-facts}\

text \
\begin{matharray}{rcl}
@{command_def "note"} & : & \proof(state) \ proof(state)\ \\
@{command_def "then"} & : & \proof(state) \ proof(chain)\ \\
@{command_def "from"} & : & \proof(state) \ proof(chain)\ \\
@{command_def "with"} & : & \proof(state) \ proof(chain)\ \\
@{command_def "using"} & : & \proof(prove) \ proof(prove)\ \\
@{command_def "unfolding"} & : & \proof(prove) \ proof(prove)\ \\
@{method_def "use"} & : & \method\ \\
@{fact_def "method_facts"} & : & \fact\ \\
\end{matharray}

New facts are established either by assumption or proof of local statements. Any fact will usually be involved in further proofs, either as explicit arguments of proof methods, or when forward chaining towards the next goal via @{command "then"} (and variants); @{command "from"} and @{command "with"} are composite forms involving @{command "note"}. The @{command "using"} element augments the collection of used facts \<^emph>\after\ a goal has been stated. Note that the special theorem name @{fact_ref this} refers to the most recently established facts, but only \<^emph>\before\ issuing a follow-up claim.

\<^rail>\
@@{command note} (@{syntax thmdef}? @{syntax thms} + @'and')
;
(@@{command from} | @@{command with} | @@{command using} | @@{command unfolding}) (@{syntax thms} + @'and')
;
@{method use} @{syntax thms} @'in' @{syntax method}
\

\<^descr> @{command "note"}~\a = b\<^sub>1 \ b\<^sub>n\ recalls existing facts \b\<^sub>1, \, b\<^sub>n\, binding the result as \a\. Note that attributes may be involved as well, both on the left and right hand sides.

\<^descr> @{command "then"} indicates forward chaining by the current facts in order to establish the goal to be claimed next. The initial proof method invoked to refine that will be offered the facts to do ``anything appropriate'' (see also \secref{sec:proof-steps}). For example, method @{method (Pure) rule} (see \secref{sec:pure-meth-att}) would typically do an elimination rather than an introduction. Automatic methods usually insert the facts into the goal state before operation. This provides a simple scheme to control relevance of facts in automated proof search.

\<^descr> @{command "from"}~\b\ abbreviates ``@{command "note"}~\b\~@{command "then"}''; thus @{command "then"} is equivalent to ``@{command "from"}~\this\''.

\<^descr> @{command "with"}~\b\<^sub>1 \ b\<^sub>n\ abbreviates ``@{command "from"}~\b\<^sub>1 \ b\<^sub>n \ this\''; thus the forward chaining is from earlier facts together with the current ones.

\<^descr> @{command "using"}~\b\<^sub>1 \ b\<^sub>n\ augments the facts to be used by a subsequent refinement step (such as @{command_ref "apply"} or @{command_ref "proof"}).
\<^descr> @{command "unfolding"}~\b\<^sub>1 \ b\<^sub>n\ is structurally similar to @{command "using"}, but unfolds definitional equations \b\<^sub>1 \ b\<^sub>n\ throughout the goal state and facts. See also the proof method @{method_ref unfold}. \<^descr> \<^theory_text>\(use b\<^sub>1 \ b\<^sub>n in method)\ uses the facts in the given method expression. The facts provided by the proof state (via @{command "using"} etc.) are ignored, but it is possible to refer to @{fact method_facts} explicitly. \<^descr> @{fact method_facts} is a dynamic fact that refers to the currently used facts of the goal state. Forward chaining with an empty list of theorems is the same as not chaining at all. Thus ``@{command "from"}~\nothing\'' has no effect apart from entering \prove(chain)\ mode, since @{fact_ref nothing} is bound to the empty list of theorems. Basic proof methods (such as @{method_ref (Pure) rule}) expect multiple facts to be given in their proper order, corresponding to a prefix of the premises of the rule involved. Note that positions may be easily skipped using something like @{command "from"}~\_ \ a \ b\, for example. This involves the trivial rule \PROP \ \ PROP \\, which is bound in Isabelle/Pure as ``@{fact_ref "_"}'' (underscore). Automated methods (such as @{method simp} or @{method auto}) just insert any given facts before their usual operation. Depending on the kind of procedure involved, the order of facts is less significant here. \ subsection \Goals \label{sec:goals}\ text \ \begin{matharray}{rcl} @{command_def "lemma"} & : & \local_theory \ proof(prove)\ \\ @{command_def "theorem"} & : & \local_theory \ proof(prove)\ \\ @{command_def "corollary"} & : & \local_theory \ proof(prove)\ \\ @{command_def "proposition"} & : & \local_theory \ proof(prove)\ \\ @{command_def "schematic_goal"} & : & \local_theory \ proof(prove)\ \\ @{command_def "have"} & : & \proof(state) | proof(chain) \ proof(prove)\ \\ @{command_def "show"} & : & \proof(state) | proof(chain) \ proof(prove)\ \\ @{command_def "hence"} & : & \proof(state) \ proof(prove)\ \\ @{command_def "thus"} & : & \proof(state) \ proof(prove)\ \\ @{command_def "print_statement"}\\<^sup>*\ & : & \context \\ \\ \end{matharray} From a theory context, proof mode is entered by an initial goal command such as @{command "lemma"}. Within a proof context, new claims may be introduced locally; there are variants to interact with the overall proof structure specifically, such as @{command have} or @{command show}. Goals may consist of multiple statements, resulting in a list of facts eventually. A pending multi-goal is internally represented as a meta-level conjunction (\&&&\), which is usually split into the corresponding number of sub-goals prior to an initial method application, via @{command_ref "proof"} (\secref{sec:proof-steps}) or @{command_ref "apply"} (\secref{sec:tactic-commands}). The @{method_ref induct} method covered in \secref{sec:cases-induct} acts on multiple claims simultaneously. Claims at the theory level may be either in short or long form. A short goal merely consists of several simultaneous propositions (often just one). A long goal includes an explicit context specification for the subsequent conclusion, involving local parameters and assumptions. Here the role of each part of the statement is explicitly marked by separate keywords (see also \secref{sec:locale}); the local assumptions being introduced here are available as @{fact_ref assms} in the proof. 
Moreover, there are two kinds of conclusions: @{element_def "shows"} states several simultaneous propositions (essentially a big conjunction), while @{element_def "obtains"} claims several simultaneous contexts of (essentially a big disjunction of) eliminated parameters and assumptions (cf.\ \secref{sec:obtain}).

\<^rail>\
(@@{command lemma} | @@{command theorem} | @@{command corollary} |
 @@{command proposition} | @@{command schematic_goal}) (long_statement | short_statement)
;
(@@{command have} | @@{command show} | @@{command hence} | @@{command thus}) stmt cond_stmt @{syntax for_fixes}
;
@@{command print_statement} @{syntax modes}? @{syntax thms}
;
stmt: (@{syntax props} + @'and')
;
cond_stmt: ((@'if' | @'when') stmt)?
;
short_statement: stmt (@'if' stmt)? @{syntax for_fixes}
;
long_statement: @{syntax thmdecl}? context conclusion
;
context: (@{syntax_ref "includes"}?) (@{syntax context_elem} *)
;
conclusion: @'shows' stmt | @'obtains' @{syntax obtain_clauses}
;
@{syntax_def obtain_clauses}: (@{syntax par_name}? obtain_case + '|')
;
@{syntax_def obtain_case}: @{syntax vars} @'where' (@{syntax thmdecl}? (@{syntax prop}+) + @'and')
\

\<^descr> @{command "lemma"}~\a: \\ enters proof mode with \\\ as main goal, eventually resulting in some fact \\ \\ to be put back into the target context.

A @{syntax long_statement} may build up an initial proof context for the subsequent claim, potentially including local definitions and syntax; see also @{syntax "includes"} in \secref{sec:bundle} and @{syntax context_elem} in \secref{sec:locale}.

A @{syntax short_statement} consists of propositions as conclusion, with an optional context of premises and parameters, via \<^verbatim>\if\/\<^verbatim>\for\ in postfix notation, corresponding to \<^verbatim>\assumes\/\<^verbatim>\fixes\ in the long prefix notation.

Local premises (if present) are called ``\assms\'' for @{syntax long_statement}, and ``\that\'' for @{syntax short_statement}.

\<^descr> @{command "theorem"}, @{command "corollary"}, and @{command "proposition"} are the same as @{command "lemma"}. The different command names merely serve as a formal comment in the theory source.

\<^descr> @{command "schematic_goal"} is similar to @{command "theorem"}, but allows the statement to contain unbound schematic variables.

Under normal circumstances, an Isar proof text needs to specify claims explicitly. Schematic goals are more like goals in Prolog, where certain results are synthesized in the course of reasoning. With schematic statements, the inherent compositionality of Isar proofs is lost, which also impacts performance, because proof checking is forced into sequential mode.

\<^descr> @{command "have"}~\a: \\ claims a local goal, eventually resulting in a fact within the current logical context. This operation is completely independent of any pending sub-goals of an enclosing goal statement, so @{command "have"} may be freely used for experimental exploration of potential results within a proof body.

\<^descr> @{command "show"}~\a: \\ is like @{command "have"}~\a: \\ plus a second stage to refine some pending sub-goal for each one of the finished results, after having been exported into the corresponding context (at the head of the sub-proof of this @{command "show"} command).

To accommodate interactive debugging, resulting rules are printed before being applied internally. Even more, interactive execution of @{command "show"} predicts potential failure and displays the resulting error as a warning beforehand.
Watch out for the following message:
@{verbatim [display] \Local statement fails to refine any pending goal\}

\<^descr> @{command "hence"} expands to ``@{command "then"}~@{command "have"}'' and @{command "thus"} expands to ``@{command "then"}~@{command "show"}''. These conflations are left-over from early history of Isar. The expanded syntax is more orthogonal and improves readability and maintainability of proofs.

\<^descr> @{command "print_statement"}~\a\ prints facts from the current theory or proof context in long statement form, according to the syntax for @{command "lemma"} given above.

Any goal statement causes some term abbreviations (such as @{variable_ref "?thesis"}) to be bound automatically, see also \secref{sec:term-abbrev}.

Structured goal statements involving @{keyword_ref "if"} or @{keyword_ref "when"} define the special fact @{fact_ref that} to refer to these assumptions in the proof body. The user may provide separate names according to the syntax of the statement.
\

section \Calculational reasoning \label{sec:calculation}\

text \
\begin{matharray}{rcl}
@{command_def "also"} & : & \proof(state) \ proof(state)\ \\
@{command_def "finally"} & : & \proof(state) \ proof(chain)\ \\
@{command_def "moreover"} & : & \proof(state) \ proof(state)\ \\
@{command_def "ultimately"} & : & \proof(state) \ proof(chain)\ \\
@{command_def "print_trans_rules"}\\<^sup>*\ & : & \context \\ \\
@{attribute trans} & : & \attribute\ \\
@{attribute sym} & : & \attribute\ \\
@{attribute symmetric} & : & \attribute\ \\
\end{matharray}

Calculational proof is forward reasoning with implicit application of transitivity rules (such as those of \=\, \\\, \<\). Isabelle/Isar maintains an auxiliary fact register @{fact_ref calculation} for accumulating results obtained by transitivity composed with the current result. Command @{command "also"} updates @{fact calculation} involving @{fact this}, while @{command "finally"} exhibits the final @{fact calculation} by forward chaining towards the next goal statement. Both commands require valid current facts, i.e.\ may occur only after commands that produce theorems such as @{command "assume"}, @{command "note"}, or some finished proof of @{command "have"}, @{command "show"} etc. The @{command "moreover"} and @{command "ultimately"} commands are similar to @{command "also"} and @{command "finally"}, but only collect further results in @{fact calculation} without applying any rules yet.

Also note that the implicit term abbreviation ``\\\'' has its canonical application with calculational proofs. It refers to the argument of the preceding statement. (The argument of a curried infix expression happens to be its right-hand side.)

Isabelle/Isar calculations are implicitly subject to block structure in the sense that new threads of calculational reasoning are commenced for any new block (as opened by a local goal, for example). This means that, apart from being able to nest calculations, there is no separate \<^emph>\begin-calculation\ command required.
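A typical equational calculation may thus look like this (a minimal sketch within a @{command "notepad"}; the equations are arbitrary assumptions):

@{theory_text [display]
\notepad
begin
  fix a b c :: nat
  assume ab: "a = b" and bc: "b = c"
  have "a = b" by (fact ab)
  also have "\<dots> = c" by (fact bc)
  finally have "a = c" .
end\}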
\<^medskip>
The Isar calculation proof commands may be defined as follows:\<^footnote>\We suppress internal bookkeeping such as proper handling of block-structure.\

\begin{matharray}{rcl}
@{command "also"}\\<^sub>0\ & \equiv & @{command "note"}~\calculation = this\ \\
@{command "also"}\\<^sub>n\<^sub>+\<^sub>1\ & \equiv & @{command "note"}~\calculation = trans [OF calculation this]\ \\[0.5ex]
@{command "finally"} & \equiv & @{command "also"}~@{command "from"}~\calculation\ \\[0.5ex]
@{command "moreover"} & \equiv & @{command "note"}~\calculation = calculation this\ \\
@{command "ultimately"} & \equiv & @{command "moreover"}~@{command "from"}~\calculation\ \\
\end{matharray}

\<^rail>\
(@@{command also} | @@{command finally}) ('(' @{syntax thms} ')')?
;
@@{attribute trans} (() | 'add' | 'del')
\

\<^descr> @{command "also"}~\(a\<^sub>1 \ a\<^sub>n)\ maintains the auxiliary @{fact calculation} register as follows. The first occurrence of @{command "also"} in some calculational thread initializes @{fact calculation} by @{fact this}. Any subsequent @{command "also"} on the same level of block-structure updates @{fact calculation} by some transitivity rule applied to @{fact calculation} and @{fact this} (in that order). Transitivity rules are picked from the current context, unless alternative rules are given as explicit arguments.

\<^descr> @{command "finally"}~\(a\<^sub>1 \ a\<^sub>n)\ maintains @{fact calculation} in the same way as @{command "also"} and then concludes the current calculational thread. The final result is exhibited as fact for forward chaining towards the next goal. Basically, @{command "finally"} abbreviates @{command "also"}~@{command "from"}~@{fact calculation}. Typical idioms for concluding calculational proofs are ``@{command "finally"}~@{command "show"}~\?thesis\~@{command "."}'' and ``@{command "finally"}~@{command "have"}~\\\~@{command "."}''.

\<^descr> @{command "moreover"} and @{command "ultimately"} are analogous to @{command "also"} and @{command "finally"}, but collect results only, without applying rules.

\<^descr> @{command "print_trans_rules"} prints the list of transitivity rules (for calculational commands @{command "also"} and @{command "finally"}) and symmetry rules (for the @{attribute symmetric} operation and single step elimination patterns) of the current context.

\<^descr> @{attribute trans} declares theorems as transitivity rules.

\<^descr> @{attribute sym} declares symmetry rules, as well as @{attribute "Pure.elim"}\?\ rules.

\<^descr> @{attribute symmetric} resolves a theorem with some rule declared as @{attribute sym} in the current context. For example, ``@{command "assume"}~\[symmetric]: x = y\'' produces a swapped fact derived from that assumption.

In structured proof texts it is often more appropriate to use an explicit single-step elimination proof, such as ``@{command "assume"}~\x = y\~@{command "then"}~@{command "have"}~\y = x\~@{command ".."}''.
\

section \Refinement steps\

subsection \Proof method expressions \label{sec:proof-meth}\

text \
Proof methods are either basic ones, or expressions composed of methods via ``\<^verbatim>\,\'' (sequential composition), ``\<^verbatim>\;\'' (structural composition), ``\<^verbatim>\|\'' (alternative choices), ``\<^verbatim>\?\'' (try), ``\<^verbatim>\+\'' (repeat at least once), ``\<^verbatim>\[\\n\\<^verbatim>\]\'' (restriction to first \n\ subgoals). In practice, proof methods are usually just a comma separated list of @{syntax name}~@{syntax args} specifications.
Note that parentheses may be dropped for single method specifications (with no arguments). The syntactic precedence of method combinators is \<^verbatim>\|\ \<^verbatim>\;\ \<^verbatim>\,\ \<^verbatim>\[]\ \<^verbatim>\+\ \<^verbatim>\?\ (from low to high). \<^rail>\ @{syntax_def method}: (@{syntax name} | '(' methods ')') (() | '?' | '+' | '[' @{syntax nat}? ']') ; methods: (@{syntax name} @{syntax args} | @{syntax method}) + (',' | ';' | '|') \ Regular Isar proof methods do \<^emph>\not\ admit direct goal addressing, but refer to the first subgoal or to all subgoals uniformly. Nonetheless, the subsequent mechanisms make it possible to imitate the effect of subgoal addressing that is known from ML tactics. \<^medskip> Goal \<^emph>\restriction\ means the proof state is wrapped up in such a way that certain subgoals are exposed, and other subgoals are ``parked'' elsewhere. Thus a proof method has no choice but to operate on the subgoals that are presently exposed. Structural composition ``\m\<^sub>1\\<^verbatim>\;\~\m\<^sub>2\'' means that method \m\<^sub>1\ is applied with restriction to the first subgoal, then \m\<^sub>2\ is applied consecutively with restriction to each subgoal that has newly emerged due to - \m\<^sub>1\. This is analogous to the tactic combinator \<^ML_op>\THEN_ALL_NEW\ in + \m\<^sub>1\. This is analogous to the tactic combinator \<^ML_infix>\THEN_ALL_NEW\ in Isabelle/ML, see also @{cite "isabelle-implementation"}. For example, \(rule r; blast)\ applies rule \r\ and then solves all new subgoals by \blast\. Moreover, the explicit goal restriction operator ``\[n]\'' exposes only the first \n\ subgoals (which need to exist), with default \n = 1\. For example, the method expression ``\simp_all[3]\'' simplifies the first three subgoals, while ``\(rule r, simp_all)[]\'' simplifies all new goals that emerge from applying rule \r\ to the originally first one. \<^medskip> Improper methods, notably tactic emulations, offer low-level goal addressing as an explicit argument to the individual tactic being involved. Here ``\[!]\'' refers to all goals, and ``\[n-]\'' to all goals starting from \n\. \<^rail>\ @{syntax_def goal_spec}: '[' (@{syntax nat} '-' @{syntax nat} | @{syntax nat} '-' | @{syntax nat} | '!' ) ']' \ \ subsection \Initial and terminal proof steps \label{sec:proof-steps}\ text \ \begin{matharray}{rcl} @{command_def "proof"} & : & \proof(prove) \ proof(state)\ \\ @{command_def "qed"} & : & \proof(state) \ proof(state) | local_theory | theory\ \\ @{command_def "by"} & : & \proof(prove) \ proof(state) | local_theory | theory\ \\ @{command_def ".."} & : & \proof(prove) \ proof(state) | local_theory | theory\ \\ @{command_def "."} & : & \proof(prove) \ proof(state) | local_theory | theory\ \\ @{command_def "sorry"} & : & \proof(prove) \ proof(state) | local_theory | theory\ \\ @{method_def standard} & : & \method\ \\ \end{matharray} Arbitrary goal refinement via tactics is considered harmful. Structured proof composition in Isar admits proof methods to be invoked in two places only. \<^enum> An \<^emph>\initial\ refinement step @{command_ref "proof"}~\m\<^sub>1\ reduces a newly stated goal to a number of sub-goals that are to be solved later. Facts are passed to \m\<^sub>1\ for forward chaining, if so indicated by \proof(chain)\ mode. \<^enum> A \<^emph>\terminal\ conclusion step @{command_ref "qed"}~\m\<^sub>2\ is intended to solve remaining goals. No facts are passed to \m\<^sub>2\.
The only other (proper) way to affect pending goals in a proof body is by @{command_ref "show"}, which involves an explicit statement of what is to be solved eventually. Thus we avoid the fundamental problem of unstructured tactic scripts that consist of numerous consecutive goal transformations, with invisible effects. \<^medskip> As a general rule of thumb for good proof style, initial proof methods should either solve the goal completely, or constitute some well-understood reduction to new sub-goals. Arbitrary automatic proof tools that are prone to leave a large number of badly structured sub-goals are no help in continuing the proof document in an intelligible manner. Unless given explicitly by the user, the default initial method is @{method standard}, which subsumes at least @{method_ref (Pure) rule} or its classical variant @{method_ref (HOL) rule}. These methods apply a single standard elimination or introduction rule according to the topmost logical connective involved. There is no separate default terminal method. Any remaining goals are always solved by assumption in the very last step. \<^rail>\ @@{command proof} method? ; @@{command qed} method? ; @@{command "by"} method method? ; (@@{command "."} | @@{command ".."} | @@{command sorry}) \ \<^descr> @{command "proof"}~\m\<^sub>1\ refines the goal by proof method \m\<^sub>1\; facts for forward chaining are passed if so indicated by \proof(chain)\ mode. \<^descr> @{command "qed"}~\m\<^sub>2\ refines any remaining goals by proof method \m\<^sub>2\ and concludes the sub-proof by assumption. If the goal had been \show\, some pending sub-goal is solved as well by the rule resulting from the result \<^emph>\exported\ into the enclosing goal context. Thus \qed\ may fail for two reasons: either \m\<^sub>2\ fails, or the resulting rule does not fit any pending goal\<^footnote>\This includes any additional ``strong'' assumptions as introduced by @{command "assume"}.\ of the enclosing context. Debugging such a situation might involve temporarily changing @{command "show"} into @{command "have"}, or weakening the local context by replacing occurrences of @{command "assume"} by @{command "presume"}. \<^descr> @{command "by"}~\m\<^sub>1 m\<^sub>2\ is a \<^emph>\terminal proof\\index{proof!terminal}; it abbreviates @{command "proof"}~\m\<^sub>1\~@{command "qed"}~\m\<^sub>2\, but with backtracking across both methods. Debugging an unsuccessful @{command "by"}~\m\<^sub>1 m\<^sub>2\ command can be done by expanding its definition; in many cases @{command "proof"}~\m\<^sub>1\ (or even \apply\~\m\<^sub>1\) is already sufficient to see the problem. \<^descr> ``@{command ".."}'' is a \<^emph>\standard proof\\index{proof!standard}; it abbreviates @{command "by"}~\standard\. \<^descr> ``@{command "."}'' is a \<^emph>\trivial proof\\index{proof!trivial}; it abbreviates @{command "by"}~\this\. \<^descr> @{command "sorry"} is a \<^emph>\fake proof\\index{proof!fake} pretending to solve the pending claim without further ado. This only works in interactive development, or if the @{attribute quick_and_dirty} option is enabled. Facts emerging from fake proofs are not the real thing. Internally, the derivation object is tainted by an oracle invocation, which may be inspected via the command @{command "thm_oracles"} (\secref{sec:oracles}). The most important application of @{command "sorry"} is to support experimentation and top-down proof development.
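\<^medskip> The following minimal sketch contrasts the initial and the terminal step (the statement itself is arbitrary):
\

(*<*)experiment begin(*>*)
lemma "A --> A & A"
proof (rule impI)  (* initial step: refine the stated goal via rule impI *)
  assume a: "A"
  from a a show "A & A" ..  (* ".." abbreviates "by standard" *)
qed  (* terminal step without method: remaining goals are closed by assumption *)
(*<*)end(*>*)

text \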
\<^descr> @{method standard} refers to the default refinement step of some Isar language elements (notably @{command proof} and ``@{command ".."}''). It is \<^emph>\dynamically scoped\, so the behaviour depends on the application environment. In Isabelle/Pure, @{method standard} performs elementary introduction~/ elimination steps (@{method_ref (Pure) rule}), introduction of type classes (@{method_ref intro_classes}) and locales (@{method_ref intro_locales}). In Isabelle/HOL, @{method standard} also takes classical rules into account (cf.\ \secref{sec:classical}). \ subsection \Fundamental methods and attributes \label{sec:pure-meth-att}\ text \ The following proof methods and attributes refer to basic logical operations of Isar. Further methods and attributes are provided by several generic and object-logic specific tools and packages (see \chref{ch:gen-tools} and \partref{part:hol}). \begin{matharray}{rcl} @{command_def "print_rules"}\\<^sup>*\ & : & \context \\ \\[0.5ex] @{method_def "-"} & : & \method\ \\ @{method_def "goal_cases"} & : & \method\ \\ @{method_def "subproofs"} & : & \method\ \\ @{method_def "fact"} & : & \method\ \\ @{method_def "assumption"} & : & \method\ \\ @{method_def "this"} & : & \method\ \\ @{method_def (Pure) "rule"} & : & \method\ \\ @{attribute_def (Pure) "intro"} & : & \attribute\ \\ @{attribute_def (Pure) "elim"} & : & \attribute\ \\ @{attribute_def (Pure) "dest"} & : & \attribute\ \\ @{attribute_def (Pure) "rule"} & : & \attribute\ \\[0.5ex] @{attribute_def "OF"} & : & \attribute\ \\ @{attribute_def "of"} & : & \attribute\ \\ @{attribute_def "where"} & : & \attribute\ \\ \end{matharray} \<^rail>\ @@{method goal_cases} (@{syntax name}*) ; @@{method subproofs} @{syntax method} ; @@{method fact} @{syntax thms}? ; @@{method (Pure) rule} @{syntax thms}? ; rulemod: ('intro' | 'elim' | 'dest') ((('!' | () | '?') @{syntax nat}?) | 'del') ':' @{syntax thms} ; (@@{attribute intro} | @@{attribute elim} | @@{attribute dest}) ('!' | () | '?') @{syntax nat}? ; @@{attribute (Pure) rule} 'del' ; @@{attribute OF} @{syntax thms} ; @@{attribute of} @{syntax insts} ('concl' ':' @{syntax insts})? @{syntax for_fixes} ; @@{attribute "where"} @{syntax named_insts} @{syntax for_fixes} \ \<^descr> @{command "print_rules"} prints rules declared via attributes @{attribute (Pure) intro}, @{attribute (Pure) elim}, @{attribute (Pure) dest} of Isabelle/Pure. See also the analogous @{command "print_claset"} command for similar rule declarations of the classical reasoner (\secref{sec:classical}). \<^descr> ``@{method "-"}'' (minus) inserts the forward chaining facts as premises into the goal, and nothing else. Note that command @{command_ref "proof"} without any method actually performs a single reduction step using the @{method_ref (Pure) rule} method; thus a plain \<^emph>\do-nothing\ proof step would be ``@{command "proof"}~\-\'' rather than @{command "proof"} alone. \<^descr> @{method "goal_cases"}~\a\<^sub>1 \ a\<^sub>n\ turns the current subgoals into cases within the context (see also \secref{sec:cases-induct}). The specified case names are used if present; otherwise cases are numbered starting from 1. Invoking cases in the subsequent proof body via the @{command_ref case} command will @{command fix} goal parameters, @{command assume} goal premises, and @{command let} variable @{variable_ref ?case} refer to the conclusion. 
\<^descr> @{method "subproofs"}~\m\ applies the method expression \m\ consecutively to each subgoal, constructing individual subproofs internally (analogous to ``\<^theory_text>\show goal by m\'' for each subgoal of the proof state). Search alternatives of \m\ are truncated: the method is forced to be deterministic. This method combinator impacts the internal construction of proof terms: it makes a cascade of let-expressions within the derivation tree and may thus improve scalability. \<^descr> @{method "fact"}~\a\<^sub>1 \ a\<^sub>n\ composes some fact from \a\<^sub>1, \, a\<^sub>n\ (or implicitly from the current proof context) modulo unification of schematic type and term variables. The rule structure is not taken into account, i.e.\ meta-level implication is considered atomic. This is the same principle underlying literal facts (cf.\ \secref{sec:syn-att}): ``@{command "have"}~\\\~@{command "by"}~\fact\'' is equivalent to ``@{command "note"}~\<^verbatim>\`\\\\\<^verbatim>\`\'' provided that \\ \\ is an instance of some known \\ \\ in the proof context. \<^descr> @{method assumption} solves some goal by a single assumption step. All given facts are guaranteed to participate in the refinement; this means there may be only 0 or 1 in the first place. Recall that @{command "qed"} (\secref{sec:proof-steps}) already concludes any remaining sub-goals by assumption, so structured proofs usually need not quote the @{method assumption} method at all. \<^descr> @{method this} applies all of the current facts directly as rules. Recall that ``@{command "."}'' (dot) abbreviates ``@{command "by"}~\this\''. \<^descr> @{method (Pure) rule}~\a\<^sub>1 \ a\<^sub>n\ applies some rule given as argument in backward manner; facts are used to reduce the rule before applying it to the goal. Thus @{method (Pure) rule} without facts is plain introduction, while with facts it becomes elimination. When no arguments are given, the @{method (Pure) rule} method tries to pick appropriate rules automatically, as declared in the current context using the @{attribute (Pure) intro}, @{attribute (Pure) elim}, @{attribute (Pure) dest} attributes (see below). This is included in the standard behaviour of @{command "proof"} and ``@{command ".."}'' (double-dot) steps (see \secref{sec:proof-steps}). \<^descr> @{attribute (Pure) intro}, @{attribute (Pure) elim}, and @{attribute (Pure) dest} declare introduction, elimination, and destruct rules, to be used with method @{method (Pure) rule}, and similar tools. Note that the latter will ignore rules declared with ``\?\'', while ``\!\'' are used most aggressively. The classical reasoner (see \secref{sec:classical}) introduces its own variants of these attributes; use qualified names to access the present versions of Isabelle/Pure, i.e.\ @{attribute (Pure) "Pure.intro"}. \<^descr> @{attribute (Pure) rule}~\del\ undeclares introduction, elimination, or destruct rules. \<^descr> @{attribute OF}~\a\<^sub>1 \ a\<^sub>n\ applies some theorem to all of the given rules \a\<^sub>1, \, a\<^sub>n\ in canonical right-to-left order, which means that premises stemming from the \a\<^sub>i\ emerge in parallel in the result, without interfering with each other. In many practical situations, the \a\<^sub>i\ do not have premises themselves, so \rule [OF a\<^sub>1 \ a\<^sub>n]\ can actually be read as functional application (modulo unification). Argument positions may be effectively skipped by using ``\_\'' (underscore), which refers to the propositional identity rule in the Pure theory.
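For example (a sketch; \conjI\ is the ordinary conjunction introduction rule of Isabelle/HOL):
\

(*<*)experiment begin(*>*)
lemma
  assumes a: "A" and b: "B"
  shows "A & B"
  by (rule conjI [OF a b])  (* premises of conjI are resolved with a and b in parallel *)
(*<*)end(*>*)

text \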
\<^descr> @{attribute of}~\t\<^sub>1 \ t\<^sub>n\ performs positional instantiation of term variables. The terms \t\<^sub>1, \, t\<^sub>n\ are substituted for any schematic variables occurring in a theorem from left to right; ``\_\'' (underscore) indicates to skip a position. Arguments following a ``\concl:\'' specification refer to positions of the conclusion of a rule. An optional context of local variables \\ x\<^sub>1 \ x\<^sub>m\ may be specified: the instantiated theorem is exported, and these variables become schematic (usually with some shifting of indices). \<^descr> @{attribute "where"}~\x\<^sub>1 = t\<^sub>1 \ \ x\<^sub>n = t\<^sub>n\ performs named instantiation of schematic type and term variables occurring in a theorem. Schematic variables have to be specified on the left-hand side (e.g.\ \?x1.3\). The question mark may be omitted if the variable name is a plain identifier without index. As type instantiations are inferred from term instantiations, explicit type instantiations are seldom necessary. An optional context of local variables \\ x\<^sub>1 \ x\<^sub>m\ may be specified as for @{attribute "of"} above. \ subsection \Defining proof methods\ text \ \begin{matharray}{rcl} @{command_def "method_setup"} & : & \local_theory \ local_theory\ \\ \end{matharray} \<^rail>\ @@{command method_setup} @{syntax name} '=' @{syntax text} @{syntax text}? \ \<^descr> @{command "method_setup"}~\name = text description\ defines a proof method in the current context. The given \text\ has to be an ML expression of type \<^ML_type>\(Proof.context -> Proof.method) context_parser\, cf.\ basic parsers defined in structure \<^ML_structure>\Args\ and \<^ML_structure>\Attrib\. There are also combinators like \<^ML>\METHOD\ and \<^ML>\SIMPLE_METHOD\ to turn certain tactic forms into official proof methods; the primed versions refer to tactics with explicit goal addressing. Here are some example method definitions: \ (*<*)experiment begin(*>*) method_setup my_method1 = \Scan.succeed (K (SIMPLE_METHOD' (fn i: int => no_tac)))\ "my first method (without any arguments)" method_setup my_method2 = \Scan.succeed (fn ctxt: Proof.context => SIMPLE_METHOD' (fn i: int => no_tac))\ "my second method (with context)" method_setup my_method3 = \Attrib.thms >> (fn thms: thm list => fn ctxt: Proof.context => SIMPLE_METHOD' (fn i: int => no_tac))\ "my third method (with theorem arguments and context)" (*<*)end(*>*) section \Proof by cases and induction \label{sec:cases-induct}\ subsection \Rule contexts\ text \ \begin{matharray}{rcl} @{command_def "case"} & : & \proof(state) \ proof(state)\ \\ @{command_def "print_cases"}\\<^sup>*\ & : & \context \\ \\ @{attribute_def case_names} & : & \attribute\ \\ @{attribute_def case_conclusion} & : & \attribute\ \\ @{attribute_def params} & : & \attribute\ \\ @{attribute_def consumes} & : & \attribute\ \\ \end{matharray} The puristic way to build up Isar proof contexts is by explicit language elements like @{command "fix"}, @{command "assume"}, @{command "let"} (see \secref{sec:proof-context}). This is adequate for plain natural deduction, but easily becomes unwieldy in concrete verification tasks, which typically involve big induction rules with several cases. 
The @{command "case"} command provides a shorthand to refer to a local context symbolically: certain proof methods provide an environment of named ``cases'' of the form \c: x\<^sub>1, \, x\<^sub>m, \\<^sub>1, \, \\<^sub>n\; the effect of ``@{command "case"}~\c\'' is then equivalent to ``@{command "fix"}~\x\<^sub>1 \ x\<^sub>m\~@{command "assume"}~\c: \\<^sub>1 \ \\<^sub>n\''. Term bindings may be covered as well, notably @{variable ?case} for the main conclusion. By default, the ``terminology'' \x\<^sub>1, \, x\<^sub>m\ of a case value is marked as hidden, i.e.\ there is no way to refer to such parameters in the subsequent proof text. After all, original rule parameters stem from somewhere outside of the current proof text. By using the explicit form ``@{command "case"}~\(c y\<^sub>1 \ y\<^sub>m)\'' instead, the proof author is able to chose local names that fit nicely into the current context. \<^medskip> It is important to note that proper use of @{command "case"} does not provide means to peek at the current goal state, which is not directly observable in Isar! Nonetheless, goal refinement commands do provide named cases \goal\<^sub>i\ for each subgoal \i = 1, \, n\ of the resulting goal state. Using this extra feature requires great care, because some bits of the internal tactical machinery intrude the proof text. In particular, parameter names stemming from the left-over of automated reasoning tools are usually quite unpredictable. Under normal circumstances, the text of cases emerge from standard elimination or induction rules, which in turn are derived from previous theory specifications in a canonical way (say from @{command "inductive"} definitions). \<^medskip> Proper cases are only available if both the proof method and the rules involved support this. By using appropriate attributes, case names, conclusions, and parameters may be also declared by hand. Thus variant versions of rules that have been derived manually become ready to use in advanced case analysis later. \<^rail>\ @@{command case} @{syntax thmdecl}? (name | '(' name (('_' | @{syntax name}) *) ')') ; @@{attribute case_names} ((@{syntax name} ( '[' (('_' | @{syntax name}) *) ']' ) ? ) +) ; @@{attribute case_conclusion} @{syntax name} (@{syntax name} * ) ; @@{attribute params} ((@{syntax name} * ) + @'and') ; @@{attribute consumes} @{syntax int}? \ \<^descr> @{command "case"}~\a: (c x\<^sub>1 \ x\<^sub>m)\ invokes a named local context \c: x\<^sub>1, \, x\<^sub>m, \\<^sub>1, \, \\<^sub>m\, as provided by an appropriate proof method (such as @{method_ref cases} and @{method_ref induct}). The command ``@{command "case"}~\a: (c x\<^sub>1 \ x\<^sub>m)\'' abbreviates ``@{command "fix"}~\x\<^sub>1 \ x\<^sub>m\~@{command "assume"}~\a.c: \\<^sub>1 \ \\<^sub>n\''. Each local fact is qualified by the prefix \a\, and all such facts are collectively bound to the name \a\. The fact name is specification \a\ is optional, the default is to re-use \c\. So @{command "case"}~\(c x\<^sub>1 \ x\<^sub>m)\ is the same as @{command "case"}~\c: (c x\<^sub>1 \ x\<^sub>m)\. \<^descr> @{command "print_cases"} prints all local contexts of the current state, using Isar proof language notation. \<^descr> @{attribute case_names}~\c\<^sub>1 \ c\<^sub>k\ declares names for the local contexts of premises of a theorem; \c\<^sub>1, \, c\<^sub>k\ refers to the \<^emph>\prefix\ of the list of premises. 
Each of the cases \c\<^sub>i\ can be of the form \c[h\<^sub>1 \ h\<^sub>n]\ where the \h\<^sub>1 \ h\<^sub>n\ are the names of the hypotheses in case \c\<^sub>i\ from left to right. \<^descr> @{attribute case_conclusion}~\c d\<^sub>1 \ d\<^sub>k\ declares names for the conclusions of a named premise \c\; here \d\<^sub>1, \, d\<^sub>k\ refers to the prefix of arguments of a logical formula built by nesting a binary connective (e.g.\ \\\). Note that proof methods such as @{method induct} and @{method coinduct} already provide a default name for the conclusion as a whole. The need to name subformulas only arises with cases that split into several sub-cases, as in common co-induction rules. \<^descr> @{attribute params}~\p\<^sub>1 \ p\<^sub>m \ \ q\<^sub>1 \ q\<^sub>n\ renames the innermost parameters of premises \1, \, n\ of some theorem. An empty list of names may be given to skip positions, leaving the present parameters unchanged. Note that the default usage of case rules does \<^emph>\not\ directly expose parameters to the proof context. \<^descr> @{attribute consumes}~\n\ declares the number of ``major premises'' of a rule, i.e.\ the number of facts to be consumed when it is applied by an appropriate proof method. The default value of @{attribute consumes} is \n = 1\, which is appropriate for the usual kind of cases and induction rules for inductive sets (cf.\ \secref{sec:hol-inductive}). Rules without any @{attribute consumes} declaration given are treated as if @{attribute consumes}~\0\ had been specified. A negative \n\ is interpreted relative to the total number of premises of the rule in the target context. Thus its absolute value specifies the remaining number of premises, after subtracting the prefix of major premises as indicated above. This form of declaration has the technical advantage of being stable under more morphisms, notably those that export the result from a nested @{command_ref context} with additional assumptions. Note that explicit @{attribute consumes} declarations are only rarely needed; this is already taken care of automatically by the higher-level @{attribute cases}, @{attribute induct}, and @{attribute coinduct} declarations. \ subsection \Proof methods\ text \ \begin{matharray}{rcl} @{method_def cases} & : & \method\ \\ @{method_def induct} & : & \method\ \\ @{method_def induction} & : & \method\ \\ @{method_def coinduct} & : & \method\ \\ \end{matharray} The @{method cases}, @{method induct}, @{method induction}, and @{method coinduct} methods provide a uniform interface to common proof techniques over datatypes, inductive predicates (or sets), recursive functions etc. The corresponding rules may be specified and instantiated in a casual manner. Furthermore, these methods provide named local contexts that may be invoked via the @{command "case"} proof command within the subsequent proof text. This accommodates compact proof texts even when reasoning about large specifications. The @{method induct} method also provides some infrastructure to work with structured statements (either using explicit meta-level connectives, or including facts and parameters separately). This avoids cumbersome encoding of ``strengthened'' inductive statements within the object-logic. Method @{method induction} differs from @{method induct} only in the names of the facts in the local context invoked by the @{command "case"} command. \<^rail>\ @@{method cases} ('(' 'no_simp' ')')? \ (@{syntax insts} * @'and') rule? ; (@@{method induct} | @@{method induction}) ('(' 'no_simp' ')')?
(definsts * @'and') \ arbitrary? taking? rule? ; @@{method coinduct} @{syntax insts} taking rule? ; rule: ('type' | 'pred' | 'set') ':' (@{syntax name} +) | 'rule' ':' (@{syntax thm} +) ; definst: @{syntax name} ('==' | '\') @{syntax term} | '(' @{syntax term} ')' | @{syntax inst} ; definsts: ( definst * ) ; arbitrary: 'arbitrary' ':' ((@{syntax term} * ) @'and' +) ; taking: 'taking' ':' @{syntax insts} \ \<^descr> @{method cases}~\insts R\ applies method @{method rule} with an appropriate case distinction theorem, instantiated to the subjects \insts\. Symbolic case names are bound according to the rule's local contexts. The rule is determined as follows, according to the facts and arguments passed to the @{method cases} method: \<^medskip> \begin{tabular}{llll} facts & & arguments & rule \\\hline \\ R\ & @{method cases} & & implicit rule \R\ \\ & @{method cases} & & classical case split \\ & @{method cases} & \t\ & datatype exhaustion (type of \t\) \\ \\ A t\ & @{method cases} & \\\ & inductive predicate/set elimination (of \A\) \\ \\\ & @{method cases} & \\ rule: R\ & explicit rule \R\ \\ \end{tabular} \<^medskip> Several instantiations may be given, referring to the \<^emph>\suffix\ of premises of the case rule; within each premise, the \<^emph>\prefix\ of variables is instantiated. In most situations, only a single term needs to be specified; this refers to the first variable of the last premise (it is usually the same for all cases). The \(no_simp)\ option can be used to disable pre-simplification of cases (see the description of @{method induct} below for details). \<^descr> @{method induct}~\insts R\ and @{method induction}~\insts R\ are analogous to the @{method cases} method, but refer to induction rules, which are determined as follows: \<^medskip> \begin{tabular}{llll} facts & & arguments & rule \\\hline & @{method induct} & \P x\ & datatype induction (type of \x\) \\ \\ A x\ & @{method induct} & \\\ & predicate/set induction (of \A\) \\ \\\ & @{method induct} & \\ rule: R\ & explicit rule \R\ \\ \end{tabular} \<^medskip> Several instantiations may be given, each referring to some part of a mutual inductive definition or datatype --- only related partial induction rules may be used together, though. Any of the lists of terms \P, x, \\ refers to the \<^emph>\suffix\ of variables present in the induction rule. This enables the writer to specify only induction variables, or both predicates and variables, for example. Instantiations may be definitional: equations \x \ t\ introduce local definitions, which are inserted into the claim and discharged after applying the induction rule. Equalities reappear in the inductive cases, but have been transformed according to the induction principle being involved here. In order to achieve practically useful induction hypotheses, some variables occurring in \t\ need to be generalized (see below). Instantiations of the form \t\, where \t\ is not a variable, are taken as a shorthand for \x \ t\, where \x\ is a fresh variable. If this is not intended, \t\ has to be enclosed in parentheses. By default, the equalities generated by definitional instantiations are pre-simplified using a specific set of rules, usually consisting of distinctness and injectivity theorems for datatypes. This pre-simplification may cause some of the parameters of an inductive case to disappear, or may even completely delete some of the inductive cases, if one of the equalities occurring in their premises can be simplified to \False\.
The \(no_simp)\ option can be used to disable pre-simplification. Additional rules to be used in pre-simplification can be declared using the @{attribute_def induct_simp} attribute. The optional ``\arbitrary: x\<^sub>1 \ x\<^sub>m\'' specification generalizes variables \x\<^sub>1, \, x\<^sub>m\ of the original goal before applying induction. It is possible to separate variables by ``\and\'' to generalize in goals other than the first. Thus induction hypotheses may become sufficiently general to get the proof through. Together with definitional instantiations, one may effectively perform induction over expressions of a certain structure. The optional ``\taking: t\<^sub>1 \ t\<^sub>n\'' specification provides additional instantiations of a prefix of pending variables in the rule. Such schematic induction rules rarely occur in practice, though. \<^descr> @{method coinduct}~\inst R\ is analogous to the @{method induct} method, but refers to coinduction rules, which are determined as follows: \<^medskip> \begin{tabular}{llll} goal & & arguments & rule \\\hline & @{method coinduct} & \x\ & type coinduction (type of \x\) \\ \A x\ & @{method coinduct} & \\\ & predicate/set coinduction (of \A\) \\ \\\ & @{method coinduct} & \\ rule: R\ & explicit rule \R\ \\ \end{tabular} \<^medskip> Coinduction is the dual of induction. Induction essentially eliminates \A x\ towards a generic result \P x\, while coinduction introduces \A x\ starting with \B x\, for a suitable ``bisimulation'' \B\. The cases of a coinduct rule are typically named after the predicates or sets being covered, while the conclusions consist of several alternatives being named after the individual destructor patterns. The given instantiation refers to the \<^emph>\suffix\ of variables occurring in the rule's major premise, or conclusion if unavailable. An additional ``\taking: t\<^sub>1 \ t\<^sub>n\'' specification may be required in order to specify the bisimulation to be used in the coinduction step. The above methods produce named local contexts, as determined by the instantiated rule as given in the text. Beyond that, the @{method induct} and @{method coinduct} methods guess further instantiations from the goal specification itself. Any persisting unresolved schematic variables of the resulting rule will render the corresponding case invalid. The term binding @{variable ?case} for the conclusion will be provided with each case, provided that term is fully specified. The @{command "print_cases"} command prints all named cases present in the current proof state. \<^medskip> Despite the additional infrastructure, both @{method cases} and @{method coinduct} merely apply a certain rule, after instantiation, while conforming to the usual way of monotonic natural deduction: the context of a structured statement \\x\<^sub>1 \ x\<^sub>m. \\<^sub>1 \ \ \\<^sub>n \ \\ reappears unchanged after the case split. The @{method induct} method is fundamentally different in this respect: the meta-level structure is passed through the ``recursive'' course involved in the induction. Thus the original statement is basically replaced by separate copies, corresponding to the induction hypotheses and conclusion; the original goal context is no longer available. Thus local assumptions, fixed parameters and definitions effectively participate in the inductive rephrasing of the original statement.
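\<^medskip> A typical structural induction with symbolic cases may look like this (a sketch; the statement about lists is standard):
\

(*<*)experiment begin(*>*)
lemma "length (xs @ ys) = length xs + length ys"
proof (induct xs)
  case Nil
  show ?case by simp
next
  case (Cons x xs)  (* explicit names for the case parameters *)
  then show ?case by simp  (* "then" chains the induction hypothesis Cons.hyps *)
qed
(*<*)end(*>*)

text \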
In @{method induct} proofs, local assumptions introduced by cases are split into two different kinds: \hyps\ stemming from the rule and \prems\ from the goal statement. This is reflected in the extracted cases accordingly, so invoking ``@{command "case"}~\c\'' will provide separate facts \c.hyps\ and \c.prems\, as well as fact \c\ to hold the all-inclusive list. In @{method induction} proofs, local assumptions introduced by cases are split into three different kinds: \IH\, the induction hypotheses, \hyps\, the remaining hypotheses stemming from the rule, and \prems\, the assumptions from the goal statement. The names are \c.IH\, \c.hyps\ and \c.prems\, as above. \<^medskip> Facts presented to either method are consumed according to the number of ``major premises'' of the rule involved, which is usually 0 for plain cases and induction rules of datatypes etc.\ and 1 for rules of inductive predicates or sets and the like. The remaining facts are inserted into the goal verbatim before the actual \cases\, \induct\, or \coinduct\ rule is applied. \ subsection \Declaring rules\ text \ \begin{matharray}{rcl} @{command_def "print_induct_rules"}\\<^sup>*\ & : & \context \\ \\ @{attribute_def cases} & : & \attribute\ \\ @{attribute_def induct} & : & \attribute\ \\ @{attribute_def coinduct} & : & \attribute\ \\ \end{matharray} \<^rail>\ @@{attribute cases} spec ; @@{attribute induct} spec ; @@{attribute coinduct} spec ; spec: (('type' | 'pred' | 'set') ':' @{syntax name}) | 'del' \ \<^descr> @{command "print_induct_rules"} prints cases and induct rules for predicates (or sets) and types of the current context. \<^descr> @{attribute cases}, @{attribute induct}, and @{attribute coinduct} (as attributes) declare rules for reasoning about (co)inductive predicates (or sets) and types, using the corresponding methods of the same name. Certain definitional packages of object-logics usually declare emerging cases and induction rules as expected, so users rarely need to intervene. Rules may be deleted via the \del\ specification, which covers all of the \type\/\pred\/\set\ sub-categories simultaneously. For example, @{attribute cases}~\del\ removes any @{attribute cases} rules declared for some type, predicate, or set. Manual rule declarations usually refer to the @{attribute case_names} and @{attribute params} attributes to adjust names of cases and parameters of a rule; the @{attribute consumes} declaration is taken care of automatically: @{attribute consumes}~\0\ is specified for ``type'' rules and @{attribute consumes}~\1\ for ``predicate'' / ``set'' rules. \ section \Generalized elimination and case splitting \label{sec:obtain}\ text \ \begin{matharray}{rcl} @{command_def "consider"} & : & \proof(state) | proof(chain) \ proof(prove)\ \\ @{command_def "obtain"} & : & \proof(state) | proof(chain) \ proof(prove)\ \\ @{command_def "guess"}\\<^sup>*\ & : & \proof(state) | proof(chain) \ proof(prove)\ \\ \end{matharray} Generalized elimination means that hypothetical parameters and premises may be introduced in the current context, potentially with a split into cases. This works by virtue of a locally proven rule that establishes the soundness of this temporary context extension. As representative examples, one may think of standard rules from Isabelle/HOL like this: \<^medskip> \begin{tabular}{ll} \\x. B x \ (\x. 
B x \ thesis) \ thesis\ \\ \A \ B \ (A \ B \ thesis) \ thesis\ \\ \A \ B \ (A \ thesis) \ (B \ thesis) \ thesis\ \\ \end{tabular} \<^medskip> In general, these particular rules and connectives need not get involved at all: this concept works directly in Isabelle/Pure via Isar commands defined below. In particular, the logic of elimination and case splitting is delegated to an Isar proof, which often involves automated tools. \<^rail>\ @@{command consider} @{syntax obtain_clauses} ; @@{command obtain} @{syntax par_name}? @{syntax vars} \ @'where' concl prems @{syntax for_fixes} ; concl: (@{syntax props} + @'and') ; prems: (@'if' (@{syntax props'} + @'and'))? ; @@{command guess} @{syntax vars} \ \<^descr> @{command consider}~\(a) \<^vec>x \ \<^vec>A \<^vec>x | (b) \<^vec>y \ \<^vec>B \<^vec>y | \\ states a rule for case splitting into separate subgoals, such that each case involves new parameters and premises. After the proof is finished, the resulting rule may be used directly with the @{method cases} proof method (\secref{sec:cases-induct}), in order to perform actual case-splitting of the proof text via @{command case} and @{command next} as usual. Optional names in round parentheses refer to case names: in the proof of the rule this is a fact name, in the resulting rule it is used as an annotation with the @{attribute_ref case_names} attribute. \<^medskip> Formally, the command @{command consider} is defined as a derived Isar language element as follows: \begin{matharray}{l} @{command "consider"}~\(a) \<^vec>x \ \<^vec>A \<^vec>x | (b) \<^vec>y \ \<^vec>B \<^vec>y | \ \\ \\[1ex] \quad @{command "have"}~\[case_names a b \]: thesis\ \\ \qquad \\ a [Pure.intro?]: \\<^vec>x. \<^vec>A \<^vec>x \ thesis\ \\ \qquad \\ b [Pure.intro?]: \\<^vec>y. \<^vec>B \<^vec>y \ thesis\ \\ \qquad \\ \\ \\ \qquad \\ thesis\ \\ \qquad @{command "apply"}~\(insert a b \)\ \\ \end{matharray} See also \secref{sec:goals} for @{keyword "obtains"} in toplevel goal statements, as well as @{command print_statement} to print existing rules in a similar format. \<^descr> @{command obtain}~\\<^vec>x \ \<^vec>A \<^vec>x\ states a generalized elimination rule with exactly one case. After the proof is finished, it is activated for the subsequent proof text: the context is augmented via @{command fix}~\\<^vec>x\ @{command assume}~\\<^vec>A \<^vec>x\, with special provisions to export later results by discharging these assumptions again. Note that according to the parameter scopes within the elimination rule, results \<^emph>\must not\ refer to hypothetical parameters; otherwise the export will fail! This restriction conforms to the usual manner of existential reasoning in Natural Deduction. \<^medskip> Formally, the command @{command obtain} is defined as a derived Isar language element as follows, using an instrumented variant of @{command assume}: \begin{matharray}{l} @{command "obtain"}~\\<^vec>x \ a: \<^vec>A \<^vec>x \proof\ \\ \\[1ex] \quad @{command "have"}~\thesis\ \\ \qquad \\ that [Pure.intro?]: \\<^vec>x. \<^vec>A \<^vec>x \ thesis\ \\ \qquad \\ thesis\ \\ \qquad @{command "apply"}~\(insert that)\ \\ \qquad \\proof\\ \\ \quad @{command "fix"}~\\<^vec>x\~@{command "assume"}\\<^sup>* a: \<^vec>A \<^vec>x\ \\ \end{matharray} \<^descr> @{command guess} is similar to @{command obtain}, but it derives the obtained context elements from the course of tactical reasoning in the proof. Thus it can considerably obscure the proof: it is classified as \<^emph>\improper\. A proof with @{command guess} starts with a fixed goal \thesis\.
The subsequent refinement steps may turn this to anything of the form \\\<^vec>x. \<^vec>A \<^vec>x \ thesis\, but without splitting into new subgoals. The final goal state is then used as reduction rule for the obtain pattern described above. Obtained parameters \\<^vec>x\ are marked as internal by default, and thus inaccessible in the proof text. The variable names and type constraints given as arguments for @{command "guess"} specify a prefix of accessible parameters. In the proof of @{command consider} and @{command obtain} the local premises are always bound to the fact name @{fact_ref that}, according to structured Isar statements involving @{keyword_ref "if"} (\secref{sec:goals}). Facts that are established by @{command "obtain"} and @{command "guess"} may not be polymorphic: any type-variables occurring here are fixed in the present context. This is a natural consequence of the role of @{command fix} and @{command assume} in these constructs. \ end diff --git a/src/Doc/Isar_Ref/Spec.thy b/src/Doc/Isar_Ref/Spec.thy --- a/src/Doc/Isar_Ref/Spec.thy +++ b/src/Doc/Isar_Ref/Spec.thy @@ -1,1532 +1,1535 @@ (*:maxLineLen=78:*) theory Spec imports Main Base begin chapter \Specifications\ text \ The Isabelle/Isar theory format integrates specifications and proofs, with support for interactive development by continuous document editing. There is a separate document preparation system (see \chref{ch:document-prep}), for typesetting formal developments together with informal text. The resulting hyper-linked PDF documents can be used both for WWW presentation and printed copies. The Isar proof language (see \chref{ch:proofs}) is embedded into the theory language as a proper sub-language. Proof mode is entered by stating some \<^theory_text>\theorem\ or \<^theory_text>\lemma\ at the theory level, and left again with the final conclusion (e.g.\ via \<^theory_text>\qed\). \ section \Defining theories \label{sec:begin-thy}\ text \ \begin{matharray}{rcl} @{command_def "theory"} & : & \toplevel \ theory\ \\ @{command_def (global) "end"} & : & \theory \ toplevel\ \\ @{command_def "thy_deps"}\\<^sup>*\ & : & \theory \\ \\ \end{matharray} Isabelle/Isar theories are defined via theory files, which consist of an outermost sequence of definition--statement--proof elements. Some definitions are self-sufficient (e.g.\ \<^theory_text>\fun\ in Isabelle/HOL), with foundational proofs performed internally. Other definitions require an explicit proof as justification (e.g.\ \<^theory_text>\function\ and \<^theory_text>\termination\ in Isabelle/HOL). Plain statements like \<^theory_text>\theorem\ or \<^theory_text>\lemma\ are merely a special case of that, defining a theorem from a given proposition and its proof. The theory body may be sub-structured by means of \<^emph>\local theory targets\, such as \<^theory_text>\locale\ and \<^theory_text>\class\. It is also possible to use \<^theory_text>\context begin \ end\ blocks to delimit a local theory context: a \<^emph>\named context\ to augment a locale or class specification, or an \<^emph>\unnamed context\ to refer to local parameters and assumptions that are discharged later. See \secref{sec:target} for more details. \<^medskip> A theory is commenced by the \<^theory_text>\theory\ command, which indicates imports of previous theories, according to an acyclic foundational order. Before the initial \<^theory_text>\theory\ command, there may be optional document header material (like \<^theory_text>\section\ or \<^theory_text>\text\, see \secref{sec:markup}).
The document header is outside of the formal theory context, though. A theory is concluded by a final @{command (global) "end"} command, one that does not belong to a local theory target. No further commands may follow such a global @{command (global) "end"}. \<^rail>\ @@{command theory} @{syntax system_name} @'imports' (@{syntax system_name} +) \ keywords? abbrevs? @'begin' ; keywords: @'keywords' (keyword_decls + @'and') ; keyword_decls: (@{syntax string} +) ('::' @{syntax name} @{syntax tags})? ; abbrevs: @'abbrevs' (((text+) '=' (text+)) + @'and') ; @@{command thy_deps} (thy_bounds thy_bounds?)? ; thy_bounds: @{syntax name} | '(' (@{syntax name} + @'|') ')' \ \<^descr> \<^theory_text>\theory A imports B\<^sub>1 \ B\<^sub>n begin\ starts a new theory \A\ based on the merge of existing theories \B\<^sub>1 \ B\<^sub>n\. Due to the possibility to import more than one ancestor, the resulting theory structure of an Isabelle session forms a directed acyclic graph (DAG). Isabelle takes care that sources contributing to the development graph are always up-to-date: changed files are automatically rechecked whenever a theory header specification is processed. Empty imports are only allowed in the bootstrap process of the special theory \<^theory>\Pure\, which is the start of any other formal development based on Isabelle. Regular user theories usually refer to some more complex entry point, such as theory \<^theory>\Main\ in Isabelle/HOL. The @{keyword_def "keywords"} specification declares outer syntax (\chref{ch:outer-syntax}) that is introduced in this theory later on (rare in end-user applications). Both minor keywords and major keywords of the Isar command language need to be specified, in order to make parsing of proof documents work properly. Command keywords need to be classified according to their structural role in the formal text. Examples may be seen in Isabelle/HOL sources itself, such as @{keyword "keywords"}~\<^verbatim>\"typedef"\ \:: thy_goal_defn\ or @{keyword "keywords"}~\<^verbatim>\"datatype"\ \:: thy_defn\ for theory-level definitions with and without proof, respectively. Additional @{syntax tags} provide defaults for document preparation (\secref{sec:document-markers}). The @{keyword_def "abbrevs"} specification declares additional abbreviations for syntactic completion. The default for a new keyword is just its name, but completion may be avoided by defining @{keyword "abbrevs"} with empty text. \<^descr> @{command (global) "end"} concludes the current theory definition. Note that some other commands, e.g.\ local theory targets \<^theory_text>\locale\ or \<^theory_text>\class\ may involve a \<^theory_text>\begin\ that needs to be matched by @{command (local) "end"}, according to the usual rules for nested blocks. \<^descr> \<^theory_text>\thy_deps\ visualizes the theory hierarchy as a directed acyclic graph. By default, all imported theories are shown. This may be restricted by specifying bounds wrt. the theory inclusion relation. \ section \Local theory targets \label{sec:target}\ text \ \begin{matharray}{rcll} @{command_def "context"} & : & \theory \ local_theory\ \\ @{command_def (local) "end"} & : & \local_theory \ theory\ \\ @{keyword_def "private"} \\ @{keyword_def "qualified"} \\ \end{matharray} A local theory target is a specification context that is managed separately within the enclosing theory. Contexts may introduce parameters (fixed variables) and assumptions (hypotheses). Definitions and theorems depending on the context may be added incrementally later on. 
\<^emph>\Named contexts\ refer to locales (cf.\ \secref{sec:locale}) or type classes (cf.\ \secref{sec:class}); the name ``\-\'' signifies the global theory context. \<^emph>\Unnamed contexts\ may introduce additional parameters and assumptions, and results produced in the context are generalized accordingly. Such auxiliary contexts may be nested within other targets, like \<^theory_text>\locale\, \<^theory_text>\class\, \<^theory_text>\instantiation\, \<^theory_text>\overloading\. \<^rail>\ @@{command context} @{syntax name} @{syntax_ref "opening"}? @'begin' ; @@{command context} @{syntax_ref "includes"}? (@{syntax context_elem} * ) @'begin' ; @{syntax_def target}: '(' @'in' @{syntax name} ')' \ \<^descr> \<^theory_text>\context c bundles begin\ opens a named context, by recommencing an existing locale or class \c\. Note that locale and class definitions allow the \<^theory_text>\begin\ keyword to be included as well, in order to continue the local theory immediately after the initial specification. Optionally given \bundles\ only take effect in the surface context within the \<^theory_text>\begin\ / \<^theory_text>\end\ block. \<^descr> \<^theory_text>\context bundles elements begin\ opens an unnamed context, by extending the enclosing global or local theory target by the given declaration bundles (\secref{sec:bundle}) and context elements (\<^theory_text>\fixes\, \<^theory_text>\assumes\ etc.). This means any results stemming from definitions and proofs in the extended context will be exported into the enclosing target by lifting over extra parameters and premises. \<^descr> @{command (local) "end"} concludes the current local theory, according to the nesting of contexts. Note that a global @{command (global) "end"} has a different meaning: it concludes the theory itself (\secref{sec:begin-thy}). \<^descr> \<^theory_text>\private\ or \<^theory_text>\qualified\ may be given as modifiers before any local theory command. This restricts name space accesses to the local scope, as determined by the enclosing \<^theory_text>\context begin \ end\ block. Outside its scope, a \<^theory_text>\private\ name is inaccessible, and a \<^theory_text>\qualified\ name is only accessible with some qualification. Neither a global \<^theory_text>\theory\ nor a \<^theory_text>\locale\ target provides a local scope by itself: an extra unnamed context is required to use \<^theory_text>\private\ or \<^theory_text>\qualified\ here. \<^descr> \(\@{keyword_def "in"}~\c)\ given after any local theory command specifies an immediate target, e.g.\ ``\<^theory_text>\definition (in c)\'' or ``\<^theory_text>\theorem (in c)\''. This works in both local and global theory contexts; the current target context will be suspended for this command only. Note that ``\<^theory_text>\(in -)\'' will always produce a global result independently of the current target context. Any specification element that operates on \local_theory\ according to this manual implicitly allows the above target syntax \<^theory_text>\(in c)\, but individual syntax diagrams omit that aspect for clarity. \<^medskip> The exact meaning of results produced within a local theory context depends on the underlying target infrastructure (locale, type class etc.). The general idea is as follows, considering a context named \c\ with parameter \x\ and assumption \A[x]\. Definitions are exported by introducing a global version with additional arguments; a syntactic abbreviation links the long form with the abstract version of the target context.
For example, \a \ t[x]\ becomes \c.a ?x \ t[?x]\ at the theory level (for arbitrary \?x\), together with a local abbreviation \a \ c.a x\ in the target context (for the fixed parameter \x\). Theorems are exported by discharging the assumptions and generalizing the parameters of the context. For example, \a: B[x]\ becomes \c.a: A[?x] \ B[?x]\, again for arbitrary \?x\. \ section \Bundled declarations \label{sec:bundle}\ text \ \begin{matharray}{rcl} @{command_def "bundle"} & : & \local_theory \ local_theory\ \\ @{command "bundle"} & : & \theory \ local_theory\ \\ @{command_def "print_bundles"}\\<^sup>*\ & : & \context \\ \\ @{command_def "include"} & : & \proof(state) \ proof(state)\ \\ @{command_def "including"} & : & \proof(prove) \ proof(prove)\ \\ @{keyword_def "includes"} & : & syntax \\ \end{matharray} The outer syntax of fact expressions (\secref{sec:syn-att}) involves theorems and attributes, which are evaluated in the context and applied to it. Attributes may declare theorems to the context, as in \this_rule [intro] that_rule [elim]\ for example. Configuration options (\secref{sec:config}) are special declaration attributes that operate on the context without a theorem, as in \[[show_types = false]]\ for example. Expressions of this form may be defined as \<^emph>\bundled declarations\ in the context, and included in other situations later on. Including declaration bundles augments a local context casually without logical dependencies, which is in contrast to locales and locale interpretation (\secref{sec:locale}). \<^rail>\ @@{command bundle} @{syntax name} ( '=' @{syntax thms} @{syntax for_fixes} | @'begin') ; @@{command print_bundles} ('!'?) ; (@@{command include} | @@{command including}) (@{syntax name}+) ; @{syntax_def "includes"}: @'includes' (@{syntax name}+) ; @{syntax_def "opening"}: @'opening' (@{syntax name}+) ; @@{command unbundle} (@{syntax name}+) \ \<^descr> \<^theory_text>\bundle b = decls\ defines a bundle of declarations in the current context. The RHS is similar to the one of the \<^theory_text>\declare\ command. Bundles defined in local theory targets are subject to transformations via morphisms, when moved into different application contexts; this works analogously to any other local theory specification. \<^descr> \<^theory_text>\bundle b begin body end\ defines a bundle of declarations from the \body\ of local theory specifications. It may consist of commands that are technically equivalent to \<^theory_text>\declare\ or \<^theory_text>\declaration\, which also includes \<^theory_text>\notation\, for example. Named fact declarations like ``\<^theory_text>\lemmas a [simp] = b\'' or ``\<^theory_text>\lemma a [simp]: B \\'' are also admitted, but the name bindings are not recorded in the bundle. \<^descr> \<^theory_text>\print_bundles\ prints the named bundles that are available in the current context; the ``\!\'' option indicates extra verbosity. \<^descr> \<^theory_text>\include b\<^sub>1 \ b\<^sub>n\ activates the declarations from the given bundles in a proof body (forward mode). This is analogous to \<^theory_text>\note\ (\secref{sec:proof-facts}) with the expanded bundles. \<^descr> \<^theory_text>\including b\<^sub>1 \ b\<^sub>n\ is similar to \<^theory_text>\include\, but works in proof refinement (backward mode). This is analogous to \<^theory_text>\using\ (\secref{sec:proof-facts}) with the expanded bundles. 
\<^descr> \<^theory_text>\includes b\<^sub>1 \ b\<^sub>n\ is similar to \<^theory_text>\include\, but applies to a confined specification context: unnamed \<^theory_text>\context\s and long statements of \<^theory_text>\theorem\. \<^descr> \<^theory_text>\opening b\<^sub>1 \ b\<^sub>n\ is similar to \<^theory_text>\includes\, but applies to a named specification context: \<^theory_text>\locale\s, \<^theory_text>\class\es and named \<^theory_text>\context\s. The effect is confined to the surface context within the specification block itself and the corresponding \<^theory_text>\begin\ / \<^theory_text>\end\ block. \<^descr> \<^theory_text>\unbundle b\<^sub>1 \ b\<^sub>n\ activates the declarations from the given bundles in the current local theory context. This is analogous to \<^theory_text>\lemmas\ (\secref{sec:theorems}) with the expanded bundles. Here is an artificial example of bundling various configuration options: \ (*<*)experiment begin(*>*) bundle trace = [[simp_trace, linarith_trace, metis_trace, smt_trace]] lemma "x = x" including trace by metis (*<*)end(*>*) section \Term definitions \label{sec:term-definitions}\ text \ \begin{matharray}{rcll} @{command_def "definition"} & : & \local_theory \ local_theory\ \\ @{attribute_def "defn"} & : & \attribute\ \\ @{command_def "print_defn_rules"}\\<^sup>*\ & : & \context \\ \\ @{command_def "abbreviation"} & : & \local_theory \ local_theory\ \\ @{command_def "print_abbrevs"}\\<^sup>*\ & : & \context \\ \\ \end{matharray} Term definitions may either happen within the logic (as equational axioms of a certain form, see also \secref{sec:overloading}), or outside of it as a rewrite system on abstract syntax. The second form is called ``abbreviation''. \<^rail>\ @@{command definition} decl? definition ; @@{command abbreviation} @{syntax mode}? decl? abbreviation ; @@{command print_abbrevs} ('!'?) ; decl: @{syntax name} ('::' @{syntax type})? @{syntax mixfix}? @'where' ; definition: @{syntax thmdecl}? @{syntax prop} @{syntax spec_prems} @{syntax for_fixes} ; abbreviation: @{syntax prop} @{syntax for_fixes} \ \<^descr> \<^theory_text>\definition c where eq\ produces an internal definition \c \ t\ according to the specification given as \eq\, which is then turned into a proven fact. The given proposition may deviate from internal meta-level equality according to the rewrite rules declared as @{attribute defn} by the object-logic. This usually covers object-level equality \x = y\ and equivalence \A \ B\. End-users normally need not change the @{attribute defn} setup. Definitions may be presented with explicit arguments on the LHS, as well as additional conditions, e.g.\ \f x y = t\ instead of \f \ \x y. t\ and \y \ 0 \ g x y = u\ instead of an unrestricted \g \ \x y. u\. \<^descr> \<^theory_text>\print_defn_rules\ prints the definitional rewrite rules declared via @{attribute defn} in the current context. \<^descr> \<^theory_text>\abbreviation c where eq\ introduces a syntactic constant which is associated with a certain term according to the meta-level equality \eq\. Abbreviations participate in the usual type-inference process, but are expanded before the logic ever sees them. Pretty printing of terms involves higher-order rewriting with rules stemming from reverted abbreviations. This needs some care to avoid overlapping or looping syntactic replacements! The optional \mode\ specification restricts output to a particular print mode; using ``\input\'' here achieves the effect of one-way abbreviations.
The mode may also include an ``\<^theory_text>\output\'' qualifier that affects the concrete syntax declared for abbreviations, cf.\ \<^theory_text>\syntax\ in \secref{sec:syn-trans}. \<^descr> \<^theory_text>\print_abbrevs\ prints all constant abbreviations of the current context; the ``\!\'' option indicates extra verbosity. \ section \Axiomatizations \label{sec:axiomatizations}\ text \ \begin{matharray}{rcll} @{command_def "axiomatization"} & : & \theory \ theory\ & (axiomatic!) \\ \end{matharray} \<^rail>\ @@{command axiomatization} @{syntax vars}? (@'where' axiomatization)? ; axiomatization: (@{syntax thmdecl} @{syntax prop} + @'and') @{syntax spec_prems} @{syntax for_fixes} \ \<^descr> \<^theory_text>\axiomatization c\<^sub>1 \ c\<^sub>m where \\<^sub>1 \ \\<^sub>n\ introduces several constants simultaneously and states axiomatic properties for these. The constants are marked as being specified once and for all, which prevents additional specifications for the same constants later on, but it is always possible to emit axiomatizations without referring to particular constants. Note that lack of precise dependency tracking of axiomatizations may disrupt the well-formedness of an otherwise definitional theory. Axiomatization is restricted to a global theory context: support for local theory targets (\secref{sec:target}) would introduce an extra dimension of uncertainty about what the written specifications really are, and make it infeasible to argue why they are correct. Axiomatic specifications are required when declaring a new logical system within Isabelle/Pure, but in an application environment like Isabelle/HOL the user normally stays within definitional mechanisms provided by the logic and its libraries. \ section \Generic declarations\ text \ \begin{matharray}{rcl} @{command_def "declaration"} & : & \local_theory \ local_theory\ \\ @{command_def "syntax_declaration"} & : & \local_theory \ local_theory\ \\ @{command_def "declare"} & : & \local_theory \ local_theory\ \\ \end{matharray} Arbitrary operations on the background context may be wrapped up as generic declaration elements. Since the underlying concept of local theories may be subject to later re-interpretation, there is an additional dependency on a morphism that tells the difference of the original declaration context wrt.\ the application context encountered later on. A fact declaration is an important special case: it consists of a theorem which is applied to the context by means of an attribute. \<^rail>\ (@@{command declaration} | @@{command syntax_declaration}) ('(' @'pervasive' ')')? \ @{syntax text} ; @@{command declare} (@{syntax thms} + @'and') \ \<^descr> \<^theory_text>\declaration d\ adds the declaration function \d\ of ML type \<^ML_type>\declaration\ to the current local theory under construction. In later application contexts, the function is transformed according to the morphisms being involved in the interpretation hierarchy. If the \<^theory_text>\(pervasive)\ option is given, the corresponding declaration is applied to all possible contexts involved, including the global background theory. \<^descr> \<^theory_text>\syntax_declaration\ is similar to \<^theory_text>\declaration\, but is meant to affect only ``syntactic'' tools by convention (such as notation and type-checking information). \<^descr> \<^theory_text>\declare thms\ declares theorems to the current local theory context.
No theorem binding is involved here, unlike \<^theory_text>\lemmas\ (cf.\ \secref{sec:theorems}), so \<^theory_text>\declare\ only has the effect of applying attributes as included in the theorem specification.
\

section \Locales \label{sec:locale}\

text \
A locale is a functor that maps parameters (including implicit type parameters) and a specification to a list of declarations. The syntax of locales is modeled after the Isar proof context commands (cf.\ \secref{sec:proof-context}).

Locale hierarchies are supported by maintaining a graph of dependencies between locale instances in the global theory. Dependencies may be introduced through import (where a locale is defined as a sublocale of the imported instances) or by proving that an existing locale is a sublocale of one or several locale instances.

A locale may be opened with the purpose of appending to its list of declarations (cf.\ \secref{sec:target}). When opening a locale, declarations from all dependencies are collected and are presented as a local theory. In this process, which is called \<^emph>\roundup\, redundant locale instances are omitted. A locale instance is redundant if it is subsumed by an instance encountered earlier. A more detailed description of this process is available elsewhere @{cite Ballarin2014}.
\

subsection \Locale expressions \label{sec:locale-expr}\

text \
A \<^emph>\locale expression\ denotes a context composed of instances of existing locales. The context consists of the declaration elements from the locale instances. Redundant locale instances are omitted according to roundup.

\<^rail>\ @{syntax_def locale_expr}: (instance + '+') @{syntax for_fixes} ; instance: (qualifier ':')? @{syntax name} (pos_insts | named_insts) \ rewrites? ; qualifier: @{syntax name} ('?')? ; pos_insts: ('_' | @{syntax term})* ; named_insts: @'where' (@{syntax name} '=' @{syntax term} + @'and') ; rewrites: @'rewrites' (@{syntax thmdecl}? @{syntax prop} + @'and') \

A locale instance consists of a reference to a locale and either positional or named parameter instantiations, optionally followed by rewrites clauses. Identical instantiations (that is, those that instantiate a parameter by itself) may be omitted. The notation ``\_\'' permits omitting the instantiation for a parameter inside a positional instantiation.

Terms in instantiations are from the context in which the locale expression is declared. Local names may be added to this context with the optional \<^theory_text>\for\ clause. This is useful for shadowing names bound in outer contexts, and for declaring syntax. In addition, syntax declarations from one instance are effective when parsing subsequent instances of the same expression.

Instances have an optional qualifier which applies to names in declarations. Names include local definitions and theorem names. If present, the qualifier itself is either mandatory (default) or non-mandatory (when followed by ``\<^verbatim>\?\''). Non-mandatory means that the qualifier may be omitted on input. Qualifiers only affect name spaces; they play no role in determining whether one locale instance subsumes another.

Rewrite clauses amend instances with equations that act as rewrite rules. This is particularly useful for changing concepts introduced through definitions. Rewrite clauses are available only in interpretation commands (see \secref{sec:locale-interpretation} below) and must be proved by the user.
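\<^medskip> As an illustration (locale name hypothetical), the expression \<^theory_text>\le1: partial_order le1 + le2: partial_order le2 for le1 le2\ composes two instances of a locale \partial_order\ with mandatory qualifiers \le1\ and \le2\; both parameters are listed in the \<^theory_text>\for\ clause, so they may be referred to in subsequent context elements.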
\

subsection \Locale declarations\

text \
\begin{tabular}{rcl} @{command_def "locale"} & : & \theory \ local_theory\ \\ @{command_def "experiment"} & : & \theory \ local_theory\ \\ @{command_def "print_locale"}\\<^sup>*\ & : & \context \\ \\ @{command_def "print_locales"}\\<^sup>*\ & : & \context \\ \\ @{command_def "locale_deps"}\\<^sup>*\ & : & \context \\ \\ \end{tabular}

- \indexisarelem{fixes}\indexisarelem{constrains}\indexisarelem{assumes}
- \indexisarelem{defines}\indexisarelem{notes}
+ @{index_ref \\<^theory_text>\fixes\ (element)\}
+ @{index_ref \\<^theory_text>\constrains\ (element)\}
+ @{index_ref \\<^theory_text>\assumes\ (element)\}
+ @{index_ref \\<^theory_text>\defines\ (element)\}
+ @{index_ref \\<^theory_text>\notes\ (element)\}

\<^rail>\ @@{command locale} @{syntax name} ('=' @{syntax locale})? @'begin'? ; @@{command experiment} (@{syntax context_elem}*) @'begin' ; @@{command print_locale} '!'? @{syntax name} ; @@{command print_locales} ('!'?) ; @{syntax_def locale}: @{syntax context_elem}+ | @{syntax_ref "opening"} ('+' (@{syntax context_elem}+))? | @{syntax locale_expr} @{syntax_ref "opening"}? ('+' (@{syntax context_elem}+))? ; @{syntax_def context_elem}: @'fixes' @{syntax vars} | @'constrains' (@{syntax name} '::' @{syntax type} + @'and') | @'assumes' (@{syntax props} + @'and') | @'defines' (@{syntax thmdecl}? @{syntax prop} @{syntax prop_pat}? + @'and') | @'notes' (@{syntax thmdef}? @{syntax thms} + @'and') \

\<^descr> \<^theory_text>\locale loc = import opening bundles + body\ defines a new locale \loc\ as a context consisting of a certain view of existing locales (\import\) plus some additional elements (\body\) with declaration \bundles\ enriching the context of the command itself. All \import\, \bundles\ and \body\ are optional; the degenerate form \<^theory_text>\locale loc\ defines an empty locale, which may still be useful to collect declarations of facts later on. Type-inference on locale expressions automatically takes care of the most general typing that the combined context elements may acquire.

The \import\ consists of a locale expression; see \secref{sec:locale-expr} above. Its \<^theory_text>\for\ clause defines the parameters of \import\. These are parameters of the defined locale. Locale parameters whose instantiation is omitted automatically extend the (possibly empty) \<^theory_text>\for\ clause: they are inserted at its beginning. This means that these parameters may be referred to from within the expression and also in the subsequent context elements; this provides a notational convenience for the inheritance of parameters in locale declarations.

Declarations from \bundles\ (see \secref{sec:bundle}) are effective in the entire command including a subsequent \<^theory_text>\begin\ / \<^theory_text>\end\ block, but they do not contribute to the declarations stored in the locale.

The \body\ consists of context elements:

\<^descr> @{element "fixes"}~\x :: \ (mx)\ declares a local parameter of type \\\ and mixfix annotation \mx\ (both are optional). The special syntax declaration ``\(\@{keyword_ref "structure"}\)\'' means that \x\ may be referenced implicitly in this context.

\<^descr> @{element "constrains"}~\x :: \\ introduces a type constraint \\\ on the local parameter \x\. This element is deprecated. The type constraint should be introduced in the \<^theory_text>\for\ clause or the relevant @{element "fixes"} element.
\<^descr> @{element "assumes"}~\a: \\<^sub>1 \ \\<^sub>n\ introduces local premises, similar to \<^theory_text>\assume\ within a proof (cf.\ \secref{sec:proof-context}). \<^descr> @{element "defines"}~\a: x \ t\ defines a previously declared parameter. This is similar to \<^theory_text>\define\ within a proof (cf.\ \secref{sec:proof-context}), but @{element "defines"} is restricted to Pure equalities and the defined variable needs to be declared beforehand via @{element "fixes"}. The left-hand side of the equation may have additional arguments, e.g.\ ``@{element "defines"}~\f x\<^sub>1 \ x\<^sub>n \ t\'', which need to be free in the context. \<^descr> @{element "notes"}~\a = b\<^sub>1 \ b\<^sub>n\ reconsiders facts within a local context. Most notably, this may include arbitrary declarations in any attribute specifications included here, e.g.\ a local @{attribute simp} rule. Both @{element "assumes"} and @{element "defines"} elements contribute to the locale specification. When defining an operation derived from the parameters, \<^theory_text>\definition\ (\secref{sec:term-definitions}) is usually more appropriate. Note that ``\<^theory_text>\(is p\<^sub>1 \ p\<^sub>n)\'' patterns given in the syntax of @{element "assumes"} and @{element "defines"} above are illegal in locale definitions. In the long goal format of \secref{sec:goals}, term bindings may be included as expected, though. \<^medskip> Locale specifications are ``closed up'' by turning the given text into a predicate definition \loc_axioms\ and deriving the original assumptions as local lemmas (modulo local definitions). The predicate statement covers only the newly specified assumptions, omitting the content of included locale expressions. The full cumulative view is only provided on export, involving another predicate \loc\ that refers to the complete specification text. In any case, the predicate arguments are those locale parameters that actually occur in the respective piece of text. Also these predicates operate at the meta-level in theory, but the locale packages attempts to internalize statements according to the object-logic setup (e.g.\ replacing \\\ by \\\, and \\\ by \\\ in HOL; see also \secref{sec:object-logic}). Separate introduction rules \loc_axioms.intro\ and \loc.intro\ are provided as well. \<^descr> \<^theory_text>\experiment body begin\ opens an anonymous locale context with private naming policy. Specifications in its body are inaccessible from outside. This is useful to perform experiments, without polluting the name space. \<^descr> \<^theory_text>\print_locale "locale"\ prints the contents of the named locale. The command omits @{element "notes"} elements by default. Use \<^theory_text>\print_locale!\ to have them included. \<^descr> \<^theory_text>\print_locales\ prints the names of all locales of the current theory; the ``\!\'' option indicates extra verbosity. \<^descr> \<^theory_text>\locale_deps\ visualizes all locales and their relations as a Hasse diagram. This includes locales defined as type classes (\secref{sec:class}). 
\

subsection \Locale interpretation \label{sec:locale-interpretation}\

text \
\begin{matharray}{rcl} @{command "interpretation"} & : & \local_theory \ proof(prove)\ \\ @{command_def "interpret"} & : & \proof(state) | proof(chain) \ proof(prove)\ \\ @{command_def "global_interpretation"} & : & \theory | local_theory \ proof(prove)\ \\ @{command_def "sublocale"} & : & \theory | local_theory \ proof(prove)\ \\ @{command_def "print_interps"}\\<^sup>*\ & : & \context \\ \\ @{method_def intro_locales} & : & \method\ \\ @{method_def unfold_locales} & : & \method\ \\ @{attribute_def trace_locales} & : & \mbox{\attribute\ \quad default \false\} \\ \end{matharray}

Locales may be instantiated, and the resulting instantiated declarations added to the current context. This requires proof (of the instantiated specification) and is called \<^emph>\locale interpretation\. Interpretation is possible within arbitrary local theories (\<^theory_text>\interpretation\), within proof bodies (\<^theory_text>\interpret\), into global theories (\<^theory_text>\global_interpretation\) and into locales (\<^theory_text>\sublocale\).

\<^rail>\ @@{command interpretation} @{syntax locale_expr} ; @@{command interpret} @{syntax locale_expr} ; @@{command global_interpretation} @{syntax locale_expr} definitions? ; @@{command sublocale} (@{syntax name} ('<' | '\'))? @{syntax locale_expr} \ definitions? ; @@{command print_interps} @{syntax name} ; definitions: @'defines' (@{syntax thmdecl}? @{syntax name} \ @{syntax mixfix}? '=' @{syntax term} + @'and'); \

The core of each interpretation command is a locale expression \expr\; the command generates proof obligations for the instantiated specifications. Once these are discharged by the user, instantiated declarations (in particular, facts) are added to the context in a post-processing phase, in a manner specific to each command.

Interpretation commands are aware of interpretations that are already active: post-processing is achieved through a variant of roundup that takes interpretations of the current global or local theory into account. In order to simplify the proof obligations according to existing interpretations, use methods @{method intro_locales} or @{method unfold_locales}.

Rewrites clauses \<^theory_text>\rewrites eqns\ occur within expressions. They amend the morphism through which a locale instance is interpreted with rewrite rules, also called rewrite morphisms. This is particularly useful for interpreting concepts introduced through definitions. The equations must be proved by the user. To enable syntax of the instantiated locale within the equation, while reading a locale expression, equations of a locale instance are read in a temporary context where the instance is already activated. If activation fails, typically due to duplicate constant declarations, processing falls back to reading the equation first.

Given definitions \defs\ produce corresponding definitions in the local theory's underlying target \<^emph>\and\ amend the morphism with rewrite rules stemming from the symmetric form of those definitions. Hence these need not be proved explicitly by the user. Such rewrite definitions are an even more useful device for interpreting concepts introduced through definitions, but they are only supported for interpretation commands operating in a local theory whose implementing target actually supports this. Note that despite the suggestive \<^theory_text>\and\ connective, \defs\ are processed sequentially without mutual recursion.
\<^descr> \<^theory_text>\interpretation expr\ interprets \expr\ into a local theory such that its lifetime is limited to the current context block (e.g. a locale or unnamed context). At the closing @{command end} of the block the interpretation and its declarations disappear. Hence facts based on interpretation can be established without creating permanent links to the interpreted locale instances, as would be the case with @{command sublocale}. When used on the level of a global theory, there is no end of a current context block, hence \<^theory_text>\interpretation\ then behaves identically to \<^theory_text>\global_interpretation\.

\<^descr> \<^theory_text>\interpret expr\ interprets \expr\ into a proof context: the interpretation and its declarations disappear when closing the current proof block. Note that for \<^theory_text>\interpret\ the \eqns\ should be explicitly universally quantified.

\<^descr> \<^theory_text>\global_interpretation expr defines defs\ interprets \expr\ into a global theory. When adding declarations to locales, interpreted versions of these declarations are added to the global theory for all interpretations in the global theory as well. That is, interpretations into global theories dynamically participate in any declarations added to locales.

Free variables in the interpreted expression are allowed. They are turned into schematic variables in the generated declarations. In order to use a free variable whose name is already bound in the context --- for example, because a constant of that name exists --- add it to the \<^theory_text>\for\ clause.

\<^descr> \<^theory_text>\sublocale name \ expr defines defs\ interprets \expr\ into the locale \name\. A proof that the specification of \name\ implies the specification of \expr\ is required. As in the localized version of the theorem command, the proof is in the context of \name\. After the proof obligation has been discharged, the locale hierarchy is changed as if \name\ imported \expr\ (hence the name \<^theory_text>\sublocale\). When the context of \name\ is subsequently entered, traversing the locale hierarchy will involve the locale instances of \expr\, and their declarations will be added to the context. This makes \<^theory_text>\sublocale\ dynamic: extensions of a locale that is instantiated in \expr\ may take place after the \<^theory_text>\sublocale\ declaration and still become available in the context. Circular \<^theory_text>\sublocale\ declarations are allowed as long as they do not lead to infinite chains.

If interpretations of \name\ exist in the current global theory, the command adds interpretations for \expr\ as well, with the same qualifier, although only for fragments of \expr\ that are not interpreted in the theory already.

Rewrites clauses in the expression or rewrite definitions \defs\ can help break infinite chains induced by circular \<^theory_text>\sublocale\ declarations.

In a named context block the \<^theory_text>\sublocale\ command may also be used, but the locale argument must be omitted. The command then refers to the locale (or class) target of the context block.

\<^descr> \<^theory_text>\print_interps name\ lists all interpretations of locale \name\ in the current theory or proof context, including those due to a combination of an \<^theory_text>\interpretation\ or \<^theory_text>\interpret\ and one or several \<^theory_text>\sublocale\ declarations.
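\<^medskip> Continuing the hypothetical \partial_order\ locale sketched earlier, a typical interpretation reads \<^theory_text>\interpretation nat_le: partial_order "(\) :: nat \ nat \ bool" by unfold_locales auto\; afterwards, instantiated facts such as \nat_le.trans\ are available in the current context.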
\<^descr> @{method intro_locales} and @{method unfold_locales} repeatedly expand all introduction rules of locale predicates of the theory. While @{method intro_locales} only applies the \loc.intro\ introduction rules and therefore does not descend to assumptions, @{method unfold_locales} is more aggressive and applies \loc_axioms.intro\ as well. Both methods are aware of locale specifications entailed by the context, both from target statements and from interpretations (see below). New goals that are entailed by the current context are discharged automatically.

While @{method unfold_locales} is part of the default method for \<^theory_text>\proof\ and often invoked ``behind the scenes'', @{method intro_locales} helps understand which proof obligations originated from which locale instances. The latter method is useful while developing proofs but is rarely used in finished developments.

\<^descr> @{attribute trace_locales}, when set to \true\, prints the locale instances activated during roundup. Use this when locale commands yield obscure errors or for understanding local theories created by complex locale hierarchies.

\begin{warn} If a global theory inherits declarations (body elements) for a locale from one parent and an interpretation of that locale from another parent, the interpretation will not be applied to the declarations. \end{warn}

\begin{warn} Since attributes are applied to interpreted theorems, interpretation may modify the context of common proof tools, e.g.\ the Simplifier or Classical Reasoner. As the behaviour of such tools is \<^emph>\not\ stable under interpretation morphisms, manual declarations might have to be added to the target context of the interpretation to revert such declarations. \end{warn}

\begin{warn} An interpretation in a local theory or proof context may subsume previous interpretations. This happens if the same specification fragment is interpreted twice and the instantiation of the second interpretation is more general than the interpretation of the first. The locale package does not attempt to remove subsumed interpretations. \end{warn}

\begin{warn} While \<^theory_text>\interpretation (in c) \\ is admissible, it is not useful since its result is discarded immediately. \end{warn}
\

section \Classes \label{sec:class}\

text \
\begin{matharray}{rcl} @{command_def "class"} & : & \theory \ local_theory\ \\ @{command_def "instantiation"} & : & \theory \ local_theory\ \\ @{command_def "instance"} & : & \local_theory \ local_theory\ \\ @{command "instance"} & : & \theory \ proof(prove)\ \\ @{command_def "subclass"} & : & \local_theory \ local_theory\ \\ @{command_def "print_classes"}\\<^sup>*\ & : & \context \\ \\ @{command_def "class_deps"}\\<^sup>*\ & : & \context \\ \\ @{method_def intro_classes} & : & \method\ \\ \end{matharray}

A class is a particular locale with \<^emph>\exactly one\ type variable \\\. Beyond the underlying locale, a corresponding type class is established which is interpreted logically as an axiomatic type class @{cite "Wenzel:1997:TPHOL"} whose logical content consists of the assumptions of the locale. Thus, classes provide the full generality of locales combined with the convenience of type classes (notably type-inference). See @{cite "isabelle-classes"} for a short tutorial.

\<^rail>\ @@{command class} class_spec @'begin'? ; class_spec: @{syntax name} '=' ((@{syntax name} @{syntax_ref "opening"}? '+' (@{syntax context_elem}+)) | @{syntax name} @{syntax_ref "opening"}? | @{syntax_ref "opening"}?
'+' (@{syntax context_elem}+)) ; @@{command instantiation} (@{syntax name} + @'and') '::' @{syntax arity} @'begin' ; @@{command instance} (() | (@{syntax name} + @'and') '::' @{syntax arity} | @{syntax name} ('<' | '\') @{syntax name} ) ; @@{command subclass} @{syntax name} ; @@{command class_deps} (class_bounds class_bounds?)? ; class_bounds: @{syntax sort} | '(' (@{syntax sort} + @'|') ')' \

\<^descr> \<^theory_text>\class c = superclasses bundles + body\ defines a new class \c\, inheriting from \superclasses\. This introduces a locale \c\ with import of all locales \superclasses\.

Any @{element "fixes"} in \body\ are lifted to the global theory level (\<^emph>\class operations\ \f\<^sub>1, \, f\<^sub>n\ of class \c\), mapping the local type parameter \\\ to a schematic type variable \?\ :: c\.

Likewise, @{element "assumes"} in \body\ are also lifted, mapping each local parameter \f :: \[\]\ to its corresponding global constant \f :: \[?\ :: c]\. The corresponding introduction rule is provided as \c_class_axioms.intro\. This rule is rarely needed directly --- the @{method intro_classes} method takes care of the details of class membership proofs.

Optionally given \bundles\ take effect in the surface context within the \body\ and the potentially following \<^theory_text>\begin\ / \<^theory_text>\end\ block.

\<^descr> \<^theory_text>\instantiation t :: (s\<^sub>1, \, s\<^sub>n)s begin\ opens a target (cf.\ \secref{sec:target}) which allows to specify class operations \f\<^sub>1, \, f\<^sub>n\ corresponding to sort \s\ at the particular type instance \(\\<^sub>1 :: s\<^sub>1, \, \\<^sub>n :: s\<^sub>n) t\. A plain \<^theory_text>\instance\ command in the target body poses a goal stating these type arities. The target is concluded by an @{command_ref (local) "end"} command.

Note that a list of simultaneous type constructors may be given; this corresponds nicely to mutually recursive type definitions, e.g.\ in Isabelle/HOL.

\<^descr> \<^theory_text>\instance\ in an instantiation target body sets up a goal stating the type arities claimed at the opening \<^theory_text>\instantiation\. The proof would usually proceed by @{method intro_classes}, and then establish the characteristic theorems of the type classes involved. After finishing the proof, the background theory will be augmented by the proven type arities.

On the theory level, \<^theory_text>\instance t :: (s\<^sub>1, \, s\<^sub>n)s\ provides a convenient way to instantiate a type class with no need to specify operations: one can continue with the instantiation proof immediately.

\<^descr> \<^theory_text>\subclass c\ in a class context for class \d\ sets up a goal stating that class \c\ is logically contained in class \d\. After finishing the proof, class \d\ is proven to be a subclass of \c\ and the locale \c\ is interpreted into \d\ simultaneously.

A weakened form of this is available through a further variant of @{command instance}: \<^theory_text>\instance c\<^sub>1 \ c\<^sub>2\ opens a proof that class \c\<^sub>2\ implies \c\<^sub>1\ without reference to the underlying locales; this is useful if the properties to prove the logical connection are not sufficient on the locale level but on the theory level.

\<^descr> \<^theory_text>\print_classes\ prints all classes in the current theory.

\<^descr> \<^theory_text>\class_deps\ visualizes classes and their subclass relations as a directed acyclic graph. By default, all classes from the current theory context are shown.
This may be restricted by optional bounds as follows: \<^theory_text>\class_deps upper\ or \<^theory_text>\class_deps upper lower\. A class is visualized iff it is a subclass of some sort from \upper\ and a superclass of some sort from \lower\.

\<^descr> @{method intro_classes} repeatedly expands all class introduction rules of this theory. Note that this method usually need not be named explicitly, as it is already included in the default proof step (e.g.\ of \<^theory_text>\proof\). In particular, instantiation of trivial (syntactic) classes may be performed by a single ``\<^theory_text>\..\'' proof step.
\

subsection \The class target\

text \
%FIXME check

A named context may refer to a locale (cf.\ \secref{sec:target}). If this locale is also a class \c\, apart from the common locale target behaviour the following happens.

\<^item> Local constant declarations \g[\]\ referring to the local type parameter \\\ and local parameters \f[\]\ are accompanied by theory-level constants \g[?\ :: c]\ referring to theory-level class operations \f[?\ :: c]\.

\<^item> Local theorem bindings are lifted as are assumptions.

\<^item> Local syntax refers to local operations \g[\]\ and global operations \g[?\ :: c]\ uniformly. Type inference resolves ambiguities. In rare cases, manual type annotations are needed.
\

subsection \Co-regularity of type classes and arities\

text \
The class relation together with the collection of type-constructor arities must obey the principle of \<^emph>\co-regularity\ as defined below.

\<^medskip> For the subsequent formulation of co-regularity we assume that the class relation is closed under transitivity and reflexivity. Moreover, the collection of arities \t :: (\<^vec>s)c\ is completed such that \t :: (\<^vec>s)c\ and \c \ c'\ implies \t :: (\<^vec>s)c'\ for all such declarations.

Treating sorts as finite sets of classes (meaning the intersection), the class relation \c\<^sub>1 \ c\<^sub>2\ is extended to sorts as follows: \[ \s\<^sub>1 \ s\<^sub>2 \ \c\<^sub>2 \ s\<^sub>2. \c\<^sub>1 \ s\<^sub>1. c\<^sub>1 \ c\<^sub>2\ \]

This relation on sorts is further extended to tuples of sorts (of the same length) in a component-wise manner.

\<^medskip> Co-regularity of the class relation together with the arities relation means: \[ \t :: (\<^vec>s\<^sub>1)c\<^sub>1 \ t :: (\<^vec>s\<^sub>2)c\<^sub>2 \ c\<^sub>1 \ c\<^sub>2 \ \<^vec>s\<^sub>1 \ \<^vec>s\<^sub>2\ \] for all such arities. In other words, whenever the result classes of some type-constructor arities are related, then the argument sorts need to be related in the same way.

\<^medskip> Co-regularity is a very fundamental property of the order-sorted algebra of types. For example, it entails principal types and most general unifiers, e.g.\ see @{cite "nipkow-prehofer"}.
\

section \Overloaded constant definitions \label{sec:overloading}\

text \
Definitions essentially express abbreviations within the logic. The simplest form of a definition is \c :: \ \ t\, where \c\ is a new constant and \t\ is a closed term that does not mention \c\. Moreover, so-called \<^emph>\hidden polymorphism\ is excluded: all type variables in \t\ need to occur in its type \\\.

\<^emph>\Overloading\ means that a constant being declared as \c :: \ decl\ may be defined separately on type instances \c :: (\\<^sub>1, \, \\<^sub>n)\ decl\ for each type constructor \\\. In most cases, overloading will be used in a Haskell-like fashion together with type classes by means of \<^theory_text>\instantiation\ (see \secref{sec:class}).
Sometimes low-level overloading is desirable; this is supported by \<^theory_text>\consts\ and \<^theory_text>\overloading\ explained below. The right-hand side of overloaded definitions may mention overloaded constants recursively at type instances corresponding to the immediate argument types \\\<^sub>1, \, \\<^sub>n\. Incomplete specification patterns impose global constraints on all occurrences. E.g.\ \d :: \ \ \\ on the left-hand side means that all corresponding occurrences on some right-hand side need to be an instance of this, and general \d :: \ \ \\ will be disallowed. Full details are given by Kun\v{c}ar @{cite "Kuncar:2015"}. \<^medskip> The \<^theory_text>\consts\ command and the \<^theory_text>\overloading\ target provide a convenient interface for end-users. Regular specification elements such as @{command definition}, @{command inductive}, @{command function} may be used in the body. It is also possible to use \<^theory_text>\consts c :: \\ with later \<^theory_text>\overloading c \ c :: \\ to keep the declaration and definition of a constant separate. \begin{matharray}{rcl} @{command_def "consts"} & : & \theory \ theory\ \\ @{command_def "overloading"} & : & \theory \ local_theory\ \\ \end{matharray} \<^rail>\ @@{command consts} ((@{syntax name} '::' @{syntax type} @{syntax mixfix}?) +) ; @@{command overloading} ( spec + ) @'begin' ; spec: @{syntax name} ( '\' | '==' ) @{syntax term} ( '(' @'unchecked' ')' )? \ \<^descr> \<^theory_text>\consts c :: \\ declares constant \c\ to have any instance of type scheme \\\. The optional mixfix annotations may attach concrete syntax to the constants declared. \<^descr> \<^theory_text>\overloading x\<^sub>1 \ c\<^sub>1 :: \\<^sub>1 \ x\<^sub>n \ c\<^sub>n :: \\<^sub>n begin \ end\ defines a theory target (cf.\ \secref{sec:target}) which allows to specify already declared constants via definitions in the body. These are identified by an explicitly given mapping from variable names \x\<^sub>i\ to constants \c\<^sub>i\ at particular type instances. The definitions themselves are established using common specification tools, using the names \x\<^sub>i\ as reference to the corresponding constants. Option \<^theory_text>\(unchecked)\ disables global dependency checks for the corresponding definition, which is occasionally useful for exotic overloading; this is a form of axiomatic specification. It is at the discretion of the user to avoid malformed theory specifications! 
\ subsubsection \Example\ consts Length :: "'a \ nat" overloading Length\<^sub>0 \ "Length :: unit \ nat" Length\<^sub>1 \ "Length :: 'a \ unit \ nat" Length\<^sub>2 \ "Length :: 'a \ 'b \ unit \ nat" Length\<^sub>3 \ "Length :: 'a \ 'b \ 'c \ unit \ nat" begin fun Length\<^sub>0 :: "unit \ nat" where "Length\<^sub>0 () = 0" fun Length\<^sub>1 :: "'a \ unit \ nat" where "Length\<^sub>1 (a, ()) = 1" fun Length\<^sub>2 :: "'a \ 'b \ unit \ nat" where "Length\<^sub>2 (a, b, ()) = 2" fun Length\<^sub>3 :: "'a \ 'b \ 'c \ unit \ nat" where "Length\<^sub>3 (a, b, c, ()) = 3" end lemma "Length (a, b, c, ()) = 3" by simp lemma "Length ((a, b), (c, d), ()) = 2" by simp lemma "Length ((a, b, c, d, e), ()) = 1" by simp section \Incorporating ML code \label{sec:ML}\ text \ \begin{matharray}{rcl} @{command_def "SML_file"} & : & \local_theory \ local_theory\ \\ @{command_def "SML_file_debug"} & : & \local_theory \ local_theory\ \\ @{command_def "SML_file_no_debug"} & : & \local_theory \ local_theory\ \\ @{command_def "ML_file"} & : & \local_theory \ local_theory\ \\ @{command_def "ML_file_debug"} & : & \local_theory \ local_theory\ \\ @{command_def "ML_file_no_debug"} & : & \local_theory \ local_theory\ \\ @{command_def "ML"} & : & \local_theory \ local_theory\ \\ @{command_def "ML_export"} & : & \local_theory \ local_theory\ \\ @{command_def "ML_prf"} & : & \proof \ proof\ \\ @{command_def "ML_val"} & : & \any \\ \\ @{command_def "ML_command"} & : & \any \\ \\ @{command_def "setup"} & : & \theory \ theory\ \\ @{command_def "local_setup"} & : & \local_theory \ local_theory\ \\ @{command_def "attribute_setup"} & : & \local_theory \ local_theory\ \\ \end{matharray} \begin{tabular}{rcll} @{attribute_def ML_print_depth} & : & \attribute\ & default 10 \\ @{attribute_def ML_source_trace} & : & \attribute\ & default \false\ \\ @{attribute_def ML_debugger} & : & \attribute\ & default \false\ \\ @{attribute_def ML_exception_trace} & : & \attribute\ & default \false\ \\ @{attribute_def ML_exception_debugger} & : & \attribute\ & default \false\ \\ @{attribute_def ML_environment} & : & \attribute\ & default \Isabelle\ \\ \end{tabular} \<^rail>\ (@@{command SML_file} | @@{command SML_file_debug} | @@{command SML_file_no_debug} | @@{command ML_file} | @@{command ML_file_debug} | @@{command ML_file_no_debug}) @{syntax name} ';'? ; (@@{command ML} | @@{command ML_export} | @@{command ML_prf} | @@{command ML_val} | @@{command ML_command} | @@{command setup} | @@{command local_setup}) @{syntax text} ; @@{command attribute_setup} @{syntax name} '=' @{syntax text} @{syntax text}? \ \<^descr> \<^theory_text>\SML_file name\ reads and evaluates the given Standard ML file. Top-level SML bindings are stored within the (global or local) theory context; the initial environment is restricted to the Standard ML implementation of Poly/ML, without the many add-ons of Isabelle/ML. Multiple \<^theory_text>\SML_file\ commands may be used to build larger Standard ML projects, independently of the regular Isabelle/ML environment. \<^descr> \<^theory_text>\ML_file name\ reads and evaluates the given ML file. The current theory context is passed down to the ML toplevel and may be modified, using \<^ML>\Context.>>\ or derived ML commands. Top-level ML bindings are stored within the (global or local) theory context. 
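For example, \<^theory_text>\ML_file \utils.ML\\ --- with a hypothetical file \<^verbatim>\utils.ML\ located next to the theory --- compiles that file once and makes its top-level ML bindings available to subsequent \<^theory_text>\ML\ snippets and to theories importing this one.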
\<^descr> \<^theory_text>\SML_file_debug\, \<^theory_text>\SML_file_no_debug\, \<^theory_text>\ML_file_debug\, and \<^theory_text>\ML_file_no_debug\ change the @{attribute ML_debugger} option locally while the given file is compiled.

\<^descr> \<^theory_text>\ML\ is similar to \<^theory_text>\ML_file\, but directly evaluates the given \text\. Top-level ML bindings are stored within the (global or local) theory context.

\<^descr> \<^theory_text>\ML_export\ is similar to \<^theory_text>\ML\, but the resulting toplevel bindings are exported to the global bootstrap environment of the ML process --- it has a lasting effect that cannot be retracted. This allows ML evaluation without a formal theory context, e.g. for command-line tools via @{tool process} @{cite "isabelle-system"}.

\<^descr> \<^theory_text>\ML_prf\ is analogous to \<^theory_text>\ML\ but works within a proof context. Top-level ML bindings are stored within the proof context in a purely sequential fashion, disregarding the nested proof structure. ML bindings introduced by \<^theory_text>\ML_prf\ are discarded at the end of the proof.

\<^descr> \<^theory_text>\ML_val\ and \<^theory_text>\ML_command\ are diagnostic versions of \<^theory_text>\ML\, which means that the context may not be updated. \<^theory_text>\ML_val\ echoes the bindings produced at the ML toplevel, but \<^theory_text>\ML_command\ is silent.

\<^descr> \<^theory_text>\setup "text"\ changes the current theory context by applying \text\, which refers to an ML expression of type \<^ML_type>\theory -> theory\. This enables to initialize any object-logic specific tools and packages written in ML, for example.

\<^descr> \<^theory_text>\local_setup\ is similar to \<^theory_text>\setup\ for a local theory context, and an ML expression of type \<^ML_type>\local_theory -> local_theory\. This allows to invoke local theory specification packages without going through concrete outer syntax, for example.

\<^descr> \<^theory_text>\attribute_setup name = "text" description\ defines an attribute in the current context. The given \text\ has to be an ML expression of type \<^ML_type>\attribute context_parser\, cf.\ basic parsers defined in structure \<^ML_structure>\Args\ and \<^ML_structure>\Attrib\.

In principle, attributes can operate both on a given theorem and the implicit context, although in practice only one is modified and the other serves as parameter. Here are examples for these two cases:
\

(*<*)experiment begin(*>*)
attribute_setup my_rule =
  \Attrib.thms >> (fn ths =>
    Thm.rule_attribute ths
      (fn context: Context.generic => fn th: thm =>
        let val th' = th OF ths
        in th' end))\

attribute_setup my_declaration =
  \Attrib.thms >> (fn ths =>
    Thm.declaration_attribute
      (fn th: thm => fn context: Context.generic =>
        let val context' = context
        in context' end))\
(*<*)end(*>*)

text \
\<^descr> @{attribute ML_print_depth} controls the printing depth of the ML toplevel pretty printer. Typically the limit should be less than 10. Bigger values such as 100--1000 are occasionally useful for debugging.

\<^descr> @{attribute ML_source_trace} indicates whether the source text that is given to the ML compiler should be output: it shows the raw Standard ML after expansion of Isabelle/ML antiquotations.

\<^descr> @{attribute ML_debugger} controls compilation of sources with or without debugging information. The global system option @{system_option_ref ML_debugger} does the same when building a session image. It is also possible to use commands like \<^theory_text>\ML_file_debug\ etc.
The ML debugger is explained further in @{cite "isabelle-jedit"}. \<^descr> @{attribute ML_exception_trace} indicates whether the ML run-time system should print a detailed stack trace on exceptions. The result is dependent on various ML compiler optimizations. The boundary for the exception trace is the current Isar command transactions: it is occasionally better to insert the combinator \<^ML>\Runtime.exn_trace\ into ML code for debugging @{cite "isabelle-implementation"}, closer to the point where it actually happens. \<^descr> @{attribute ML_exception_debugger} controls detailed exception trace via the Poly/ML debugger, at the cost of extra compile-time and run-time overhead. Relevant ML modules need to be compiled beforehand with debugging enabled, see @{attribute ML_debugger} above. \<^descr> @{attribute ML_environment} determines the named ML environment for toplevel declarations, e.g.\ in command \<^theory_text>\ML\ or \<^theory_text>\ML_file\. The following ML environments are predefined in Isabelle/Pure: \<^item> \Isabelle\ for Isabelle/ML. It contains all modules of Isabelle/Pure and further add-ons, e.g. material from Isabelle/HOL. \<^item> \SML\ for official Standard ML. It contains only the initial basis according to \<^url>\http://sml-family.org/Basis/overview.html\. The Isabelle/ML function \<^ML>\ML_Env.setup\ defines a new ML environment. This is useful to incorporate big SML projects in an isolated name space, possibly with variations on ML syntax; the existing setup of \<^ML>\ML_Env.SML_operations\ follows the official standard. It is also possible to move toplevel bindings between ML environments, using a notation with ``\>\'' as separator. For example: \ (*<*)experiment begin(*>*) declare [[ML_environment = "Isabelle>SML"]] ML \val println = writeln\ declare [[ML_environment = "SML"]] ML \println "test"\ declare [[ML_environment = "Isabelle"]] ML \ML \println\ (*bad*) handle ERROR msg => warning msg\ (*<*)end(*>*) section \Generated files and exported files\ text \ Write access to the physical file-system is incompatible with the stateless model of processing Isabelle documents. To avoid bad effects, the following concepts for abstract file-management are provided by Isabelle: \<^descr>[Generated files] are stored within the theory context in Isabelle/ML. This allows to operate on the content in Isabelle/ML, e.g. via the command @{command compile_generated_files}. \<^descr>[Exported files] are stored within the session database in Isabelle/Scala. This allows to deliver artefacts to external tools, see also @{cite "isabelle-system"} for session \<^verbatim>\ROOT\ declaration \<^theory_text>\export_files\, and @{tool build} option \<^verbatim>\-e\. A notable example is the command @{command_ref export_code} (\chref{ch:export-code}): it uses both concepts simultaneously. File names are hierarchically structured, using a slash as separator. The (long) theory name is used as a prefix: the resulting name needs to be globally unique. \begin{matharray}{rcll} @{command_def "generate_file"} & : & \local_theory \ local_theory\ \\ @{command_def "export_generated_files"} & : & \context \\ \\ @{command_def "compile_generated_files"} & : & \context \\ \\ @{command_def "external_file"} & : & \any \ any\ \\ \end{matharray} \<^rail>\ @@{command generate_file} path '=' content ; path: @{syntax embedded} ; content: @{syntax embedded} ; @@{command export_generated_files} (files_in_theory + @'and') ; files_in_theory: (@'_' | (path+)) (('(' @'in' @{syntax name} ')')?) 
; @@{command compile_generated_files} (files_in_theory + @'and') \ (@'external_files' (external_files + @'and'))? \ (@'export_files' (export_files + @'and'))? \ (@'export_prefix' path)? ; external_files: (path+) (('(' @'in' path ')')?) ; export_files: (path+) (executable?) ; executable: '(' ('exe' | 'executable') ')' ; @@{command external_file} @{syntax name} ';'? \

\<^descr> \<^theory_text>\generate_file path = content\ augments the table of generated files within the current theory by a new entry: duplicates are not allowed. The name extension determines a pre-existent file-type; the \content\ is a string that is preprocessed according to rules of this file-type.

For example, Isabelle/Pure supports \<^verbatim>\.hs\ as file-type for Haskell: embedded cartouches are evaluated as Isabelle/ML expressions of type \<^ML_type>\string\; the result is inlined in Haskell string syntax.

\<^descr> \<^theory_text>\export_generated_files paths (in thy)\ retrieves named generated files from the given theory (which needs to be reachable via imports of the current one). By default, the current theory node is used. Using ``\<^verbatim>\_\'' (underscore) instead of explicit path names refers to \emph{all} files of a theory node. The overall list of files is prefixed with the respective (long) theory name and exported to the session database. In Isabelle/jEdit the result can be browsed via the virtual file-system with prefix ``\<^verbatim>\isabelle-export:\'' (using the regular file-browser).

\<^descr> \<^theory_text>\compile_generated_files paths (in thy) where compile_body\ retrieves named generated files as for \<^theory_text>\export_generated_files\ and writes them into a temporary directory, such that the \compile_body\ may operate on them as an ML function of type \<^ML_type>\Path.T -> unit\. This may create further files, e.g.\ executables produced by a compiler that is invoked as an external process (e.g.\ via \<^ML>\Isabelle_System.bash\), or any other files.

The option ``\<^theory_text>\external_files paths (in base_dir)\'' copies files from the physical file-system into the temporary directory, \emph{before} invoking \compile_body\. The \base_dir\ prefix is removed from each of the \paths\, but the remaining sub-directory structure is reconstructed in the target directory.

The option ``\<^theory_text>\export_files paths\'' exports the specified files from the temporary directory to the session database, \emph{after} invoking \compile_body\. Entries may be decorated with ``\<^theory_text>\(exe)\'' to indicate a platform-specific executable program: the executable file-attribute will be set, and on Windows the \<^verbatim>\.exe\ file-extension will be included; ``\<^theory_text>\(executable)\'' only refers to the file-attribute, without special treatment of the \<^verbatim>\.exe\ extension.

The option ``\<^theory_text>\export_prefix path\'' specifies an extra path prefix for all exports of \<^theory_text>\export_files\ above.

\<^descr> \<^theory_text>\external_file name\ declares the formal dependency on the given file name, such that the Isabelle build process knows about it (see also @{cite "isabelle-system"}). This is required for any files mentioned in \<^theory_text>\compile_generated_files / external_files\ above, in order to document source dependencies properly. It is also possible to use \<^theory_text>\external_file\ alone, e.g.\ when other Isabelle/ML tools use \<^ML>\File.read\, without specific management of content by the Prover IDE.
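\<^medskip> A small end-to-end sketch (file name and content hypothetical): \<^theory_text>\generate_file "Demo/Hello.hs" = \module Hello where\\ adds a Haskell file to the current theory, and a subsequent \<^theory_text>\export_generated_files _\ exports all generated files of the theory node to the session database, prefixed by the long theory name.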
\ section \Primitive specification elements\ subsection \Sorts\ text \ \begin{matharray}{rcll} @{command_def "default_sort"} & : & \local_theory \ local_theory\ \end{matharray} \<^rail>\ @@{command default_sort} @{syntax sort} \ \<^descr> \<^theory_text>\default_sort s\ makes sort \s\ the new default sort for any type variable that is given explicitly in the text, but lacks a sort constraint (wrt.\ the current context). Type variables generated by type inference are not affected. Usually the default sort is only changed when defining a new object-logic. For example, the default sort in Isabelle/HOL is \<^class>\type\, the class of all HOL types. When merging theories, the default sorts of the parents are logically intersected, i.e.\ the representations as lists of classes are joined. \ subsection \Types \label{sec:types-pure}\ text \ \begin{matharray}{rcll} @{command_def "type_synonym"} & : & \local_theory \ local_theory\ \\ @{command_def "typedecl"} & : & \local_theory \ local_theory\ \\ \end{matharray} \<^rail>\ @@{command type_synonym} (@{syntax typespec} '=' @{syntax type} @{syntax mixfix}?) ; @@{command typedecl} @{syntax typespec} @{syntax mixfix}? \ \<^descr> \<^theory_text>\type_synonym (\\<^sub>1, \, \\<^sub>n) t = \\ introduces a \<^emph>\type synonym\ \(\\<^sub>1, \, \\<^sub>n) t\ for the existing type \\\. Unlike the semantic type definitions in Isabelle/HOL, type synonyms are merely syntactic abbreviations without any logical significance. Internally, type synonyms are fully expanded. \<^descr> \<^theory_text>\typedecl (\\<^sub>1, \, \\<^sub>n) t\ declares a new type constructor \t\. If the object-logic defines a base sort \s\, then the constructor is declared to operate on that, via the axiomatic type-class instance \t :: (s, \, s)s\. \begin{warn} If you introduce a new type axiomatically, i.e.\ via @{command_ref typedecl} and @{command_ref axiomatization} (\secref{sec:axiomatizations}), the minimum requirement is that it has a non-empty model, to avoid immediate collapse of the logical environment. Moreover, one needs to demonstrate that the interpretation of such free-form axiomatizations can coexist with other axiomatization schemes for types, notably @{command_def typedef} in Isabelle/HOL (\secref{sec:hol-typedef}), or any other extension that people might have introduced elsewhere. \end{warn} \ section \Naming existing theorems \label{sec:theorems}\ text \ \begin{matharray}{rcll} @{command_def "lemmas"} & : & \local_theory \ local_theory\ \\ @{command_def "named_theorems"} & : & \local_theory \ local_theory\ \\ \end{matharray} \<^rail>\ @@{command lemmas} (@{syntax thmdef}? @{syntax thms} + @'and') @{syntax for_fixes} ; @@{command named_theorems} (@{syntax name} @{syntax text}? + @'and') \ \<^descr> \<^theory_text>\lemmas a = b\<^sub>1 \ b\<^sub>n\~@{keyword_def "for"}~\x\<^sub>1 \ x\<^sub>m\ evaluates given facts (with attributes) in the current context, which may be augmented by local variables. Results are standardized before being stored, i.e.\ schematic variables are renamed to enforce index \0\ uniformly. \<^descr> \<^theory_text>\named_theorems name description\ declares a dynamic fact within the context. The same \name\ is used to define an attribute with the usual \add\/\del\ syntax (e.g.\ see \secref{sec:simp-rules}) to maintain the content incrementally, in canonical declaration order of the text structure. \ section \Oracles \label{sec:oracles}\ text \ \begin{matharray}{rcll} @{command_def "oracle"} & : & \theory \ theory\ & (axiomatic!) 
\\ @{command_def "thm_oracles"}\\<^sup>*\ & : & \context \\ \\ \end{matharray} Oracles allow Isabelle to take advantage of external reasoners such as arithmetic decision procedures, model checkers, fast tautology checkers or computer algebra systems. Invoked as an oracle, an external reasoner can create arbitrary Isabelle theorems. It is the responsibility of the user to ensure that the external reasoner is as trustworthy as the application requires. Another typical source of errors is the linkup between Isabelle and the external tool, not just its concrete implementation, but also the required translation between two different logical environments. Isabelle merely guarantees well-formedness of the propositions being asserted, and records within the internal derivation object how presumed theorems depend on unproven suppositions. This also includes implicit type-class reasoning via the order-sorted algebra of class relations and type arities (see also @{command_ref instantiation} and @{command_ref instance}). \<^rail>\ @@{command oracle} @{syntax name} '=' @{syntax text} ; @@{command thm_oracles} @{syntax thms} \ \<^descr> \<^theory_text>\oracle name = "text"\ turns the given ML expression \text\ of type \<^ML_text>\'a -> cterm\ into an ML function of type \<^ML_text>\'a -> thm\, which is bound to the global identifier \<^ML_text>\name\. This acts like an infinitary specification of axioms! Invoking the oracle only works within the scope of the resulting theory. See \<^file>\~~/src/HOL/Examples/Iff_Oracle.thy\ for a worked example of defining a new primitive rule as oracle, and turning it into a proof method. \<^descr> \<^theory_text>\thm_oracles thms\ displays all oracles used in the internal derivation of the given theorems; this covers the full graph of transitive dependencies. \ section \Name spaces\ text \ \begin{matharray}{rcl} @{command_def "alias"} & : & \local_theory \ local_theory\ \\ @{command_def "type_alias"} & : & \local_theory \ local_theory\ \\ @{command_def "hide_class"} & : & \theory \ theory\ \\ @{command_def "hide_type"} & : & \theory \ theory\ \\ @{command_def "hide_const"} & : & \theory \ theory\ \\ @{command_def "hide_fact"} & : & \theory \ theory\ \\ \end{matharray} \<^rail>\ (@{command alias} | @{command type_alias}) @{syntax name} '=' @{syntax name} ; (@{command hide_class} | @{command hide_type} | @{command hide_const} | @{command hide_fact}) ('(' @'open' ')')? (@{syntax name} + ) \ Isabelle organizes any kind of name declarations (of types, constants, theorems etc.) by separate hierarchically structured name spaces. Normally the user does not have to control the behaviour of name spaces by hand, yet the following commands provide some way to do so. \<^descr> \<^theory_text>\alias\ and \<^theory_text>\type_alias\ introduce aliases for constants and type constructors, respectively. This allows adhoc changes to name-space accesses. \<^descr> \<^theory_text>\type_alias b = c\ introduces an alias for an existing type constructor. \<^descr> \<^theory_text>\hide_class names\ fully removes class declarations from a given name space; with the \(open)\ option, only the unqualified base name is hidden. Note that hiding name space accesses has no impact on logical declarations --- they remain valid internally. Entities that are no longer accessible to the user are printed with the special qualifier ``\??\'' prefixed to the full internal name. 
\<^descr> \<^theory_text>\hide_type\, \<^theory_text>\hide_const\, and \<^theory_text>\hide_fact\ are similar to \<^theory_text>\hide_class\, but hide types, constants, and facts, respectively. \ end diff --git a/src/Doc/Isar_Ref/Symbols.thy b/src/Doc/Isar_Ref/Symbols.thy --- a/src/Doc/Isar_Ref/Symbols.thy +++ b/src/Doc/Isar_Ref/Symbols.thy @@ -1,40 +1,40 @@ (*:maxLineLen=78:*) theory Symbols imports Main Base begin chapter \Predefined Isabelle symbols \label{app:symbols}\ text \ Isabelle supports an infinite number of non-ASCII symbols, which are represented in source text as \<^verbatim>\\\\<^verbatim>\<\\name\\<^verbatim>\>\ (where \name\ may be any identifier). It is left to front-end tools how to present these symbols to the user. The collection of predefined standard symbols given below is available by default for Isabelle document output, due to appropriate definitions of \<^verbatim>\\isasym\\name\ for each \<^verbatim>\\\\<^verbatim>\<\\name\\<^verbatim>\>\ in the \<^verbatim>\isabellesym.sty\ file. Most of these symbols are displayed properly in Isabelle/jEdit and {\LaTeX} generated from Isabelle. Moreover, any single symbol (or ASCII character) may be prefixed by \<^verbatim>\\<^sup>\ for superscript and \<^verbatim>\\<^sub>\ for subscript, such as \<^verbatim>\A\<^sup>\\ for \A\<^sup>\\ and \<^verbatim>\A\<^sub>1\ for \A\<^sub>1\. Sub- and superscripts that span a region of text can be marked up with \<^verbatim>\\<^bsub>\\\\\<^verbatim>\\<^esub>\ and \<^verbatim>\\<^bsup>\\\\\<^verbatim>\\<^esup>\ respectively, but note that there are limitations in the typographic rendering quality of this form. Furthermore, all ASCII characters and most other symbols may be printed in bold by prefixing \<^verbatim>\\<^bold>\ such as \<^verbatim>\\<^bold>\\ for \\<^bold>\\. Note that \<^verbatim>\\<^sup>\, \<^verbatim>\\<^sub>\, \<^verbatim>\\<^bold>\ cannot be combined. Further details of Isabelle document preparation are covered in \chref{ch:document-prep}. 
\begin{center} \begin{isabellebody} - \input{syms} + @{show_symbols} \end{isabellebody} \end{center} \ end diff --git a/src/Doc/Isar_Ref/document/build b/src/Doc/Isar_Ref/document/build deleted file mode 100755 --- a/src/Doc/Isar_Ref/document/build +++ /dev/null @@ -1,11 +0,0 @@ -#!/usr/bin/env bash - -set -e - -FORMAT="$1" -VARIANT="$2" - -isabelle logo Isar -./showsymbols "$ISABELLE_HOME/lib/texinputs/isabellesym.sty" > syms.tex -"$ISABELLE_HOME/src/Doc/prepare_document" "$FORMAT" - diff --git a/src/Doc/Isar_Ref/document/root.tex b/src/Doc/Isar_Ref/document/root.tex --- a/src/Doc/Isar_Ref/document/root.tex +++ b/src/Doc/Isar_Ref/document/root.tex @@ -1,99 +1,99 @@ \documentclass[12pt,a4paper,fleqn]{report} \usepackage[T1]{fontenc} \usepackage{textcomp} \usepackage{amsmath} \usepackage{amssymb} \usepackage{wasysym} \usepackage{eurosym} \usepackage{pifont} \usepackage[english]{babel} \usepackage[only,bigsqcap,fatsemi,interleave,sslash]{stmaryrd} \usepackage{graphicx} \let\intorig=\int %iman.sty redefines \int \usepackage{iman,extra,isar,proof} \usepackage[nohyphen,strings]{underscore} \usepackage{isabelle} \usepackage{isabellesym} \usepackage{railsetup} \usepackage{supertabular} \usepackage{style} \usepackage{pdfsetup} \hyphenation{Isabelle} \hyphenation{Isar} \isadroptag{theory} -\title{\includegraphics[scale=0.5]{isabelle_isar} \\[4ex] The Isabelle/Isar Reference Manual} +\title{\includegraphics[scale=0.5]{isabelle_logo} \\[4ex] The Isabelle/Isar Reference Manual} \author{\emph{Makarius Wenzel} \\[3ex] With Contributions by Clemens Ballarin, Stefan Berghofer, \\ Jasmin Blanchette, Timothy Bourke, Lukas Bulwahn, \\ Amine Chaieb, Lucas Dixon, Florian Haftmann, \\ Brian Huffman, Lars Hupel, Gerwin Klein, \\ Alexander Krauss, Ond\v{r}ej Kun\v{c}ar, Andreas Lochbihler, \\ Tobias Nipkow, Lars Noschinski, David von Oheimb, \\ Larry Paulson, Sebastian Skalberg, \\ Christian Sternagel, Dmitriy Traytel } \makeindex \chardef\charbackquote=`\` \newcommand{\backquote}{\mbox{\tt\charbackquote}} \begin{document} \maketitle \pagenumbering{roman} \chapter*{Preface} \input{Preface.tex} \tableofcontents \listoffigures \clearfirst \part{Basic Concepts} \input{Synopsis.tex} \input{Framework.tex} \input{First_Order_Logic.tex} \part{General Language Elements} \input{Outer_Syntax.tex} \input{Document_Preparation.tex} \input{Spec.tex} \input{Proof.tex} \input{Proof_Script.tex} \input{Inner_Syntax.tex} \input{Generic.tex} \part{Isabelle/HOL}\label{part:hol} \input{HOL_Specific.tex} \part{Appendix} \appendix \input{Quick_Reference.tex} \let\int\intorig \input{Symbols.tex} \begingroup \tocentry{\bibname} \bibliographystyle{abbrv} \small\raggedright\frenchspacing \bibliography{manual} \endgroup \tocentry{\indexname} \printindex \end{document} diff --git a/src/Doc/Isar_Ref/document/showsymbols b/src/Doc/Isar_Ref/document/showsymbols deleted file mode 100755 --- a/src/Doc/Isar_Ref/document/showsymbols +++ /dev/null @@ -1,23 +0,0 @@ -#!/usr/bin/env perl - -print "\\begin{supertabular}{ll\@{\\qquad}ll}\n"; - -$eol = "&"; - -while () { - if (m/^\\newcommand\{\\isasym([A-Za-z]+)\}/) { - print "\\verb,\\<$1>, & {\\isasym$1} $eol\n"; - if ("$eol" eq "&") { - $eol = "\\\\"; - } else { - $eol = "&"; - } - } -} - -if ("$eol" eq "\\\\") { - print "$eol\n"; -} - -print "\\end{supertabular}\n"; - diff --git a/src/Doc/JEdit/JEdit.thy b/src/Doc/JEdit/JEdit.thy --- a/src/Doc/JEdit/JEdit.thy +++ b/src/Doc/JEdit/JEdit.thy @@ -1,2245 +1,2245 @@ (*:maxLineLen=78:*) theory JEdit imports Base begin chapter \Introduction\ section \Concepts 
and terminology\ text \ Isabelle/jEdit is a Prover IDE that integrates \<^emph>\parallel proof checking\ @{cite "Wenzel:2009" and "Wenzel:2013:ITP"} with \<^emph>\asynchronous user interaction\ @{cite "Wenzel:2010" and "Wenzel:2012:UITP-EPTCS" and "Wenzel:2014:ITP-PIDE" and "Wenzel:2014:UITP"}, based on a document-oriented approach to \<^emph>\continuous proof processing\ @{cite "Wenzel:2011:CICM" and "Wenzel:2012" and "Wenzel:2018:FIDE" and "Wenzel:2019:MKM"}. Many concepts and system components are fit together in order to make this work. The main building blocks are as follows. \<^descr>[Isabelle/ML] is the implementation and extension language of Isabelle, see also @{cite "isabelle-implementation"}. It is integrated into the logical context of Isabelle/Isar and allows to manipulate logical entities directly. Arbitrary add-on tools may be implemented for object-logics such as Isabelle/HOL. \<^descr>[Isabelle/Scala] is the system programming language of Isabelle. It extends the pure logical environment of Isabelle/ML towards the outer world of graphical user interfaces, text editors, IDE frameworks, web services, SSH servers, SQL databases etc. Special infrastructure allows to transfer algebraic datatypes and formatted text easily between ML and Scala, using asynchronous protocol commands. \<^descr>[PIDE] is a general framework for Prover IDEs based on Isabelle/Scala. It is built around a concept of parallel and asynchronous document processing, which is supported natively by the parallel proof engine that is implemented in Isabelle/ML. The traditional prover command loop is given up; instead there is direct support for editing of source text, with rich formal markup for GUI rendering. \<^descr>[jEdit] is a sophisticated text editor\<^footnote>\\<^url>\http://www.jedit.org\\ implemented in Java\<^footnote>\\<^url>\https://adoptopenjdk.net\\. It is easily extensible by plugins written in any language that works on the JVM. In the context of Isabelle this is always Scala\<^footnote>\\<^url>\https://www.scala-lang.org\\. \<^descr>[Isabelle/jEdit] is the main application of the PIDE framework and the default user-interface for Isabelle. It targets both beginners and experts. Technically, Isabelle/jEdit consists of the original jEdit code base with minimal patches and a special plugin for Isabelle. This is integrated as a desktop application for the main operating system families: Linux, Windows, macOS. End-users of Isabelle download and run a standalone application that exposes jEdit as a text editor on the surface. Thus there is occasionally a tendency to apply the name ``jEdit'' to any of the Isabelle Prover IDE aspects, without proper differentiation. When discussing these PIDE building blocks in public forums, mailing lists, or even scientific publications, it is particularly important to distinguish Isabelle/ML versus Standard ML, Isabelle/Scala versus Scala, Isabelle/jEdit versus jEdit. \ section \The Isabelle/jEdit Prover IDE\ text \ \begin{figure}[!htb] \begin{center} \includegraphics[width=\textwidth]{isabelle-jedit} \end{center} \caption{The Isabelle/jEdit Prover IDE} \label{fig:isabelle-jedit} \end{figure} Isabelle/jEdit (\figref{fig:isabelle-jedit}) consists of some plugins for the jEdit text editor, while preserving its overall look-and-feel. 
The main plugin is called ``Isabelle'' and has its own menu \<^emph>\Plugins~/ Isabelle\ with access to several actions and add-on panels (see also \secref{sec:dockables}), as well as \<^emph>\Plugins~/ Plugin Options~/ Isabelle\ (see also \secref{sec:options}). The options allow to specify a logic session name, but the same selector is also accessible in the \<^emph>\Theories\ panel (\secref{sec:theories}). After startup of the Isabelle plugin, the selected logic session image is provided automatically by the Isabelle build tool @{cite "isabelle-system"}: if it is absent or outdated wrt.\ its sources, the build process updates it within the running text editor. Prover IDE functionality is fully activated after successful termination of the build process. A failure may require changing some options and restart of the Isabelle plugin or application. Changing the logic session requires a restart of the whole application to take effect. \<^medskip> The main job of the Prover IDE is to manage sources and their changes, taking the logical structure as a formal document into account (see also \secref{sec:document-model}). The editor and the prover are connected asynchronously without locking. The prover is free to organize the checking of the formal text in parallel on multiple cores, and provides feedback via markup, which is rendered in the editor via colors, boxes, squiggly underlines, hyperlinks, popup windows, icons, clickable output etc. Using the mouse together with the modifier key \<^verbatim>\CONTROL\ (Linux, Windows) or \<^verbatim>\COMMAND\ (macOS) exposes formal content via tooltips and/or hyperlinks (see also \secref{sec:tooltips-hyperlinks}). Output (in popups etc.) may be explored recursively, using the same techniques as in the editor source buffer. Thus the Prover IDE gives an impression of direct access to formal content of the prover within the editor, but in reality only certain aspects are exposed, according to the possibilities of the prover and its add-on tools. \ subsection \Documentation\ text \ The \<^emph>\Documentation\ panel of Isabelle/jEdit provides access to some example theory files and the standard Isabelle documentation. PDF files are opened by regular desktop operations of the underlying platform. The section ``Original jEdit Documentation'' contains the original \<^emph>\User's Guide\ of this sophisticated text editor. The same is accessible via the \<^verbatim>\Help\ menu or \<^verbatim>\F1\ keyboard shortcut, using the built-in HTML viewer of Java/Swing. The latter also includes \<^emph>\Frequently Asked Questions\ and documentation of individual plugins. Most of the information about jEdit is relevant for Isabelle/jEdit as well, but users need to keep in mind that defaults sometimes differ, and the official jEdit documentation does not know about the Isabelle plugin with its support for continuous checking of formal source text: jEdit is a plain text editor, but Isabelle/jEdit is a Prover IDE. \ subsection \Plugins\ text \ The \<^emph>\Plugin Manager\ of jEdit allows to augment editor functionality by JVM modules (jars) that are provided by the central plugin repository, which is accessible via various mirror sites. Connecting to the plugin server-infrastructure of the jEdit project allows to update bundled plugins or to add further functionality. This needs to be done with the usual care for such an open bazaar of contributions. Arbitrary combinations of add-on features are apt to cause problems. 
It is advisable to start with the default configuration of Isabelle/jEdit and develop a sense how it is meant to work, before loading too many other plugins. \<^medskip> The \<^emph>\Isabelle\ plugin is responsible for the main Prover IDE functionality of Isabelle/jEdit: it manages the prover session in the background. A few additional plugins are bundled with Isabelle/jEdit for convenience or out of necessity, notably \<^emph>\Console\ with its \<^emph>\Scala\ sub-plugin (\secref{sec:scala-console}) and \<^emph>\SideKick\ with some Isabelle-specific parsers for document tree structure (\secref{sec:sidekick}). The \<^emph>\Navigator\ plugin is particularly important for hyperlinks within the formal document-model (\secref{sec:tooltips-hyperlinks}). Further plugins (e.g.\ \<^emph>\ErrorList\, \<^emph>\Code2HTML\) are included to saturate the dependencies of bundled plugins, but have no particular use in Isabelle/jEdit. \ subsection \Options \label{sec:options}\ text \ Both jEdit and Isabelle have distinctive management of persistent options. Regular jEdit options are accessible via the dialogs \<^emph>\Utilities~/ Global Options\ or \<^emph>\Plugins~/ Plugin Options\, with a second chance to flip the two within the central options dialog. Changes are stored in \<^path>\$JEDIT_SETTINGS/properties\ and \<^path>\$JEDIT_SETTINGS/keymaps\. Isabelle system options are managed by Isabelle/Scala and changes are stored in \<^path>\$ISABELLE_HOME_USER/etc/preferences\, independently of other jEdit properties. See also @{cite "isabelle-system"}, especially the coverage of sessions and command-line tools like @{tool build} or @{tool options}. Those Isabelle options that are declared as \<^verbatim>\public\ are configurable in Isabelle/jEdit via \<^emph>\Plugin Options~/ Isabelle~/ General\. Moreover, there are various options for rendering document content, which are configurable via \<^emph>\Plugin Options~/ Isabelle~/ Rendering\. Thus \<^emph>\Plugin Options~/ Isabelle\ in jEdit provides a view on a subset of Isabelle system options. Note that some of these options affect general parameters that are relevant outside Isabelle/jEdit as well, e.g.\ @{system_option threads} or @{system_option parallel_proofs} for the Isabelle build tool @{cite "isabelle-system"}, but it is possible to use the settings variable @{setting ISABELLE_BUILD_OPTIONS} to change defaults for batch builds without affecting the Prover IDE. The jEdit action @{action_def isabelle.options} opens the options dialog for the Isabelle plugin; it can be mapped to editor GUI elements as usual. \<^medskip> Options are usually loaded on startup and saved on shutdown of Isabelle/jEdit. Editing the generated \<^path>\$JEDIT_SETTINGS/properties\ or \<^path>\$ISABELLE_HOME_USER/etc/preferences\ manually while the application is running may cause lost updates! \ subsection \Keymaps\ text \ Keyboard shortcuts are managed as a separate concept of \<^emph>\keymap\ that is configurable via \<^emph>\Global Options~/ Shortcuts\. The \<^verbatim>\imported\ keymap is derived from the initial environment of properties that is available at the first start of the editor; afterwards the keymap file takes precedence and is no longer affected by change of default properties. Users may modify their keymap later, but this can lead to conflicts with \<^verbatim>\shortcut\ properties in \<^file>\$JEDIT_HOME/dist/properties/jEdit.props\. The action @{action_def "isabelle.keymap-merge"} helps to resolve pending Isabelle keymap changes wrt. 
the current jEdit keymap; non-conflicting changes are applied implicitly. This action is automatically invoked on Isabelle/jEdit startup. \ section \Command-line invocation \label{sec:command-line}\ text \ Isabelle/jEdit is normally invoked as a single-instance desktop application, based on platform-specific executables for Linux, Windows, macOS. It is also possible to invoke the Prover IDE on the command-line, with some extra options and environment settings. The command-line usage of @{tool_def jedit} is as follows: @{verbatim [display] \Usage: isabelle jedit [OPTIONS] [FILES ...] Options are: -A NAME ancestor session for option -R (default: parent) -D NAME=X set JVM system property -J OPTION add JVM runtime option (default $JEDIT_JAVA_SYSTEM_OPTIONS $JEDIT_JAVA_OPTIONS) -R NAME build image with requirements from other sessions -b build only -d DIR include session directory -f fresh build -i NAME include session in name-space of theories -j OPTION add jEdit runtime option (default $JEDIT_OPTIONS) -l NAME logic image name -m MODE add print mode for output -n no build of session image on startup -p CMD ML process command prefix (process policy) -s system build mode for session image (system_heaps=true) -u user build mode for session image (system_heaps=false) Start jEdit with Isabelle plugin setup and open FILES (default "$USER_HOME/Scratch.thy" or ":" for empty buffer).\} The \<^verbatim>\-l\ option specifies the session name of the logic image to be used for proof processing. Additional session root directories may be included via option \<^verbatim>\-d\ to augment the session name space (see also @{cite "isabelle-system"}). By default, the specified image is checked and built on demand, but option \<^verbatim>\-n\ bypasses the implicit build process for the selected session image. Options \<^verbatim>\-s\ and \<^verbatim>\-u\ override the default system option @{system_option system_heaps}: this determines where to store the session image of @{tool build}. The \<^verbatim>\-R\ option builds an auxiliary logic image with all theories from other sessions that are not already present in its parent; it also opens the session \<^verbatim>\ROOT\ entry in the editor to facilitate editing of the main session. The \<^verbatim>\-A\ option specifies an alternative ancestor session for option \<^verbatim>\-R\: this allows to restructure the hierarchy of session images on the spot. The \<^verbatim>\-i\ option includes additional sessions into the name-space of theories: multiple occurrences are possible. The \<^verbatim>\-m\ option specifies additional print modes for the prover process. Note that the system option @{system_option_ref jedit_print_mode} allows to do the same persistently (e.g.\ via the \<^emph>\Plugin Options\ dialog of Isabelle/jEdit), without requiring command-line invocation. The \<^verbatim>\-J\ and \<^verbatim>\-j\ options pass additional low-level options to the JVM or jEdit, respectively. The defaults are provided by the Isabelle settings environment @{cite "isabelle-system"}, but note that these only work for the command-line tool described here, and not the desktop application. The \<^verbatim>\-D\ option allows to define JVM system properties; this is passed directly to the underlying \<^verbatim>\java\ process. The \<^verbatim>\-b\ and \<^verbatim>\-f\ options control the self-build mechanism of Isabelle/jEdit.
This is only relevant for building from sources, which also requires an auxiliary \<^verbatim>\jedit_build\ component from \<^url>\https://isabelle.in.tum.de/components\. The official Isabelle release already includes a pre-built version of Isabelle/jEdit. \<^bigskip> It is also possible to connect to an already running Isabelle/jEdit process via @{tool_def jedit_client}: @{verbatim [display] \Usage: isabelle jedit_client [OPTIONS] [FILES ...] Options are: -c only check presence of server -n only report server name -s NAME server name (default "Isabelle") Connect to already running Isabelle/jEdit instance and open FILES\} The \<^verbatim>\-c\ option merely checks the presence of the server, producing a process return-code. The \<^verbatim>\-n\ option reports the server name, and the \<^verbatim>\-s\ option provides a different server name. The default server name is the official distribution name (e.g.\ \<^verbatim>\Isabelle2021\). Thus @{tool jedit_client} can connect to the Isabelle desktop application without further options. The \<^verbatim>\-p\ option allows to override the implicit default of the system option @{system_option_ref ML_process_policy} for ML processes started by the Prover IDE, e.g. to control CPU affinity on multiprocessor systems. The JVM system property \<^verbatim>\isabelle.jedit_server\ provides a different server name, e.g.\ use \<^verbatim>\isabelle jedit -Disabelle.jedit_server=\\name\ and \<^verbatim>\isabelle jedit_client -s\~\name\ to connect later on. \ section \GUI rendering\ text \ Isabelle/jEdit is a classic Java/AWT/Swing application: its GUI rendering usually works well, but there are technical side-conditions on the Java window system and graphics engine. When researching problems and solutions on the Web, it often helps to include other well-known Swing applications, notably IntelliJ IDEA and Netbeans. \ subsection \Portable and scalable look-and-feel\ text \ In the past, \<^emph>\system look-and-feels\ tried hard to imitate native GUI elements on specific platforms (Windows, macOS/Aqua, Linux/GTK+), but many technical problems have accumulated in recent years (e.g.\ see \secref{sec:problems}). In 2021, we are de-facto back to \<^emph>\portable look-and-feels\, which also happen to be \emph{scalable} on high-resolution displays: \<^item> \<^verbatim>\FlatLaf Light\ is the default for Isabelle/jEdit on all platforms. It generally looks good and adapts itself pretty well to high-resolution displays. \<^item> \<^verbatim>\FlatLaf Dark\ is an alternative, but it requires further changes of editor colors by the user (or by the jEdit plugin \<^verbatim>\Editor Scheme\). Also note that Isabelle/PIDE has its own extensive set of rendering options that need to be revisited. \<^item> \<^verbatim>\Metal\ still works smoothly, although it is stylistically outdated. It can accommodate high-resolution displays via font properties (see below). Changing the look-and-feel in \<^emph>\Global Options~/ Appearance\ often updates the GUI only partially: a full restart of Isabelle/jEdit is required to see the true effect. \ subsection \Adjusting fonts\ text \ The preferred font family for Isabelle/jEdit is \<^verbatim>\Isabelle DejaVu\: it is used by default for the main text area and various GUI elements. The default font sizes attempt to deliver a usable application for common display types, such as ``Full HD'' at $1920 \times 1080$ and ``Ultra HD'' at $3840 \times 2160$. \<^medskip> Isabelle/jEdit provides various options to adjust font sizes in particular GUI elements. 
Here is a summary of all relevant font properties: \<^item> \<^emph>\Global Options / Text Area / Text font\: the main text area font, which is also used as reference point for various derived font sizes, e.g.\ the \<^emph>\Output\ (\secref{sec:output}) and \<^emph>\State\ (\secref{sec:state-output}) panels. \<^item> \<^emph>\Global Options / Gutter / Gutter font\: the font for the gutter area left of the main text area, e.g.\ relevant for display of line numbers (disabled by default). \<^item> \<^emph>\Global Options / Appearance / Button, menu and label font\ as well as \<^emph>\List and text field font\: this specifies the primary and secondary font for the \<^emph>\Metal\ look-and-feel. \<^item> \<^emph>\Plugin Options / Isabelle / General / Reset Font Size\: the main text area font size for action @{action_ref "isabelle.reset-font-size"}, e.g.\ relevant for quick scaling like in common web browsers. \<^item> \<^emph>\Plugin Options / Console / General / Font\: the console window font, e.g.\ relevant for Isabelle/Scala command-line. \ chapter \Augmented jEdit functionality\ section \Dockable windows \label{sec:dockables}\ text \ In jEdit terminology, a \<^emph>\view\ is an editor window with one or more \<^emph>\text areas\ that show the content of one or more \<^emph>\buffers\. A regular view may be surrounded by \<^emph>\dockable windows\ that show additional information in arbitrary format, not just text; a \<^emph>\plain view\ does not allow dockables. The \<^emph>\dockable window manager\ of jEdit organizes these dockable windows, either as \<^emph>\floating\ windows, or \<^emph>\docked\ panels within one of the four margins of the view. There may be any number of floating instances of some dockable window, but at most one docked instance; jEdit actions that address \<^emph>\the\ dockable window of a particular kind refer to the unique docked instance. Dockables are used routinely in jEdit for important functionality like \<^emph>\HyperSearch Results\ or the \<^emph>\File System Browser\. Plugins often provide a central dockable to access their main functionality, which may be opened by the user on demand. The Isabelle/jEdit plugin takes this approach to the extreme: its plugin menu provides the entry-points to many panels that are managed as dockable windows. Some important panels are docked by default, e.g.\ \<^emph>\Documentation\, \<^emph>\State\, \<^emph>\Theories\, \<^emph>\Output\, \<^emph>\Query\. The user can change this arrangement easily and persistently. Compared to plain jEdit, dockable window management in Isabelle/jEdit is slightly augmented according to the following principles: \<^item> Floating windows are dependent on the main window as \<^emph>\dialog\ in the sense of Java/AWT/Swing. Dialog windows always stay on top of the view, which is particularly important in full-screen mode. The desktop environment of the underlying platform may impose further policies on such dependent dialogs, in contrast to fully independent windows, e.g.\ some window management functions may be missing. \<^item> Keyboard focus of the main view vs.\ a dockable window is carefully managed according to the intended semantics, as a panel mainly for output or input.
For example, activating the \<^emph>\Output\ (\secref{sec:output}) or \<^emph>\State\ (\secref{sec:state-output}) panel via the dockable window manager returns keyboard focus to the main text area, but for \<^emph>\Query\ (\secref{sec:query}) or \<^emph>\Sledgehammer\ (\secref{sec:sledgehammer}) the focus is given to the main input field of that panel. \<^item> Panels that provide their own text area for output have an additional dockable menu item \<^emph>\Detach\. This produces an independent copy of the current output as a floating \<^emph>\Info\ window, which displays that content independently of ongoing changes of the PIDE document-model. Note that Isabelle/jEdit popup windows (\secref{sec:tooltips-hyperlinks}) provide a similar \<^emph>\Detach\ operation as an icon. \ section \Isabelle symbols \label{sec:symbols}\ text \ Isabelle sources consist of \<^emph>\symbols\ that extend plain ASCII to allow infinitely many mathematical symbols within the formal sources. This works without depending on particular encodings and varying Unicode standards.\<^footnote>\Raw Unicode characters within formal sources compromise portability and reliability in the face of changing interpretation of special features of Unicode, such as Combining Characters or Bi-directional Text.\ See @{cite "Wenzel:2011:CICM"}. For the prover back-end, formal text consists of ASCII characters that are grouped according to some simple rules, e.g.\ as plain ``\<^verbatim>\a\'' or symbolic ``\<^verbatim>\\\''. For the editor front-end, a certain subset of symbols is rendered physically via Unicode glyphs, to show ``\<^verbatim>\\\'' as ``\\\'', for example. This symbol interpretation is specified by the Isabelle system distribution in \<^file>\$ISABELLE_HOME/etc/symbols\ and may be augmented by the user in \<^path>\$ISABELLE_HOME_USER/etc/symbols\. The appendix of @{cite "isabelle-isar-ref"} gives an overview of the standard interpretation of finitely many symbols from the infinite collection. Uninterpreted symbols are displayed literally, e.g.\ ``\<^verbatim>\\\''. Overlap of Unicode characters used in symbol interpretation with informal ones (which might appear e.g.\ in comments) needs to be avoided. Raw Unicode characters within prover source files should be restricted to informal parts, e.g.\ to write text in non-Latin alphabets in comments. \ paragraph \Encoding.\ text \Technically, the Unicode interpretation of Isabelle symbols is an \<^emph>\encoding\ called \<^verbatim>\UTF-8-Isabelle\ in jEdit (\<^emph>\not\ in the underlying JVM). It is provided by the Isabelle Base plugin and enabled by default for all source files in Isabelle/jEdit. Sometimes such defaults are reset accidentally, or malformed UTF-8 sequences in the text force jEdit to fall back on a different encoding like \<^verbatim>\ISO-8859-15\. In that case, verbatim ``\<^verbatim>\\\'' will be shown in the text buffer instead of its Unicode rendering ``\\\''. The jEdit menu operation \<^emph>\File~/ Reload with Encoding~/ UTF-8-Isabelle\ helps to resolve such problems (after repairing malformed parts of the text). If the loaded text already contains Unicode sequences that are in conflict with the Isabelle symbol encoding, the fallback-encoding UTF-8 is used and Isabelle symbols remain in literal \<^verbatim>\\\ form. The jEdit menu operation \<^emph>\Utilities~/ Buffer Options~/ Character encoding\ allows to enforce \<^verbatim>\UTF-8-Isabelle\, but this will also change original Unicode text into Isabelle symbols when saving the file!
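\<^medskip> To illustrate (a hypothetical source line): a file that literally contains @{verbatim [display] \lemma "\<forall>x. P x"\} shows the \<^verbatim>\\<forall>\ sequence as its Unicode glyph under the \<^verbatim>\UTF-8-Isabelle\ encoding, but in literal backslash form under any fallback encoding.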
\ paragraph \Font.\ text \Correct rendering via Unicode requires a font that contains glyphs for the corresponding codepoints. There are also various unusual symbols with particular purpose in Isabelle, e.g.\ control symbols and very long arrows. Isabelle/jEdit prefers its own font collection \<^verbatim>\Isabelle DejaVu\, with families \<^verbatim>\Serif\ (default for help texts), \<^verbatim>\Sans\ (default for GUI elements), \<^verbatim>\Mono Sans\ (default for text area). This ensures that all standard Isabelle symbols are shown on the screen (or printer) as expected. Note that a Java/AWT/Swing application can load additional fonts only if they are not installed on the operating system already! Outdated versions of Isabelle fonts that happen to be provided by the operating system prevent Isabelle/jEdit from using its bundled version. This could lead to missing glyphs (black rectangles), when the system version of a font is older than the application version. This problem can be avoided by refraining from ``installing'' a copy of the Isabelle fonts in the first place, although it might be tempting to use the same font in other applications. HTML pages generated by Isabelle refer to the same Isabelle fonts as a server-side resource. Thus a web-browser can use that without requiring a locally installed copy. \ paragraph \Input methods.\ text \In principle, Isabelle/jEdit could delegate the problem to produce Isabelle symbols in their Unicode rendering to the underlying operating system and its \<^emph>\input methods\. Regular jEdit also provides various ways to work with \<^emph>\abbreviations\ to produce certain non-ASCII characters. Since none of these standard input methods work satisfactorily for the mathematical characters required for Isabelle, various specific Isabelle/jEdit mechanisms are provided. This is a summary for practically relevant input methods for Isabelle symbols. \<^enum> The \<^emph>\Symbols\ panel: some GUI buttons allow to insert certain symbols in the text buffer. There are also tooltips to reveal the official Isabelle representation with some additional information about \<^emph>\symbol abbreviations\ (see below). \<^enum> Copy/paste from decoded source files: text that is already rendered as Unicode can be re-used for other text. This also works between different applications, e.g.\ Isabelle/jEdit and some web browser or mail client, as long as the same Unicode interpretation of Isabelle symbols is used. \<^enum> Copy/paste from prover output within Isabelle/jEdit. The same principles as for text buffers apply, but note that \<^emph>\copy\ in secondary Isabelle/jEdit windows works via the keyboard shortcuts \<^verbatim>\C+c\ or \<^verbatim>\C+INSERT\, while jEdit menu actions always refer to the primary text area! \<^enum> Completion provided by the Isabelle plugin (see \secref{sec:completion}). Isabelle symbols have a canonical name and optional abbreviations. This can be used with the text completion mechanism of Isabelle/jEdit, to replace a prefix of the actual symbol like \<^verbatim>\\\, or its name preceded by backslash \<^verbatim>\\lambda\, or its ASCII abbreviation \<^verbatim>\%\ by the Unicode rendering.
The following table is an extract of the information provided by the standard \<^file>\$ISABELLE_HOME/etc/symbols\ file: \<^medskip> \begin{tabular}{lll} \<^bold>\symbol\ & \<^bold>\name with backslash\ & \<^bold>\abbreviation\ \\\hline \\\ & \<^verbatim>\\lambda\ & \<^verbatim>\%\ \\ \\\ & \<^verbatim>\\Rightarrow\ & \<^verbatim>\=>\ \\ \\\ & \<^verbatim>\\Longrightarrow\ & \<^verbatim>\==>\ \\[0.5ex] \\\ & \<^verbatim>\\And\ & \<^verbatim>\!!\ \\ \\\ & \<^verbatim>\\equiv\ & \<^verbatim>\==\ \\[0.5ex] \\\ & \<^verbatim>\\forall\ & \<^verbatim>\!\ \\ \\\ & \<^verbatim>\\exists\ & \<^verbatim>\?\ \\ \\\ & \<^verbatim>\\longrightarrow\ & \<^verbatim>\-->\ \\ \\\ & \<^verbatim>\\and\ & \<^verbatim>\&\ \\ \\\ & \<^verbatim>\\or\ & \<^verbatim>\|\ \\ \\\ & \<^verbatim>\\not\ & \<^verbatim>\~\ \\ \\\ & \<^verbatim>\\noteq\ & \<^verbatim>\~=\ \\ \\\ & \<^verbatim>\\in\ & \<^verbatim>\:\ \\ \\\ & \<^verbatim>\\notin\ & \<^verbatim>\~:\ \\ \end{tabular} \<^medskip> Note that the above abbreviations refer to the input method. The logical notation provides ASCII alternatives that often coincide, but sometimes deviate. This occasionally causes user confusion with old-fashioned Isabelle sources that use ASCII replacement notation like \<^verbatim>\!\ or \<^verbatim>\ALL\ directly in the text. On the other hand, coincidence of symbol abbreviations with ASCII replacement syntax helps to update old theory sources via explicit completion (see also \<^verbatim>\C+b\ explained in \secref{sec:completion}). \ paragraph \Control symbols.\ text \There are some special control symbols to modify the display style of a single symbol (without nesting). Control symbols may be applied to a region of selected text, either using the \<^emph>\Symbols\ panel or keyboard shortcuts or jEdit actions. These editor operations produce a separate control symbol for each symbol in the text, in order to make the whole text appear in a certain style. \<^medskip> \begin{tabular}{llll} \<^bold>\style\ & \<^bold>\symbol\ & \<^bold>\shortcut\ & \<^bold>\action\ \\\hline superscript & \<^verbatim>\\<^sup>\ & \<^verbatim>\C+e UP\ & @{action_ref "isabelle.control-sup"} \\ subscript & \<^verbatim>\\<^sub>\ & \<^verbatim>\C+e DOWN\ & @{action_ref "isabelle.control-sub"} \\ bold face & \<^verbatim>\\<^bold>\ & \<^verbatim>\C+e RIGHT\ & @{action_ref "isabelle.control-bold"} \\ emphasized & \<^verbatim>\\<^emph>\ & \<^verbatim>\C+e LEFT\ & @{action_ref "isabelle.control-emph"} \\ reset & & \<^verbatim>\C+e BACK_SPACE\ & @{action_ref "isabelle.control-reset"} \\ \end{tabular} \<^medskip> To produce a single control symbol, it is also possible to complete on \<^verbatim>\\sup\, \<^verbatim>\\sub\, \<^verbatim>\\bold\, \<^verbatim>\\emph\ as for regular symbols. The emphasized style only takes effect in document output (when used with a cartouche), but not in the editor. \ section \Scala console \label{sec:scala-console}\ text \ The \<^emph>\Console\ plugin manages various shells (command interpreters), e.g.\ \<^emph>\BeanShell\, which is the official jEdit scripting language, and the cross-platform \<^emph>\System\ shell. Thus the console provides similar functionality to the Emacs buffers \<^verbatim>\*scratch*\ and \<^verbatim>\*shell*\. Isabelle/jEdit extends the repertoire of the console by \<^emph>\Scala\, which is the regular Scala toplevel loop running inside the same JVM process as Isabelle/jEdit itself. This means the Scala command interpreter has access to the JVM name space and state of the running Prover IDE application.
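As a quick sanity check (a hypothetical console interaction, using the Isabelle/Scala API with a fully qualified name, so that it does not depend on any particular imports): @{verbatim [display] \scala> isabelle.Isabelle_System.getenv("ISABELLE_HOME")\}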
The default environment imports the full content of packages \<^verbatim>\isabelle\ and \<^verbatim>\isabelle.jedit\. For example, \<^verbatim>\PIDE\ refers to the Isabelle/jEdit plugin object, and \<^verbatim>\view\ to the current editor view of jEdit. The Scala expression \<^verbatim>\PIDE.snapshot(view)\ makes a PIDE document snapshot of the current buffer within the current editor view: it allows to retrieve document markup in a timeless~/ stateless manner, while the prover continues its processing. This helps to explore Isabelle/Scala functionality interactively. Some care is required to avoid interference with the internals of the running application. \ section \Physical and logical files \label{sec:files}\ text \ File specifications in jEdit follow various formats and conventions according to \<^emph>\Virtual File Systems\, which may be also provided by plugins. This allows to access remote files via the \<^verbatim>\https:\ protocol prefix, for example. Isabelle/jEdit attempts to work with the file-system model of jEdit as far as possible. In particular, theory sources are passed from the editor to the prover, without indirection via the file-system. Thus files don't need to be saved: the editor buffer content is used directly. \ subsection \Local files and environment variables \label{sec:local-files}\ text \ Local files (without URL notation) are particularly important. The file path notation is that of the Java Virtual Machine on the underlying platform. On Windows the preferred form uses backslashes, but happens to accept forward slashes like Unix/POSIX as well. Further differences arise due to Windows drive letters and network shares: thus relative paths (with forward slashes) are portable, but absolute paths are not. File paths in Java are distinct from Isabelle; the latter uses POSIX notation with forward slashes on \<^emph>\all\ platforms. Isabelle/ML on Windows uses Unix-style path notation, with drive letters according to Cygwin (e.g.\ \<^verbatim>\/cygdrive/c\). Environment variables from the Isabelle process may be used freely, e.g.\ \<^file>\$ISABELLE_HOME/etc/symbols\ or \<^file>\$POLYML_HOME/README\. There are special shortcuts: \<^dir>\~\ for \<^dir>\$USER_HOME\ and \<^dir>\~~\ for \<^dir>\$ISABELLE_HOME\. \<^medskip> Since jEdit happens to support environment variables within file specifications as well, it is natural to use similar notation within the editor, e.g.\ in the file-browser. This does not work in full generality, though, due to the bias of jEdit towards platform-specific notation and of Isabelle towards POSIX. Moreover, the Isabelle settings environment is not accessible when starting Isabelle/jEdit via the desktop application wrapper, in contrast to @{tool jedit} run from the command line (\secref{sec:command-line}). Isabelle/jEdit imitates important system settings within the Java process environment, in order to allow easy access to these important places from the editor: \<^verbatim>\$ISABELLE_HOME\, \<^verbatim>\$ISABELLE_HOME_USER\, \<^verbatim>\$JEDIT_HOME\, \<^verbatim>\$JEDIT_SETTINGS\. The file browser of jEdit also includes \<^emph>\Favorites\ for these locations. \<^medskip> Path specifications in prover input or output usually include formal markup that turns it into a hyperlink (see also \secref{sec:tooltips-hyperlinks}). This allows to open the corresponding file in the text editor, independently of the path notation. If the path refers to a directory, it is opened in the jEdit file browser. 
Formally checked paths in prover input are subject to completion (\secref{sec:completion}): partial specifications are resolved via directory content and possible completions are offered in a popup. \ subsection \PIDE resources via virtual file-systems\ text \ The jEdit file browser is docked by default. It provides immediate access to the local file-system, as well as important Isabelle resources via the \<^emph>\Favorites\ menu. Environment variables like \<^verbatim>\$ISABELLE_HOME\ are discussed in \secref{sec:local-files}. Virtual file-systems are more special: the idea is to present structured information like a directory tree. The following URLs are offered in the \<^emph>\Favorites\ menu, or by corresponding jEdit actions. \<^item> URL \<^verbatim>\isabelle-export:\ or action @{action_def "isabelle-export-browser"} shows a toplevel directory with theory names: each may provide its own tree structure of session exports. Exports are like a logical file-system for the current prover session, maintained as Isabelle/Scala data structures and written to the session database eventually. The \<^verbatim>\isabelle-export:\ URL exposes the current content according to a snapshot of the document model. The file browser is \<^emph>\not\ updated continuously when the PIDE document changes: the reload operation needs to be used explicitly. A notable example for exports is the command @{command_ref export_code} @{cite "isabelle-isar-ref"}. \<^item> URL \<^verbatim>\isabelle-session:\ or action @{action_def "isabelle-session-browser"} shows the structure of session chapters and sessions within them. What looks like a file-entry is actually a reference to the session definition in its corresponding \<^verbatim>\ROOT\ file. The latter is subject to Prover IDE markup, so the session theories and other files may be browsed quickly by following hyperlinks in the text. \ section \Indentation\ text \ Isabelle/jEdit augments the existing indentation facilities of jEdit to take the structure of theory and proof texts into account. There is also special support for unstructured proof scripts (\<^theory_text>\apply\ etc.). \<^descr>[Syntactic indentation] follows the outer syntax of Isabelle/Isar. Action @{action "indent-lines"} (shortcut \<^verbatim>\C+i\) indents the current line according to command keywords and some command substructure: this approximation may need further manual tuning. Action @{action "isabelle.newline"} (shortcut \<^verbatim>\ENTER\) indents the old and the new line according to command keywords only, leading to precise alignment of the main Isar language elements. This depends on option @{system_option_def "jedit_indent_newline"} (enabled by default). Regular input (via keyboard or completion) indents the current line whenever a new keyword is emerging at the start of the line. This depends on option @{system_option_def "jedit_indent_input"} (enabled by default). \<^descr>[Semantic indentation] adds additional white space to unstructured proof scripts via the number of subgoals. This requires information of ongoing document processing and may thus lag behind when the user is editing too quickly; see also option @{system_option_def "jedit_script_indent"} and @{system_option_def "jedit_script_indent_limit"}. The above options are accessible in the menu \<^emph>\Plugins / Plugin Options / Isabelle / General\. A prerequisite for advanced indentation is \<^emph>\Utilities / Buffer Options / Automatic indentation\: it needs to be set to \<^verbatim>\full\ (default).
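\<^medskip> For example, a hypothetical excerpt of \<^path>\$ISABELLE_HOME_USER/etc/preferences\ that switches off indentation on regular input, while keeping the other defaults: @{verbatim [display] \jedit_indent_input = "false"\} Normally such changes are made via the options dialog rather than by manual editing of the generated file (see also \secref{sec:options}).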
\ section \SideKick parsers \label{sec:sidekick}\ text \ The \<^emph>\SideKick\ plugin provides some general services to display buffer structure in a tree view. Isabelle/jEdit provides SideKick parsers for its main mode for theory files, ML files, as well as some minor modes for the \<^verbatim>\NEWS\ file (see \figref{fig:sidekick}), session \<^verbatim>\ROOT\ files, system \<^verbatim>\options\, and Bib{\TeX} files (\secref{sec:bibtex}). \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.333]{sidekick} \end{center} \caption{The Isabelle NEWS file with SideKick tree view} \label{fig:sidekick} \end{figure} The default SideKick parser for theory files is \<^verbatim>\isabelle\: it provides a tree-view on the formal document structure, with section headings at the top and formal specification elements at the bottom. The alternative parser \<^verbatim>\isabelle-context\ shows nesting of context blocks according to \<^theory_text>\begin \ end\ structure. \<^medskip> Isabelle/ML files are structured according to semi-formal comments that are explained in @{cite "isabelle-implementation"}. This outline is turned into a tree-view by default, by using the \<^verbatim>\isabelle-ml\ parser. There is also a folding mode of the same name, for hierarchic text folds within ML files. \<^medskip> The special SideKick parser \<^verbatim>\isabelle-markup\ exposes the uninterpreted markup tree of the PIDE document model of the current buffer. This is occasionally useful for informative purposes, but the amount of displayed information might cause problems for large buffers. \ chapter \Prover IDE functionality\ section \Document model \label{sec:document-model}\ text \ The document model is central to the PIDE architecture: the editor and the prover have a common notion of structured source text with markup, which is produced by formal processing. The editor is responsible for edits of document source, as produced by the user. The prover is responsible for reports of document markup, as produced by its processing in the background. Isabelle/jEdit handles classic editor events of jEdit, in order to connect the physical world of the GUI (with its singleton state) to the mathematical world of multiple document versions (with timeless and stateless updates). \ subsection \Editor buffers and document nodes \label{sec:buffer-node}\ text \ As a regular text editor, jEdit maintains a collection of \<^emph>\buffers\ to store text files; each buffer may be associated with any number of visible \<^emph>\text areas\. Buffers are subject to an \<^emph>\edit mode\ that is determined from the file name extension. The following modes are treated specifically in Isabelle/jEdit: \<^medskip> \begin{tabular}{lll} \<^bold>\mode\ & \<^bold>\file name\ & \<^bold>\content\ \\\hline \<^verbatim>\isabelle\ & \<^verbatim>\*.thy\ & theory source \\ \<^verbatim>\isabelle-ml\ & \<^verbatim>\*.ML\ & Isabelle/ML source \\ \<^verbatim>\sml\ & \<^verbatim>\*.sml\ or \<^verbatim>\*.sig\ & Standard ML source \\ \<^verbatim>\isabelle-root\ & \<^verbatim>\ROOT\ & session root \\ \<^verbatim>\isabelle-options\ & & Isabelle options \\ \<^verbatim>\isabelle-news\ & & Isabelle NEWS \\ \end{tabular} \<^medskip> All jEdit buffers are automatically added to the PIDE document-model as \<^emph>\document nodes\.
The overall document structure is defined by the theory nodes in two dimensions: \<^enum> via \<^bold>\theory imports\ that are specified in the \<^emph>\theory header\ using concrete syntax of the @{command_ref theory} command @{cite "isabelle-isar-ref"}; \<^enum> via \<^bold>\auxiliary files\ that are included into a theory by \<^emph>\load commands\, notably @{command_ref ML_file} and @{command_ref SML_file} @{cite "isabelle-isar-ref"}. In any case, source files are managed by the PIDE infrastructure: the physical file-system only plays a subordinate role. The relevant version of source text is passed directly from the editor to the prover, using internal communication channels. \ subsection \Theories \label{sec:theories}\ text \ The \<^emph>\Theories\ panel (see also \figref{fig:theories}) provides an overview of the status of continuous checking of theory nodes within the document model. \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.333]{theories} \end{center} \caption{Theories panel with an overview of the document-model, and jEdit text areas as editable views on some of the document nodes} \label{fig:theories} \end{figure} Theory imports are resolved automatically by the PIDE document model: all required files are loaded and stored internally, without the need to open corresponding jEdit buffers. Opening or closing editor buffers later on has no direct impact on the formal document content: it only affects visibility. In contrast, auxiliary files (e.g.\ from @{command ML_file} commands) are \<^emph>\not\ resolved within the editor by default, but the prover process takes care of that. This may be changed by enabling the system option @{system_option jedit_auto_resolve}: it ensures that all files are uniformly provided by the editor. \<^medskip> The visible \<^emph>\perspective\ of Isabelle/jEdit is defined by the collective view on theory buffers via open text areas. The perspective is taken as a hint for document processing: the prover ensures that those parts of a theory where the user is looking are checked, while other parts that are presently not required are ignored. The perspective is changed by opening or closing text area windows, or scrolling within a window. The \<^emph>\Theories\ panel provides some further options to influence the process of continuous checking: it may be switched off globally to restrict the prover to superficial processing of command syntax. It is also possible to indicate theory nodes as \<^emph>\required\ for continuous checking: this means such nodes and all their imports are always processed independently of the visibility status (if continuous checking is enabled). Big theory libraries that are marked as required can have significant impact on performance! The \<^emph>\Purge\ button restricts the document model to theories that are required for open editor buffers: inaccessible theories are removed and will be rechecked when opened or imported later. \<^medskip> Formal markup of checked theory content is turned into GUI rendering, based on a standard repertoire known from mainstream IDEs for programming languages: colors, icons, highlighting, squiggly underlines, tooltips, hyperlinks etc. For outer syntax of Isabelle/Isar there is some traditional syntax-highlighting via static keywords and tokenization within the editor; this buffer syntax is determined from theory imports. In contrast, the painting of inner syntax (term language etc.)\ uses semantic information that is reported dynamically from the logical context. 
Thus the prover can provide additional markup to help the user to understand the meaning of formal text, and to produce more text with some add-on tools (e.g.\ information messages with \<^emph>\sendback\ markup by automated provers or disprovers in the background). \ subsection \Auxiliary files \label{sec:aux-files}\ text \ Special load commands like @{command_ref ML_file} and @{command_ref SML_file} @{cite "isabelle-isar-ref"} refer to auxiliary files within some theory. Conceptually, the file argument of the command extends the theory source by the content of the file, but its editor buffer may be loaded~/ changed~/ saved separately. The PIDE document model propagates changes of auxiliary file content to the corresponding load command in the theory, to update and process it accordingly: changes of auxiliary file content are treated as changes of the corresponding load command. \<^medskip> As a concession to the massive amount of ML files in Isabelle/HOL itself, the content of auxiliary files is only added to the PIDE document-model on demand, the first time when opened explicitly in the editor. There are further tricks to manage markup of ML files, such that Isabelle/HOL may be edited conveniently in the Prover IDE on small machines with only 8\,GB of main memory. Using \<^verbatim>\Pure\ as logic session image, the exploration may start at the top \<^file>\$ISABELLE_HOME/src/HOL/Main.thy\ or the bottom \<^file>\$ISABELLE_HOME/src/HOL/HOL.thy\, for example. It is also possible to explore the Isabelle/Pure bootstrap process (a virtual copy) by opening \<^file>\$ISABELLE_HOME/src/Pure/ROOT.ML\ like a theory in the Prover IDE. Initially, before an auxiliary file is opened in the editor, the prover reads its content from the physical file-system. After the file is opened for the first time in the editor, e.g.\ by following the hyperlink (\secref{sec:tooltips-hyperlinks}) for the argument of its @{command ML_file} command, the content is taken from the jEdit buffer. The change of responsibility from prover to editor counts as an update of the document content, so subsequent theory sources need to be re-checked. When the buffer is closed, the responsibility remains with the editor: the file may be opened again without causing another document update. A file that is opened in the editor, but its theory with the load command is not, is presently inactive in the document model. A file that is loaded via multiple load commands is associated to an arbitrary one: this situation is morally unsupported and might lead to confusion. \<^medskip> Output that refers to an auxiliary file is combined with that of the corresponding load command, and shown whenever the file or the command are active (see also \secref{sec:output}). Warnings, errors, and other useful markup are attached directly to the positions in the auxiliary file buffer, in the manner of standard IDEs. By using the load command @{command SML_file} as explained in \<^file>\$ISABELLE_HOME/src/Tools/SML/Examples.thy\, Isabelle/jEdit may be used as fully-featured IDE for Standard ML, independently of theory or proof development: the required theory merely serves as some kind of project file for a collection of SML source modules. \ section \Output \label{sec:output}\ text \ Prover output consists of \<^emph>\markup\ and \<^emph>\messages\.
Both are directly attached to the corresponding positions in the original source text, and visualized in the text area, e.g.\ as text colours for free and bound variables, or as squiggly underlines for warnings, errors etc.\ (see also \figref{fig:output}). In the latter case, the corresponding messages are shown by hovering with the mouse over the highlighted text --- although in many situations the user should already get some clue by looking at the position of the text highlighting, without seeing the message body itself. \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.333]{output} \end{center} \caption{Multiple views on prover output: gutter with icon, text area with popup, text overview column, \<^emph>\Theories\ panel, \<^emph>\Output\ panel} \label{fig:output} \end{figure} The ``gutter'' on the left-hand-side of the text area uses icons to provide a summary of the messages within the adjacent text line. Message priorities are used to prefer errors over warnings, warnings over information messages; other output is ignored. The ``text overview column'' on the right-hand-side of the text area uses similar information to paint small rectangles for the overall status of the whole text buffer. The graphics is scaled to fit the logical buffer length into the given window height. Mouse clicks on the overview area move the cursor approximately to the corresponding text line in the buffer. The \<^emph>\Theories\ panel provides another coarse-grained overview, but without direct correspondence to text positions. The coloured rectangles represent the amount of messages of a certain kind (warnings, errors, etc.) and the execution status of commands. The border of each rectangle indicates the overall status of processing: a thick border means it is \<^emph>\finished\ or \<^emph>\failed\ (with color for errors). A double-click on one of the theory entries with their status overview opens the corresponding text buffer, without moving the cursor to a specific point. \<^medskip> The \<^emph>\Output\ panel displays prover messages that correspond to a given command, within a separate window. The cursor position in the presently active text area determines the prover command whose cumulative message output is appended and shown in that window (in canonical order according to the internal execution of the command). There are also control elements to modify the update policy of the output wrt.\ continued editor movements: \<^emph>\Auto update\ and \<^emph>\Update\. This is particularly useful for multiple instances of the \<^emph>\Output\ panel to look at different situations. Alternatively, the panel can be turned into a passive \<^emph>\Info\ window via the \<^emph>\Detach\ menu item. Proof state is handled separately (\secref{sec:state-output}), but it is also possible to tick the corresponding checkbox to append it to regular output (\figref{fig:output-including-state}). This is a globally persistent option: it affects all open panels and future editor sessions. \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.333]{output-including-state} \end{center} \caption{Proof state display within the regular output panel} \label{fig:output-including-state} \end{figure} \<^medskip> Following the IDE principle, regular messages are attached to the original source in the proper place and may be inspected on demand via popups. This excludes messages that are somehow internal to the machinery of proof checking, notably \<^emph>\proof state\ and \<^emph>\tracing\.
In any case, the same display technology is used for small popups and big output windows. The formal text contains markup that may be explored recursively via further popups and hyperlinks (see \secref{sec:tooltips-hyperlinks}), or clicked directly to initiate certain actions (see \secref{sec:auto-tools} and \secref{sec:sledgehammer}). \<^medskip> Alternatively, the subsequent actions (with keyboard shortcuts) allow to show tooltip messages or navigate error positions: \<^medskip> \begin{tabular}[t]{l} @{action_ref "isabelle.tooltip"} (\<^verbatim>\CS+b\) \\ @{action_ref "isabelle.message"} (\<^verbatim>\CS+m\) \\ \end{tabular}\quad \begin{tabular}[t]{l} @{action_ref "isabelle.first-error"} (\<^verbatim>\CS+a\) \\ @{action_ref "isabelle.last-error"} (\<^verbatim>\CS+z\) \\ @{action_ref "isabelle.next-error"} (\<^verbatim>\CS+n\) \\ @{action_ref "isabelle.prev-error"} (\<^verbatim>\CS+p\) \\ \end{tabular} \<^medskip> \ section \Proof state \label{sec:state-output}\ text \ The main purpose of the Prover IDE is to help the user editing proof documents, with ongoing formal checking by the prover in the background. This can be done to some extent in the main text area alone, especially for well-structured Isar proofs. Nonetheless, internal proof state needs to be inspected in many situations of exploration and ``debugging''. The \<^emph>\State\ panel shows exclusively such proof state messages without further distraction, while all other messages are displayed in \<^emph>\Output\ (\secref{sec:output}). \Figref{fig:output-and-state} shows a typical GUI layout where both panels are open. \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.333]{output-and-state} \end{center} \caption{Separate proof state display (right) and other output (bottom).} \label{fig:output-and-state} \end{figure} Another typical arrangement has more than one \<^emph>\State\ panel open (as floating windows), with \<^emph>\Auto update\ disabled to look at an old situation while the proof text in the vicinity is changed. The \<^emph>\Update\ button triggers an explicit one-shot update; this operation is also available via the action @{action "isabelle.update-state"} (keyboard shortcut \<^verbatim>\S+ENTER\). On small screens, it is occasionally useful to have all messages concatenated in the regular \<^emph>\Output\ panel, e.g.\ see \figref{fig:output-including-state}. \<^medskip> The mechanics of \<^emph>\Output\ versus \<^emph>\State\ are slightly different: \<^item> \<^emph>\Output\ shows information that is continuously produced and already present when the GUI wants to show it. This is implicitly controlled by the visible perspective on the text. \<^item> \<^emph>\State\ initiates a real-time query on demand, with a full round trip including a fresh print operation on the prover side. This is controlled explicitly when the cursor is moved to the next command (\<^emph>\Auto update\) or the \<^emph>\Update\ operation is triggered. This can make a difference in GUI responsiveness and resource usage within the prover process. Applications with very big proof states that are only inspected in isolation work better with the \<^emph>\State\ panel. \ section \Query \label{sec:query}\ text \ The \<^emph>\Query\ panel provides various GUI forms to request extra information from the prover, as a replacement of old-style diagnostic commands like @{command find_theorems}. There are input fields and buttons for a particular query command, with output in a dedicated text area.
The main query modes are presented as separate tabs: \<^emph>\Find Theorems\, \<^emph>\Find Constants\, \<^emph>\Print Context\, e.g.\ see \figref{fig:query}. As usual in jEdit, multiple \<^emph>\Query\ windows may be active at the same time: any number of floating instances, but at most one docked instance (which is used by default). \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.333]{query} \end{center} \caption{An instance of the Query panel: find theorems} \label{fig:query} \end{figure} \<^medskip> The following GUI elements are common to all query modes: \<^item> The spinning wheel provides feedback about the status of a pending query wrt.\ the evaluation of its context and its own operation. \<^item> The \<^emph>\Apply\ button attaches a fresh query invocation to the current context of the command where the cursor is pointing in the text. \<^item> The \<^emph>\Search\ field allows to highlight query output according to some regular expression, in the notation that is commonly used on the Java platform.\<^footnote>\\<^url>\https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/regex/Pattern.html\\ This may serve as an additional visual filter of the result. \<^item> The \<^emph>\Zoom\ box controls the font size of the output area. All query operations are asynchronous: there is no need to wait for the evaluation of the document for the query context, nor for the query operation itself. Query output may be detached as independent \<^emph>\Info\ window, using a menu operation of the dockable window manager. The printed result usually provides sufficient clues about the original query, with some hyperlink to its context (via markup of its head line). \ subsection \Find theorems\ text \ The \<^emph>\Query\ panel in \<^emph>\Find Theorems\ mode retrieves facts from the theory or proof context matching all of the given criteria in the \<^emph>\Find\ text field. A single criterion has the following syntax: \<^rail>\ ('-'?) ('name' ':' @{syntax name} | 'intro' | 'elim' | 'dest' | 'solves' | 'simp' ':' @{syntax term} | @{syntax term}) \ See also the Isar command @{command_ref find_theorems} in @{cite "isabelle-isar-ref"}. \ subsection \Find constants\ text \ The \<^emph>\Query\ panel in \<^emph>\Find Constants\ mode prints all constants whose type meets all of the given criteria in the \<^emph>\Find\ text field. A single criterion has the following syntax: \<^rail>\ ('-'?) ('name' ':' @{syntax name} | 'strict' ':' @{syntax type} | @{syntax type}) \ See also the Isar command @{command_ref find_consts} in @{cite "isabelle-isar-ref"}. \ subsection \Print context\ text \ The \<^emph>\Query\ panel in \<^emph>\Print Context\ mode prints information from the theory or proof context, or proof state. See also the Isar commands @{command_ref print_context}, @{command_ref print_cases}, @{command_ref print_term_bindings}, @{command_ref print_theorems}, described in @{cite "isabelle-isar-ref"}. \ section \Tooltips and hyperlinks \label{sec:tooltips-hyperlinks}\ text \ Formally processed text (prover input or output) contains rich markup that can be explored by using the \<^verbatim>\CONTROL\ modifier key on Linux and Windows, or \<^verbatim>\COMMAND\ on macOS. Hovering with the mouse while the modifier is pressed reveals a \<^emph>\tooltip\ (grey box over the text with a yellow popup) and/or a \<^emph>\hyperlink\ (black rectangle over the text with change of mouse pointer); see also \figref{fig:tooltip}.
\begin{figure}[!htb] \begin{center} \includegraphics[scale=0.333]{popup1} \end{center} \caption{Tooltip and hyperlink for some formal entity} \label{fig:tooltip} \end{figure} Tooltip popups use the same rendering technology as the main text area, and further tooltips and/or hyperlinks may be exposed recursively by the same mechanism; see \figref{fig:nested-tooltips}. \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.333]{popup2} \end{center} \caption{Nested tooltips over formal entities} \label{fig:nested-tooltips} \end{figure} The tooltip popup window provides some controls to \<^emph>\close\ or \<^emph>\detach\ the window, turning it into a separate \<^emph>\Info\ window managed by jEdit. The \<^verbatim>\ESCAPE\ key closes \<^emph>\all\ popups, which is particularly relevant when nested tooltips are stacking up. \<^medskip> A black rectangle in the text indicates a hyperlink that may be followed by a mouse click (while the \<^verbatim>\CONTROL\ or \<^verbatim>\COMMAND\ modifier key is still pressed). Such jumps to other text locations are recorded by the \<^emph>\Navigator\ plugin, which is bundled with Isabelle/jEdit and enabled by default. There are usually navigation arrows in the main jEdit toolbar. Note that the link target may be a file that is itself not subject to formal document processing of the editor session and thus prevents further exploration: the chain of hyperlinks may end in some source file of the underlying logic image, or within the ML bootstrap sources of Isabelle/Pure. \ section \Formal scopes and semantic selection\ text \ Formal entities are semantically annotated in the source text as explained in \secref{sec:tooltips-hyperlinks}. A \<^emph>\formal scope\ consists of the defining position with all its referencing positions. This correspondence is highlighted in the text according to the cursor position, see also \figref{fig:scope1}. Here the referencing positions are rendered with an additional border, reminiscent of a hyperlink. A mouse click with \<^verbatim>\C\ modifier, or the action @{action_def "isabelle.goto-entity"} (shortcut \<^verbatim>\CS+d\) jumps to the original defining position. \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.333]{scope1} \end{center} \caption{Scope of formal entity: defining vs.\ referencing positions} \label{fig:scope1} \end{figure} The action @{action_def "isabelle.select-entity"} (shortcut \<^verbatim>\CS+ENTER\) supports semantic selection of all occurrences of the formal entity at the caret position, with a defining position in the current editor buffer. This facilitates systematic renaming, using regular jEdit editing of a multi-selection, see also \figref{fig:scope2}. \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.333]{scope2} \end{center} \caption{The result of semantic selection and systematic renaming} \label{fig:scope2} \end{figure} By default, the visual feedback on scopes is restricted to definitions within the visible text area. The keyboard modifier \<^verbatim>\CS\ overrides this: then all defining and referencing positions are shown. This modifier may be configured via option @{system_option jedit_focus_modifier}; the default coincides with the modifier for the above keyboard actions. The empty string means to disable this additional visual feedback. \ section \Completion \label{sec:completion}\ text \ Smart completion of partial input is the IDE functionality \<^emph>\par excellence\. Isabelle/jEdit combines several sources of information to achieve that.
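For example (a hypothetical interaction): input of \<^verbatim>\\lam\ followed by explicit completion offers the symbol name \<^verbatim>\\lambda\, and accepting it inserts the symbol in its Unicode rendering.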
Despite its complexity, it should be possible to get some idea how completion works by experimentation, based on the overview of completion varieties in \secref{sec:completion-varieties}. The remaining subsections explain concepts around completion more systematically. \<^medskip> \<^emph>\Explicit completion\ is triggered by the action @{action_ref "isabelle.complete"}, which is bound to the keyboard shortcut \<^verbatim>\C+b\, and thus overrides the jEdit default for @{action_ref "complete-word"}. \<^emph>\Implicit completion\ hooks into the regular keyboard input stream of the editor, with some event filtering and optional delays. \<^medskip> Completion options may be configured in \<^emph>\Plugin Options~/ Isabelle~/ General~/ Completion\. These are explained in further detail below, whenever relevant. There is also a summary of options in \secref{sec:completion-options}. The asynchronous nature of PIDE interaction means that information from the prover is delayed --- at least by a full round-trip of the document update protocol. The default options already take this into account, with a sufficiently long completion delay to speculate on the availability of all relevant information from the editor and the prover, before completing text immediately or producing a popup. Although there is an inherent danger of non-deterministic behaviour due to such real-time parameters, the general completion policy aims at determined results as far as possible. \ subsection \Varieties of completion \label{sec:completion-varieties}\ subsubsection \Built-in templates\ text \ Isabelle is ultimately a framework of nested sub-languages of different kinds and purposes. The completion mechanism supports this by the following built-in templates: \<^descr> \<^verbatim>\`\ (single ASCII back-quote) or \<^verbatim>\"\ (double ASCII quote) support \<^emph>\quotations\ via text cartouches. There are three selections, which are always presented in the same order and do not depend on any context information. The default choice produces a template ``\\\\\'', where the box indicates the cursor position after insertion; the other choices help to repair the block structure of unbalanced text cartouches. \<^descr> \<^verbatim>\@{\ is completed to the template ``\@{\}\'', where the box indicates the cursor position after insertion. Here it is convenient to use the wildcard ``\<^verbatim>\__\'' or a more specific name prefix to let semantic completion of name-space entries propose antiquotation names. With some practice, input of quoted sub-languages and antiquotations of embedded languages should work smoothly. Note that national keyboard layouts might cause problems with back-quote as dead key, but double quote can be used instead. \ subsubsection \Syntax keywords\ text \ Syntax completion tables are determined statically from the keywords of the ``outer syntax'' of the underlying edit mode: for theory files this is the syntax of Isar commands according to the cumulative theory imports. Keywords are usually plain words, which means the completion mechanism only inserts them directly into the text for explicit completion (\secref{sec:completion-input}), but produces a popup (\secref{sec:completion-popup}) otherwise. At the point where outer syntax keywords are defined, it is possible to specify an alternative replacement string to be inserted instead of the keyword itself. 
An empty string means to suppress the keyword altogether, which is occasionally useful to avoid confusion, e.g.\ the rare keyword @{command simproc_setup} vs.\ the frequent name-space entry \simp\. \ subsubsection \Isabelle symbols\ text \ The completion tables for Isabelle symbols (\secref{sec:symbols}) are determined statically from \<^file>\$ISABELLE_HOME/etc/symbols\ and \<^path>\$ISABELLE_HOME_USER/etc/symbols\ for each symbol specification as follows: \<^medskip> \begin{tabular}{ll} \<^bold>\completion entry\ & \<^bold>\example\ \\\hline literal symbol & \<^verbatim>\\\ \\ symbol name with backslash & \<^verbatim>\\\\<^verbatim>\forall\ \\ symbol abbreviation & \<^verbatim>\ALL\ or \<^verbatim>\!\ \\ \end{tabular} \<^medskip> When inserted into the text, the above examples all produce the same Unicode rendering \\\ of the underlying symbol \<^verbatim>\\\. A symbol abbreviation that is a plain word, like \<^verbatim>\ALL\, is treated like a syntax keyword. Non-word abbreviations like \<^verbatim>\-->\ are inserted more aggressively, except for single-character abbreviations like \<^verbatim>\!\ above. Completion via abbreviations like \<^verbatim>\ALL\ or \<^verbatim>\-->\ depends on the semantic language context (\secref{sec:completion-context}). In contrast, backslash sequences like \<^verbatim>\\forall\ \<^verbatim>\\\ are always possible, but require additional interaction to confirm (via popup). This is important in ambiguous situations, e.g.\ for Isabelle document source, which may contain formal symbols or informal {\LaTeX} macros. Backslash sequences also help when input is broken, and thus escapes its normal semantic context: e.g.\ antiquotations or string literals in ML, which do not allow arbitrary backslash sequences. Special symbols like \<^verbatim>\\\ or control symbols like \<^verbatim>\\<^cancel>\, \<^verbatim>\\<^latex>\, \<^verbatim>\\<^binding>\ can have an argument: completing on a name prefix offers a template with an empty cartouche. Thus completion of \<^verbatim>\\co\ or \<^verbatim>\\ca\ allows to compose formal document comments quickly.\<^footnote>\It is customary to put a space between \<^verbatim>\\\ and its argument, while control symbols do \<^emph>\not\ allow extra space here.\ \ subsubsection \User-defined abbreviations\ text \ The theory header syntax supports abbreviations via the \<^theory_text>\abbrevs\ keyword @{cite "isabelle-isar-ref"}. This is a slight generalization of built-in templates and abbreviations for Isabelle symbols, as explained above. Examples may be found in the Isabelle sources, by searching for ``\<^verbatim>\abbrevs\'' in \<^verbatim>\*.thy\ files. The \<^emph>\Symbols\ panel shows the abbreviations that are available in the current theory buffer (according to its \<^theory_text>\imports\) in the \<^verbatim>\Abbrevs\ tab. \ subsubsection \Name-space entries\ text \ This is genuine semantic completion, using information from the prover, so it requires some delay. A \<^emph>\failed name-space lookup\ produces an error message that is annotated with a list of alternative names that are legal. The list of results is truncated according to the system option @{system_option_ref completion_limit}. The completion mechanism takes this into account when collecting information on the prover side. Already recognized names are \<^emph>\not\ completed further, but completion may be extended by appending a suffix of underscores. This provokes a failed lookup, and another completion attempt (ignoring the underscores). 
For example, in a name space where \<^verbatim>\foo\ and \<^verbatim>\foobar\ are known, the input \<^verbatim>\foo\ remains unchanged, but \<^verbatim>\foo_\ may be completed to \<^verbatim>\foo\ or \<^verbatim>\foobar\. The special identifier ``\<^verbatim>\__\'' serves as a wild-card for arbitrary completion: it exposes the name-space content to the completion mechanism (truncated according to @{system_option completion_limit}). This is occasionally useful to explore an unknown name-space, e.g.\ in some template. \ subsubsection \File-system paths\ text \ Depending on prover markup about file-system paths in the source text, e.g.\ for the argument of a load command (\secref{sec:aux-files}), the completion mechanism explores the directory content and offers the result as a completion popup. Relative path specifications are understood wrt.\ the \<^emph>\master directory\ of the document node (\secref{sec:buffer-node}) of the enclosing editor buffer; this requires a proper theory, not an auxiliary file. A suffix of slashes may be used to continue the exploration of an already recognized directory name. \ subsubsection \Spell-checking\ text \ The spell-checker combines semantic markup from the prover (regions of plain words) with static dictionaries (word lists) that are known to the editor. Unknown words are underlined in the text, using @{system_option_ref spell_checker_color} (blue by default). This is not an error, but a hint to the user that some action may be taken. The jEdit context menu provides various actions, as far as applicable: \<^medskip> \begin{tabular}{l} @{action_ref "isabelle.complete-word"} \\ @{action_ref "isabelle.exclude-word"} \\ @{action_ref "isabelle.exclude-word-permanently"} \\ @{action_ref "isabelle.include-word"} \\ @{action_ref "isabelle.include-word-permanently"} \\ \end{tabular} \<^medskip> Instead of the specific @{action_ref "isabelle.complete-word"}, it is also possible to use the generic @{action_ref "isabelle.complete"} with its default keyboard shortcut \<^verbatim>\C+b\. \<^medskip> Dictionary lookup uses some educated guesses about lower-case, upper-case, and capitalized words. This is oriented towards common use in English, where this aspect is not decisive for proper spelling (in contrast to German, for example). \ subsection \Semantic completion context \label{sec:completion-context}\ text \ Completion depends on a semantic context that is provided by the prover, although with some delay, because at least a full PIDE protocol round-trip is required. Until that information becomes available in the PIDE document-model, the default context is given by the outer syntax of the editor mode (see also \secref{sec:buffer-node}). The semantic \<^emph>\language context\ provides information about nested sub-languages of Isabelle: keywords are only completed for outer syntax, and antiquotations for languages that support them. Symbol abbreviations only work for specific sub-languages: e.g.\ ``\<^verbatim>\=>\'' is \<^emph>\not\ completed in regular ML source, but is completed within ML strings, comments, antiquotations. Backslash representations of symbols like ``\<^verbatim>\\foobar\'' or ``\<^verbatim>\\\'' work in any context --- after additional confirmation. The prover may produce \<^emph>\no completion\ markup in exceptional situations, to tell that some language keywords should be excluded from further completion attempts. For example, ``\<^verbatim>\:\'' within accepted Isar syntax loses its meaning as abbreviation for symbol ``\\\''.
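\<^medskip> As a concrete illustration of these context rules, consider the following hypothetical snippets (function and message text invented for illustration only):

@{verbatim [display] \ML \fn x => x\           (* ML source: "=>" is not completed *)
ML \writeln "A => B"\  (* ML string: completion to \<Rightarrow> is offered *)\}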
\ subsection \Input events \label{sec:completion-input}\ text \ Completion is triggered by certain events produced by the user, with optional delay after keyboard input according to @{system_option jedit_completion_delay}. \<^descr>[Explicit completion] works via action @{action_ref "isabelle.complete"} with keyboard shortcut \<^verbatim>\C+b\. This overrides the shortcut for @{action_ref "complete-word"} in jEdit, but it is possible to restore the original jEdit keyboard mapping of @{action "complete-word"} via \<^emph>\Global Options~/ Shortcuts\ and invent a different one for @{action "isabelle.complete"}. \<^descr>[Explicit spell-checker completion] works via @{action_ref "isabelle.complete-word"}, which is exposed in the jEdit context menu, if the mouse points to a word that the spell-checker can complete. \<^descr>[Implicit completion] works via regular keyboard input of the editor. It depends on further side-conditions: \<^enum> The system option @{system_option_ref jedit_completion} needs to be enabled (default). \<^enum> Completion of syntax keywords requires at least 3 relevant characters in the text. \<^enum> The system option @{system_option_ref jedit_completion_delay} determines an additional delay (0.5 by default), before opening a completion popup. The delay gives the prover a chance to provide semantic completion information, notably the context (\secref{sec:completion-context}). \<^enum> The system option @{system_option_ref jedit_completion_immediate} (enabled by default) controls whether replacement text should be inserted immediately without popup, regardless of @{system_option jedit_completion_delay}. This aggressive mode of completion is restricted to symbol abbreviations that are not plain words (\secref{sec:symbols}). \<^enum> Completion of symbol abbreviations with only one relevant character in the text always enforces an explicit popup, regardless of @{system_option_ref jedit_completion_immediate}. \ subsection \Completion popup \label{sec:completion-popup}\ text \ A \<^emph>\completion popup\ is a minimally invasive GUI component over the text area that offers a selection of completion items to be inserted into the text, e.g.\ by mouse clicks. Items are sorted dynamically, according to the frequency of selection, with persistent history. The popup may interpret special keys \<^verbatim>\ENTER\, \<^verbatim>\TAB\, \<^verbatim>\ESCAPE\, \<^verbatim>\UP\, \<^verbatim>\DOWN\, \<^verbatim>\PAGE_UP\, \<^verbatim>\PAGE_DOWN\, but all other key events are passed to the underlying text area. This allows to ignore unwanted completions most of the time and continue typing quickly. Thus the popup serves as a mechanism of confirmation of proposed items, while the default is to continue without completion. The meaning of special keys is as follows: \<^medskip> \begin{tabular}{ll} \<^bold>\key\ & \<^bold>\action\ \\\hline \<^verbatim>\ENTER\ & select completion (if @{system_option jedit_completion_select_enter}) \\ \<^verbatim>\TAB\ & select completion (if @{system_option jedit_completion_select_tab}) \\ \<^verbatim>\ESCAPE\ & dismiss popup \\ \<^verbatim>\UP\ & move up one item \\ \<^verbatim>\DOWN\ & move down one item \\ \<^verbatim>\PAGE_UP\ & move up one page of items \\ \<^verbatim>\PAGE_DOWN\ & move down one page of items \\ \end{tabular} \<^medskip> Movement within the popup is only active for multiple items. Otherwise the corresponding key event retains its standard meaning within the underlying text area. 
\ subsection \Insertion \label{sec:completion-insert}\ text \ Completion may first propose replacements to be selected (via a popup), or replace text immediately in certain situations and depending on certain options like @{system_option jedit_completion_immediate}. In any case, insertion works uniformly, by imitating normal jEdit text insertion, depending on the state of the \<^emph>\text selection\. Isabelle/jEdit tries to accommodate the most common forms of advanced selections in jEdit, but not all combinations make sense. At least the following important cases are well-defined: \<^descr>[No selection.] The original is removed and the replacement inserted, depending on the caret position. \<^descr>[Rectangular selection of zero width.] This special case is treated by jEdit as ``tall caret'' and insertion of completion imitates its normal behaviour: separate copies of the replacement are inserted for each line of the selection. \<^descr>[Other rectangular selection or multiple selections.] Here the original is removed and the replacement is inserted for each line (or segment) of the selection. Support for multiple selections is particularly useful for \<^emph>\HyperSearch\: clicking on one of the items in the \<^emph>\HyperSearch Results\ window makes jEdit select all its occurrences in the corresponding line of text. Then explicit completion can be invoked via \<^verbatim>\C+b\, e.g.\ to replace occurrences of \<^verbatim>\-->\ by \\\. \<^medskip> Insertion works by removing and inserting pieces of text from the buffer. This counts as one atomic operation on the jEdit history. Thus unintended completions may be reverted by the regular @{action undo} action of jEdit. According to normal jEdit policies, the recovered text after @{action undo} is selected: \<^verbatim>\ESCAPE\ is required to reset the selection and to continue typing more text. \ subsection \Options \label{sec:completion-options}\ text \ This is a summary of Isabelle/Scala system options that are relevant for completion. They may be configured in \<^emph>\Plugin Options~/ Isabelle~/ General\ as usual. \<^item> @{system_option_def completion_limit} specifies the maximum number of items for various semantic completion operations (name-space entries etc.) \<^item> @{system_option_def jedit_completion} guards implicit completion via regular jEdit key events (\secref{sec:completion-input}): it allows to disable implicit completion altogether. \<^item> @{system_option_def jedit_completion_select_enter} and @{system_option_def jedit_completion_select_tab} enable keys to select a completion item from the popup (\secref{sec:completion-popup}). Note that a regular mouse click on the list of items is always possible. \<^item> @{system_option_def jedit_completion_context} specifies whether the language context provided by the prover should be used at all. Disabling that option makes completion less ``semantic''. Note that incomplete or severely broken input may cause some disagreement of the prover and the user about the intended language context. \<^item> @{system_option_def jedit_completion_delay} and @{system_option_def jedit_completion_immediate} determine the handling of keyboard events for implicit completion (\secref{sec:completion-input}). 
A @{system_option jedit_completion_delay}~\<^verbatim>\> 0\ postpones the processing of key events, until after the user has stopped typing for the given time span, but @{system_option jedit_completion_immediate}~\<^verbatim>\= true\ means that abbreviations of Isabelle symbols are handled nonetheless. \<^item> @{system_option_def completion_path_ignore} specifies ``glob'' patterns to ignore in file-system path completion (separated by colons), e.g.\ backup files ending with tilde. \<^item> @{system_option_def spell_checker} is a global guard for all spell-checker operations: it allows to disable that mechanism altogether. \<^item> @{system_option_def spell_checker_dictionary} determines the current dictionary, taken from the colon-separated list in the settings variable @{setting_def JORTHO_DICTIONARIES}. There are jEdit actions to specify local updates to a dictionary, by including or excluding words. The result of permanent dictionary updates is stored in the directory \<^path>\$ISABELLE_HOME_USER/dictionaries\, in a separate file for each dictionary. \<^item> @{system_option_def spell_checker_include} specifies a comma-separated list of markup elements that delimit words in the source that is subject to spell-checking, including various forms of comments. \<^item> @{system_option_def spell_checker_exclude} specifies a comma-separated list of markup elements that disable spell-checking (e.g.\ in nested antiquotations). \ section \Automatically tried tools \label{sec:auto-tools}\ text \ Continuous document processing works asynchronously in the background. Visible document source that has been evaluated may get augmented by additional results of \<^emph>\asynchronous print functions\. An example for that is proof state output, if that is enabled in the Output panel (\secref{sec:output}). More heavy-weight print functions may be applied as well, e.g.\ to prove or disprove parts of the formal text by other means. Isabelle/HOL provides various automatically tried tools that operate on outermost goal statements (e.g.\ @{command lemma}, @{command theorem}), independently of the state of the current proof attempt. They work implicitly without any arguments. Results are output as \<^emph>\information messages\, which are indicated in the text area by blue squiggles and a blue information sign in the gutter (see \figref{fig:auto-tools}). The message content may be shown as for other output (see also \secref{sec:output}). Some tools produce output with \<^emph>\sendback\ markup, which means that clicking on certain parts of the text inserts that into the source in the proper place. \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.333]{auto-tools} \end{center} \caption{Result of automatically tried tools} \label{fig:auto-tools} \end{figure} \<^medskip> The following Isabelle system options control the behavior of automatically tried tools (see also the jEdit dialog window \<^emph>\Plugin Options~/ Isabelle~/ General~/ Automatically tried tools\): \<^item> @{system_option_ref auto_methods} controls automatic use of a combination of standard proof methods (@{method auto}, @{method simp}, @{method blast}, etc.). This corresponds to the Isar command @{command_ref "try0"} @{cite "isabelle-isar-ref"}. The tool is disabled by default, since unparameterized invocation of standard proof methods often consumes substantial CPU resources without leading to success. 
\<^item> @{system_option_ref auto_nitpick} controls a slightly reduced version of @{command_ref nitpick}, which tests for counterexamples using first-order relational logic. See also the Nitpick manual @{cite "isabelle-nitpick"}. This tool is disabled by default, due to the extra overhead of invoking an external Java process for each attempt to disprove a subgoal. \<^item> @{system_option_ref auto_quickcheck} controls automatic use of @{command_ref quickcheck}, which tests for counterexamples using a series of assignments for free variables of a subgoal. This tool is \<^emph>\enabled\ by default. It requires little overhead, but is a bit weaker than @{command nitpick}. \<^item> @{system_option_ref auto_sledgehammer} controls a significantly reduced version of @{command_ref sledgehammer}, which attempts to prove a subgoal using external automatic provers. See also the Sledgehammer manual @{cite "isabelle-sledgehammer"}. This tool is disabled by default, due to the relatively heavy nature of Sledgehammer. \<^item> @{system_option_ref auto_solve_direct} controls automatic use of @{command_ref solve_direct}, which checks whether the current subgoals can be solved directly by an existing theorem. This also helps to detect duplicate lemmas. This tool is \<^emph>\enabled\ by default. Invocation of automatically tried tools is subject to some global policies of parallel execution, which may be configured as follows: \<^item> @{system_option_ref auto_time_limit} (default 2.0) determines the timeout (in seconds) for each tool execution. \<^item> @{system_option_ref auto_time_start} (default 1.0) determines the start delay (in seconds) for automatically tried tools, after the main command evaluation is finished. Each tool is submitted independently to the pool of parallel execution tasks in Isabelle/ML, using hardwired priorities according to its relative ``heaviness''. The main stages of evaluation and printing of proof states take precedence, but an already running tool is not canceled and may thus reduce reactivity of proof document processing. Users should experiment with how the available CPU resources (number of cores) are best invested to get additional feedback from the prover in the background, by using a selection of weaker or stronger tools. \ section \Sledgehammer \label{sec:sledgehammer}\ text \ The \<^emph>\Sledgehammer\ panel (\figref{fig:sledgehammer}) provides a view on some independent execution of the Isar command @{command_ref sledgehammer}, with process indicator (spinning wheel) and GUI elements for important Sledgehammer arguments and options. Any number of Sledgehammer panels may be active, according to the standard policies of Dockable Window Management in jEdit. Closing such windows also cancels the corresponding prover tasks. \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.333]{sledgehammer} \end{center} \caption{An instance of the Sledgehammer panel} \label{fig:sledgehammer} \end{figure} The \<^emph>\Apply\ button attaches a fresh invocation of @{command sledgehammer} to the command where the cursor is pointing in the text --- this should be some pending proof problem. Further buttons like \<^emph>\Cancel\ and \<^emph>\Locate\ help to manage the running process. Results appear incrementally in the output window of the panel. Proposed proof snippets are marked up as \<^emph>\sendback\, which means a single mouse click inserts the text into a suitable place of the original source. Some manual editing may be required nonetheless, say to remove earlier proof attempts.
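\<^medskip> For orientation, the panel corresponds to direct usage of the Isar command within the theory text; a minimal sketch (with an arbitrary example goal) looks like this:

@{verbatim [display] \lemma "rev (rev xs) = xs"  (* arbitrary example goal *)
  sledgehammer\}

A proposed one-line proof may then be inserted via its \<^emph>\sendback\ markup, as described above.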
\ chapter \Isabelle document preparation\ text \ The ultimate purpose of Isabelle is to produce nicely rendered documents with the Isabelle document preparation system, which is based on {\LaTeX}; see also @{cite "isabelle-system" and "isabelle-isar-ref"}. Isabelle/jEdit provides some additional support for document editing. \ section \Document outline\ text \ Theory sources may contain document markup commands, such as @{command_ref chapter}, @{command_ref section}, @{command subsection}. The Isabelle SideKick parser (\secref{sec:sidekick}) represents this document outline as a structured tree view, with formal statements and proofs nested inside; see \figref{fig:sidekick-document}. \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.333]{sidekick-document} \end{center} \caption{Isabelle document outline via SideKick tree view} \label{fig:sidekick-document} \end{figure} It is also possible to use text folding according to this structure, by adjusting \<^emph>\Utilities / Buffer Options / Folding mode\ of jEdit. The default mode \<^verbatim>\isabelle\ uses the structure of formal definitions, statements, and proofs. The alternative mode \<^verbatim>\sidekick\ uses the document structure of the SideKick parser, as explained above. \ section \Markdown structure\ text \ Document text is internally structured in paragraphs and nested lists, using notation that is similar to Markdown\<^footnote>\\<^url>\https://commonmark.org\\. There are special control symbols for items of different kinds of lists, corresponding to \<^verbatim>\itemize\, \<^verbatim>\enumerate\, \<^verbatim>\description\ in {\LaTeX}. This is illustrated for \<^verbatim>\itemize\ in \figref{fig:markdown-document}. \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.333]{markdown-document} \end{center} \caption{Markdown structure within document text} \label{fig:markdown-document} \end{figure} Items take colour according to the depth of nested lists. This helps to explore the implicit rules for list structure interactively. There is also markup for individual items and paragraphs in the text: it may be explored via mouse hovering with \<^verbatim>\CONTROL\ / \<^verbatim>\COMMAND\ as usual (\secref{sec:tooltips-hyperlinks}). \ section \Citations and Bib{\TeX} entries \label{sec:bibtex}\ text \ Citations are managed by {\LaTeX} and Bib{\TeX} in \<^verbatim>\.bib\ files. The - Isabelle session build process and the @{tool latex} tool @{cite + Isabelle session build process and the @{tool document} tool @{cite "isabelle-system"} are smart enough to assemble the result, based on the session directory layout. The document antiquotation \@{cite}\ is described in @{cite "isabelle-isar-ref"}. Within the Prover IDE it provides semantic markup for tooltips, hyperlinks, and completion for Bib{\TeX} database entries. Isabelle/jEdit does \<^emph>\not\ know about the actual Bib{\TeX} environment used in {\LaTeX} batch-mode, but it can take citations from those \<^verbatim>\.bib\ files that happen to be open in the editor; see \figref{fig:cite-completion}. \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.333]{cite-completion} \end{center} \caption{Semantic completion of citations from open Bib{\TeX} files} \label{fig:cite-completion} \end{figure} Isabelle/jEdit also provides IDE support for editing \<^verbatim>\.bib\ files themselves.
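For example, a hypothetical entry in such a \<^verbatim>\.bib\ file could look like this (key and all field values invented for illustration):

@{verbatim [display] \@book{example-key,
  author    = {A. N. Author},
  title     = {An Example Title},
  publisher = {Some Publisher},
  year      = {2021}}\}

It could then be referenced in document text via \@{cite "example-key"}\.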
There is syntax highlighting based on entry types (according to standard Bib{\TeX} styles), a context-menu to compose entries systematically, and a SideKick tree view of the overall content; see \figref{fig:bibtex-mode}. Semantic checking with errors and warnings is performed by the original \<^verbatim>\bibtex\ tool using style \<^verbatim>\plain\: different Bib{\TeX} styles may produce slightly different results. \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.333]{bibtex-mode} \end{center} \caption{Bib{\TeX} mode with context menu, SideKick tree view, and semantic output from the \<^verbatim>\bibtex\ tool} \label{fig:bibtex-mode} \end{figure} Regular document preview (\secref{sec:document-preview}) of \<^verbatim>\.bib\ files approximates the usual {\LaTeX} bibliography output in HTML (using style \<^verbatim>\unsort\). \ section \Document preview and printing \label{sec:document-preview}\ text \ The action @{action_def isabelle.preview} opens an HTML preview of the current document node in the default web browser. The content is derived from the semantic markup produced by the prover, and thus depends on the status of formal processing. Action @{action_def isabelle.draft} is similar to @{action isabelle.preview}, but shows a plain-text document draft. Both actions show document sources in a regular Web browser, which may also be used to print the result in a more portable manner than the Java printer dialog of the jEdit @{action_ref print} action. \ chapter \ML debugging within the Prover IDE\ text \ Isabelle/ML is based on Poly/ML\<^footnote>\\<^url>\https://www.polyml.org\\ and thus benefits from the source-level debugger of that implementation of Standard ML. The Prover IDE provides the \<^emph>\Debugger\ dockable to connect to running ML threads, inspect the stack frame with local ML bindings, and evaluate ML expressions in a particular run-time context. A typical debugger session is shown in \figref{fig:ml-debugger}. ML debugging depends on the following pre-requisites. \<^enum> ML source needs to be compiled with debugging enabled. This may be controlled for particular chunks of ML sources using any of the subsequent facilities. \<^enum> The system option @{system_option_ref ML_debugger} as implicit state of the Isabelle process. It may be changed in the menu \<^emph>\Plugins / Plugin Options / Isabelle / General\. ML modules need to be reloaded and recompiled to pick up that option as intended. \<^enum> The configuration option @{attribute_ref ML_debugger}, with an attribute of the same name, to update a global or local context (e.g.\ with the @{command declare} command). \<^enum> Commands that modify @{attribute ML_debugger} state for individual files: @{command_ref ML_file_debug}, @{command_ref ML_file_no_debug}, @{command_ref SML_file_debug}, @{command_ref SML_file_no_debug}. The instrumentation of ML code for debugging causes minor run-time overhead. ML modules that implement critical system infrastructure may lead to deadlocks or other undefined behaviour, when put under debugger control! \<^enum> The \<^emph>\Debugger\ panel needs to be active; otherwise the program ignores debugger instrumentation of the compiler and runs unmanaged. It is also possible to start debugging with the panel open, and later undock it, to let the program continue unhindered. \<^enum> The ML program needs to be stopped at a suitable breakpoint, which may be activated individually or globally as follows.
For ML sources that have been compiled with debugger support, the IDE visualizes possible breakpoints in the text. A breakpoint may be toggled by pointing accurately with the mouse, with a right-click to activate jEdit's context menu and its \<^emph>\Toggle Breakpoint\ item. Alternatively, the \<^emph>\Break\ checkbox in the \<^emph>\Debugger\ panel may be enabled to stop ML threads always at the next possible breakpoint. Note that the state of individual breakpoints \<^emph>\gets lost\ when the corresponding ML source is re-compiled! This may happen unintentionally, e.g.\ when following hyperlinks into ML modules that have not been loaded into the IDE before. \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.333]{ml-debugger} \end{center} \caption{ML debugger session} \label{fig:ml-debugger} \end{figure} The debugger panel (\figref{fig:ml-debugger}) shows a list of all threads that are presently stopped. Each thread shows a stack of all function invocations that lead to the current breakpoint at the top. It is possible to jump between stack positions freely, by clicking on this list. The current situation is displayed in the big output window, as a local ML environment with names and printed values. ML expressions may be evaluated in the current context by entering snippets of source into the text fields labeled \Context\ and \ML\, and pushing the \Eval\ button. By default, the source is interpreted as Isabelle/ML with the usual support for antiquotations (like @{command ML}, @{command ML_file}). Alternatively, strict Standard ML may be enforced via the \<^emph>\SML\ checkbox (like @{command SML_file}). The context for Isabelle/ML is optional; it may evaluate to a value of type \<^ML_type>\theory\, \<^ML_type>\Proof.context\, or \<^ML_type>\Context.generic\. Thus the given ML expression (with its antiquotations) may be subject to the intended dynamic run-time context, instead of the static compile-time context. \<^medskip> The buttons labeled \<^emph>\Continue\, \<^emph>\Step\, \<^emph>\Step over\, \<^emph>\Step out\ recommence execution of the program, with different policies concerning nested function invocations. The debugger always moves the cursor within the ML source to the next breakpoint position, and offers new stack frames as before.
Actual display of timing depends on the global option @{system_option_ref jedit_timing_threshold}, which can be configured in \<^emph>\Plugin Options~/ Isabelle~/ General\. \<^medskip> The jEdit status line includes a monitor widget for the current heap usage of the Isabelle/ML process; this includes information about ongoing garbage collection (shown as ``ML cleanup''). A double-click opens a new instance of the \<^emph>\Monitor\ panel, as explained below. There is a similar widget for the JVM: a double-click opens an external Java monitor process with detailed information and controls for the Java process underlying Isabelle/Scala/jEdit (this is based on \<^verbatim>\jconsole\). \<^medskip> The \<^emph>\Monitor\ panel visualizes various data collections about recent activity of the runtime system of Isabelle/ML and Java. There are buttons to request a full garbage collection and sharing of live data on the ML heap. The display is continuously updated according to @{system_option_ref editor_chart_delay}. Note that the painting of the chart takes considerable runtime itself --- on the Java Virtual Machine that runs Isabelle/Scala, not Isabelle/ML. \ section \Low-level output\ text \ Prover output is normally shown directly in the main text area or specific panels like \<^emph>\Output\ (\secref{sec:output}) or \<^emph>\State\ (\secref{sec:state-output}). Beyond this, it is occasionally useful to inspect low-level output channels via some of the following additional panels: \<^item> \<^emph>\Protocol\ shows internal messages between the Isabelle/Scala and Isabelle/ML side of the PIDE document editing protocol. Recording of messages starts with the first activation of the corresponding dockable window; earlier messages are lost. Display of protocol messages causes considerable slowdown, so it is important to undock all \<^emph>\Protocol\ panels for production work. \<^item> \<^emph>\Raw Output\ shows chunks of text from the \<^verbatim>\stdout\ and \<^verbatim>\stderr\ channels of the prover process. Recording of output starts with the first activation of the corresponding dockable window; earlier output is lost. The implicit stateful nature of physical I/O channels makes it difficult to relate raw output to the actual command from which it originated. Parallel execution may add to the confusion. Peeking at physical process I/O is only the last resort to diagnose problems with tools that are not PIDE compliant. Under normal circumstances, prover output always works via managed message channels (corresponding to \<^ML>\writeln\, \<^ML>\warning\, \<^ML>\Output.error_message\ in Isabelle/ML), which are displayed by regular means within the document model (\secref{sec:output}). Unhandled Isabelle/ML exceptions are printed by the system via \<^ML>\Output.error_message\. \<^item> \<^emph>\Syslog\ shows system messages that might be relevant to diagnose problems with the startup or shutdown phase of the prover process; this also includes raw output on \<^verbatim>\stderr\. Isabelle/ML also provides an explicit \<^ML>\Output.system_message\ operation, which is occasionally useful for diagnostic purposes within the system infrastructure itself. A limited number of syslog messages is buffered, independently of the docking state of the \<^emph>\Syslog\ panel. This allows to diagnose serious problems with Isabelle/PIDE process management, outside of the actual protocol layer. Under normal circumstances, such low-level system output can be ignored.
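\<^medskip> For a quick experiment with these channels, a snippet like the following may be evaluated in some theory context (a minimal sketch with arbitrary message texts):

@{verbatim [display] \ML \
  writeln "regular output";                 (* document model *)
  warning "a warning";                      (* document model *)
  Output.error_message "an error message";  (* document model *)
  Output.system_message "hello Syslog"      (* Syslog panel *)
\\}

The first three messages are displayed by the regular means described above, while the last one is buffered for the \<^emph>\Syslog\ panel.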
\ chapter \Known problems and workarounds \label{sec:problems}\ text \ \<^item> \<^bold>\Problem:\ Keyboard shortcuts \<^verbatim>\C+PLUS\ and \<^verbatim>\C+MINUS\ for adjusting the editor font size depend on platform details and national keyboards. \<^bold>\Workaround:\ Rebind keys via \<^emph>\Global Options~/ Shortcuts\. \<^item> \<^bold>\Problem:\ The macOS key sequence \<^verbatim>\COMMAND+COMMA\ for application \<^emph>\Preferences\ is in conflict with the jEdit default keyboard shortcut for \<^emph>\Incremental Search Bar\ (action @{action_ref "quick-search"}). \<^bold>\Workaround:\ Rebind key via \<^emph>\Global Options~/ Shortcuts\ according to the national keyboard layout, e.g.\ \<^verbatim>\COMMAND+SLASH\ on English ones. \<^item> \<^bold>\Problem:\ On macOS with native Apple look-and-feel, some exotic national keyboards may cause a conflict of menu accelerator keys with regular jEdit key bindings. This leads to duplicate execution of the corresponding jEdit action. \<^bold>\Workaround:\ Disable the native Apple menu bar via Java runtime option \<^verbatim>\-Dapple.laf.useScreenMenuBar=false\. \<^item> \<^bold>\Problem:\ macOS system fonts sometimes lead to character drop-outs in the main text area. \<^bold>\Workaround:\ Use the default \<^verbatim>\Isabelle DejaVu\ fonts. \<^item> \<^bold>\Problem:\ On macOS the Java printer dialog sometimes does not work. \<^bold>\Workaround:\ Use action @{action isabelle.draft} and print via the Web browser. \<^item> \<^bold>\Problem:\ Antialiased text rendering may show bad performance or bad visual quality, notably on Linux/X11. \<^bold>\Workaround:\ The property \<^verbatim>\view.antiAlias\ (via menu item Utilities / Global Options / Text Area / Anti Aliased smooth text) has the main impact on text rendering, but some related properties may also change the behaviour. The default is \<^verbatim>\view.antiAlias=subpixel HRGB\: it can be much faster than \<^verbatim>\standard\, but occasionally causes problems with odd color shades. An alternative is to have \<^verbatim>\view.antiAlias=standard\ and set a Java system property like this:\<^footnote>\See also \<^url>\https://docs.oracle.com/javase/10/troubleshoot/java-2d-pipeline-rendering-and-properties.htm\.\ @{verbatim [display] \isabelle jedit -Dsun.java2d.opengl=true\} If this works reliably, it can be made persistent via @{setting JEDIT_JAVA_OPTIONS} within \<^path>\$ISABELLE_HOME_USER/etc/settings\. For the Isabelle desktop ``app'', there is a corresponding file with Java runtime options in the main directory (name depends on the OS platform). \<^item> \<^bold>\Problem:\ Some Linux/X11 input methods such as IBus tend to disrupt key event handling of Java/AWT/Swing. \<^bold>\Workaround:\ Do not use X11 input methods. Note that environment variable \<^verbatim>\XMODIFIERS\ is reset by default within Isabelle settings. \<^item> \<^bold>\Problem:\ Some Linux/X11 window managers that are not ``re-parenting'' cause problems with additional windows opened by Java. This affects either historic or neo-minimalistic window managers like \<^verbatim>\awesome\ or \<^verbatim>\xmonad\. \<^bold>\Workaround:\ Use a regular re-parenting X11 window manager. \<^item> \<^bold>\Problem:\ Various forks of Linux/X11 window managers and desktop environments (like Gnome) disrupt the handling of menu popups and mouse positions of Java/AWT/Swing. \<^bold>\Workaround:\ Use a suitable version of Linux desktops.
\<^item> \<^bold>\Problem:\ Full-screen mode via jEdit action @{action_ref "toggle-full-screen"} (default keyboard shortcut \<^verbatim>\F11\ or \<^verbatim>\S+F11\) works robustly on Windows, but not on macOS or various Linux/X11 window managers. For the latter platforms, it is approximated by educated guesses on the window size (excluding the macOS menu bar). \<^bold>\Workaround:\ Use native full-screen control of the macOS window manager. \<^item> \<^bold>\Problem:\ Heap space of the JVM may fill up and render the Prover IDE unresponsive, e.g.\ when editing big Isabelle sessions with many theories. \<^bold>\Workaround:\ Increase JVM heap parameters by editing platform-specific files (for ``properties'' or ``options'') that are associated with the main app bundle. \ end diff --git a/src/Doc/JEdit/document/build b/src/Doc/JEdit/document/build deleted file mode 100755 --- a/src/Doc/JEdit/document/build +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/env bash - -set -e - -FORMAT="$1" -VARIANT="$2" - -isabelle logo jEdit -"$ISABELLE_HOME/src/Doc/prepare_document" "$FORMAT" - diff --git a/src/Doc/JEdit/document/root.tex b/src/Doc/JEdit/document/root.tex --- a/src/Doc/JEdit/document/root.tex +++ b/src/Doc/JEdit/document/root.tex @@ -1,89 +1,89 @@ \documentclass[12pt,a4paper]{report} \usepackage[T1]{fontenc} \usepackage{supertabular} \usepackage{rotating} \usepackage{graphicx} \usepackage{iman,extra,isar} \usepackage[nohyphen,strings]{underscore} \usepackage{amssymb} \usepackage{isabelle,isabellesym} \usepackage{railsetup} \usepackage{style} \usepackage{pdfsetup} \hyphenation{Edinburgh} \hyphenation{Isabelle} \hyphenation{Isar} \isadroptag{theory} \isabellestyle{literal} \def\isastylett{\footnotesize\tt} -\title{\includegraphics[scale=0.5]{isabelle_jedit} \\[4ex] Isabelle/jEdit} +\title{\includegraphics[scale=0.5]{isabelle_logo} \\[4ex] Isabelle/jEdit} \author{\emph{Makarius Wenzel}} \makeindex \begin{document} \maketitle \begin{abstract} Isabelle/jEdit is a fully-featured Prover IDE, based on Isabelle/Scala and the jEdit text editor. This document provides an overview of general principles and its main IDE functionality. \end{abstract} \vspace*{2.5cm} \begin{quote} {\small\em Isabelle's user interface is no advance over LCF's, which is widely condemned as ``user-unfriendly'': hard to use, bewildering to beginners. Hence the interest in proof editors, where a proof can be constructed and modified rule-by-rule using windows, mouse, and menus. But Edinburgh LCF was invented because real proofs require millions of inferences. Sophisticated tools --- rules, tactics and tacticals, the language ML, the logics themselves --- are hard to learn, yet they are essential. We may demand a mouse, but we need better education and training.} Lawrence C. 
Paulson, ``Isabelle: The Next 700 Theorem Provers'' \end{quote} \vspace*{2.5cm} \subsubsection*{Acknowledgements} Research and implementation of concepts around PIDE and Isabelle/jEdit has started in 2008 and was kindly supported by: \begin{itemize} \item TU M\"unchen \url{https://www.in.tum.de} \item BMBF \url{https://www.bmbf.de} \item Universit\'e Paris-Sud \url{https://www.u-psud.fr} \item Digiteo \url{https://www.digiteo.fr} \item ANR \url{https://www.agence-nationale-recherche.fr} \end{itemize} \pagenumbering{roman} \tableofcontents \listoffigures \clearfirst \input{JEdit.tex} \begingroup \tocentry{\bibname} \bibliographystyle{abbrv} \small\raggedright\frenchspacing \bibliography{manual} \endgroup \tocentry{\indexname} \printindex \end{document} diff --git a/src/Doc/Locales/document/build b/src/Doc/Locales/document/build deleted file mode 100755 --- a/src/Doc/Locales/document/build +++ /dev/null @@ -1,9 +0,0 @@ -#!/usr/bin/env bash - -set -e - -FORMAT="$1" -VARIANT="$2" - -"$ISABELLE_HOME/src/Doc/prepare_document" "$FORMAT" - diff --git a/src/Doc/Logics/document/build b/src/Doc/Logics/document/build deleted file mode 100755 --- a/src/Doc/Logics/document/build +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/env bash - -set -e - -FORMAT="$1" -VARIANT="$2" - -isabelle logo -"$ISABELLE_HOME/src/Doc/prepare_document" "$FORMAT" - diff --git a/src/Doc/Logics/document/root.tex b/src/Doc/Logics/document/root.tex --- a/src/Doc/Logics/document/root.tex +++ b/src/Doc/Logics/document/root.tex @@ -1,55 +1,55 @@ \documentclass[12pt,a4paper]{report} \usepackage{isabelle,isabellesym} \usepackage{graphicx,iman,extra,ttbox,proof,latexsym,pdfsetup} %%%STILL NEEDS MODAL, LCF %%% to index derived rls: ^\([a-zA-Z0-9][a-zA-Z0-9_]*\) \\tdx{\1} %%% to index rulenames: ^ *(\([a-zA-Z0-9][a-zA-Z0-9_]*\), \\tdx{\1} %%% to index constants: \\tt \([a-zA-Z0-9][a-zA-Z0-9_]*\) \\cdx{\1} %%% to deverbify: \\verb|\([^|]*\)| \\ttindex{\1} %% run ../sedindex logics to prepare index file -\title{\includegraphics[scale=0.5]{isabelle} \\[4ex] Isabelle's Logics} +\title{\includegraphics[scale=0.5]{isabelle_logo} \\[4ex] Isabelle's Logics} \author{{\em Lawrence C. Paulson}\\ Computer Laboratory \\ University of Cambridge \\ \texttt{lcp@cl.cam.ac.uk}\\[3ex] With Contributions by Tobias Nipkow and Markus Wenzel% \thanks{Markus Wenzel made numerous improvements. Sara Kalvala contributed Chap.\ts\ref{chap:sequents}. Philippe de Groote wrote the first version of the logic~LK. Tobias Nipkow developed LCF and~Cube. Martin Coen developed~Modal with assistance from Rajeev Gor\'e. The research has been funded by the EPSRC (grants GR/G53279, GR/H40570, GR/K57381, GR/K77051, GR/M75440) and by ESPRIT (projects 3245: Logical Frameworks, and 6453: Types), and by the DFG Schwerpunktprogramm \emph{Deduktion}.} } \newcommand\subcaption[1]{\par {\centering\normalsize\sc#1\par}\bigskip \hrule\bigskip} \newenvironment{constants}{\begin{center}\small\begin{tabular}{rrrr}}{\end{tabular}\end{center}} \newcommand\bs{\char '134 } % A backslash character for \tt font \makeindex \underscoreoff \setcounter{secnumdepth}{2} \setcounter{tocdepth}{2} %% {secnumdepth}{2}??? \pagestyle{headings} \sloppy \binperiod %%%treat . 
like a binary operator \begin{document} \maketitle \pagenumbering{roman} \tableofcontents \clearfirst \input{preface} \input{syntax} \input{HOL} \input{LK} \input{Sequents} %%\input{Modal} \input{CTT} \bibliographystyle{plain} \bibliography{manual} \printindex \end{document} diff --git a/src/Doc/Logics_ZF/document/build b/src/Doc/Logics_ZF/document/build deleted file mode 100755 --- a/src/Doc/Logics_ZF/document/build +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/env bash - -set -e - -FORMAT="$1" -VARIANT="$2" - -isabelle logo ZF -"$ISABELLE_HOME/src/Doc/prepare_document" "$FORMAT" - diff --git a/src/Doc/Logics_ZF/document/logics.sty b/src/Doc/Logics_ZF/document/logics.sty --- a/src/Doc/Logics_ZF/document/logics.sty +++ b/src/Doc/Logics_ZF/document/logics.sty @@ -1,1 +1,179 @@ -% logics.sty : Logics Manuals Page Layout % \typeout{Document Style logics. Released 18 August 2003} \hyphenation{Isa-belle man-u-script man-u-scripts ap-pen-dix mut-u-al-ly} \hyphenation{data-type data-types co-data-type co-data-types } %usage: \iflabelundefined{LABEL}{if not defined}{if defined} \newcommand{\iflabelundefined}[1]{\@ifundefined{r@#1}} %%%INDEXING use isa-index to process the index \newcommand\seealso[2]{\emph{see also} #1} \usepackage{makeidx} %index, putting page numbers of definitions in boldface \def\bold#1{\textbf{#1}} \newcommand\fnote[1]{#1n} \newcommand\indexbold[1]{\index{#1|bold}} % The alternative to \protect\isa in the indexing macros is % \noexpand\noexpand \noexpand\isa % need TWO levels of \noexpand to delay the expansion of \isa: % the \noexpand\noexpand will leave one \noexpand, to be given to the % (still unexpanded) \isa token. See TeX by Topic, page 122. %%%% for indexing constants, symbols, theorems, ... \newcommand\cdx[1]{\isa{#1}\index{#1@\protect\isa{#1} (constant)}} \newcommand\sdx[1]{\isa{#1}\index{#1@\protect\isa{#1} (symbol)}} \newcommand\tdx[1]{\isa{#1}\index{#1@\protect\isa{#1} (theorem)}} \newcommand\tdxbold[1]{\isa{#1}\index{#1@\protect\isa{#1} (theorem)|bold}} \newcommand\cldx[1]{\isa{#1}\index{#1@\protect\isa{#1} (class)}} \newcommand\tydx[1]{\isa{#1}\index{#1@\protect\isa{#1} (type)}} \newcommand\thydx[1]{\isa{#1}\index{#1@\protect\isa{#1} (theory)}} \newcommand\attrdx[1]{\isa{#1}\index{#1@\protect\isa{#1} (attribute)}} \newcommand\cmmdx[1]{\index{#1@\protect\isacommand{#1} (command)}} \newcommand\commdx[1]{\isacommand{#1}\index{#1@\protect\isacommand{#1} (command)}} \newcommand\methdx[1]{\isa{#1}\index{#1@\protect\isa{#1} (method)}} \newcommand\tooldx[1]{\isa{#1}\index{#1@\protect\isa{#1} (tool)}} \newcommand\settdx[1]{\isa{#1}\index{#1@\protect\isa{#1} (setting)}} %set argument in \bf font and index in ROMAN font (for definitions in text!) 
\newcommand\bfindex[1]{{\bf#1}\index{#1|bold}\@} \newcommand\rmindex[1]{{#1}\index{#1}\@} \newcommand\ttindex[1]{\texttt{#1}\index{#1@\texttt{#1}}\@} \newcommand\ttindexbold[1]{\texttt{#1}\index{#1@\texttt{#1}|bold}\@} \newcommand{\indexboldpos}[2]{#1\@} \newcommand{\ttindexboldpos}[2]{\isa{#1}\@} %\newtheorem{theorem}{Theorem}[section] \newtheorem{Exercise}{Exercise}[section] \newenvironment{exercise}{\begin{Exercise}\rm}{\end{Exercise}} \newcommand{\ttlbr}{\texttt{[|}} \newcommand{\ttrbr}{\texttt{|]}} \newcommand{\ttor}{\texttt{|}} \newcommand{\ttall}{\texttt{!}} \newcommand{\ttuniquex}{\texttt{?!}} \newcommand{\ttEXU}{\texttt{EX!}} \newcommand{\ttAnd}{\texttt{!!}} \newcommand{\isasymignore}{} \newcommand{\isasymimp}{\isasymlongrightarrow} \newcommand{\isasymImp}{\isasymLongrightarrow} \newcommand{\isasymFun}{\isasymRightarrow} \newcommand{\isasymuniqex}{\isamath{\exists!\,}} \renewcommand{\S}{Sect.\ts} \renewenvironment{isamarkuptxt}{\begin{isamarkuptext}}{\end{isamarkuptext}} \newif\ifremarks \newcommand{\REMARK}[1]{\ifremarks\marginpar{\raggedright\footnotesize#1}\fi} %names of Isabelle rules \newcommand{\rulename}[1]{\hfill(#1)} \newcommand{\rulenamedx}[1]{\hfill(#1\index{#1@\protect\isa{#1} (theorem)|bold})} %%%% meta-logical connectives \let\Forall=\bigwedge \let\Imp=\Longrightarrow \let\To=\Rightarrow \newcommand{\Var}[1]{{?\!#1}} %%% underscores as ordinary characters, not for subscripting %% use @ or \sb for subscripting; use \at for @ %% only works in \tt font %% must not make _ an active char; would make \ttindex fail! \gdef\underscoreoff{\catcode`\@=8\catcode`\_=\other} \gdef\underscoreon{\catcode`\_=8\makeatother} \chardef\other=12 \chardef\at=`\@ % alternative underscore \def\_{\leavevmode\kern.06em\vbox{\hrule height.2ex width.3em}\hskip0.1em} %%%% ``WARNING'' environment \def\dbend{\vtop to 0pt{\vss\hbox{\Huge\bf!}\vss}} \newenvironment{warn}{\medskip\medbreak\begingroup \clubpenalty=10000 \small %%WAS\baselineskip=0.9\baselineskip \noindent \hangindent\parindent \hangafter=-2 \hbox to0pt{\hskip-\hangindent\dbend\hfill}\ignorespaces}% {\par\endgroup\medbreak} %%%% Standard logical symbols \let\turn=\vdash \let\conj=\wedge \let\disj=\vee \let\imp=\rightarrow \let\bimp=\leftrightarrow \newcommand\all[1]{\forall#1.} %quantification \newcommand\ex[1]{\exists#1.} \newcommand{\pair}[1]{\langle#1\rangle} \newcommand{\lparr}{\mathopen{(\!|}} \newcommand{\rparr}{\mathclose{|\!)}} \newcommand{\fs}{\mathpunct{,\,}} \newcommand{\ty}{\mathrel{::}} \newcommand{\asn}{\mathrel{:=}} \newcommand{\more}{\ldots} \newcommand{\record}[1]{\lparr #1 \rparr} \newcommand{\dtt}{\mathord.} \newcommand\lbrakk{\mathopen{[\![}} \newcommand\rbrakk{\mathclose{]\!]}} \newcommand\List[1]{\lbrakk#1\rbrakk} %was \obj \newcommand\vpile[1]{\begin{array}{c}#1\end{array}} \newenvironment{matharray}[1]{\[\begin{array}{#1}}{\end{array}\]} \newcommand{\Text}[1]{\mbox{#1}} \DeclareMathSymbol{\dshsym}{\mathalpha}{letters}{"2D} \newcommand{\dsh}{\mathit{\dshsym}} \let\int=\cap \let\un=\cup \let\inter=\bigcap \let\union=\bigcup \def\ML{{\sc ml}} \def\AST{{\sc ast}} %macros to change the treatment of symbols \def\relsemicolon{\mathcode`\;="303B} %treat ; like a relation \def\binperiod{\mathcode`\.="213A} %treat . 
like a binary operator \def\binvert{\mathcode`\|="226A} %treat | like a binary operator %redefinition of \sloppy and \fussy to use \emergencystretch \def\sloppy{\tolerance2000 \hfuzz.5pt \vfuzz.5pt \emergencystretch=15pt} \def\fussy{\tolerance200 \hfuzz.1pt \vfuzz.1pt \emergencystretch=0pt} %non-bf version of description \def\descrlabel#1{\hspace\labelsep #1} \def\descr{\list{}{\labelwidth\z@ \itemindent-\leftmargin\let\makelabel\descrlabel}} \let\enddescr\endlist % The mathcodes for the letters A, ..., Z, a, ..., z are changed to % generate text italic rather than math italic by default. This makes % multi-letter identifiers look better. The mathcode for character c % is set to |"7000| (variable family) + |"400| (text italic) + |c|. % \DeclareSymbolFont{italics}{\encodingdefault}{\rmdefault}{m}{it}% \def\@setmcodes#1#2#3{{\count0=#1 \count1=#3 \loop \global\mathcode\count0=\count1 \ifnum \count0<#2 \advance\count0 by1 \advance\count1 by1 \repeat}} \@setmcodes{`A}{`Z}{"7\hexnumber@\symitalics41} \@setmcodes{`a}{`z}{"7\hexnumber@\symitalics61} %%% \dquotes permits usage of "..." for \hbox{...} %%% also taken from under.sty {\catcode`\"=\active \gdef\dquotes{\catcode`\"=\active \let"=\@mathText}% \gdef\@mathText#1"{\hbox{\mathTextFont #1\/}}} \def\mathTextFont{\frenchspacing\tt} \def\dquotesoff{\catcode`\"=\other} \ No newline at end of file +% logics.sty : Logics Manuals Page Layout +% +\typeout{Document Style logics. Released 18 August 2003} + +\hyphenation{Isa-belle man-u-script man-u-scripts ap-pen-dix mut-u-al-ly} +\hyphenation{data-type data-types co-data-type co-data-types } + +%usage: \iflabelundefined{LABEL}{if not defined}{if defined} +\newcommand{\iflabelundefined}[1]{\@ifundefined{r@#1}} + + +%%%INDEXING use isa-index to process the index + +\newcommand\seealso[2]{\emph{see also} #1} +\usepackage{makeidx} + +%index, putting page numbers of definitions in boldface +\def\bold#1{\textbf{#1}} +\newcommand\fnote[1]{#1n} +\newcommand\indexbold[1]{\index{#1|bold}} + +% The alternative to \protect\isa in the indexing macros is +% \noexpand\noexpand \noexpand\isa +% need TWO levels of \noexpand to delay the expansion of \isa: +% the \noexpand\noexpand will leave one \noexpand, to be given to the +% (still unexpanded) \isa token. See TeX by Topic, page 122. + +%%%% for indexing constants, symbols, theorems, ... +\newcommand\cdx[1]{\isa{#1}\index{#1@\protect\isa{#1} (constant)}} +\newcommand\sdx[1]{\isa{#1}\index{#1@\protect\isa{#1} (symbol)}} + +\newcommand\tdx[1]{\isa{#1}\index{#1@\protect\isa{#1} (theorem)}} +\newcommand\tdxbold[1]{\isa{#1}\index{#1@\protect\isa{#1} (theorem)|bold}} + +\newcommand\cldx[1]{\isa{#1}\index{#1@\protect\isa{#1} (class)}} +\newcommand\tydx[1]{\isa{#1}\index{#1@\protect\isa{#1} (type)}} +\newcommand\thydx[1]{\isa{#1}\index{#1@\protect\isa{#1} (theory)}} + +\newcommand\attrdx[1]{\isa{#1}\index{#1@\protect\isa{#1} (attribute)}} +\newcommand\cmmdx[1]{\index{#1@\protect\isacommand{#1} (command)}} +\newcommand\commdx[1]{\isacommand{#1}\index{#1@\protect\isacommand{#1} (command)}} +\newcommand\methdx[1]{\isa{#1}\index{#1@\protect\isa{#1} (method)}} + +%set argument in \bf font and index in ROMAN font (for definitions in text!) 
+\newcommand\bfindex[1]{{\bf#1}\index{#1|bold}\@} + +\newcommand\rmindex[1]{{#1}\index{#1}\@} +\newcommand\ttindex[1]{\texttt{#1}\index{#1@\texttt{#1}}\@} +\newcommand\ttindexbold[1]{\texttt{#1}\index{#1@\texttt{#1}|bold}\@} + +\newcommand{\indexboldpos}[2]{#1\@} +\newcommand{\ttindexboldpos}[2]{\isa{#1}\@} + +%\newtheorem{theorem}{Theorem}[section] +\newtheorem{Exercise}{Exercise}[section] +\newenvironment{exercise}{\begin{Exercise}\rm}{\end{Exercise}} +\newcommand{\ttlbr}{\texttt{[|}} +\newcommand{\ttrbr}{\texttt{|]}} +\newcommand{\ttor}{\texttt{|}} +\newcommand{\ttall}{\texttt{!}} +\newcommand{\ttuniquex}{\texttt{?!}} +\newcommand{\ttEXU}{\texttt{EX!}} +\newcommand{\ttAnd}{\texttt{!!}} + +\newcommand{\isasymignore}{} +\newcommand{\isasymimp}{\isasymlongrightarrow} +\newcommand{\isasymImp}{\isasymLongrightarrow} +\newcommand{\isasymFun}{\isasymRightarrow} +\newcommand{\isasymuniqex}{\isamath{\exists!\,}} +\renewcommand{\S}{Sect.\ts} + +\renewenvironment{isamarkuptxt}{\begin{isamarkuptext}}{\end{isamarkuptext}} + +\newif\ifremarks +\newcommand{\REMARK}[1]{\ifremarks\marginpar{\raggedright\footnotesize#1}\fi} + +%names of Isabelle rules +\newcommand{\rulename}[1]{\hfill(#1)} +\newcommand{\rulenamedx}[1]{\hfill(#1\index{#1@\protect\isa{#1} (theorem)|bold})} + +%%%% meta-logical connectives + +\let\Forall=\bigwedge +\let\Imp=\Longrightarrow +\let\To=\Rightarrow +\newcommand{\Var}[1]{{?\!#1}} + +%%% underscores as ordinary characters, not for subscripting +%% use @ or \sb for subscripting; use \at for @ +%% only works in \tt font +%% must not make _ an active char; would make \ttindex fail! +\gdef\underscoreoff{\catcode`\@=8\catcode`\_=\other} +\gdef\underscoreon{\catcode`\_=8\makeatother} +\chardef\other=12 +\chardef\at=`\@ + +% alternative underscore +\def\_{\leavevmode\kern.06em\vbox{\hrule height.2ex width.3em}\hskip0.1em} + + +%%%% ``WARNING'' environment +\def\dbend{\vtop to 0pt{\vss\hbox{\Huge\bf!}\vss}} +\newenvironment{warn}{\medskip\medbreak\begingroup \clubpenalty=10000 + \small %%WAS\baselineskip=0.9\baselineskip + \noindent \hangindent\parindent \hangafter=-2 + \hbox to0pt{\hskip-\hangindent\dbend\hfill}\ignorespaces}% + {\par\endgroup\medbreak} + + +%%%% Standard logical symbols +\let\turn=\vdash +\let\conj=\wedge +\let\disj=\vee +\let\imp=\rightarrow +\let\bimp=\leftrightarrow +\newcommand\all[1]{\forall#1.} %quantification +\newcommand\ex[1]{\exists#1.} +\newcommand{\pair}[1]{\langle#1\rangle} + +\newcommand{\lparr}{\mathopen{(\!|}} +\newcommand{\rparr}{\mathclose{|\!)}} +\newcommand{\fs}{\mathpunct{,\,}} +\newcommand{\ty}{\mathrel{::}} +\newcommand{\asn}{\mathrel{:=}} +\newcommand{\more}{\ldots} +\newcommand{\record}[1]{\lparr #1 \rparr} +\newcommand{\dtt}{\mathord.} + +\newcommand\lbrakk{\mathopen{[\![}} +\newcommand\rbrakk{\mathclose{]\!]}} +\newcommand\List[1]{\lbrakk#1\rbrakk} %was \obj +\newcommand\vpile[1]{\begin{array}{c}#1\end{array}} +\newenvironment{matharray}[1]{\[\begin{array}{#1}}{\end{array}\]} +\newcommand{\Text}[1]{\mbox{#1}} + +\DeclareMathSymbol{\dshsym}{\mathalpha}{letters}{"2D} +\newcommand{\dsh}{\mathit{\dshsym}} + +\let\int=\cap +\let\un=\cup +\let\inter=\bigcap +\let\union=\bigcup + +\def\ML{{\sc ml}} +\def\AST{{\sc ast}} + +%macros to change the treatment of symbols +\def\relsemicolon{\mathcode`\;="303B} %treat ; like a relation +\def\binperiod{\mathcode`\.="213A} %treat . 
like a binary operator +\def\binvert{\mathcode`\|="226A} %treat | like a binary operator + +%redefinition of \sloppy and \fussy to use \emergencystretch +\def\sloppy{\tolerance2000 \hfuzz.5pt \vfuzz.5pt \emergencystretch=15pt} +\def\fussy{\tolerance200 \hfuzz.1pt \vfuzz.1pt \emergencystretch=0pt} + +%non-bf version of description +\def\descrlabel#1{\hspace\labelsep #1} +\def\descr{\list{}{\labelwidth\z@ \itemindent-\leftmargin\let\makelabel\descrlabel}} +\let\enddescr\endlist + +% The mathcodes for the letters A, ..., Z, a, ..., z are changed to +% generate text italic rather than math italic by default. This makes +% multi-letter identifiers look better. The mathcode for character c +% is set to |"7000| (variable family) + |"400| (text italic) + |c|. +% +\DeclareSymbolFont{italics}{\encodingdefault}{\rmdefault}{m}{it}% +\def\@setmcodes#1#2#3{{\count0=#1 \count1=#3 + \loop \global\mathcode\count0=\count1 \ifnum \count0<#2 + \advance\count0 by1 \advance\count1 by1 \repeat}} +\@setmcodes{`A}{`Z}{"7\hexnumber@\symitalics41} +\@setmcodes{`a}{`z}{"7\hexnumber@\symitalics61} + +%%% \dquotes permits usage of "..." for \hbox{...} +%%% also taken from under.sty +{\catcode`\"=\active +\gdef\dquotes{\catcode`\"=\active \let"=\@mathText}% +\gdef\@mathText#1"{\hbox{\mathTextFont #1\/}}} +\def\mathTextFont{\frenchspacing\tt} +\def\dquotesoff{\catcode`\"=\other} diff --git a/src/Doc/Logics_ZF/document/root.tex b/src/Doc/Logics_ZF/document/root.tex --- a/src/Doc/Logics_ZF/document/root.tex +++ b/src/Doc/Logics_ZF/document/root.tex @@ -1,91 +1,91 @@ \documentclass[11pt,a4paper]{report} \usepackage{isabelle,isabellesym,railsetup} \usepackage{graphicx,logics,ttbox,proof,latexsym} \usepackage{isar} \usepackage{pdfsetup} %last package! \remarkstrue %%% to index derived rls: ^\([a-zA-Z0-9][a-zA-Z0-9_]*\) \\tdx{\1} %%% to index rulenames: ^ *(\([a-zA-Z0-9][a-zA-Z0-9_]*\), \\tdx{\1} %%% to index constants: \\tt \([a-zA-Z0-9][a-zA-Z0-9_]*\) \\cdx{\1} %%% to deverbify: \\verb|\([^|]*\)| \\ttindex{\1} -\title{\includegraphics[scale=0.5]{isabelle_zf} \\[4ex] +\title{\includegraphics[scale=0.5]{isabelle_logo} \\[4ex] Isabelle's Logics: FOL and ZF} \author{{\em Lawrence C. Paulson}\\ Computer Laboratory \\ University of Cambridge \\ \texttt{lcp@cl.cam.ac.uk}\\[3ex] With Contributions by Tobias Nipkow and Markus Wenzel} \newcommand\subcaption[1]{\par {\centering\normalsize\sc#1\par}\bigskip \hrule\bigskip} \newenvironment{constants}{\begin{center}\small\begin{tabular}{rrrr}}{\end{tabular}\end{center}} \let\ts=\thinspace \makeindex \underscoreoff \setcounter{secnumdepth}{2} \setcounter{tocdepth}{2} %% {secnumdepth}{2}??? \pagestyle{headings} \sloppy \binperiod %%%treat . like a binary operator \isadroptag{theory} \railtermfont{\isabellestyle{tt}} \railnontermfont{\isabellestyle{literal}} \railnamefont{\isabellestyle{literal}} \begin{document} \maketitle \begin{abstract} This manual describes Isabelle's formalizations of many-sorted first-order logic (\texttt{FOL}) and Zermelo-Fraenkel set theory (\texttt{ZF}). See the \emph{Reference Manual} for general Isabelle commands, and \emph{Introduction to Isabelle} for an overall tutorial. This manual is part of the earlier Isabelle documentation, which is somewhat superseded by the Isabelle/HOL \emph{Tutorial}~\cite{isa-tutorial}. However, the present document is the only available documentation for Isabelle's versions of first-order logic and set theory. Much of it is concerned with the primitives for conducting proofs using the ML top level. 
It has been rewritten to use the Isar proof language, but evidence of the old \ML{} orientation remains. \end{abstract} \subsubsection*{Acknowledgements} Markus Wenzel made numerous improvements. Philippe de Groote contributed to~ZF. Philippe No\"el and Martin Coen made many contributions to~ZF. The research has been funded by the EPSRC (grants GR/G53279, GR/H40570, GR/K57381, GR/K77051, GR/M75440) and by ESPRIT (projects 3245: Logical Frameworks, and 6453: Types) and by the DFG Schwerpunktprogramm \emph{Deduktion}. \pagenumbering{roman} \tableofcontents \cleardoublepage \pagenumbering{arabic} \setcounter{page}{1} \input{syntax} \input{FOL} \input{ZF} \isabellestyle{literal} \input{ZF_Isar} \isabellestyle{tt} \bibliographystyle{plain} \bibliography{manual} \printindex \end{document} diff --git a/src/Doc/Main/Main_Doc.thy b/src/Doc/Main/Main_Doc.thy --- a/src/Doc/Main/Main_Doc.thy +++ b/src/Doc/Main/Main_Doc.thy @@ -1,650 +1,650 @@ (*<*) theory Main_Doc imports Main begin setup \ - Thy_Output.antiquotation_pretty_source \<^binding>\term_type_only\ (Args.term -- Args.typ_abbrev) + Document_Output.antiquotation_pretty_source \<^binding>\term_type_only\ (Args.term -- Args.typ_abbrev) (fn ctxt => fn (t, T) => (if fastype_of t = Sign.certify_typ (Proof_Context.theory_of ctxt) T then () else error "term_type_only: type mismatch"; Syntax.pretty_typ ctxt T)) \ setup \ - Thy_Output.antiquotation_pretty_source \<^binding>\expanded_typ\ Args.typ + Document_Output.antiquotation_pretty_source \<^binding>\expanded_typ\ Args.typ Syntax.pretty_typ \ (*>*) text\ \begin{abstract} This document lists the main types, functions and syntax provided by theory \<^theory>\Main\. It is meant as a quick overview of what is available. For infix operators and their precedences see the final section. The sophisticated class structure is only hinted at. For details see \<^url>\https://isabelle.in.tum.de/library/HOL\. \end{abstract} \section*{HOL} The basic logic: \<^prop>\x = y\, \<^const>\True\, \<^const>\False\, \<^prop>\\ P\, \<^prop>\P \ Q\, \<^prop>\P \ Q\, \<^prop>\P \ Q\, \<^prop>\\x. P\, \<^prop>\\x. P\, \<^prop>\\! x. P\, \<^term>\THE x. P\. \<^smallskip> \begin{tabular}{@ {} l @ {~::~} l @ {}} \<^const>\HOL.undefined\ & \<^typeof>\HOL.undefined\\\ \<^const>\HOL.default\ & \<^typeof>\HOL.default\\\ \end{tabular} \subsubsection*{Syntax} \begin{supertabular}{@ {} l @ {\quad$\equiv$\quad} l l @ {}} \<^term>\\ (x = y)\ & @{term[source]"\ (x = y)"} & (\<^verbatim>\~=\)\\ @{term[source]"P \ Q"} & \<^term>\P \ Q\ \\ \<^term>\If x y z\ & @{term[source]"If x y z"}\\ \<^term>\Let e\<^sub>1 (\x. e\<^sub>2)\ & @{term[source]"Let e\<^sub>1 (\x. e\<^sub>2)"}\\ \end{supertabular} \section*{Orderings} A collection of classes defining basic orderings: preorder, partial order, linear order, dense linear order and wellorder. 
\<^smallskip> \begin{supertabular}{@ {} l @ {~::~} l l @ {}} \<^const>\Orderings.less_eq\ & \<^typeof>\Orderings.less_eq\ & (\<^verbatim>\<=\)\\ \<^const>\Orderings.less\ & \<^typeof>\Orderings.less\\\ \<^const>\Orderings.Least\ & \<^typeof>\Orderings.Least\\\ \<^const>\Orderings.Greatest\ & \<^typeof>\Orderings.Greatest\\\ \<^const>\Orderings.min\ & \<^typeof>\Orderings.min\\\ \<^const>\Orderings.max\ & \<^typeof>\Orderings.max\\\ @{const[source] top} & \<^typeof>\Orderings.top\\\ @{const[source] bot} & \<^typeof>\Orderings.bot\\\ \<^const>\Orderings.mono\ & \<^typeof>\Orderings.mono\\\ \<^const>\Orderings.strict_mono\ & \<^typeof>\Orderings.strict_mono\\\ \end{supertabular} \subsubsection*{Syntax} \begin{supertabular}{@ {} l @ {\quad$\equiv$\quad} l l @ {}} @{term[source]"x \ y"} & \<^term>\x \ y\ & (\<^verbatim>\>=\)\\ @{term[source]"x > y"} & \<^term>\x > y\\\ \<^term>\\x\y. P\ & @{term[source]"\x. x \ y \ P"}\\ \<^term>\\x\y. P\ & @{term[source]"\x. x \ y \ P"}\\ \multicolumn{2}{@ {}l@ {}}{Similarly for $<$, $\ge$ and $>$}\\ \<^term>\LEAST x. P\ & @{term[source]"Least (\x. P)"}\\ \<^term>\GREATEST x. P\ & @{term[source]"Greatest (\x. P)"}\\ \end{supertabular} \section*{Lattices} Classes semilattice, lattice, distributive lattice and complete lattice (the latter in theory \<^theory>\HOL.Set\). \begin{tabular}{@ {} l @ {~::~} l @ {}} \<^const>\Lattices.inf\ & \<^typeof>\Lattices.inf\\\ \<^const>\Lattices.sup\ & \<^typeof>\Lattices.sup\\\ \<^const>\Complete_Lattices.Inf\ & @{term_type_only Complete_Lattices.Inf "'a set \ 'a::Inf"}\\ \<^const>\Complete_Lattices.Sup\ & @{term_type_only Complete_Lattices.Sup "'a set \ 'a::Sup"}\\ \end{tabular} \subsubsection*{Syntax} Available by loading theory \Lattice_Syntax\ in directory \Library\. \begin{supertabular}{@ {} l @ {\quad$\equiv$\quad} l @ {}} @{text[source]"x \ y"} & \<^term>\x \ y\\\ @{text[source]"x \ y"} & \<^term>\x < y\\\ @{text[source]"x \ y"} & \<^term>\inf x y\\\ @{text[source]"x \ y"} & \<^term>\sup x y\\\ @{text[source]"\A"} & \<^term>\Inf A\\\ @{text[source]"\A"} & \<^term>\Sup A\\\ @{text[source]"\"} & @{term[source] top}\\ @{text[source]"\"} & @{term[source] bot}\\ \end{supertabular} \section*{Set} \begin{supertabular}{@ {} l @ {~::~} l l @ {}} \<^const>\Set.empty\ & @{term_type_only "Set.empty" "'a set"}\\ \<^const>\Set.insert\ & @{term_type_only insert "'a\'a set\'a set"}\\ \<^const>\Collect\ & @{term_type_only Collect "('a\bool)\'a set"}\\ \<^const>\Set.member\ & @{term_type_only Set.member "'a\'a set\bool"} & (\<^verbatim>\:\)\\ \<^const>\Set.union\ & @{term_type_only Set.union "'a set\'a set \ 'a set"} & (\<^verbatim>\Un\)\\ \<^const>\Set.inter\ & @{term_type_only Set.inter "'a set\'a set \ 'a set"} & (\<^verbatim>\Int\)\\ \<^const>\Union\ & @{term_type_only Union "'a set set\'a set"}\\ \<^const>\Inter\ & @{term_type_only Inter "'a set set\'a set"}\\ \<^const>\Pow\ & @{term_type_only Pow "'a set \'a set set"}\\ \<^const>\UNIV\ & @{term_type_only UNIV "'a set"}\\ \<^const>\image\ & @{term_type_only image "('a\'b)\'a set\'b set"}\\ \<^const>\Ball\ & @{term_type_only Ball "'a set\('a\bool)\bool"}\\ \<^const>\Bex\ & @{term_type_only Bex "'a set\('a\bool)\bool"}\\ \end{supertabular} \subsubsection*{Syntax} \begin{supertabular}{@ {} l @ {\quad$\equiv$\quad} l l @ {}} \{a\<^sub>1,\,a\<^sub>n}\ & \insert a\<^sub>1 (\ (insert a\<^sub>n {})\)\\\ \<^term>\a \ A\ & @{term[source]"\(x \ A)"}\\ \<^term>\A \ B\ & @{term[source]"A \ B"}\\ \<^term>\A \ B\ & @{term[source]"A < B"}\\ @{term[source]"A \ B"} & @{term[source]"B \ A"}\\ 
@{term[source]"A \ B"} & @{term[source]"B < A"}\\ \<^term>\{x. P}\ & @{term[source]"Collect (\x. P)"}\\ \{t | x\<^sub>1 \ x\<^sub>n. P}\ & \{v. \x\<^sub>1 \ x\<^sub>n. v = t \ P}\\\ @{term[source]"\x\I. A"} & @{term[source]"\((\x. A) ` I)"} & (\texttt{UN})\\ @{term[source]"\x. A"} & @{term[source]"\((\x. A) ` UNIV)"}\\ @{term[source]"\x\I. A"} & @{term[source]"\((\x. A) ` I)"} & (\texttt{INT})\\ @{term[source]"\x. A"} & @{term[source]"\((\x. A) ` UNIV)"}\\ \<^term>\\x\A. P\ & @{term[source]"Ball A (\x. P)"}\\ \<^term>\\x\A. P\ & @{term[source]"Bex A (\x. P)"}\\ \<^term>\range f\ & @{term[source]"f ` UNIV"}\\ \end{supertabular} \section*{Fun} \begin{supertabular}{@ {} l @ {~::~} l l @ {}} \<^const>\Fun.id\ & \<^typeof>\Fun.id\\\ \<^const>\Fun.comp\ & \<^typeof>\Fun.comp\ & (\texttt{o})\\ \<^const>\Fun.inj_on\ & @{term_type_only Fun.inj_on "('a\'b)\'a set\bool"}\\ \<^const>\Fun.inj\ & \<^typeof>\Fun.inj\\\ \<^const>\Fun.surj\ & \<^typeof>\Fun.surj\\\ \<^const>\Fun.bij\ & \<^typeof>\Fun.bij\\\ \<^const>\Fun.bij_betw\ & @{term_type_only Fun.bij_betw "('a\'b)\'a set\'b set\bool"}\\ \<^const>\Fun.fun_upd\ & \<^typeof>\Fun.fun_upd\\\ \end{supertabular} \subsubsection*{Syntax} \begin{tabular}{@ {} l @ {\quad$\equiv$\quad} l @ {}} \<^term>\fun_upd f x y\ & @{term[source]"fun_upd f x y"}\\ \f(x\<^sub>1:=y\<^sub>1,\,x\<^sub>n:=y\<^sub>n)\ & \f(x\<^sub>1:=y\<^sub>1)\(x\<^sub>n:=y\<^sub>n)\\\ \end{tabular} \section*{Hilbert\_Choice} Hilbert's selection ($\varepsilon$) operator: \<^term>\SOME x. P\. \<^smallskip> \begin{tabular}{@ {} l @ {~::~} l @ {}} \<^const>\Hilbert_Choice.inv_into\ & @{term_type_only Hilbert_Choice.inv_into "'a set \ ('a \ 'b) \ ('b \ 'a)"} \end{tabular} \subsubsection*{Syntax} \begin{tabular}{@ {} l @ {\quad$\equiv$\quad} l @ {}} \<^term>\inv\ & @{term[source]"inv_into UNIV"} \end{tabular} \section*{Fixed Points} Theory: \<^theory>\HOL.Inductive\. Least and greatest fixed points in a complete lattice \<^typ>\'a\: \begin{tabular}{@ {} l @ {~::~} l @ {}} \<^const>\Inductive.lfp\ & \<^typeof>\Inductive.lfp\\\ \<^const>\Inductive.gfp\ & \<^typeof>\Inductive.gfp\\\ \end{tabular} Note that in particular sets (\<^typ>\'a \ bool\) are complete lattices. \section*{Sum\_Type} Type constructor \+\. \begin{tabular}{@ {} l @ {~::~} l @ {}} \<^const>\Sum_Type.Inl\ & \<^typeof>\Sum_Type.Inl\\\ \<^const>\Sum_Type.Inr\ & \<^typeof>\Sum_Type.Inr\\\ \<^const>\Sum_Type.Plus\ & @{term_type_only Sum_Type.Plus "'a set\'b set\('a+'b)set"} \end{tabular} \section*{Product\_Type} Types \<^typ>\unit\ and \\\. \begin{supertabular}{@ {} l @ {~::~} l @ {}} \<^const>\Product_Type.Unity\ & \<^typeof>\Product_Type.Unity\\\ \<^const>\Pair\ & \<^typeof>\Pair\\\ \<^const>\fst\ & \<^typeof>\fst\\\ \<^const>\snd\ & \<^typeof>\snd\\\ \<^const>\case_prod\ & \<^typeof>\case_prod\\\ \<^const>\curry\ & \<^typeof>\curry\\\ \<^const>\Product_Type.Sigma\ & @{term_type_only Product_Type.Sigma "'a set\('a\'b set)\('a*'b)set"}\\ \end{supertabular} \subsubsection*{Syntax} \begin{tabular}{@ {} l @ {\quad$\equiv$\quad} ll @ {}} \<^term>\Pair a b\ & @{term[source]"Pair a b"}\\ \<^term>\case_prod (\x y. t)\ & @{term[source]"case_prod (\x y. t)"}\\ \<^term>\A \ B\ & \Sigma A (\\<^latex>\\_\. B)\ \end{tabular} Pairs may be nested. Nesting to the right is printed as a tuple, e.g.\ \mbox{\<^term>\(a,b,c)\} is really \mbox{\(a, (b, c))\.} Pattern matching with pairs and tuples extends to all binders, e.g.\ \mbox{\<^prop>\\(x,y)\A. P\,} \<^term>\{(x,y). P}\, etc. 
\section*{Relation} \begin{tabular}{@ {} l @ {~::~} l @ {}} \<^const>\Relation.converse\ & @{term_type_only Relation.converse "('a * 'b)set \ ('b*'a)set"}\\ \<^const>\Relation.relcomp\ & @{term_type_only Relation.relcomp "('a*'b)set\('b*'c)set\('a*'c)set"}\\ \<^const>\Relation.Image\ & @{term_type_only Relation.Image "('a*'b)set\'a set\'b set"}\\ \<^const>\Relation.inv_image\ & @{term_type_only Relation.inv_image "('a*'a)set\('b\'a)\('b*'b)set"}\\ \<^const>\Relation.Id_on\ & @{term_type_only Relation.Id_on "'a set\('a*'a)set"}\\ \<^const>\Relation.Id\ & @{term_type_only Relation.Id "('a*'a)set"}\\ \<^const>\Relation.Domain\ & @{term_type_only Relation.Domain "('a*'b)set\'a set"}\\ \<^const>\Relation.Range\ & @{term_type_only Relation.Range "('a*'b)set\'b set"}\\ \<^const>\Relation.Field\ & @{term_type_only Relation.Field "('a*'a)set\'a set"}\\ \<^const>\Relation.refl_on\ & @{term_type_only Relation.refl_on "'a set\('a*'a)set\bool"}\\ \<^const>\Relation.refl\ & @{term_type_only Relation.refl "('a*'a)set\bool"}\\ \<^const>\Relation.sym\ & @{term_type_only Relation.sym "('a*'a)set\bool"}\\ \<^const>\Relation.antisym\ & @{term_type_only Relation.antisym "('a*'a)set\bool"}\\ \<^const>\Relation.trans\ & @{term_type_only Relation.trans "('a*'a)set\bool"}\\ \<^const>\Relation.irrefl\ & @{term_type_only Relation.irrefl "('a*'a)set\bool"}\\ \<^const>\Relation.total_on\ & @{term_type_only Relation.total_on "'a set\('a*'a)set\bool"}\\ \<^const>\Relation.total\ & @{term_type_only Relation.total "('a*'a)set\bool"}\\ \end{tabular} \subsubsection*{Syntax} \begin{tabular}{@ {} l @ {\quad$\equiv$\quad} l l @ {}} \<^term>\converse r\ & @{term[source]"converse r"} & (\<^verbatim>\^-1\) \end{tabular} \<^medskip> \noindent Type synonym \ \<^typ>\'a rel\ \=\ @{expanded_typ "'a rel"} \section*{Equiv\_Relations} \begin{supertabular}{@ {} l @ {~::~} l @ {}} \<^const>\Equiv_Relations.equiv\ & @{term_type_only Equiv_Relations.equiv "'a set \ ('a*'a)set\bool"}\\ \<^const>\Equiv_Relations.quotient\ & @{term_type_only Equiv_Relations.quotient "'a set \ ('a \ 'a) set \ 'a set set"}\\ \<^const>\Equiv_Relations.congruent\ & @{term_type_only Equiv_Relations.congruent "('a*'a)set\('a\'b)\bool"}\\ \<^const>\Equiv_Relations.congruent2\ & @{term_type_only Equiv_Relations.congruent2 "('a*'a)set\('b*'b)set\('a\'b\'c)\bool"}\\ %@ {const Equiv_Relations.} & @ {term_type_only Equiv_Relations. 
""}\\ \end{supertabular} \subsubsection*{Syntax} \begin{tabular}{@ {} l @ {\quad$\equiv$\quad} l @ {}} \<^term>\congruent r f\ & @{term[source]"congruent r f"}\\ \<^term>\congruent2 r r f\ & @{term[source]"congruent2 r r f"}\\ \end{tabular} \section*{Transitive\_Closure} \begin{tabular}{@ {} l @ {~::~} l @ {}} \<^const>\Transitive_Closure.rtrancl\ & @{term_type_only Transitive_Closure.rtrancl "('a*'a)set\('a*'a)set"}\\ \<^const>\Transitive_Closure.trancl\ & @{term_type_only Transitive_Closure.trancl "('a*'a)set\('a*'a)set"}\\ \<^const>\Transitive_Closure.reflcl\ & @{term_type_only Transitive_Closure.reflcl "('a*'a)set\('a*'a)set"}\\ \<^const>\Transitive_Closure.acyclic\ & @{term_type_only Transitive_Closure.acyclic "('a*'a)set\bool"}\\ \<^const>\compower\ & @{term_type_only "(^^) :: ('a*'a)set\nat\('a*'a)set" "('a*'a)set\nat\('a*'a)set"}\\ \end{tabular} \subsubsection*{Syntax} \begin{tabular}{@ {} l @ {\quad$\equiv$\quad} l l @ {}} \<^term>\rtrancl r\ & @{term[source]"rtrancl r"} & (\<^verbatim>\^*\)\\ \<^term>\trancl r\ & @{term[source]"trancl r"} & (\<^verbatim>\^+\)\\ \<^term>\reflcl r\ & @{term[source]"reflcl r"} & (\<^verbatim>\^=\) \end{tabular} \section*{Algebra} Theories \<^theory>\HOL.Groups\, \<^theory>\HOL.Rings\, \<^theory>\HOL.Fields\ and \<^theory>\HOL.Divides\ define a large collection of classes describing common algebraic structures from semigroups up to fields. Everything is done in terms of overloaded operators: \begin{supertabular}{@ {} l @ {~::~} l l @ {}} \0\ & \<^typeof>\zero\\\ \1\ & \<^typeof>\one\\\ \<^const>\plus\ & \<^typeof>\plus\\\ \<^const>\minus\ & \<^typeof>\minus\\\ \<^const>\uminus\ & \<^typeof>\uminus\ & (\<^verbatim>\-\)\\ \<^const>\times\ & \<^typeof>\times\\\ \<^const>\inverse\ & \<^typeof>\inverse\\\ \<^const>\divide\ & \<^typeof>\divide\\\ \<^const>\abs\ & \<^typeof>\abs\\\ \<^const>\sgn\ & \<^typeof>\sgn\\\ \<^const>\Rings.dvd\ & \<^typeof>\Rings.dvd\\\ \<^const>\divide\ & \<^typeof>\divide\\\ \<^const>\modulo\ & \<^typeof>\modulo\\\ \end{supertabular} \subsubsection*{Syntax} \begin{tabular}{@ {} l @ {\quad$\equiv$\quad} l @ {}} \<^term>\\x\\ & @{term[source] "abs x"} \end{tabular} \section*{Nat} \<^datatype>\nat\ \<^bigskip> \begin{tabular}{@ {} lllllll @ {}} \<^term>\(+) :: nat \ nat \ nat\ & \<^term>\(-) :: nat \ nat \ nat\ & \<^term>\(*) :: nat \ nat \ nat\ & \<^term>\(^) :: nat \ nat \ nat\ & \<^term>\(div) :: nat \ nat \ nat\& \<^term>\(mod) :: nat \ nat \ nat\& \<^term>\(dvd) :: nat \ nat \ bool\\\ \<^term>\(\) :: nat \ nat \ bool\ & \<^term>\(<) :: nat \ nat \ bool\ & \<^term>\min :: nat \ nat \ nat\ & \<^term>\max :: nat \ nat \ nat\ & \<^term>\Min :: nat set \ nat\ & \<^term>\Max :: nat set \ nat\\\ \end{tabular} \begin{tabular}{@ {} l @ {~::~} l @ {}} \<^const>\Nat.of_nat\ & \<^typeof>\Nat.of_nat\\\ \<^term>\(^^) :: ('a \ 'a) \ nat \ 'a \ 'a\ & @{term_type_only "(^^) :: ('a \ 'a) \ nat \ 'a \ 'a" "('a \ 'a) \ nat \ 'a \ 'a"} \end{tabular} \section*{Int} Type \<^typ>\int\ \<^bigskip> \begin{tabular}{@ {} llllllll @ {}} \<^term>\(+) :: int \ int \ int\ & \<^term>\(-) :: int \ int \ int\ & \<^term>\uminus :: int \ int\ & \<^term>\(*) :: int \ int \ int\ & \<^term>\(^) :: int \ nat \ int\ & \<^term>\(div) :: int \ int \ int\& \<^term>\(mod) :: int \ int \ int\& \<^term>\(dvd) :: int \ int \ bool\\\ \<^term>\(\) :: int \ int \ bool\ & \<^term>\(<) :: int \ int \ bool\ & \<^term>\min :: int \ int \ int\ & \<^term>\max :: int \ int \ int\ & \<^term>\Min :: int set \ int\ & \<^term>\Max :: int set \ int\\\ \<^term>\abs :: int \ int\ & 
\<^term>\sgn :: int \ int\\\ \end{tabular} \begin{tabular}{@ {} l @ {~::~} l l @ {}} \<^const>\Int.nat\ & \<^typeof>\Int.nat\\\ \<^const>\Int.of_int\ & \<^typeof>\Int.of_int\\\ \<^const>\Int.Ints\ & @{term_type_only Int.Ints "'a::ring_1 set"} & (\<^verbatim>\Ints\) \end{tabular} \subsubsection*{Syntax} \begin{tabular}{@ {} l @ {\quad$\equiv$\quad} l @ {}} \<^term>\of_nat::nat\int\ & @{term[source]"of_nat"}\\ \end{tabular} \section*{Finite\_Set} \begin{supertabular}{@ {} l @ {~::~} l @ {}} \<^const>\Finite_Set.finite\ & @{term_type_only Finite_Set.finite "'a set\bool"}\\ \<^const>\Finite_Set.card\ & @{term_type_only Finite_Set.card "'a set \ nat"}\\ \<^const>\Finite_Set.fold\ & @{term_type_only Finite_Set.fold "('a \ 'b \ 'b) \ 'b \ 'a set \ 'b"}\\ \end{supertabular} \section*{Lattices\_Big} \begin{supertabular}{@ {} l @ {~::~} l l @ {}} \<^const>\Lattices_Big.Min\ & \<^typeof>\Lattices_Big.Min\\\ \<^const>\Lattices_Big.Max\ & \<^typeof>\Lattices_Big.Max\\\ \<^const>\Lattices_Big.arg_min\ & \<^typeof>\Lattices_Big.arg_min\\\ \<^const>\Lattices_Big.is_arg_min\ & \<^typeof>\Lattices_Big.is_arg_min\\\ \<^const>\Lattices_Big.arg_max\ & \<^typeof>\Lattices_Big.arg_max\\\ \<^const>\Lattices_Big.is_arg_max\ & \<^typeof>\Lattices_Big.is_arg_max\\\ \end{supertabular} \subsubsection*{Syntax} \begin{supertabular}{@ {} l @ {\quad$\equiv$\quad} l l @ {}} \<^term>\ARG_MIN f x. P\ & @{term[source]"arg_min f (\x. P)"}\\ \<^term>\ARG_MAX f x. P\ & @{term[source]"arg_max f (\x. P)"}\\ \end{supertabular} \section*{Groups\_Big} \begin{supertabular}{@ {} l @ {~::~} l @ {}} \<^const>\Groups_Big.sum\ & @{term_type_only Groups_Big.sum "('a \ 'b) \ 'a set \ 'b::comm_monoid_add"}\\ \<^const>\Groups_Big.prod\ & @{term_type_only Groups_Big.prod "('a \ 'b) \ 'a set \ 'b::comm_monoid_mult"}\\ \end{supertabular} \subsubsection*{Syntax} \begin{supertabular}{@ {} l @ {\quad$\equiv$\quad} l l @ {}} \<^term>\sum (\x. x) A\ & @{term[source]"sum (\x. x) A"} & (\<^verbatim>\SUM\)\\ \<^term>\sum (\x. t) A\ & @{term[source]"sum (\x. t) A"}\\ @{term[source] "\x|P. t"} & \<^term>\\x|P. 
t\\\ \multicolumn{2}{@ {}l@ {}}{Similarly for \\\ instead of \\\} & (\<^verbatim>\PROD\)\\ \end{supertabular} \section*{Wellfounded} \begin{supertabular}{@ {} l @ {~::~} l @ {}} \<^const>\Wellfounded.wf\ & @{term_type_only Wellfounded.wf "('a*'a)set\bool"}\\ \<^const>\Wellfounded.acc\ & @{term_type_only Wellfounded.acc "('a*'a)set\'a set"}\\ \<^const>\Wellfounded.measure\ & @{term_type_only Wellfounded.measure "('a\nat)\('a*'a)set"}\\ \<^const>\Wellfounded.lex_prod\ & @{term_type_only Wellfounded.lex_prod "('a*'a)set\('b*'b)set\(('a*'b)*('a*'b))set"}\\ \<^const>\Wellfounded.mlex_prod\ & @{term_type_only Wellfounded.mlex_prod "('a\nat)\('a*'a)set\('a*'a)set"}\\ \<^const>\Wellfounded.less_than\ & @{term_type_only Wellfounded.less_than "(nat*nat)set"}\\ \<^const>\Wellfounded.pred_nat\ & @{term_type_only Wellfounded.pred_nat "(nat*nat)set"}\\ \end{supertabular} \section*{Set\_Interval} % \<^theory>\HOL.Set_Interval\ \begin{supertabular}{@ {} l @ {~::~} l @ {}} \<^const>\lessThan\ & @{term_type_only lessThan "'a::ord \ 'a set"}\\ \<^const>\atMost\ & @{term_type_only atMost "'a::ord \ 'a set"}\\ \<^const>\greaterThan\ & @{term_type_only greaterThan "'a::ord \ 'a set"}\\ \<^const>\atLeast\ & @{term_type_only atLeast "'a::ord \ 'a set"}\\ \<^const>\greaterThanLessThan\ & @{term_type_only greaterThanLessThan "'a::ord \ 'a \ 'a set"}\\ \<^const>\atLeastLessThan\ & @{term_type_only atLeastLessThan "'a::ord \ 'a \ 'a set"}\\ \<^const>\greaterThanAtMost\ & @{term_type_only greaterThanAtMost "'a::ord \ 'a \ 'a set"}\\ \<^const>\atLeastAtMost\ & @{term_type_only atLeastAtMost "'a::ord \ 'a \ 'a set"}\\ \end{supertabular} \subsubsection*{Syntax} \begin{supertabular}{@ {} l @ {\quad$\equiv$\quad} l @ {}} \<^term>\lessThan y\ & @{term[source] "lessThan y"}\\ \<^term>\atMost y\ & @{term[source] "atMost y"}\\ \<^term>\greaterThan x\ & @{term[source] "greaterThan x"}\\ \<^term>\atLeast x\ & @{term[source] "atLeast x"}\\ \<^term>\greaterThanLessThan x y\ & @{term[source] "greaterThanLessThan x y"}\\ \<^term>\atLeastLessThan x y\ & @{term[source] "atLeastLessThan x y"}\\ \<^term>\greaterThanAtMost x y\ & @{term[source] "greaterThanAtMost x y"}\\ \<^term>\atLeastAtMost x y\ & @{term[source] "atLeastAtMost x y"}\\ @{term[source] "\i\n. A"} & @{term[source] "\i \ {..n}. A"}\\ @{term[source] "\i<n. A"} & @{term[source] "\i \ {..<n}. A"}\\ \multicolumn{2}{@ {}l@ {}}{Similarly for \\\ instead of \\\}\\ \<^term>\sum (\x. t) {a..b}\ & @{term[source] "sum (\x. t) {a..b}"}\\ \<^term>\sum (\x. t) {a..<b}\ & @{term[source] "sum (\x. t) {a..<b}"}\\ \<^term>\sum (\x. t) {..b}\ & @{term[source] "sum (\x. t) {..b}"}\\ \<^term>\sum (\x. t) {..<b}\ & @{term[source] "sum (\x. t) {..<b}"}\\ \multicolumn{2}{@ {}l@ {}}{Similarly for \\\ instead of \\\}\\ \end{supertabular} \section*{Power} \begin{tabular}{@ {} l @ {~::~} l @ {}} \<^const>\Power.power\ & \<^typeof>\Power.power\ \end{tabular} \section*{Option} \<^datatype>\option\ \<^bigskip> \begin{tabular}{@ {} l @ {~::~} l @ {}} \<^const>\Option.the\ & \<^typeof>\Option.the\\\ \<^const>\map_option\ & @{typ[source]"('a \ 'b) \ 'a option \ 'b option"}\\ \<^const>\set_option\ & @{term_type_only set_option "'a option \ 'a set"}\\ \<^const>\Option.bind\ & @{term_type_only Option.bind "'a option \ ('a \ 'b option) \ 'b option"} \end{tabular} \section*{List} \<^datatype>\list\ \<^bigskip> \begin{supertabular}{@ {} l @ {~::~} l @ {}} \<^const>\List.append\ & \<^typeof>\List.append\\\ \<^const>\List.butlast\ & \<^typeof>\List.butlast\\\ \<^const>\List.concat\ & \<^typeof>\List.concat\\\ \<^const>\List.distinct\ & \<^typeof>\List.distinct\\\ \<^const>\List.drop\ & \<^typeof>\List.drop\\\ \<^const>\List.dropWhile\ & \<^typeof>\List.dropWhile\\\ \<^const>\List.filter\ & \<^typeof>\List.filter\\\ \<^const>\List.find\ & \<^typeof>\List.find\\\ \<^const>\List.fold\ & \<^typeof>\List.fold\\\ \<^const>\List.foldr\ & \<^typeof>\List.foldr\\\ \<^const>\List.foldl\ & \<^typeof>\List.foldl\\\ \<^const>\List.hd\ & \<^typeof>\List.hd\\\ \<^const>\List.last\ & \<^typeof>\List.last\\\ \<^const>\List.length\ & \<^typeof>\List.length\\\ \<^const>\List.lenlex\ & @{term_type_only List.lenlex "('a*'a)set\('a list * 'a list)set"}\\ \<^const>\List.lex\ & @{term_type_only List.lex "('a*'a)set\('a list * 'a list)set"}\\ \<^const>\List.lexn\ & @{term_type_only List.lexn "('a*'a)set\nat\('a list * 'a list)set"}\\ \<^const>\List.lexord\ & @{term_type_only List.lexord "('a*'a)set\('a list * 'a list)set"}\\ \<^const>\List.listrel\ & @{term_type_only List.listrel "('a*'b)set\('a list * 'b list)set"}\\ \<^const>\List.listrel1\ & @{term_type_only List.listrel1 "('a*'a)set\('a list * 'a list)set"}\\ \<^const>\List.lists\ & @{term_type_only List.lists "'a set\'a list set"}\\ \<^const>\List.listset\ & @{term_type_only List.listset "'a set list \ 'a list set"}\\ \<^const>\Groups_List.sum_list\ & \<^typeof>\Groups_List.sum_list\\\ \<^const>\Groups_List.prod_list\ & \<^typeof>\Groups_List.prod_list\\\ \<^const>\List.list_all2\ & \<^typeof>\List.list_all2\\\ \<^const>\List.list_update\ & \<^typeof>\List.list_update\\\ \<^const>\List.map\ & \<^typeof>\List.map\\\ \<^const>\List.measures\ & @{term_type_only List.measures "('a\nat)list\('a*'a)set"}\\ \<^const>\List.nth\ & \<^typeof>\List.nth\\\ \<^const>\List.nths\ & \<^typeof>\List.nths\\\ \<^const>\List.remdups\ & \<^typeof>\List.remdups\\\ \<^const>\List.removeAll\ & \<^typeof>\List.removeAll\\\ \<^const>\List.remove1\ & \<^typeof>\List.remove1\\\ \<^const>\List.replicate\ & \<^typeof>\List.replicate\\\ \<^const>\List.rev\ & \<^typeof>\List.rev\\\ \<^const>\List.rotate\ & \<^typeof>\List.rotate\\\ \<^const>\List.rotate1\ & \<^typeof>\List.rotate1\\\ \<^const>\List.set\ & @{term_type_only List.set "'a list \ 'a set"}\\ \<^const>\List.shuffles\ & \<^typeof>\List.shuffles\\\ \<^const>\List.sort\ & \<^typeof>\List.sort\\\ \<^const>\List.sorted\ & \<^typeof>\List.sorted\\\ \<^const>\List.sorted_wrt\ & \<^typeof>\List.sorted_wrt\\\ \<^const>\List.splice\ & \<^typeof>\List.splice\\\ \<^const>\List.take\ & \<^typeof>\List.take\\\ \<^const>\List.takeWhile\ & \<^typeof>\List.takeWhile\\\ \<^const>\List.tl\ & \<^typeof>\List.tl\\\ \<^const>\List.upt\ & \<^typeof>\List.upt\\\ \<^const>\List.upto\ & \<^typeof>\List.upto\\\ \<^const>\List.zip\ & \<^typeof>\List.zip\\\ 
\end{supertabular} \subsubsection*{Syntax} \begin{supertabular}{@ {} l @ {\quad$\equiv$\quad} l @ {}} \[x\<^sub>1,\,x\<^sub>n]\ & \x\<^sub>1 # \ # x\<^sub>n # []\\\ \<^term>\[m..<n]\ & @{term[source]"upt m n"}\\ \<^term>\[i..j]\ & @{term[source]"upto i j"}\\ \<^term>\xs[n := x]\ & @{term[source]"list_update xs n x"}\\ \<^term>\\x\xs. e\ & @{term[source]"sum_list (map (\x. e) xs)"}\\ \end{supertabular} \<^medskip> Filter input syntax \[pat \ e. b]\, where \pat\ is a tuple pattern; it stands for \<^term>\filter (\pat. b) e\. List comprehension input syntax: \[e. q\<^sub>1, \, q\<^sub>n]\ where each qualifier \q\<^sub>i\ is either a generator \mbox{\pat \ e\} or a guard, i.e.\ a boolean expression. \section*{Map} Maps model partial functions and are often used as finite tables. However, the domain of a map may be infinite. \begin{supertabular}{@ {} l @ {~::~} l @ {}} \<^const>\Map.empty\ & \<^typeof>\Map.empty\\\ \<^const>\Map.map_add\ & \<^typeof>\Map.map_add\\\ \<^const>\Map.map_comp\ & \<^typeof>\Map.map_comp\\\ \<^const>\Map.restrict_map\ & @{term_type_only Map.restrict_map "('a\'b option)\'a set\('a\'b option)"}\\ \<^const>\Map.dom\ & @{term_type_only Map.dom "('a\'b option)\'a set"}\\ \<^const>\Map.ran\ & @{term_type_only Map.ran "('a\'b option)\'b set"}\\ \<^const>\Map.map_le\ & \<^typeof>\Map.map_le\\\ \<^const>\Map.map_of\ & \<^typeof>\Map.map_of\\\ \<^const>\Map.map_upds\ & \<^typeof>\Map.map_upds\\\ \end{supertabular} \subsubsection*{Syntax} \begin{tabular}{@ {} l @ {\quad$\equiv$\quad} l @ {}} \<^term>\Map.empty\ & \<^term>\\x. None\\\ \<^term>\m(x:=Some y)\ & @{term[source]"m(x:=Some y)"}\\ \m(x\<^sub>1\y\<^sub>1,\,x\<^sub>n\y\<^sub>n)\ & @{text[source]"m(x\<^sub>1\y\<^sub>1)\(x\<^sub>n\y\<^sub>n)"}\\ \[x\<^sub>1\y\<^sub>1,\,x\<^sub>n\y\<^sub>n]\ & @{text[source]"Map.empty(x\<^sub>1\y\<^sub>1,\,x\<^sub>n\y\<^sub>n)"}\\ \<^term>\map_upds m xs ys\ & @{term[source]"map_upds m xs ys"}\\ \end{tabular} \section*{Infix operators in Main} % \<^theory>\Main\ \begin{center} \begin{tabular}{llll} & Operator & precedence & associativity \\ \hline Meta-logic & \\\ & 1 & right \\ & \\\ & 2 \\ \hline Logic & \\\ & 35 & right \\ &\\\ & 30 & right \\ &\\\, \\\ & 25 & right\\ &\=\, \\\ & 50 & left\\ \hline Orderings & \\\, \<\, \\\, \>\ & 50 \\ \hline Sets & \\\, \\\, \\\, \\\ & 50 \\ &\\\, \\\ & 50 \\ &\\\ & 70 & left \\ &\\\ & 65 & left \\ \hline Functions and Relations & \\\ & 55 & left\\ &\`\ & 90 & right\\ &\O\ & 75 & right\\ &\``\ & 90 & right\\ &\^^\ & 80 & right\\ \hline Numbers & \+\, \-\ & 65 & left \\ &\*\, \/\ & 70 & left \\ &\div\, \mod\ & 70 & left\\ &\^\ & 80 & right\\ &\dvd\ & 50 \\ \hline Lists & \#\, \@\ & 65 & right\\ &\!\ & 100 & left \end{tabular} \end{center} \ (*<*) end (*>*) diff --git a/src/Doc/Main/document/build b/src/Doc/Main/document/build deleted file mode 100755 --- a/src/Doc/Main/document/build +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/env bash - -set -e - -FORMAT="$1" -VARIANT="$2" - -isabelle latex -o "$FORMAT" -isabelle latex -o "$FORMAT" - diff --git a/src/Doc/Nitpick/document/build b/src/Doc/Nitpick/document/build deleted file mode 100755 --- a/src/Doc/Nitpick/document/build +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/env bash - -set -e - -FORMAT="$1" -VARIANT="$2" - -isabelle logo Nitpick -"$ISABELLE_HOME/src/Doc/prepare_document" "$FORMAT" - diff --git a/src/Doc/Nitpick/document/root.tex b/src/Doc/Nitpick/document/root.tex --- a/src/Doc/Nitpick/document/root.tex +++ b/src/Doc/Nitpick/document/root.tex @@ -1,2758 +1,2758 @@ \documentclass[a4paper,12pt]{article} 
\usepackage[T1]{fontenc} \usepackage{amsmath} \usepackage{amssymb} \usepackage{color} \usepackage{footmisc} \usepackage{graphicx} %\usepackage{mathpazo} \usepackage{multicol} \usepackage{stmaryrd} %\usepackage[scaled=.85]{beramono} \usepackage{isabelle,iman,pdfsetup} %\oddsidemargin=4.6mm %\evensidemargin=4.6mm %\textwidth=150mm %\topmargin=4.6mm %\headheight=0mm %\headsep=0mm %\textheight=234mm \def\Colon{\mathord{:\mkern-1.5mu:}} %\def\lbrakk{\mathopen{\lbrack\mkern-3.25mu\lbrack}} %\def\rbrakk{\mathclose{\rbrack\mkern-3.255mu\rbrack}} \def\lparr{\mathopen{(\mkern-4mu\mid}} \def\rparr{\mathclose{\mid\mkern-4mu)}} %\def\unk{{?}} \def\unk{{\_}} \def\unkef{(\lambda x.\; \unk)} \def\undef{(\lambda x.\; \_)} %\def\unr{\textit{others}} \def\unr{\ldots} \def\Abs#1{\hbox{\rm{\guillemetleft}}{\,#1\,}\hbox{\rm{\guillemetright}}} \def\Q{{\smash{\lower.2ex\hbox{$\scriptstyle?$}}}} \hyphenation{Mini-Sat size-change First-Steps grand-parent nit-pick counter-example counter-examples data-type data-types co-data-type co-data-types in-duc-tive co-in-duc-tive} \urlstyle{tt} \renewcommand\_{\hbox{\textunderscore\kern-.05ex}} \begin{document} %%% TYPESETTING %\renewcommand\labelitemi{$\bullet$} \renewcommand\labelitemi{\raise.065ex\hbox{\small\textbullet}} -\title{\includegraphics[scale=0.5]{isabelle_nitpick} \\[4ex] +\title{\includegraphics[scale=0.5]{isabelle_logo} \\[4ex] Picking Nits \\[\smallskipamount] \Large A User's Guide to Nitpick for Isabelle/HOL} \author{\hbox{} \\ Jasmin Blanchette \\ {\normalsize Institut f\"ur Informatik, Technische Universit\"at M\"unchen} \\ \hbox{}} \maketitle \tableofcontents \setlength{\parskip}{.7em plus .2em minus .1em} \setlength{\parindent}{0pt} \setlength{\abovedisplayskip}{\parskip} \setlength{\abovedisplayshortskip}{.9\parskip} \setlength{\belowdisplayskip}{\parskip} \setlength{\belowdisplayshortskip}{.9\parskip} % General-purpose enum environment with correct spacing \newenvironment{enum}% {\begin{list}{}{% \setlength{\topsep}{.1\parskip}% \setlength{\partopsep}{.1\parskip}% \setlength{\itemsep}{\parskip}% \advance\itemsep by-\parsep}} {\end{list}} \def\pre{\begingroup\vskip0pt plus1ex\advance\leftskip by\leftmargin \advance\rightskip by\leftmargin} \def\post{\vskip0pt plus1ex\endgroup} \def\prew{\pre\advance\rightskip by-\leftmargin} \def\postw{\post} \section{Introduction} \label{introduction} Nitpick \cite{blanchette-nipkow-2010} is a counterexample generator for Isabelle/HOL \cite{isa-tutorial} that is designed to handle formulas combining (co)in\-duc\-tive datatypes, (co)in\-duc\-tively defined predicates, and quantifiers. It builds on Kodkod \cite{torlak-jackson-2007}, a highly optimized first-order relational model finder developed by the Software Design Group at MIT. It is conceptually similar to Refute \cite{weber-2008}, from which it borrows many ideas and code fragments, but it benefits from Kodkod's optimizations and a new encoding scheme. The name Nitpick is shamelessly appropriated from a now retired Alloy precursor. Nitpick is easy to use---you simply enter \textbf{nitpick} after a putative theorem and wait a few seconds. Nonetheless, there are situations where knowing how it works under the hood and how it reacts to various options helps increase the test coverage. This manual also explains how to install the tool on your workstation. Should the motivation fail you, think of the many hours of hard work Nitpick will save you. Proving non-theorems is \textsl{hard work}. 
Another common use of Nitpick is to find out whether the axioms of a locale are satisfiable, while the locale is being developed. To check this, it suffices to write \prew \textbf{lemma}~``$\textit{False\/}$'' \\ \textbf{nitpick}~[\textit{show\_all}] \postw after the locale's \textbf{begin} keyword. To falsify \textit{False}, Nitpick must find a model for the axioms. If it finds no model, we have an indication that the axioms might be unsatisfiable. Nitpick provides an automatic mode that can be enabled via the ``Auto Nitpick'' option under ``Plugins > Plugin Options > Isabelle > General'' in Isabelle/jEdit. In this mode, Nitpick is run on every newly entered theorem. \newbox\boxA \setbox\boxA=\hbox{\texttt{nospam}} \newcommand\authoremail{\texttt{jasmin.blan{\color{white}nospam}\kern-\wd\boxA{}chette@\allowbreak inria\allowbreak .\allowbreak fr}} To run Nitpick, you must also make sure that the theory \textit{Nitpick} is imported---this is rarely a problem in practice since it is part of \textit{Main}. The examples presented in this manual can be found in Isabelle's \texttt{src/HOL/\allowbreak Nitpick\_\allowbreak Examples/\allowbreak Manual\_Nits.thy} theory. The known bugs and limitations at the time of writing are listed in \S\ref{known-bugs-and-limitations}. Comments and bug reports concerning the tool or the manual should be directed to the author at \authoremail. \vskip2.5\smallskipamount \textbf{Acknowledgment.} The author would like to thank Mark Summerfield for suggesting several textual improvements. % and Perry James for reporting a typo. \section{Installation} \label{installation} Nitpick is part of Isabelle, so you do not need to install it. It relies on a third-party Kodkod front-end called Kodkodi, which in turn requires a Java virtual machine. Both are provided as official Isabelle components. %There are two main ways of installing Kodkodi: % %\begin{enum} %\item[\labelitemi] If you installed an official Isabelle package, %it should already include a properly setup version of Kodkodi. % %\item[\labelitemi] If you use a repository or snapshot version of Isabelle, you %an official Isabelle package, you can download the Isabelle-aware Kodkodi package %from \url{http://www21.in.tum.de/~blanchet/\#software}. Extract the archive, then add a %line to your \texttt{\$ISABELLE\_HOME\_USER\slash etc\slash components}% %\footnote{The variable \texttt{\$ISABELLE\_HOME\_USER} is set by Isabelle at %startup. Its value can be retrieved by executing \texttt{isabelle} %\texttt{getenv} \texttt{ISABELLE\_HOME\_USER} on the command line.} %file with the absolute path to Kodkodi. For example, if the %\texttt{components} file does not exist yet and you extracted Kodkodi to %\texttt{/usr/local/kodkodi-1.5.2}, create it with the single line % %\prew %\texttt{/usr/local/kodkodi-1.5.2} %\postw % %(including an invisible newline character) in it. %\end{enum} To check whether Kodkodi is successfully installed, you can try out the example in \S\ref{propositional-logic}. \section{First Steps} \label{first-steps} This section introduces Nitpick by presenting small examples. If possible, you should try out the examples on your workstation. Your theory file should start as follows: \prew \textbf{theory}~\textit{Scratch} \\ \textbf{imports}~\textit{Main~Quotient\_Product~RealDef} \\ \textbf{begin} \postw The results presented here were obtained using the JNI (Java Native Interface) version of MiniSat and with multithreading disabled to reduce nondeterminism. 
This was done by adding the line \prew \textbf{nitpick\_params} [\textit{sat\_solver}~= \textit{MiniSat\_JNI}, \,\textit{max\_threads}~= 1] \postw after the \textbf{begin} keyword. The JNI version of MiniSat is bundled with Kodkodi and is precompiled for Linux, Mac~OS~X, and Windows (Cygwin). Other SAT solvers can also be used, as explained in \S\ref{optimizations}. If you have already configured SAT solvers in Isabelle (e.g., for Refute), these will also be available to Nitpick. \subsection{Propositional Logic} \label{propositional-logic} Let's start with a trivial example from propositional logic: \prew \textbf{lemma}~``$P \longleftrightarrow Q$'' \\ \textbf{nitpick} \postw You should get the following output: \prew \slshape Nitpick found a counterexample: \\[2\smallskipamount] \hbox{}\qquad Free variables: \nopagebreak \\ \hbox{}\qquad\qquad $P = \textit{True}$ \\ \hbox{}\qquad\qquad $Q = \textit{False}$ \postw Nitpick can also be invoked on individual subgoals, as in the example below: \prew \textbf{apply}~\textit{auto} \\[2\smallskipamount] {\slshape goal (2 subgoals): \\ \phantom{0}1. $P\,\Longrightarrow\, Q$ \\ \phantom{0}2. $Q\,\Longrightarrow\, P$} \\[2\smallskipamount] \textbf{nitpick}~1 \\[2\smallskipamount] {\slshape Nitpick found a counterexample: \\[2\smallskipamount] \hbox{}\qquad Free variables: \nopagebreak \\ \hbox{}\qquad\qquad $P = \textit{True}$ \\ \hbox{}\qquad\qquad $Q = \textit{False}$} \\[2\smallskipamount] \textbf{nitpick}~2 \\[2\smallskipamount] {\slshape Nitpick found a counterexample: \\[2\smallskipamount] \hbox{}\qquad Free variables: \nopagebreak \\ \hbox{}\qquad\qquad $P = \textit{False}$ \\ \hbox{}\qquad\qquad $Q = \textit{True}$} \\[2\smallskipamount] \textbf{oops} \postw \subsection{Type Variables} \label{type-variables} If you are left unimpressed by the previous example, don't worry. The next one is more mind- and computer-boggling: \prew \textbf{lemma} ``$x \in A\,\Longrightarrow\, (\textrm{THE}~y.\;y \in A) \in A$'' \postw \pagebreak[2] %% TYPESETTING The putative lemma involves the definite description operator, {THE}, presented in section 5.10.1 of the Isabelle tutorial \cite{isa-tutorial}. The operator is defined by the axiom $(\textrm{THE}~x.\; x = a) = a$. The putative lemma is merely asserting the indefinite description operator axiom with {THE} substituted for {SOME}. The free variable $x$ and the bound variable $y$ have type $'a$. For formulas containing type variables, Nitpick enumerates the possible domains for each type variable, up to a given cardinality (10 by default), looking for a finite countermodel: \prew \textbf{nitpick} [\textit{verbose}] \\[2\smallskipamount] \slshape Trying 10 scopes: \nopagebreak \\ \hbox{}\qquad \textit{card}~$'a$~= 1; \\ \hbox{}\qquad \textit{card}~$'a$~= 2; \\ \hbox{}\qquad $\qquad\vdots$ \\[.5\smallskipamount] \hbox{}\qquad \textit{card}~$'a$~= 10 \\[2\smallskipamount] Nitpick found a counterexample for \textit{card} $'a$~= 3: \\[2\smallskipamount] \hbox{}\qquad Free variables: \nopagebreak \\ \hbox{}\qquad\qquad $A = \{a_2,\, a_3\}$ \\ \hbox{}\qquad\qquad $x = a_3$ \\[2\smallskipamount] Total time: 963 ms \postw Nitpick found a counterexample in which $'a$ has cardinality 3. (For cardinalities 1 and 2, the formula holds.) In the counterexample, the three values of type $'a$ are written $a_1$, $a_2$, and $a_3$. The message ``Trying $n$ scopes: {\ldots}''\ is shown only if the option \textit{verbose} is enabled. 
You can specify \textit{verbose} each time you invoke \textbf{nitpick}, or you can set it globally using the command \prew \textbf{nitpick\_params} [\textit{verbose}] \postw This command also displays the current default values for all of the options supported by Nitpick. The options are listed in \S\ref{option-reference}. \subsection{Constants} \label{constants} By just looking at Nitpick's output, it might not be clear why the counterexample in \S\ref{type-variables} is genuine. Let's invoke Nitpick again, this time telling it to show the values of the constants that occur in the formula: \prew \textbf{lemma} ``$x \in A\,\Longrightarrow\, (\textrm{THE}~y.\;y \in A) \in A$'' \\ \textbf{nitpick}~[\textit{show\_consts}] \\[2\smallskipamount] \slshape Nitpick found a counterexample for \textit{card} $'a$~= 3: \\[2\smallskipamount] \hbox{}\qquad Free variables: \nopagebreak \\ \hbox{}\qquad\qquad $A = \{a_2,\, a_3\}$ \\ \hbox{}\qquad\qquad $x = a_3$ \\ \hbox{}\qquad Constant: \nopagebreak \\ \hbox{}\qquad\qquad $\hbox{\slshape THE}~y.\;y \in A = a_1$ \postw As the result of an optimization, Nitpick directly assigned a value to the subterm $\textrm{THE}~y.\;y \in A$, rather than to the \textit{The} constant. We can disable this optimization by using the command \prew \textbf{nitpick}~[\textit{dont\_specialize},\, \textit{show\_consts}] \postw Our misadventures with THE suggest adding `$\exists!x{.}$' (``there exists a unique $x$ such that'') at the front of our putative lemma's assumption: \prew \textbf{lemma} ``$\exists {!}x.\; x \in A\,\Longrightarrow\, (\textrm{THE}~y.\;y \in A) \in A$'' \postw The fix appears to work: \prew \textbf{nitpick} \\[2\smallskipamount] \slshape Nitpick found no counterexample \postw We can further increase our confidence in the formula by exhausting all cardinalities up to 50: \prew \textbf{nitpick} [\textit{card} $'a$~= 1--50]\footnote{The symbol `--' is entered as \texttt{-} (hyphen).} \\[2\smallskipamount] \slshape Nitpick found no counterexample. \postw Let's see if Sledgehammer can find a proof: \prew \textbf{sledgehammer} \\[2\smallskipamount] {\slshape Sledgehammer: ``$e$'' on goal \\ Try this: \textbf{by}~(\textit{metis~theI}) (42 ms)} \\ \hbox{}\qquad\vdots \\[2\smallskipamount] \textbf{by}~(\textit{metis~theI\/}) \postw This must be our lucky day. \subsection{Skolemization} \label{skolemization} Are all invertible functions onto? Let's find out: \prew \textbf{lemma} ``$\exists g.\; \forall x.~g~(f~x) = x \,\Longrightarrow\, \forall y.\; \exists x.~y = f~x$'' \\ \textbf{nitpick} \\[2\smallskipamount] \slshape Nitpick found a counterexample for \textit{card} $'a$~= 2 and \textit{card} $'b$~=~1: \\[2\smallskipamount] \hbox{}\qquad Free variable: \nopagebreak \\ \hbox{}\qquad\qquad $f = \undef{}(b_1 := a_1)$ \\ \hbox{}\qquad Skolem constants: \nopagebreak \\ \hbox{}\qquad\qquad $g = \undef{}(a_1 := b_1,\> a_2 := b_1)$ \\ \hbox{}\qquad\qquad $y = a_2$ \postw (The Isabelle/HOL notation $f(x := y)$ denotes the function that maps $x$ to $y$ and that otherwise behaves like $f$.) Although $f$ is the only free variable occurring in the formula, Nitpick also displays values for the bound variables $g$ and $y$. These values are available to Nitpick because it performs skolemization as a preprocessing step. In the previous example, skolemization only affected the outermost quantifiers. 
This is not always the case, as illustrated below: \prew \textbf{lemma} ``$\exists x.\; \forall f.\; f~x = x$'' \\ \textbf{nitpick} \\[2\smallskipamount] \slshape Nitpick found a counterexample for \textit{card} $'a$~= 2: \\[2\smallskipamount] \hbox{}\qquad Skolem constant: \nopagebreak \\ \hbox{}\qquad\qquad $\lambda x.\; f = \undef{}(\!\begin{aligned}[t] & a_1 := \undef{}(a_1 := a_2,\> a_2 := a_1), \\[-2pt] & a_2 := \undef{}(a_1 := a_1,\> a_2 := a_1))\end{aligned}$ \postw The variable $f$ is bound within the scope of $x$; therefore, $f$ depends on $x$, as suggested by the notation $\lambda x.\,f$. If $x = a_1$, then $f$ is the function that maps $a_1$ to $a_2$ and vice versa; otherwise, $x = a_2$ and $f$ maps both $a_1$ and $a_2$ to $a_1$. In both cases, $f~x \not= x$. The source of the Skolem constants is sometimes more obscure: \prew \textbf{lemma} ``$\mathit{refl}~r\,\Longrightarrow\, \mathit{sym}~r$'' \\ \textbf{nitpick} \\[2\smallskipamount] \slshape Nitpick found a counterexample for \textit{card} $'a$~= 2: \\[2\smallskipamount] \hbox{}\qquad Free variable: \nopagebreak \\ \hbox{}\qquad\qquad $r = \{(a_1, a_1),\, (a_2, a_1),\, (a_2, a_2)\}$ \\ \hbox{}\qquad Skolem constants: \nopagebreak \\ \hbox{}\qquad\qquad $\mathit{sym}.x = a_2$ \\ \hbox{}\qquad\qquad $\mathit{sym}.y = a_1$ \postw What happened here is that Nitpick expanded \textit{sym} to its definition: \prew $\mathit{sym}~r \,\equiv\, \forall x\> y.\,\> (x, y) \in r \longrightarrow (y, x) \in r.$ \postw As their names suggest, the Skolem constants $\mathit{sym}.x$ and $\mathit{sym}.y$ are simply the bound variables $x$ and $y$ from \textit{sym}'s definition. \subsection{Natural Numbers and Integers} \label{natural-numbers-and-integers} Because of the axiom of infinity, the type \textit{nat} does not admit any finite models. To deal with this, Nitpick's approach is to consider finite subsets $N$ of \textit{nat} and to map all numbers $\notin N$ to the undefined value (displayed as `$\unk$'). The type \textit{int} is handled similarly. Internally, undefined values lead to a three-valued logic. Here is an example involving \textit{int\/}: \prew \textbf{lemma} ``$\lbrakk i \le j;\> n \le (m{\Colon}\mathit{int})\rbrakk \,\Longrightarrow\, i * n + j * m \le i * m + j * n$'' \\ \textbf{nitpick} \\[2\smallskipamount] \slshape Nitpick found a counterexample: \\[2\smallskipamount] \hbox{}\qquad Free variables: \nopagebreak \\ \hbox{}\qquad\qquad $i = 0$ \\ \hbox{}\qquad\qquad $j = 1$ \\ \hbox{}\qquad\qquad $m = 1$ \\ \hbox{}\qquad\qquad $n = 0$ \postw Internally, Nitpick uses either a unary or a binary representation of numbers. The unary representation is more efficient but only suitable for numbers very close to zero. By default, Nitpick attempts to choose the more appropriate encoding by inspecting the formula at hand. This behavior can be overridden by passing either \textit{unary\_ints} or \textit{binary\_ints} as an option. For binary notation, the number of bits to use can be specified using the \textit{bits} option. For example: \prew \textbf{nitpick} [\textit{binary\_ints}, \textit{bits}${} = 16$] \postw With infinite types, we don't always have the luxury of a genuine counterexample and must often content ourselves with a potentially spurious one. 
For example: \prew \textbf{lemma} ``$\forall n.\; \textit{Suc}~n \mathbin{\not=} n \,\Longrightarrow\, P$'' \\ \textbf{nitpick} [\textit{card~nat}~= 50] \\[2\smallskipamount] \slshape Warning: The conjecture either trivially holds for the given scopes or lies outside Nitpick's supported fragment; only potentially spurious counterexamples may be found \\[2\smallskipamount] Nitpick found a potentially spurious counterexample: \\[2\smallskipamount] \hbox{}\qquad Free variable: \nopagebreak \\ \hbox{}\qquad\qquad $P = \textit{False}$ \postw The issue is that the bound variable in $\forall n.\; \textit{Suc}~n \mathbin{\not=} n$ ranges over an infinite type. If Nitpick finds an $n$ such that $\textit{Suc}~n \mathbin{=} n$, it evaluates the assumption to \textit{False}; but otherwise, it does not know anything about values of $n \ge \textit{card~nat}$ and must therefore evaluate the assumption to~$\unk$, not \textit{True}. Since the assumption can never be fully satisfied by Nitpick, the putative lemma can never be falsified. Some conjectures involving elementary number theory make Nitpick look like a giant with feet of clay: \prew \textbf{lemma} ``$P~\textit{Suc\/}$'' \\ \textbf{nitpick} \\[2\smallskipamount] \slshape Nitpick found no counterexample \postw On any finite set $N$, \textit{Suc} is a partial function; for example, if $N = \{0, 1, \ldots, k\}$, then \textit{Suc} is $\{0 \mapsto 1,\, 1 \mapsto 2,\, \ldots,\, k \mapsto \unk\}$, which evaluates to $\unk$ when passed as argument to $P$. As a result, $P~\textit{Suc}$ is always $\unk$. The next example is similar: \prew \textbf{lemma} ``$P~(\textit{op}~{+}\Colon \textit{nat}\mathbin{\Rightarrow}\textit{nat}\mathbin{\Rightarrow}\textit{nat})$'' \\ \textbf{nitpick} [\textit{card nat} = 1] \\[2\smallskipamount] {\slshape Nitpick found a counterexample:} \\[2\smallskipamount] \hbox{}\qquad Free variable: \nopagebreak \\ \hbox{}\qquad\qquad $P = \unkef(\unkef(0 := \unkef(0 := 0)) := \mathit{False})$ \\[2\smallskipamount] \textbf{nitpick} [\textit{card nat} = 2] \\[2\smallskipamount] {\slshape Nitpick found no counterexample.} \postw The problem here is that \textit{op}~+ is total when \textit{nat} is taken to be $\{0\}$ but becomes partial as soon as we add $1$, because $1 + 1 \notin \{0, 1\}$. Because numbers are infinite and are approximated using a three-valued logic, there is usually no need to systematically enumerate domain sizes. If Nitpick cannot find a genuine counterexample for \textit{card~nat}~= $k$, it is very unlikely that one could be found for smaller domains. (The $P~(\textit{op}~{+})$ example above is an exception to this principle.) Nitpick nonetheless enumerates all cardinalities from 1 to 10 for \textit{nat}, mainly because smaller cardinalities are fast to handle and give rise to simpler counterexamples. This is explained in more detail in \S\ref{scope-monotonicity}. \subsection{Inductive Datatypes} \label{inductive-datatypes} Like natural numbers and integers, inductive datatypes with recursive constructors admit no finite models and must be approximated by a subterm-closed subset. For example, using a cardinality of 10 for ${'}a~\textit{list}$, Nitpick looks for all counterexamples that can be built using at most 10 different lists. 
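(A subset of $'a~\textit{list}$ is ``subterm-closed'' if the tail of every nonempty list in the subset also belongs to the subset; e.g., a subterm-closed subset containing $[a_1, a_2]$ must also contain $[a_2]$ and $[]$.)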
Let's see with an example involving \textit{hd} (which returns the first element of a list) and $@$ (which concatenates two lists): \prew \textbf{lemma} ``$\textit{hd}~(\textit{xs} \mathbin{@} [y, y]) = \textit{hd}~\textit{xs\/}$'' \\ \textbf{nitpick} \\[2\smallskipamount] \slshape Nitpick found a counterexample for \textit{card} $'a$~= 3: \\[2\smallskipamount] \hbox{}\qquad Free variables: \nopagebreak \\ \hbox{}\qquad\qquad $\textit{xs} = []$ \\ \hbox{}\qquad\qquad $\textit{y} = a_1$ \postw To see why the counterexample is genuine, we enable \textit{show\_consts} and \textit{show\_\allowbreak datatypes}: \prew {\slshape Type:} \\ \hbox{}\qquad $'a$~\textit{list}~= $\{[],\, [a_1],\, [a_1, a_1],\, \unr\}$ \\ {\slshape Constants:} \\ \hbox{}\qquad $\lambda x_1.\; x_1 \mathbin{@} [y, y] = \unkef([] := [a_1, a_1])$ \\ \hbox{}\qquad $\textit{hd} = \unkef([] := a_2,\> [a_1] := a_1,\> [a_1, a_1] := a_1)$ \postw Since $\mathit{hd}~[]$ is undefined in the logic, it may be given any value, including $a_2$. The second constant, $\lambda x_1.\; x_1 \mathbin{@} [y, y]$, is simply the append operator whose second argument is fixed to be $[y, y]$. Appending $[a_1, a_1]$ to $[a_1]$ would normally give $[a_1, a_1, a_1]$, but this value is not representable in the subset of $'a$~\textit{list} considered by Nitpick, which is shown under the ``Type'' heading; hence the result is $\unk$. Similarly, appending $[a_1, a_1]$ to itself gives $\unk$. Given \textit{card}~$'a = 3$ and \textit{card}~$'a~\textit{list} = 3$, Nitpick considers the following subsets: \kern-.5\smallskipamount %% TYPESETTING \prew \begin{multicols}{3} $\{[],\, [a_1],\, [a_2]\}$; \\ $\{[],\, [a_1],\, [a_3]\}$; \\ $\{[],\, [a_2],\, [a_3]\}$; \\ $\{[],\, [a_1],\, [a_1, a_1]\}$; \\ $\{[],\, [a_1],\, [a_2, a_1]\}$; \\ $\{[],\, [a_1],\, [a_3, a_1]\}$; \\ $\{[],\, [a_2],\, [a_1, a_2]\}$; \\ $\{[],\, [a_2],\, [a_2, a_2]\}$; \\ $\{[],\, [a_2],\, [a_3, a_2]\}$; \\ $\{[],\, [a_3],\, [a_1, a_3]\}$; \\ $\{[],\, [a_3],\, [a_2, a_3]\}$; \\ $\{[],\, [a_3],\, [a_3, a_3]\}$. \end{multicols} \postw \kern-2\smallskipamount %% TYPESETTING All subterm-closed subsets of $'a~\textit{list}$ consisting of three values are listed and only those. As an example of a non-subterm-closed subset, consider $\mathcal{S} = \{[],\, [a_1],\,\allowbreak [a_1, a_2]\}$, and observe that $[a_1, a_2]$ (i.e., $a_1 \mathbin{\#} [a_2]$) has $[a_2] \notin \mathcal{S}$ as a subterm. Here's another m\"ochtegern-lemma that Nitpick can refute without a blink: \prew \textbf{lemma} ``$\lbrakk \textit{length}~\textit{xs} = 1;\> \textit{length}~\textit{ys} = 1 \rbrakk \,\Longrightarrow\, \textit{xs} = \textit{ys\/}$'' \\ \textbf{nitpick} [\textit{show\_types}] \\[2\smallskipamount] \slshape Nitpick found a counterexample for \textit{card} $'a$~= 3: \\[2\smallskipamount] \hbox{}\qquad Free variables: \nopagebreak \\ \hbox{}\qquad\qquad $\textit{xs} = [a_2]$ \\ \hbox{}\qquad\qquad $\textit{ys} = [a_1]$ \\ \hbox{}\qquad Types: \\ \hbox{}\qquad\qquad $\textit{nat} = \{0,\, 1,\, 2,\, \unr\}$ \\ \hbox{}\qquad\qquad $'a$~\textit{list} = $\{[],\, [a_1],\, [a_2],\, \unr\}$ \postw Because datatypes are approximated using a three-valued logic, there is usually no need to systematically enumerate cardinalities: If Nitpick cannot find a genuine counterexample for \textit{card}~$'a~\textit{list}$~= 10, it is very unlikely that one could be found for smaller cardinalities. 
\subsection{Typedefs, Quotient Types, Records, Rationals, and Reals} \label{typedefs-quotient-types-records-rationals-and-reals} Nitpick generally treats types declared using \textbf{typedef} as datatypes whose single constructor is the corresponding \textit{Abs\_\kern.1ex} function. For example: \prew \textbf{typedef}~\textit{three} = ``$\{0\Colon\textit{nat},\, 1,\, 2\}$'' \\ \textbf{by}~\textit{blast} \\[2\smallskipamount] \textbf{definition}~$A \mathbin{\Colon} \textit{three}$ \textbf{where} ``\kern-.1em$A \,\equiv\, \textit{Abs\_\allowbreak three}~0$'' \\ \textbf{definition}~$B \mathbin{\Colon} \textit{three}$ \textbf{where} ``$B \,\equiv\, \textit{Abs\_three}~1$'' \\ \textbf{definition}~$C \mathbin{\Colon} \textit{three}$ \textbf{where} ``$C \,\equiv\, \textit{Abs\_three}~2$'' \\[2\smallskipamount] \textbf{lemma} ``$\lbrakk A \in X;\> B \in X\rbrakk \,\Longrightarrow\, c \in X$'' \\ \textbf{nitpick} [\textit{show\_types}] \\[2\smallskipamount] \slshape Nitpick found a counterexample: \\[2\smallskipamount] \hbox{}\qquad Free variables: \nopagebreak \\ \hbox{}\qquad\qquad $X = \{\Abs{0},\, \Abs{1}\}$ \\ \hbox{}\qquad\qquad $c = \Abs{2}$ \\ \hbox{}\qquad Types: \\ \hbox{}\qquad\qquad $\textit{nat} = \{0,\, 1,\, 2,\, \unr\}$ \\ \hbox{}\qquad\qquad $\textit{three} = \{\Abs{0},\, \Abs{1},\, \Abs{2},\, \unr\}$ \postw In the output above, $\Abs{n}$ abbreviates $\textit{Abs\_three}~n$. Quotient types are handled in much the same way. The following fragment defines the integer type \textit{my\_int} by encoding the integer $x$ by a pair of natural numbers $(m, n)$ such that $x + n = m$: \prew \textbf{fun} \textit{my\_int\_rel} \textbf{where} \\ ``$\textit{my\_int\_rel}~(x,\, y)~(u,\, v) = (x + v = u + y)$'' \\[2\smallskipamount] % \textbf{quotient\_type}~\textit{my\_int} = ``$\textit{nat} \times \textit{nat\/}$''$\;{/}\;$\textit{my\_int\_rel} \\ \textbf{by}~(\textit{auto simp add\/}:\ \textit{equivp\_def fun\_eq\_iff}) \\[2\smallskipamount] % \textbf{definition}~\textit{add\_raw}~\textbf{where} \\ ``$\textit{add\_raw} \,\equiv\, \lambda(x,\, y)~(u,\, v).\; (x + (u\Colon\textit{nat}), y + (v\Colon\textit{nat}))$'' \\[2\smallskipamount] % \textbf{quotient\_definition} ``$\textit{add\/}\Colon\textit{my\_int} \Rightarrow \textit{my\_int} \Rightarrow \textit{my\_int\/}$'' \textbf{is} \textit{add\_raw} \\[2\smallskipamount] % \textbf{lemma} ``$\textit{add}~x~y = \textit{add}~x~x$'' \\ \textbf{nitpick} [\textit{show\_types}] \\[2\smallskipamount] \slshape Nitpick found a counterexample: \\[2\smallskipamount] \hbox{}\qquad Free variables: \nopagebreak \\ \hbox{}\qquad\qquad $x = \Abs{(0,\, 0)}$ \\ \hbox{}\qquad\qquad $y = \Abs{(0,\, 1)}$ \\ \hbox{}\qquad Types: \\ \hbox{}\qquad\qquad $\textit{nat} = \{0,\, 1,\, \unr\}$ \\ \hbox{}\qquad\qquad $\textit{nat} \times \textit{nat}~[\textsl{boxed\/}] = \{(0,\, 0),\> (1,\, 0),\> \unr\}$ \\ \hbox{}\qquad\qquad $\textit{my\_int} = \{\Abs{(0,\, 0)},\> \Abs{(0,\, 1)},\> \unr\}$ \postw The values $\Abs{(0,\, 0)}$ and $\Abs{(0,\, 1)}$ represent the integers $0$ and $-1$, respectively. Other representatives would have been possible---e.g., $\Abs{(5,\, 5)}$ and $\Abs{(11,\, 12)}$. 
If we are going to use \textit{my\_int} extensively, it pays off to install a term postprocessor that converts the pair notation to the standard mathematical notation: \prew $\textbf{ML}~\,\{{*} \\ \!\begin{aligned}[t]
%& ({*}~\,\textit{Proof.context} \rightarrow \textit{string} \rightarrow (\textit{typ} \rightarrow \textit{term~list\/}) \rightarrow \textit{typ} \rightarrow \textit{term} \\[-2pt]
%& \phantom{(*}~\,{\rightarrow}\;\textit{term}~\,{*}) \\[-2pt]
& \textbf{fun}\,~\textit{my\_int\_postproc}~\_~\_~\_~T~(\textit{Const}~\_~\$~(\textit{Const}~\_~\$~\textit{t1}~\$~\textit{t2\/})) = {} \\[-2pt] & \phantom{fun}\,~\textit{HOLogic.mk\_number}~T~(\textit{snd}~(\textit{HOLogic.dest\_number~t1}) \\[-2pt] & \phantom{fun\,~\textit{HOLogic.mk\_number}~T~(}{-}~\textit{snd}~(\textit{HOLogic.dest\_number~t2\/})) \\[-2pt] & \phantom{fun}\!{\mid}\,~\textit{my\_int\_postproc}~\_~\_~\_~\_~t = t \\[-2pt] {*}\}\end{aligned}$ \\[2\smallskipamount] $\textbf{declaration}~\,\{{*} \\ \!\begin{aligned}[t] & \textit{Nitpick\_Model.register\_term\_postprocessor}~\!\begin{aligned}[t] & @\{\textrm{typ}~\textit{my\_int}\} \\[-2pt] & \textit{my\_int\_postproc}\end{aligned} \\[-2pt] {*}\}\end{aligned}$ \postw Records are handled as datatypes with a single constructor: \prew \textbf{record} \textit{point} = \\ \hbox{}\quad $\textit{Xcoord} \mathbin{\Colon} \textit{int}$ \\ \hbox{}\quad $\textit{Ycoord} \mathbin{\Colon} \textit{int}$ \\[2\smallskipamount] \textbf{lemma} ``$\textit{Xcoord}~(p\Colon\textit{point}) = \textit{Xcoord}~(q\Colon\textit{point})$'' \\ \textbf{nitpick} [\textit{show\_types}] \\[2\smallskipamount] \slshape Nitpick found a counterexample: \\[2\smallskipamount] \hbox{}\qquad Free variables: \nopagebreak \\ \hbox{}\qquad\qquad $p = \lparr\textit{Xcoord} = 1,\> \textit{Ycoord} = 1\rparr$ \\ \hbox{}\qquad\qquad $q = \lparr\textit{Xcoord} = 0,\> \textit{Ycoord} = 0\rparr$ \\ \hbox{}\qquad Types: \\ \hbox{}\qquad\qquad $\textit{int} = \{0,\, 1,\, \unr\}$ \\ \hbox{}\qquad\qquad $\textit{point} = \{\!\begin{aligned}[t] & \lparr\textit{Xcoord} = 0,\> \textit{Ycoord} = 0\rparr, \\[-2pt] %% TYPESETTING
& \lparr\textit{Xcoord} = 1,\> \textit{Ycoord} = 1\rparr,\, \unr\}\end{aligned}$ \postw Finally, Nitpick provides rudimentary support for rationals and reals using a similar approach: \prew \textbf{lemma} ``$4 * x + 3 * (y\Colon\textit{real}) \not= 1/2$'' \\ \textbf{nitpick} [\textit{show\_types}] \\[2\smallskipamount] \slshape Nitpick found a counterexample: \\[2\smallskipamount] \hbox{}\qquad Free variables: \nopagebreak \\ \hbox{}\qquad\qquad $x = 1/2$ \\ \hbox{}\qquad\qquad $y = -1/2$ \\ \hbox{}\qquad Types: \\ \hbox{}\qquad\qquad $\textit{nat} = \{0,\, 1,\, 2,\, 3,\, 4,\, 5,\, 6,\, 7,\, \unr\}$ \\ \hbox{}\qquad\qquad $\textit{int} = \{-3,\, -2,\, -1,\, 0,\, 1,\, 2,\, 3,\, 4,\, \unr\}$ \\ \hbox{}\qquad\qquad $\textit{real} = \{-3/2,\, -1/2,\, 0,\, 1/2,\, 1,\, 2,\, 3,\, 4,\, \unr\}$ \postw \subsection{Inductive and Coinductive Predicates} \label{inductive-and-coinductive-predicates} Inductively defined predicates (and sets) are particularly problematic for counterexample generators. They can make Quickcheck~\cite{berghofer-nipkow-2004} loop forever and Refute~\cite{weber-2008} run out of resources. The crux of the problem is that they are defined using a least fixed-point construction. Nitpick's philosophy is that not all inductive predicates are equal.
Consider the \textit{even} predicate below: \prew \textbf{inductive}~\textit{even}~\textbf{where} \\ ``\textit{even}~0'' $\,\mid$ \\ ``\textit{even}~$n\,\Longrightarrow\, \textit{even}~(\textit{Suc}~(\textit{Suc}~n))$'' \postw This predicate enjoys the desirable property of being well-founded, which means that the introduction rules don't give rise to infinite chains of the form \prew $\cdots\,\Longrightarrow\, \textit{even}~k'' \,\Longrightarrow\, \textit{even}~k' \,\Longrightarrow\, \textit{even}~k.$ \postw For \textit{even}, this is obvious: Any chain ending at $k$ will be of length $k/2 + 1$: \prew $\textit{even}~0\,\Longrightarrow\, \textit{even}~2\,\Longrightarrow\, \cdots \,\Longrightarrow\, \textit{even}~(k - 2) \,\Longrightarrow\, \textit{even}~k.$ \postw Wellfoundedness is desirable because it enables Nitpick to use a very efficient fixed-point computation.%
\footnote{If an inductive predicate is well-founded, then it has exactly one fixed point, which is simultaneously the least and the greatest fixed point. In these circumstances, the computation of the least fixed point amounts to the computation of an arbitrary fixed point, which can be performed using a straightforward recursive equation.} Moreover, Nitpick can prove wellfoundedness of most well-founded predicates, just as Isabelle's \textbf{function} package usually discharges termination proof obligations automatically. Let's try an example: \prew \textbf{lemma} ``$\exists n.\; \textit{even}~n \mathrel{\land} \textit{even}~(\textit{Suc}~n)$'' \\ \textbf{nitpick}~[\textit{card nat}~= 50, \textit{unary\_ints}, \textit{verbose}] \\[2\smallskipamount] \slshape The inductive predicate ``\textit{even}'' was proved well-founded; Nitpick can compute it efficiently \\[2\smallskipamount] Trying 1 scope: \\ \hbox{}\qquad \textit{card nat}~= 50. \\[2\smallskipamount] Warning: The conjecture either trivially holds for the given scopes or lies outside Nitpick's supported fragment; only potentially spurious counterexamples may be found \\[2\smallskipamount] Nitpick found a potentially spurious counterexample for \textit{card nat}~= 50: \\[2\smallskipamount] \hbox{}\qquad Empty assignment \\[2\smallskipamount] Nitpick could not find a better counterexample. It checked 1 of 1 scope \\[2\smallskipamount] Total time: 1.62 s. \postw No genuine counterexample is possible because Nitpick cannot rule out the existence of a natural number $n \ge 50$ such that both $\textit{even}~n$ and $\textit{even}~(\textit{Suc}~n)$ are true. To help Nitpick, we can bound the existential quantifier: \prew \textbf{lemma} ``$\exists n \mathbin{\le} 49.\; \textit{even}~n \mathrel{\land} \textit{even}~(\textit{Suc}~n)$'' \\ \textbf{nitpick}~[\textit{card nat}~= 50, \textit{unary\_ints}] \\[2\smallskipamount] \slshape Nitpick found a counterexample: \\[2\smallskipamount] \hbox{}\qquad Empty assignment \postw So far we were blessed by the wellfoundedness of \textit{even}. What happens if we use the following definition instead? \prew \textbf{inductive} $\textit{even}'$ \textbf{where} \\ ``$\textit{even}'~(0{\Colon}\textit{nat})$'' $\,\mid$ \\ ``$\textit{even}'~2$'' $\,\mid$ \\ ``$\lbrakk\textit{even}'~m;\> \textit{even}'~n\rbrakk \,\Longrightarrow\, \textit{even}'~(m + n)$'' \postw This definition is not well-founded: From $\textit{even}'~0$ and $\textit{even}'~0$, we can derive that $\textit{even}'~0$. Nonetheless, the predicates $\textit{even}$ and $\textit{even}'$ are equivalent. Let's check a property involving $\textit{even}'$.
To make up for the foreseeable computational hurdles entailed by non-wellfoundedness, we decrease \textit{nat}'s cardinality to a mere 10: \prew \textbf{lemma}~``$\exists n \in \{0, 2, 4, 6, 8\}.\; \lnot\;\textit{even}'~n$'' \\ \textbf{nitpick}~[\textit{card nat}~= 10,\, \textit{verbose},\, \textit{show\_consts}] \\[2\smallskipamount] \slshape The inductive predicate ``$\textit{even}'\!$'' could not be proved well-founded; Nitpick might need to unroll it \\[2\smallskipamount] Trying 6 scopes: \\ \hbox{}\qquad \textit{card nat}~= 10 and \textit{iter} $\textit{even}'$~= 0; \\ \hbox{}\qquad \textit{card nat}~= 10 and \textit{iter} $\textit{even}'$~= 1; \\ \hbox{}\qquad \textit{card nat}~= 10 and \textit{iter} $\textit{even}'$~= 2; \\ \hbox{}\qquad \textit{card nat}~= 10 and \textit{iter} $\textit{even}'$~= 4; \\ \hbox{}\qquad \textit{card nat}~= 10 and \textit{iter} $\textit{even}'$~= 8; \\ \hbox{}\qquad \textit{card nat}~= 10 and \textit{iter} $\textit{even}'$~= 9 \\[2\smallskipamount] Nitpick found a counterexample for \textit{card nat}~= 10 and \textit{iter} $\textit{even}'$~= 2: \\[2\smallskipamount] \hbox{}\qquad Constant: \nopagebreak \\ \hbox{}\qquad\qquad $\lambda i.\; \textit{even}'$ = $\unkef(\!\begin{aligned}[t] & 0 := \unkef(0 := \textit{True},\, 2 := \textit{True}),\, \\[-2pt] & 1 := \unkef(0 := \textit{True},\, 2 := \textit{True},\, 4 := \textit{True}),\, \\[-2pt] & 2 := \unkef(0 := \textit{True},\, 2 := \textit{True},\, 4 := \textit{True},\, \\[-2pt] & \phantom{2 := \unkef(}6 := \textit{True},\, 8 := \textit{True}))\end{aligned}$ \\[2\smallskipamount] Total time: 1.87 s. \postw Nitpick's output is very instructive. First, it tells us that the predicate is unrolled, meaning that it is computed iteratively from the empty set. Then it lists six scopes specifying different bounds on the numbers of iterations:\ 0, 1, 2, 4, 8, and~9. The output also shows how each iteration contributes to $\textit{even}'$. The notation $\lambda i.\; \textit{even}'$ indicates that the value of the predicate depends on an iteration counter. Iteration 0 provides the basis elements, $0$ and $2$. Iteration 1 contributes $4$ ($= 2 + 2$). Iteration 2 throws $6$ ($= 2 + 4 = 4 + 2$) and $8$ ($= 4 + 4$) into the mix. Further iterations would not contribute any new elements. The predicate $\textit{even}'$ evaluates to either \textit{True} or $\unk$, never \textit{False}.
%Some values are marked with superscripted question
%marks~(`\lower.2ex\hbox{$^\Q$}'). These are the elements for which the
%predicate evaluates to $\unk$.
When unrolling a predicate, Nitpick tries 0, 1, 2, 4, 8, 12, 16, 20, 24, and 28 iterations. However, these numbers are bounded by the cardinality of the predicate's domain. With \textit{card~nat}~= 10, no more than 9 iterations are ever needed to compute the value of a \textit{nat} predicate. You can specify the number of iterations using the \textit{iter} option, as explained in \S\ref{scope-of-search}.
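Since the counterexample above was found with \textit{iter} $\textit{even}'$~= 2, one could jump straight to that unrolling depth. The following invocation is a sketch of the per-predicate syntax of \S\ref{scope-of-search}; we do not reproduce its output here: \prew \textbf{nitpick}~[\textit{card nat}~= 10,\, \textit{iter} $\textit{even}'$~= 2] \postw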
In the next formula, $\textit{even}'$ occurs both positively and negatively: \prew \textbf{lemma} ``$\textit{even}'~(n - 2) \,\Longrightarrow\, \textit{even}'~n$'' \\ \textbf{nitpick} [\textit{card nat} = 10, \textit{show\_consts}] \\[2\smallskipamount] \slshape Nitpick found a counterexample: \\[2\smallskipamount] \hbox{}\qquad Free variable: \nopagebreak \\ \hbox{}\qquad\qquad $n = 1$ \\ \hbox{}\qquad Constants: \nopagebreak \\ \hbox{}\qquad\qquad $\lambda i.\; \textit{even}'$ = $\unkef(\!\begin{aligned}[t] & 0 := \unkef(0 := \mathit{True},\, 2 := \mathit{True}))\end{aligned}$ \\ \hbox{}\qquad\qquad $\textit{even}' \leq \unkef(\!\begin{aligned}[t] & 0 := \mathit{True},\, 1 := \mathit{False},\, 2 := \mathit{True},\, \\[-2pt] & 4 := \mathit{True},\, 6 := \mathit{True},\, 8 := \mathit{True})\end{aligned}$ \postw Notice the special constraint $\textit{even}' \leq \ldots$ in the output, whose right-hand side represents an arbitrary fixed point (not necessarily the least one). It is used to falsify $\textit{even}'~n$. In contrast, the unrolled predicate is used to satisfy $\textit{even}'~(n - 2)$. Coinductive predicates are handled dually. For example: \prew \textbf{coinductive} \textit{nats} \textbf{where} \\ ``$\textit{nats}~(x\Colon\textit{nat}) \,\Longrightarrow\, \textit{nats}~x$'' \\[2\smallskipamount] \textbf{lemma} ``$\textit{nats} = (\lambda n.\; n \mathbin\in \{0, 1, 2, 3, 4\})$'' \\ \textbf{nitpick}~[\textit{card nat} = 10,\, \textit{show\_consts}] \\[2\smallskipamount] \slshape Nitpick found a counterexample: \\[2\smallskipamount] \hbox{}\qquad Constants: \nopagebreak \\ \hbox{}\qquad\qquad $\lambda i.\; \textit{nats} = \unkef(0 := \unkef,\, 1 := \unkef,\, 2 := \unkef)$ \\ \hbox{}\qquad\qquad $\textit{nats} \geq \unkef(3 := \textit{True},\, 4 := \textit{False},\, 5 := \textit{True})$ \postw As a special case, Nitpick uses Kodkod's transitive closure operator to encode negative occurrences of non-well-founded ``linear inductive predicates,'' i.e., inductive predicates for which the predicate occurs in at most one assumption of each introduction rule.
For example: \prew \textbf{inductive} \textit{odd} \textbf{where} \\ ``$\textit{odd}~1$'' $\,\mid$ \\ ``$\lbrakk \textit{odd}~m;\>\, \textit{even}~n\rbrakk \,\Longrightarrow\, \textit{odd}~(m + n)$'' \\[2\smallskipamount] \textbf{lemma}~``$\textit{odd}~n \,\Longrightarrow\, \textit{odd}~(n - 2)$'' \\ \textbf{nitpick}~[\textit{card nat} = 4,\, \textit{show\_consts}] \\[2\smallskipamount] \slshape Nitpick found a counterexample: \\[2\smallskipamount] \hbox{}\qquad Free variable: \nopagebreak \\ \hbox{}\qquad\qquad $n = 1$ \\ \hbox{}\qquad Constants: \nopagebreak \\ \hbox{}\qquad\qquad $\textit{even} = \unkef(0 := \textit{True},\, 1 := \textit{False},\, 2 := \textit{True},\, 3 := \textit{False})$ \\ \hbox{}\qquad\qquad $\textit{odd}_{\textsl{base}} = {}$ \\ \hbox{}\qquad\qquad\quad $\unkef(0 := \textit{False},\, 1 := \textit{True},\, 2 := \textit{False},\, 3 := \textit{False})$ \\ \hbox{}\qquad\qquad $\textit{odd}_{\textsl{step}} = \unkef$\\ \hbox{}\qquad\qquad\quad $( \!\begin{aligned}[t] & 0 := \unkef(0 := \textit{True},\, 1 := \textit{False},\, 2 := \textit{True},\, 3 := \textit{False}), \\[-2pt] & 1 := \unkef(0 := \textit{False},\, 1 := \textit{True},\, 2 := \textit{False},\, 3 := \textit{True}), \\[-2pt] & 2 := \unkef(0 := \textit{False},\, 1 := \textit{False},\, 2 := \textit{True},\, 3 := \textit{False}), \\[-2pt] & 3 := \unkef(0 := \textit{False},\, 1 := \textit{False},\, 2 := \textit{False},\, 3 := \textit{True})) \end{aligned}$ \\ \hbox{}\qquad\qquad $\textit{odd} \leq \unkef(0 := \textit{False},\, 1 := \textit{True},\, 2 := \textit{False},\, 3 := \textit{True})$ \postw \noindent In the output, $\textit{odd}_{\textrm{base}}$ represents the base elements and $\textit{odd}_{\textrm{step}}$ is a transition relation that computes new elements from known ones. The set $\textit{odd}$ consists of all the values reachable through the reflexive transitive closure of $\textit{odd}_{\textrm{step}}$ starting with any element from $\textit{odd}_{\textrm{base}}$, namely 1 and 3. Using Kodkod's transitive closure to encode linear predicates is normally either more thorough or more efficient than unrolling (depending on the value of \textit{iter}), but you can disable it by passing the \textit{dont\_star\_linear\_preds} option. \subsection{Coinductive Datatypes} \label{coinductive-datatypes} A coinductive datatype is similar to an inductive datatype but allows infinite objects. Thus, the infinite lists $\textit{ps}$ $=$ $[a, a, a, \ldots]$, $\textit{qs}$ $=$ $[a, b, a, b, \ldots]$, and $\textit{rs}$ $=$ $[0, 1, 2, 3, \ldots]$ can be defined as coinductive lists, or ``lazy lists,'' using the $\textit{LNil}\mathbin{\Colon}{'}a~\textit{llist}$ and $\textit{LCons}\mathbin{\Colon}{'}a \mathbin{\Rightarrow} {'}a~\textit{llist} \mathbin{\Rightarrow} {'}a~\textit{llist}$ constructors.
Although it is otherwise no friend of infinity, Nitpick can find counterexamples involving cyclic lists such as \textit{ps} and \textit{qs} above as well as finite lists: \prew \textbf{codatatype} $'a$ \textit{llist} = \textit{LNil}~$\mid$~\textit{LCons}~$'a$~``$'a\;\textit{llist}$'' \\[2\smallskipamount] \textbf{lemma} ``$\textit{xs} \not= \textit{LCons}~a~\textit{xs\/}$'' \\ \textbf{nitpick} \\[2\smallskipamount] \slshape Nitpick found a counterexample for {\itshape card}~$'a$ = 1: \\[2\smallskipamount] \hbox{}\qquad Free variables: \nopagebreak \\ \hbox{}\qquad\qquad $\textit{a} = a_1$ \\ \hbox{}\qquad\qquad $\textit{xs} = \textsl{THE}~\omega.\; \omega = \textit{LCons}~a_1~\omega$ \postw The notation $\textrm{THE}~\omega.\; \omega = t(\omega)$ stands for the infinite term $t(t(t(\ldots)))$. Hence, \textit{xs} is simply the infinite list $[a_1, a_1, a_1, \ldots]$. The next example is more interesting: \prew \textbf{primcorec}~$\textit{iterates}$~\textbf{where} \\ ``$\textit{iterates}~f\>a = \textit{LCons}~a~(\textit{iterates}~f\>(f\>a))$'' \\[2\smallskipamount] \textbf{lemma}~``$\lbrakk\textit{xs} = \textit{LCons}~a~\textit{xs};\>\, \textit{ys} = \textit{iterates}~(\lambda b.\> a)~b\rbrakk \,\Longrightarrow\, \textit{xs} = \textit{ys\/}$'' \\ \textbf{nitpick} [\textit{verbose}] \\[2\smallskipamount] \slshape The type $'a$ passed the monotonicity test; Nitpick might be able to skip some scopes \\[2\smallskipamount] Trying 10 scopes: \\ \hbox{}\qquad \textit{card} $'a$~= 1, \textit{card} ``\kern1pt$'a~\textit{llist\/}$''~= 1, and \textit{bisim\_depth}~= 0; \\ \hbox{}\qquad $\qquad\vdots$ \\[.5\smallskipamount] \hbox{}\qquad \textit{card} $'a$~= 10, \textit{card} ``\kern1pt$'a~\textit{llist\/}$''~= 10, and \textit{bisim\_depth}~= 9 \\[2\smallskipamount] Nitpick found a counterexample for {\itshape card}~$'a$ = 2, \textit{card}~``\kern1pt$'a~\textit{llist\/}$''~= 2, and \textit{bisim\_\allowbreak depth}~= 1: \\[2\smallskipamount] \hbox{}\qquad Free variables: \nopagebreak \\ \hbox{}\qquad\qquad $\textit{a} = a_1$ \\ \hbox{}\qquad\qquad $\textit{b} = a_2$ \\ \hbox{}\qquad\qquad $\textit{xs} = \textsl{THE}~\omega.\; \omega = \textit{LCons}~a_1~\omega$ \\ \hbox{}\qquad\qquad $\textit{ys} = \textit{LCons}~a_2~(\textsl{THE}~\omega.\; \omega = \textit{LCons}~a_1~\omega)$ \\[2\smallskipamount] Total time: 1.11 s \postw The lazy list $\textit{xs}$ is simply $[a_1, a_1, a_1, \ldots]$, whereas $\textit{ys}$ is $[a_2, a_1, a_1, a_1, \ldots]$, i.e., a lasso-shaped list with $[a_2]$ as its stem and $[a_1]$ as its cycle. In general, the list segment within the scope of the $\textrm{THE}$ binder corresponds to the lasso's cycle, whereas the segment leading to the binder is the stem. A salient property of coinductive datatypes is that two objects are considered equal if and only if they lead to the same observations. For example, the two lazy lists
%
\begin{gather*} \textrm{THE}~\omega.\; \omega = \textit{LCons}~a~(\textit{LCons}~b~\omega) \\ \textit{LCons}~a~(\textrm{THE}~\omega.\; \omega = \textit{LCons}~b~(\textit{LCons}~a~\omega)) \end{gather*}
%
are identical, because both lead to the sequence of observations $a$, $b$, $a$, $b$, \hbox{\ldots} (or, equivalently, both encode the infinite list $[a, b, a, b, \ldots]$). This concept of equality for coinductive datatypes is called bisimulation and is defined coinductively. Internally, Nitpick encodes the coinductive bisimilarity predicate as part of the Kodkod problem to ensure that distinct objects lead to different observations.
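To give a more concrete feel for bisimilarity, such a predicate could be defined for lazy lists roughly as follows. This is only an illustrative sketch with a name of our choosing (\textit{lbisim\/}); Nitpick's actual encoding lives at the Kodkod level, not in an Isabelle theory: \prew \textbf{coinductive}~\textit{lbisim}~\textbf{where} \\ ``$\textit{lbisim}~\textit{LNil}~\textit{LNil\/}$'' $\,\mid$ \\ ``$\textit{lbisim}~\textit{xs}~\textit{ys} \,\Longrightarrow\, \textit{lbisim}~(\textit{LCons}~a~\textit{xs})~(\textit{LCons}~a~\textit{ys})$'' \postw Two lazy lists are equal precisely when they are related by such a predicate---that is, when no sequence of observations can tell them apart.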
Encoding the bisimilarity predicate is somewhat expensive and often unnecessary, so it can be disabled by setting the \textit{bisim\_depth} option to $-1$. The bisimilarity check is then performed \textsl{after} the counterexample has been found to ensure correctness. If this after-the-fact check fails, the counterexample is tagged as ``quasi genuine'' and Nitpick recommends trying again with \textit{bisim\_depth} set to a nonnegative integer. The next formula illustrates the need for bisimilarity (either as a Kodkod predicate or as an after-the-fact check) to prevent spurious counterexamples: \prew \textbf{lemma} ``$\lbrakk xs = \textit{LCons}~a~\textit{xs};\>\, \textit{ys} = \textit{LCons}~a~\textit{ys}\rbrakk \,\Longrightarrow\, \textit{xs} = \textit{ys\/}$'' \\ \textbf{nitpick} [\textit{bisim\_depth} = $-1$, \textit{show\_types}] \\[2\smallskipamount] \slshape Nitpick found a quasi genuine counterexample for $\textit{card}~'a$ = 2: \\[2\smallskipamount] \hbox{}\qquad Free variables: \nopagebreak \\ \hbox{}\qquad\qquad $a = a_1$ \\ \hbox{}\qquad\qquad $\textit{xs} = \textsl{THE}~\omega.\; \omega = \textit{LCons}~a_1~\omega$ \\ \hbox{}\qquad\qquad $\textit{ys} = \textsl{THE}~\omega.\; \omega = \textit{LCons}~a_1~\omega$ \\ \hbox{}\qquad Type:\strut \nopagebreak \\ \hbox{}\qquad\qquad $'a~\textit{llist} = \{\!\begin{aligned}[t] & \textsl{THE}~\omega.\; \omega = \textit{LCons}~a_1~\omega, \\[-2pt] & \textsl{THE}~\omega.\; \omega = \textit{LCons}~a_1~\omega,\> \unr\}\end{aligned}$ \\[2\smallskipamount] Try again with ``\textit{bisim\_depth}'' set to a nonnegative value to confirm that the counterexample is genuine \\[2\smallskipamount] {\upshape\textbf{nitpick}} \\[2\smallskipamount] \slshape Nitpick found no counterexample \postw In the first \textbf{nitpick} invocation, the after-the-fact check discovered that the two known elements of type $'a~\textit{llist}$ are bisimilar, prompting Nitpick to label the example as only ``quasi genuine.'' A compromise between leaving out the bisimilarity predicate from the Kodkod problem and performing the after-the-fact check is to specify a low nonnegative \textit{bisim\_depth} value. In general, a value of $K$ means that Nitpick will require all lists to be distinguished from each other by their prefixes of length $K$. However, setting $K$ too low can overconstrain Nitpick, preventing it from finding any counterexamples. \subsection{Boxing} \label{boxing} Nitpick normally maps function and product types directly to the corresponding Kodkod concepts. As a consequence, if $'a$ has cardinality 3 and $'b$ has cardinality 4, then $'a \times {'}b$ has cardinality 12 ($= 4 \times 3$) and $'a \Rightarrow {'}b$ has cardinality 64 ($= 4^3$). In some circumstances, it pays off to treat these types in the same way as plain datatypes, by approximating them by a subset of a given cardinality. This technique is called ``boxing'' and is particularly useful for functions passed as arguments to other functions, for high-arity functions, and for large tuples. Under the hood, boxing involves wrapping occurrences of the types $'a \times {'}b$ and $'a \Rightarrow {'}b$ in isomorphic datatypes, as can be seen by enabling the \textit{debug} option.
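Conceptually, the wrappers are nothing more than single-constructor datatypes. The following sketch conveys the idea; the names are ours and merely suggestive of Nitpick's internal constructions: \prew \textbf{datatype}~$('a,\, 'b)$~\textit{fun\_box} = \textit{FunBox}~``\kern1pt$'a \Rightarrow {'}b$'' \\ \textbf{datatype}~$('a,\, 'b)$~\textit{pair\_box} = \textit{PairBox}~$'a$~$'b$ \postw Because the wrapper datatype is approximated by a subset of a given cardinality, a boxed copy of $'a \Rightarrow {'}b$ with cardinality 4, say, contributes only 4 values to the search space instead of the 64 computed above.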
To illustrate boxing, we consider a formalization of $\lambda$-terms represented using de Bruijn's notation: \prew \textbf{datatype} \textit{tm} = \textit{Var}~\textit{nat}~$\mid$~\textit{Lam}~\textit{tm} $\mid$ \textit{App~tm~tm} \postw The $\textit{lift}~t~k$ function increments all variables with indices greater than or equal to $k$ by one: \prew \textbf{primrec} \textit{lift} \textbf{where} \\ ``$\textit{lift}~(\textit{Var}~j)~k = \textit{Var}~(\textrm{if}~j < k~\textrm{then}~j~\textrm{else}~j + 1)$'' $\mid$ \\ ``$\textit{lift}~(\textit{Lam}~t)~k = \textit{Lam}~(\textit{lift}~t~(k + 1))$'' $\mid$ \\ ``$\textit{lift}~(\textit{App}~t~u)~k = \textit{App}~(\textit{lift}~t~k)~(\textit{lift}~u~k)$'' \postw The $\textit{loose}~t~k$ predicate returns \textit{True} if and only if term $t$ has a loose variable with index $k$ or more: \prew \textbf{primrec}~\textit{loose} \textbf{where} \\ ``$\textit{loose}~(\textit{Var}~j)~k = (j \ge k)$'' $\mid$ \\ ``$\textit{loose}~(\textit{Lam}~t)~k = \textit{loose}~t~(\textit{Suc}~k)$'' $\mid$ \\ ``$\textit{loose}~(\textit{App}~t~u)~k = (\textit{loose}~t~k \mathrel{\lor} \textit{loose}~u~k)$'' \postw Next, the $\textit{subst}~\sigma~t$ function applies the substitution $\sigma$ to $t$: \prew \textbf{primrec}~\textit{subst} \textbf{where} \\ ``$\textit{subst}~\sigma~(\textit{Var}~j) = \sigma~j$'' $\mid$ \\ ``$\textit{subst}~\sigma~(\textit{Lam}~t) = {}$\phantom{''} \\ \phantom{``}$\textit{Lam}~(\textit{subst}~(\lambda n.\> \textrm{case}~n~\textrm{of}~0 \Rightarrow \textit{Var}~0 \mid \textit{Suc}~m \Rightarrow \textit{lift}~(\sigma~m)~1)~t)$'' $\mid$ \\ ``$\textit{subst}~\sigma~(\textit{App}~t~u) = \textit{App}~(\textit{subst}~\sigma~t)~(\textit{subst}~\sigma~u)$'' \postw A substitution is a function that maps variable indices to terms. Observe that $\sigma$ is a function passed as an argument and that Nitpick can't optimize it away, because the recursive call for the \textit{Lam} case involves an altered version. Also notice the \textit{lift} call, which increments the variable indices when moving under a \textit{Lam}. A reasonable property to expect of substitution is that it should leave closed terms unchanged.
Alas, even this simple property does not hold: \prew \textbf{lemma}~``$\lnot\,\textit{loose}~t~0 \,\Longrightarrow\, \textit{subst}~\sigma~t = t$'' \\ \textbf{nitpick} [\textit{verbose}] \\[2\smallskipamount] \slshape Trying 10 scopes: \nopagebreak \\ \hbox{}\qquad \textit{card~nat}~= 1, \textit{card tm}~= 1, and \textit{card} ``$\textit{nat} \Rightarrow \textit{tm\/}$'' = 1; \\ \hbox{}\qquad \textit{card~nat}~= 2, \textit{card tm}~= 2, and \textit{card} ``$\textit{nat} \Rightarrow \textit{tm\/}$'' = 2; \\ \hbox{}\qquad $\qquad\vdots$ \\[.5\smallskipamount] \hbox{}\qquad \textit{card~nat}~= 10, \textit{card tm}~= 10, and \textit{card} ``$\textit{nat} \Rightarrow \textit{tm\/}$'' = 10 \\[2\smallskipamount] Nitpick found a counterexample for \textit{card~nat}~= 6, \textit{card~tm}~= 6, and \textit{card}~``$\textit{nat} \Rightarrow \textit{tm\/}$''~= 6: \\[2\smallskipamount] \hbox{}\qquad Free variables: \nopagebreak \\ \hbox{}\qquad\qquad $\sigma = \unkef(\!\begin{aligned}[t] & 0 := \textit{Var}~0,\> 1 := \textit{Var}~0,\> 2 := \textit{Var}~0, \\[-2pt] & 3 := \textit{Var}~0,\> 4 := \textit{Var}~0,\> 5 := \textit{Lam}~(\textit{Lam}~(\textit{Var}~0)))\end{aligned}$ \\ \hbox{}\qquad\qquad $t = \textit{Lam}~(\textit{Lam}~(\textit{Var}~1))$ \\[2\smallskipamount] Total time: 3.08 s \postw Using \textit{eval}, we find out that $\textit{subst}~\sigma~t = \textit{Lam}~(\textit{Lam}~(\textit{Var}~0))$. Using the traditional $\lambda$-calculus notation, $t$ is $\lambda x\, y.\> x$ whereas $\textit{subst}~\sigma~t$ is (wrongly) $\lambda x\, y.\> y$. The bug is in \textit{subst\/}: The $\textit{lift}~(\sigma~m)~1$ call should be replaced with $\textit{lift}~(\sigma~m)~0$. An interesting aspect of Nitpick's verbose output is that it assigned increasing cardinalities from 1 to 10 to the type $\textit{nat} \Rightarrow \textit{tm}$ of the higher-order argument $\sigma$ of \textit{subst}. For the formula of interest, knowing 6 values of that type was enough to find the counterexample. Without boxing, $6^6 = 46\,656$ values must be considered, a hopeless undertaking: \prew \textbf{nitpick} [\textit{dont\_box}] \\[2\smallskipamount] {\slshape Nitpick ran out of time after checking 3 of 10 scopes} \postw Boxing can be enabled or disabled globally or on a per-type basis using the \textit{box} option. Nitpick usually makes reasonable choices about which types should be boxed, but option tweaking sometimes helps.
%A related optimization,
%``finitization,'' attempts to wrap functions that are constant at all but finitely
%many points (e.g., finite sets); see the documentation for the \textit{finitize}
%option in \S\ref{scope-of-search} for details.
\subsection{Scope Monotonicity} \label{scope-monotonicity} The \textit{card} option (together with \textit{iter}, \textit{bisim\_depth}, and \textit{max}) controls which scopes are actually tested. In general, to exhaust all models below a certain cardinality bound, the number of scopes that Nitpick must consider increases exponentially with the number of type variables (and \textbf{typedecl}'d types) occurring in the formula. Given the default cardinality specification of 1--10, no fewer than $10^4 = 10\,000$ scopes must be considered for a formula involving $'a$, $'b$, $'c$, and $'d$. Fortunately, many formulas exhibit a property called \textsl{scope monotonicity}, meaning that if the formula is falsifiable for a given scope, it is also falsifiable for all larger scopes \cite[p.~165]{jackson-2006}.
Consider the formula \prew \textbf{lemma}~``$\textit{length~xs} = \textit{length~ys} \,\Longrightarrow\, \textit{rev}~(\textit{zip~xs~ys}) = \textit{zip~xs}~(\textit{rev~ys})$'' \postw where \textit{xs} is of type $'a~\textit{list}$ and \textit{ys} is of type $'b~\textit{list}$. A priori, Nitpick would need to consider $1\,000$ scopes to exhaust the specification \textit{card}~= 1--10 (10 cardinalities for $'a$ $\times$ 10 cardinalities for $'b$ $\times$ 10 cardinalities for the datatypes). However, our intuition tells us that any counterexample found with a small scope would still be a counterexample in a larger scope---by simply ignoring the fresh $'a$ and $'b$ values provided by the larger scope. Nitpick comes to the same conclusion after a careful inspection of the formula and the relevant definitions: \prew \textbf{nitpick}~[\textit{verbose}] \\[2\smallskipamount] \slshape The types $'a$ and $'b$ passed the monotonicity test; Nitpick might be able to skip some scopes. \\[2\smallskipamount] Trying 10 scopes: \\ \hbox{}\qquad \textit{card} $'a$~= 1, \textit{card} $'b$~= 1, \textit{card} \textit{nat}~= 1, \textit{card} ``$('a \times {'}b)$ \textit{list\/}''~= 1, \\ \hbox{}\qquad\quad \textit{card} ``\kern1pt$'a$ \textit{list\/}''~= 1, and \textit{card} ``\kern1pt$'b$ \textit{list\/}''~= 1; \\ \hbox{}\qquad \textit{card} $'a$~= 2, \textit{card} $'b$~= 2, \textit{card} \textit{nat}~= 2, \textit{card} ``$('a \times {'}b)$ \textit{list\/}''~= 2, \\ \hbox{}\qquad\quad \textit{card} ``\kern1pt$'a$ \textit{list\/}''~= 2, and \textit{card} ``\kern1pt$'b$ \textit{list\/}''~= 2; \\ \hbox{}\qquad $\qquad\vdots$ \\[.5\smallskipamount] \hbox{}\qquad \textit{card} $'a$~= 10, \textit{card} $'b$~= 10, \textit{card} \textit{nat}~= 10, \textit{card} ``$('a \times {'}b)$ \textit{list\/}''~= 10, \\ \hbox{}\qquad\quad \textit{card} ``\kern1pt$'a$ \textit{list\/}''~= 10, and \textit{card} ``\kern1pt$'b$ \textit{list\/}''~= 10 \\[2\smallskipamount] Nitpick found a counterexample for \textit{card} $'a$~= 5, \textit{card} $'b$~= 5, \textit{card} \textit{nat}~= 5, \textit{card} ``$('a \times {'}b)$ \textit{list\/}''~= 5, \textit{card} ``\kern1pt$'a$ \textit{list\/}''~= 5, and \textit{card} ``\kern1pt$'b$ \textit{list\/}''~= 5: \\[2\smallskipamount] \hbox{}\qquad Free variables: \nopagebreak \\ \hbox{}\qquad\qquad $\textit{xs} = [a_1, a_2]$ \\ \hbox{}\qquad\qquad $\textit{ys} = [b_1, b_1]$ \\[2\smallskipamount] Total time: 1.63 s. \postw In theory, it should be sufficient to test a single scope: \prew \textbf{nitpick}~[\textit{card}~= 10] \postw However, this is often less efficient in practice and may lead to overly complex counterexamples. If the monotonicity check fails but we believe that the formula is monotonic (or we don't mind missing some counterexamples), we can pass the \textit{mono} option. To convince yourself that this option is risky, simply consider this example from \S\ref{skolemization}: \prew \textbf{lemma} ``$\exists g.\; \forall x\Colon 'b.~g~(f~x) = x \,\Longrightarrow\, \forall y\Colon {'}a.\; \exists x.~y = f~x$'' \\ \textbf{nitpick} [\textit{mono}] \\[2\smallskipamount] {\slshape Nitpick found no counterexample} \\[2\smallskipamount] \textbf{nitpick} \\[2\smallskipamount] \slshape Nitpick found a counterexample for \textit{card} $'a$~= 2 and \textit{card} $'b$~=~1: \\ \hbox{}\qquad $\vdots$ \postw (It turns out the formula holds if and only if $\textit{card}~'a \le \textit{card}~'b$.)
Although this is rarely advisable, the automatic monotonicity checks can be disabled by passing \textit{non\_mono} (\S\ref{scope-of-search}). As insinuated in \S\ref{natural-numbers-and-integers} and \S\ref{inductive-datatypes}, \textit{nat}, \textit{int}, and inductive datatypes are normally monotonic and treated as such. The same is true for record types, \textit{rat}, and \textit{real}. Thus, given the cardinality specification 1--10, a formula involving \textit{nat}, \textit{int}, \textit{int~list}, \textit{rat}, and \textit{rat~list} will lead Nitpick to consider only 10~scopes instead of $10^4 = 10\,000$. On the other hand, \textbf{typedef}s and quotient types are generally nonmonotonic. \subsection{Inductive Properties} \label{inductive-properties} Inductive properties are a particular pain to prove, because the failure to establish an induction step can mean several things:
%
\begin{enumerate} \item The property is invalid. \item The property is valid but is too weak to support the induction step. \item The property is valid and strong enough; it's just that we haven't found the proof yet. \end{enumerate}
%
Depending on which scenario applies, we would take the appropriate course of action:
%
\begin{enumerate} \item Repair the statement of the property so that it becomes valid. \item Generalize the property and/or prove auxiliary properties. \item Work harder on a proof. \end{enumerate}
%
How can we distinguish between the three scenarios? Nitpick's normal mode of operation can often detect scenario 1, and Isabelle's automatic tactics help with scenario 3. Using appropriate techniques, it is also often possible to use Nitpick to identify scenario 2. Consider the following transition system, in which natural numbers represent states: \prew \textbf{inductive\_set}~\textit{reach}~\textbf{where} \\ ``$(4\Colon\textit{nat}) \in \textit{reach\/}$'' $\mid$ \\ ``$\lbrakk n < 4;\> n \in \textit{reach\/}\rbrakk \,\Longrightarrow\, 3 * n + 1 \in \textit{reach\/}$'' $\mid$ \\ ``$n \in \textit{reach} \,\Longrightarrow\, n + 2 \in \textit{reach\/}$'' \postw We will try to prove that only even numbers are reachable: \prew \textbf{lemma}~``$n \in \textit{reach} \,\Longrightarrow\, 2~\textrm{dvd}~n$'' \postw Does this property hold? Nitpick cannot find a counterexample within 30 seconds, so let's attempt a proof by induction: \prew \textbf{apply}~(\textit{induct~set}{:}~\textit{reach\/}) \\ \textbf{apply}~\textit{auto} \postw This leaves us in the following proof state: \prew {\slshape goal (2 subgoals): \\ \phantom{0}1. ${\bigwedge}n.\;\, \lbrakk n \in \textit{reach\/};\, n < 4;\, 2~\textsl{dvd}~n\rbrakk \,\Longrightarrow\, 2~\textsl{dvd}~\textit{Suc}~(3 * n)$ \\ \phantom{0}2. ${\bigwedge}n.\;\, \lbrakk n \in \textit{reach\/};\, 2~\textsl{dvd}~n\rbrakk \,\Longrightarrow\, 2~\textsl{dvd}~\textit{Suc}~(\textit{Suc}~n)$ } \postw If we run Nitpick on the first subgoal, it still won't find any counterexample; and yet, \textit{auto} fails to go further, and \textit{arith} is helpless. However, notice the $n \in \textit{reach}$ assumption, which strengthens the induction hypothesis but is not immediately usable in the proof. If we remove it and invoke Nitpick, this time we get a counterexample: \prew \textbf{apply}~(\textit{thin\_tac}~``$n \in \textit{reach\/}$'') \\ \textbf{nitpick} \\[2\smallskipamount] \slshape Nitpick found a counterexample: \\[2\smallskipamount] \hbox{}\qquad Skolem constant: \nopagebreak \\ \hbox{}\qquad\qquad $n = 0$ \postw Indeed, 0 < 4, 2 divides 0, but 2 does not divide 1.
We can use this information to strengthen the lemma: \prew \textbf{lemma}~``$n \in \textit{reach} \,\Longrightarrow\, 2~\textrm{dvd}~n \mathrel{\lor} n \not= 0$'' \postw Unfortunately, the proof by induction still gets stuck, except that Nitpick now finds the counterexample $n = 2$. We generalize the lemma further to \prew \textbf{lemma}~``$n \in \textit{reach} \,\Longrightarrow\, 2~\textrm{dvd}~n \mathrel{\lor} n \ge 4$'' \postw and this time \textit{arith} can finish off the subgoals. \section{Case Studies} \label{case-studies} As a didactic device, the previous section focused mostly on toy formulas whose validity can easily be assessed just by looking at the formula. We will now review two somewhat more realistic case studies that are within Nitpick's reach:\ a context-free grammar modeled by mutually inductive sets and a functional implementation of AA trees. The results presented in this section were produced with the following settings: \prew \textbf{nitpick\_params} [\textit{max\_potential}~= 0] \postw \subsection{A Context-Free Grammar} \label{a-context-free-grammar} Our first case study is taken from section 7.4 in the Isabelle tutorial \cite{isa-tutorial}. The following grammar, originally due to Hopcroft and Ullman, produces all strings with an equal number of $a$'s and $b$'s: \prew \begin{tabular}{@{}r@{$\;\,$}c@{$\;\,$}l@{}} $S$ & $::=$ & $\epsilon \mid bA \mid aB$ \\ $A$ & $::=$ & $aS \mid bAA$ \\ $B$ & $::=$ & $bS \mid aBB$ \end{tabular} \postw The intuition behind the grammar is that $A$ generates all strings with one more $a$ than $b$'s and $B$ generates all strings with one more $b$ than $a$'s. The alphabet consists exclusively of $a$'s and $b$'s: \prew \textbf{datatype} \textit{alphabet}~= $a$ $\mid$ $b$ \postw Strings over the alphabet are represented by \textit{alphabet list}s. Nonterminals in the grammar become sets of strings. The production rules presented above can be expressed as a mutually inductive definition: \prew \textbf{inductive\_set} $S$ \textbf{and} $A$ \textbf{and} $B$ \textbf{where} \\ \textit{R1}:\kern.4em ``$[] \in S$'' $\,\mid$ \\ \textit{R2}:\kern.4em ``$w \in A\,\Longrightarrow\, b \mathbin{\#} w \in S$'' $\,\mid$ \\ \textit{R3}:\kern.4em ``$w \in B\,\Longrightarrow\, a \mathbin{\#} w \in S$'' $\,\mid$ \\ \textit{R4}:\kern.4em ``$w \in S\,\Longrightarrow\, a \mathbin{\#} w \in A$'' $\,\mid$ \\ \textit{R5}:\kern.4em ``$w \in S\,\Longrightarrow\, b \mathbin{\#} w \in S$'' $\,\mid$ \\ \textit{R6}:\kern.4em ``$\lbrakk v \in B;\> v \in B\rbrakk \,\Longrightarrow\, a \mathbin{\#} v \mathbin{@} w \in B$'' \postw The conversion of the grammar into the inductive definition was done manually by Joe Blow, an underpaid undergraduate student. As a result, some errors might have sneaked in. Debugging faulty specifications is at the heart of Nitpick's \textsl{raison d'\^etre}. A good approach is to state desirable properties of the specification (here, that $S$ is exactly the set of strings over $\{a, b\}$ with as many $a$'s as $b$'s) and check them with Nitpick. If the properties are correctly stated, counterexamples will point to bugs in the specification. For our grammar example, we will proceed in two steps, separating the soundness and the completeness of the set $S$.
First, soundness: \prew \textbf{theorem}~\textit{S\_sound\/}: \\ ``$w \in S \longrightarrow \textit{length}~[x\mathbin{\leftarrow} w.\; x = a] = \textit{length}~[x\mathbin{\leftarrow} w.\; x = b]$'' \\ \textbf{nitpick} \\[2\smallskipamount] \slshape Nitpick found a counterexample: \\[2\smallskipamount] \hbox{}\qquad Free variable: \nopagebreak \\ \hbox{}\qquad\qquad $w = [b]$ \postw It would seem that $[b] \in S$. How could this be? An inspection of the introduction rules reveals that the only rule with a right-hand side of the form $b \mathbin{\#} {\ldots} \in S$ that could have introduced $[b]$ into $S$ is \textit{R5}: \prew ``$w \in S\,\Longrightarrow\, b \mathbin{\#} w \in S$'' \postw On closer inspection, we can see that this rule is wrong. To match the production $B ::= bS$, the second $S$ should be a $B$. We fix the typo and try again: \prew \textbf{nitpick} \\[2\smallskipamount] \slshape Nitpick found a counterexample: \\[2\smallskipamount] \hbox{}\qquad Free variable: \nopagebreak \\ \hbox{}\qquad\qquad $w = [a, a, b]$ \postw Some detective work is necessary to find out what went wrong here. To get $[a, a, b] \in S$, we need $[a, b] \in B$ by \textit{R3}, which in turn can only come from \textit{R6}: \prew ``$\lbrakk v \in B;\> v \in B\rbrakk \,\Longrightarrow\, a \mathbin{\#} v \mathbin{@} w \in B$'' \postw Now, this formula must be wrong: The same assumption occurs twice, and the variable $w$ is unconstrained. Clearly, one of the two occurrences of $v$ in the assumptions should have been a $w$. With the correction made, we don't get any counterexample from Nitpick. Let's move on and check completeness: \prew \textbf{theorem}~\textit{S\_complete}: \\ ``$\textit{length}~[x\mathbin{\leftarrow} w.\; x = a] = \textit{length}~[x\mathbin{\leftarrow} w.\; x = b] \longrightarrow w \in S$'' \\ \textbf{nitpick} \\[2\smallskipamount] \slshape Nitpick found a counterexample: \\[2\smallskipamount] \hbox{}\qquad Free variable: \nopagebreak \\ \hbox{}\qquad\qquad $w = [b, b, a, a]$ \postw Apparently, $[b, b, a, a] \notin S$, even though it has the same numbers of $a$'s and $b$'s. But since our inductive definition passed the soundness check, the introduction rules we have are probably correct. Perhaps we simply lack an introduction rule. Comparing the grammar with the inductive definition, our suspicion is confirmed: Joe Blow simply forgot the production $A ::= bAA$, without which the grammar cannot generate two or more $b$'s in a row. So we add the rule \prew ``$\lbrakk v \in A;\> w \in A\rbrakk \,\Longrightarrow\, b \mathbin{\#} v \mathbin{@} w \in A$'' \postw With this last change, we don't get any counterexamples from Nitpick for either soundness or completeness. 
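Assembling the three repairs---the corrected \textit{R5} and \textit{R6} and the missing rule, which we label \textit{R7\/}---the definition now reads: \prew \textbf{inductive\_set} $S$ \textbf{and} $A$ \textbf{and} $B$ \textbf{where} \\ \textit{R1}:\kern.4em ``$[] \in S$'' $\,\mid$ \\ \textit{R2}:\kern.4em ``$w \in A\,\Longrightarrow\, b \mathbin{\#} w \in S$'' $\,\mid$ \\ \textit{R3}:\kern.4em ``$w \in B\,\Longrightarrow\, a \mathbin{\#} w \in S$'' $\,\mid$ \\ \textit{R4}:\kern.4em ``$w \in S\,\Longrightarrow\, a \mathbin{\#} w \in A$'' $\,\mid$ \\ \textit{R5}:\kern.4em ``$w \in S\,\Longrightarrow\, b \mathbin{\#} w \in B$'' $\,\mid$ \\ \textit{R6}:\kern.4em ``$\lbrakk v \in B;\> w \in B\rbrakk \,\Longrightarrow\, a \mathbin{\#} v \mathbin{@} w \in B$'' $\,\mid$ \\ \textit{R7}:\kern.4em ``$\lbrakk v \in A;\> w \in A\rbrakk \,\Longrightarrow\, b \mathbin{\#} v \mathbin{@} w \in A$'' \postw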
We can even generalize our result to cover $A$ and $B$ as well: \prew \textbf{theorem} \textit{S\_A\_B\_sound\_and\_complete}: \\ ``$w \in S \longleftrightarrow \textit{length}~[x \mathbin{\leftarrow} w.\; x = a] = \textit{length}~[x \mathbin{\leftarrow} w.\; x = b]$'' \\ ``$w \in A \longleftrightarrow \textit{length}~[x \mathbin{\leftarrow} w.\; x = a] = \textit{length}~[x \mathbin{\leftarrow} w.\; x = b] + 1$'' \\ ``$w \in B \longleftrightarrow \textit{length}~[x \mathbin{\leftarrow} w.\; x = b] = \textit{length}~[x \mathbin{\leftarrow} w.\; x = a] + 1$'' \\ \textbf{nitpick} \\[2\smallskipamount] \slshape Nitpick found no counterexample \postw \subsection{AA Trees} \label{aa-trees} AA trees are a kind of balanced tree discovered by Arne Andersson that provides similar performance to red-black trees, but with a simpler implementation \cite{andersson-1993}. They can be used to store sets of elements equipped with a total order $<$. We start by defining the datatype and some basic extractor functions: \prew \textbf{datatype} $'a$~\textit{aa\_tree} = \\ \hbox{}\quad $\Lambda$ $\mid$ $N$ ``\kern1pt$'a\Colon \textit{linorder\/}$'' \textit{nat} ``\kern1pt$'a$ \textit{aa\_tree}'' ``\kern1pt$'a$ \textit{aa\_tree}'' \\[2\smallskipamount] \textbf{primrec} \textit{data} \textbf{where} \\ ``$\textit{data}~\Lambda = \unkef$'' $\,\mid$ \\ ``$\textit{data}~(N~x~\_~\_~\_) = x$'' \\[2\smallskipamount] \textbf{primrec} \textit{dataset} \textbf{where} \\ ``$\textit{dataset}~\Lambda = \{\}$'' $\,\mid$ \\ ``$\textit{dataset}~(N~x~\_~t~u) = \{x\} \cup \textit{dataset}~t \mathrel{\cup} \textit{dataset}~u$'' \\[2\smallskipamount] \textbf{primrec} \textit{level} \textbf{where} \\ ``$\textit{level}~\Lambda = 0$'' $\,\mid$ \\ ``$\textit{level}~(N~\_~k~\_~\_) = k$'' \\[2\smallskipamount] \textbf{primrec} \textit{left} \textbf{where} \\ ``$\textit{left}~\Lambda = \Lambda$'' $\,\mid$ \\ ``$\textit{left}~(N~\_~\_~t~\_) = t$'' \\[2\smallskipamount] \textbf{primrec} \textit{right} \textbf{where} \\ ``$\textit{right}~\Lambda = \Lambda$'' $\,\mid$ \\ ``$\textit{right}~(N~\_~\_~\_~u) = u$'' \postw The wellformedness criterion for AA trees is fairly complex. Wikipedia states it as follows \cite{wikipedia-2009-aa-trees}: \kern.2\parskip %% TYPESETTING
\pre Each node has a level field, and the following invariants must remain true for the tree to be valid: \raggedright \kern-.4\parskip %% TYPESETTING
\begin{enum} \item[] \begin{enum} \item[1.] The level of a leaf node is one. \item[2.] The level of a left child is strictly less than that of its parent. \item[3.] The level of a right child is less than or equal to that of its parent. \item[4.] The level of a right grandchild is strictly less than that of its grandparent. \item[5.] Every node of level greater than one must have two children.
\end{enum} \end{enum} \post \kern.4\parskip %% TYPESETTING
The \textit{wf} predicate formalizes this description: \prew \textbf{primrec} \textit{wf} \textbf{where} \\ ``$\textit{wf}~\Lambda = \textit{True\/}$'' $\,\mid$ \\ ``$\textit{wf}~(N~\_~k~t~u) =$ \\ \phantom{``}$(\textrm{if}~t = \Lambda~\textrm{then}$ \\ \phantom{``$(\quad$}$k = 1 \mathrel{\land} (u = \Lambda \mathrel{\lor} (\textit{level}~u = 1 \mathrel{\land} \textit{left}~u = \Lambda \mathrel{\land} \textit{right}~u = \Lambda))$ \\ \phantom{``$($}$\textrm{else}$ \\ \hbox{}\phantom{``$(\quad$}$\textit{wf}~t \mathrel{\land} \textit{wf}~u \mathrel{\land} u \not= \Lambda \mathrel{\land} \textit{level}~t < k \mathrel{\land} \textit{level}~u \le k$ \\ \hbox{}\phantom{``$(\quad$}${\land}\; \textit{level}~(\textit{right}~u) < k)$'' \postw Rebalancing the tree upon insertion and removal of elements is performed by two auxiliary functions called \textit{skew} and \textit{split}, defined below: \prew \textbf{primrec} \textit{skew} \textbf{where} \\ ``$\textit{skew}~\Lambda = \Lambda$'' $\,\mid$ \\ ``$\textit{skew}~(N~x~k~t~u) = {}$ \\ \phantom{``}$(\textrm{if}~t \not= \Lambda \mathrel{\land} k = \textit{level}~t~\textrm{then}$ \\ \phantom{``(\quad}$N~(\textit{data}~t)~k~(\textit{left}~t)~(N~x~k~ (\textit{right}~t)~u)$ \\ \phantom{``(}$\textrm{else}$ \\ \phantom{``(\quad}$N~x~k~t~u)$'' \postw \prew \textbf{primrec} \textit{split} \textbf{where} \\ ``$\textit{split}~\Lambda = \Lambda$'' $\,\mid$ \\ ``$\textit{split}~(N~x~k~t~u) = {}$ \\ \phantom{``}$(\textrm{if}~u \not= \Lambda \mathrel{\land} k = \textit{level}~(\textit{right}~u)~\textrm{then}$ \\ \phantom{``(\quad}$N~(\textit{data}~u)~(\textit{Suc}~k)~ (N~x~k~t~(\textit{left}~u))~(\textit{right}~u)$ \\ \phantom{``(}$\textrm{else}$ \\ \phantom{``(\quad}$N~x~k~t~u)$'' \postw Performing a \textit{skew} or a \textit{split} should have no impact on the set of elements stored in the tree: \prew \textbf{theorem}~\textit{dataset\_skew\_split\/}:\\ ``$\textit{dataset}~(\textit{skew}~t) = \textit{dataset}~t$'' \\ ``$\textit{dataset}~(\textit{split}~t) = \textit{dataset}~t$'' \\ \textbf{nitpick} \\[2\smallskipamount] {\slshape Nitpick ran out of time after checking 9 of 10 scopes} \postw Furthermore, applying \textit{skew} or \textit{split} on a well-formed tree should not alter the tree: \prew \textbf{theorem}~\textit{wf\_skew\_split\/}:\\ ``$\textit{wf}~t\,\Longrightarrow\, \textit{skew}~t = t$'' \\ ``$\textit{wf}~t\,\Longrightarrow\, \textit{split}~t = t$'' \\ \textbf{nitpick} \\[2\smallskipamount] {\slshape Nitpick found no counterexample} \postw Insertion is implemented recursively. It preserves the sort order: \prew \textbf{primrec}~\textit{insort} \textbf{where} \\ ``$\textit{insort}~\Lambda~x = N~x~1~\Lambda~\Lambda$'' $\,\mid$ \\ ``$\textit{insort}~(N~y~k~t~u)~x =$ \\ \phantom{``}$({*}~(\textit{split} \circ \textit{skew})~{*})~(N~y~k~(\textrm{if}~x < y~\textrm{then}~\textit{insort}~t~x~\textrm{else}~t)$ \\ \phantom{``$({*}~(\textit{split} \circ \textit{skew})~{*})~(N~y~k~$}$(\textrm{if}~x > y~\textrm{then}~\textit{insort}~u~x~\textrm{else}~u))$'' \postw Notice that we deliberately commented out the application of \textit{skew} and \textit{split}.
Let's see if this causes any problems: \prew \textbf{theorem}~\textit{wf\_insort\/}:\kern.4em ``$\textit{wf}~t\,\Longrightarrow\, \textit{wf}~(\textit{insort}~t~x)$'' \\ \textbf{nitpick} \\[2\smallskipamount] \slshape Nitpick found a counterexample for \textit{card} $'a$ = 4: \\[2\smallskipamount] \hbox{}\qquad Free variables: \nopagebreak \\ \hbox{}\qquad\qquad $t = N~a_1~1~\Lambda~\Lambda$ \\ \hbox{}\qquad\qquad $x = a_2$ \postw It's hard to see why this is a counterexample. To improve readability, we will restrict the theorem to \textit{nat}, so that we don't need to look up the value of the $\textit{op}~{<}$ constant to find out which element is smaller than the other. In addition, we will tell Nitpick to display the value of $\textit{insort}~t~x$ using the \textit{eval} option. This gives \prew \textbf{theorem} \textit{wf\_insort\_nat\/}:\kern.4em ``$\textit{wf}~t\,\Longrightarrow\, \textit{wf}~(\textit{insort}~t~(x\Colon\textit{nat}))$'' \\ \textbf{nitpick} [\textit{eval} = ``$\textit{insort}~t~x$''] \\[2\smallskipamount] \slshape Nitpick found a counterexample: \\[2\smallskipamount] \hbox{}\qquad Free variables: \nopagebreak \\ \hbox{}\qquad\qquad $t = N~1~1~\Lambda~\Lambda$ \\ \hbox{}\qquad\qquad $x = 0$ \\ \hbox{}\qquad Evaluated term: \\ \hbox{}\qquad\qquad $\textit{insort}~t~x = N~1~1~(N~0~1~\Lambda~\Lambda)~\Lambda$ \postw Nitpick's output reveals that the element $0$ was added as a left child of $1$, where both nodes have a level of 1. This violates the second AA tree invariant, which states that a left child's level must be less than its parent's. This shouldn't come as a surprise, considering that we commented out the tree rebalancing code. Reintroducing the code seems to solve the problem: \prew \textbf{theorem}~\textit{wf\_insort\/}:\kern.4em ``$\textit{wf}~t\,\Longrightarrow\, \textit{wf}~(\textit{insort}~t~x)$'' \\ \textbf{nitpick} \\[2\smallskipamount] {\slshape Nitpick ran out of time after checking 8 of 10 scopes} \postw Insertion should transform the set of elements represented by the tree in the obvious way: \prew \textbf{theorem} \textit{dataset\_insort\/}:\kern.4em ``$\textit{dataset}~(\textit{insort}~t~x) = \{x\} \cup \textit{dataset}~t$'' \\ \textbf{nitpick} \\[2\smallskipamount] {\slshape Nitpick ran out of time after checking 7 of 10 scopes} \postw We could continue like this and sketch a full-blown theory of AA trees. Once the definitions and main theorems are in place and have been thoroughly tested using Nitpick, we could start working on the proofs. Developing theories this way usually saves time, because faulty theorems and definitions are discovered much earlier in the process. 
\section{Option Reference} \label{option-reference} \def\defl{\{} \def\defr{\}} \def\flushitem#1{\item[]\noindent\kern-\leftmargin \textbf{#1}} \def\qty#1{$\left<\textit{#1}\right>$} \def\qtybf#1{$\mathbf{\left<\textbf{\textit{#1}}\right>}$} \def\optrue#1#2{\flushitem{\textit{#1} $\bigl[$= \qtybf{bool}$\bigr]$\enskip \defl\textit{true}\defr\hfill (neg.: \textit{#2})}\nopagebreak\\[\parskip]} \def\opfalse#1#2{\flushitem{\textit{#1} $\bigl[$= \qtybf{bool}$\bigr]$\enskip \defl\textit{false}\defr\hfill (neg.: \textit{#2})}\nopagebreak\\[\parskip]} \def\opsmart#1#2{\flushitem{\textit{#1} $\bigl[$= \qtybf{smart\_bool}$\bigr]$\enskip \defl\textit{smart}\defr\hfill (neg.: \textit{#2})}\nopagebreak\\[\parskip]} \def\opnodefault#1#2{\flushitem{\textit{#1} = \qtybf{#2}} \nopagebreak\\[\parskip]} \def\opdefault#1#2#3{\flushitem{\textit{#1} = \qtybf{#2}\enskip \defl\textit{#3}\defr} \nopagebreak\\[\parskip]} \def\oparg#1#2#3{\flushitem{\textit{#1} \qtybf{#2} = \qtybf{#3}} \nopagebreak\\[\parskip]} \def\opargbool#1#2#3{\flushitem{\textit{#1} \qtybf{#2} $\bigl[$= \qtybf{bool}$\bigr]$\hfill (neg.: \textit{#3})}\nopagebreak\\[\parskip]} \def\opargboolorsmart#1#2#3{\flushitem{\textit{#1} \qtybf{#2} $\bigl[$= \qtybf{smart\_bool}$\bigr]$\hfill (neg.: \textit{#3})}\nopagebreak\\[\parskip]} Nitpick's behavior can be influenced by various options, which can be specified in brackets after the \textbf{nitpick} command. Default values can be set using \textbf{nitpick\_\allowbreak params}. For example: \prew \textbf{nitpick\_params} [\textit{verbose}, \,\textit{timeout} = 60] \postw The options are categorized as follows:\ mode of operation (\S\ref{mode-of-operation}), scope of search (\S\ref{scope-of-search}), output format (\S\ref{output-format}), regression testing (\S\ref{regression-testing}), optimizations (\S\ref{optimizations}), and timeouts (\S\ref{timeouts}). Nitpick also provides an automatic mode that can be enabled via the ``Auto Nitpick'' option under ``Plugins > Plugin Options > Isabelle > General'' in Isabelle/jEdit. For automatic runs, \textit{user\_axioms} (\S\ref{mode-of-operation}), \textit{assms} (\S\ref{mode-of-operation}), and \textit{mono} (\S\ref{scope-of-search}) are implicitly enabled, \textit{verbose} (\S\ref{output-format}) and \textit{debug} (\S\ref{output-format}) are disabled, \textit{max\_potential} (\S\ref{output-format}) is taken to be 0, \textit{max\_threads} (\S\ref{optimizations}) is taken to be 1, and \textit{timeout} (\S\ref{timeouts}) is superseded by the ``Auto Time Limit'' in Isabelle/jEdit. Nitpick's output is also more concise. The number of options can be overwhelming at first glance. Do not let that worry you: Nitpick's defaults have been chosen so that it almost always does the right thing, and the most important options have been covered in context in \S\ref{first-steps}. The descriptions below refer to the following syntactic quantities: \begin{enum} \item[\labelitemi] \qtybf{string}: A string. \item[\labelitemi] \qtybf{string\_list\/}: A space-separated list of strings (e.g., ``\textit{ichi ni san}''). \item[\labelitemi] \qtybf{bool\/}: \textit{true} or \textit{false}. \item[\labelitemi] \qtybf{smart\_bool\/}: \textit{true}, \textit{false}, or \textit{smart}. \item[\labelitemi] \qtybf{int\/}: An integer. Negative integers are prefixed with a hyphen. \item[\labelitemi] \qtybf{smart\_int\/}: An integer or \textit{smart}. \item[\labelitemi] \qtybf{int\_range}: An integer (e.g., 3) or a range of nonnegative integers (e.g., $1$--$4$). 
The range symbol `--' is entered as \texttt{-} (hyphen). \item[\labelitemi] \qtybf{int\_seq}: A comma-separated sequence of ranges of integers (e.g.,~1{,}3{,}\allowbreak6--8). \item[\labelitemi] \qtybf{float}: A floating-point number (e.g., 0.5 or 60) expressing a number of seconds. \item[\labelitemi] \qtybf{const\/}: The name of a HOL constant. \item[\labelitemi] \qtybf{term}: A HOL term (e.g., ``$f~x$''). \item[\labelitemi] \qtybf{term\_list\/}: A space-separated list of HOL terms (e.g., ``$f~x$''~``$g~y$''). \item[\labelitemi] \qtybf{type}: A HOL type. \end{enum} Default values are indicated in curly brackets (\textrm{\{\}}). Boolean options have a negated counterpart (e.g., \textit{mono} vs.\ \textit{non\_mono}). When setting them, ``= \textit{true}'' may be omitted. \subsection{Mode of Operation} \label{mode-of-operation} \begin{enum} \optrue{falsify}{satisfy} Specifies whether Nitpick should look for falsifying examples (countermodels) or satisfying examples (models). This manual assumes throughout that \textit{falsify} is enabled. \opsmart{user\_axioms}{no\_user\_axioms} Specifies whether the user-defined axioms (specified using \textbf{axiomatization} and \textbf{axioms}) should be considered. If the option is set to \textit{smart}, Nitpick performs an ad hoc axiom selection based on the constants that occur in the formula to falsify. The option is implicitly set to \textit{true} for automatic runs. \textbf{Warning:} If the option is set to \textit{true}, Nitpick might nonetheless ignore some polymorphic axioms. Counterexamples generated under these conditions are tagged as ``quasi genuine.'' The \textit{debug} (\S\ref{output-format}) option can be used to find out which axioms were considered. \nopagebreak {\small See also \textit{assms} (\S\ref{mode-of-operation}) and \textit{debug} (\S\ref{output-format}).} \optrue{assms}{no\_assms} Specifies whether the relevant assumptions in structured proofs should be considered. The option is implicitly enabled for automatic runs. \nopagebreak {\small See also \textit{user\_axioms} (\S\ref{mode-of-operation}).} \opfalse{spy}{dont\_spy} Specifies whether Nitpick should record statistics in \texttt{\$ISA\-BELLE\_\allowbreak HOME\_\allowbreak USER/\allowbreak spy\_\allowbreak nitpick}. These statistics can be useful to the developer of Nitpick. If you are willing to have your interactions recorded in the name of science, please enable this feature and send the statistics file every now and then to the author of this manual (\authoremail). To change the default value of this option globally, set the environment variable \texttt{NITPICK\_SPY} to \texttt{yes}. \nopagebreak {\small See also \textit{debug} (\S\ref{output-format}).} \opfalse{overlord}{no\_overlord} Specifies whether Nitpick should put its temporary files in \texttt{\$ISABELLE\_\allowbreak HOME\_\allowbreak USER}, which is useful for debugging Nitpick but also unsafe if several instances of the tool are run simultaneously. The files are identified by the extensions \texttt{.kki}, \texttt{.cnf}, \texttt{.out}, and \texttt{.err}; you may safely remove them after Nitpick has run. \textbf{Warning:} This option is not thread-safe. Use at your own risk. \nopagebreak {\small See also \textit{debug} (\S\ref{output-format}).} \end{enum} \subsection{Scope of Search} \label{scope-of-search} \begin{enum} \oparg{card}{type}{int\_seq} Specifies the sequence of cardinalities to use for a given type.
For free types, and often also for \textbf{typedecl}'d types, it usually makes sense to specify cardinalities as a range of the form \textit{$1$--$n$}. \nopagebreak {\small See also \textit{box} (\S\ref{scope-of-search}) and \textit{mono} (\S\ref{scope-of-search}).} \opdefault{card}{int\_seq}{\upshape 1--10} Specifies the default sequence of cardinalities to use. This can be overridden on a per-type basis using the \textit{card}~\qty{type} option described above. \oparg{max}{const}{int\_seq} Specifies the sequence of maximum multiplicities to use for a given (co)in\-duc\-tive datatype constructor. A constructor's multiplicity is the number of distinct values that it can construct. Nonsensical values (e.g., \textit{max}~[]~$=$~2) are silently repaired. This option is only available for datatypes equipped with several constructors. \opnodefault{max}{int\_seq} Specifies the default sequence of maximum multiplicities to use for (co)in\-duc\-tive datatype constructors. This can be overridden on a per-constructor basis using the \textit{max}~\qty{const} option described above. \opsmart{binary\_ints}{unary\_ints} Specifies whether natural numbers and integers should be encoded using a unary or binary notation. In unary mode, the cardinality fully specifies the subset used to approximate the type. For example:
%
$$\hbox{\begin{tabular}{@{}rll@{}}%
\textit{card nat} = 4 & induces & $\{0,\, 1,\, 2,\, 3\}$ \\
\textit{card int} = 4 & induces & $\{-1,\, 0,\, +1,\, +2\}$ \\
\textit{card int} = 5 & induces & $\{-2,\, -1,\, 0,\, +1,\, +2\}.$%
\end{tabular}}$$
%
In general:
%
$$\hbox{\begin{tabular}{@{}rll@{}}%
\textit{card nat} = $K$ & induces & $\{0,\, \ldots,\, K - 1\}$ \\
\textit{card int} = $K$ & induces & $\{-\lceil K/2 \rceil + 1,\, \ldots,\, +\lfloor K/2 \rfloor\}.$%
\end{tabular}}$$
%
In binary mode, the cardinality specifies the number of distinct values that can be constructed. Each of these values is represented by a bit pattern whose length is specified by the \textit{bits} (\S\ref{scope-of-search}) option. By default, Nitpick attempts to choose the more appropriate encoding by inspecting the formula at hand, preferring the binary notation for problems involving multiplicative operators or large constants. \textbf{Warning:} For technical reasons, Nitpick always reverts to unary for problems that refer to the types \textit{rat} or \textit{real} or the constants \textit{Suc}, \textit{gcd}, or \textit{lcm}. {\small See also \textit{bits} (\S\ref{scope-of-search}) and \textit{show\_types} (\S\ref{output-format}).} \opdefault{bits}{int\_seq}{\upshape 1--10} Specifies the number of bits to use to represent natural numbers and integers in binary, excluding the sign bit. The minimum is 1 and the maximum is 31. {\small See also \textit{binary\_ints} (\S\ref{scope-of-search}).} \opargboolorsmart{wf}{const}{non\_wf} Specifies whether the specified (co)in\-duc\-tively defined predicate is well-founded. The option can take the following values: \begin{enum} \item[\labelitemi] \textbf{\textit{true}:} Tentatively treat the (co)in\-duc\-tive predicate as if it were well-founded. Since this is generally not sound when the predicate is not well-founded, the counterexamples are tagged as ``quasi genuine.'' \item[\labelitemi] \textbf{\textit{false}:} Treat the (co)in\-duc\-tive predicate as if it were not well-founded. The predicate is then unrolled as prescribed by the \textit{star\_linear\_preds}, \textit{iter}~\qty{const}, and \textit{iter} options.
\item[\labelitemi] \textbf{\textit{smart}:} Try to prove that the inductive predicate is well-founded using Isabelle's \textit{lexicographic\_order} and \textit{size\_change} tactics. If this succeeds (or the predicate occurs with an appropriate polarity in the formula to falsify), use an efficient fixed-point equation as the specification of the predicate; otherwise, unroll the predicate according to the \textit{iter}~\qty{const} and \textit{iter} options.
\end{enum}
\nopagebreak
{\small See also \textit{iter} (\S\ref{scope-of-search}), \textit{star\_linear\_preds} (\S\ref{optimizations}), and \textit{tac\_timeout} (\S\ref{timeouts}).}
\opsmart{wf}{non\_wf}
Specifies the default well-foundedness setting to use. This can be overridden on a per-predicate basis using the \textit{wf}~\qty{const} option above.
\oparg{iter}{const}{int\_seq}
Specifies the sequence of iteration counts to use when unrolling a given (co)in\-duc\-tive predicate. By default, unrolling is applied for inductive predicates that occur negatively and coinductive predicates that occur positively in the formula to falsify and that cannot be proved to be well-founded, but this behavior is influenced by the \textit{wf} option. The iteration counts are automatically bounded by the cardinality of the predicate's domain.
{\small See also \textit{wf} (\S\ref{scope-of-search}) and \textit{star\_linear\_preds} (\S\ref{optimizations}).}
\opdefault{iter}{int\_seq}{\upshape 0{,}1{,}2{,}4{,}8{,}12{,}16{,}20{,}24{,}28}
Specifies the sequence of iteration counts to use when unrolling (co)in\-duc\-tive predicates. This can be overridden on a per-predicate basis using the \textit{iter} \qty{const} option above.
\opdefault{bisim\_depth}{int\_seq}{\upshape 9}
Specifies the sequence of iteration counts to use when unrolling the bisimilarity predicate generated by Nitpick for coinductive datatypes. A value of $-1$ means that no predicate is generated, in which case Nitpick performs an after-the-fact check to see if the known coinductive datatype values are bidissimilar. If two values are found to be bisimilar, the counterexample is tagged as ``quasi genuine.'' The iteration counts are automatically bounded by the sum of the cardinalities of the coinductive datatypes occurring in the formula to falsify.
\opargboolorsmart{box}{type}{dont\_box}
Specifies whether Nitpick should attempt to wrap (``box'') a given function or product type in an isomorphic datatype internally. Boxing is an effective means to reduce the search space and speed up Nitpick, because the isomorphic datatype is approximated by a subset of the possible function or pair values. Like other drastic optimizations, it can also prevent the discovery of counterexamples. The option can take the following values:
\begin{enum}
\item[\labelitemi] \textbf{\textit{true}:} Box the specified type whenever practicable.
\item[\labelitemi] \textbf{\textit{false}:} Never box the type.
\item[\labelitemi] \textbf{\textit{smart}:} Box the type only in contexts where it is likely to help. For example, $n$-tuples where $n > 2$ and arguments to higher-order functions are good candidates for boxing.
\end{enum}
\nopagebreak
{\small See also \textit{finitize} (\S\ref{scope-of-search}), \textit{verbose} (\S\ref{output-format}), and \textit{debug} (\S\ref{output-format}).}
\opsmart{box}{dont\_box}
Specifies the default boxing setting to use. This can be overridden on a per-type basis using the \textit{box}~\qty{type} option described above.
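As an illustration of how these scope options interact, here is a hypothetical invocation (the goal itself is immaterial) that narrows the cardinality range for ${'}a$, tightens the default range for all other types, and disables boxing altogether:
\prew
\textbf{nitpick}~[\textit{card}~${'}a$ = 1--5,\, \textit{card} = 1--3,\, \textit{dont\_box}]
\postw
The per-type \textit{card}~${'}a$ setting overrides the default \textit{card} range for ${'}a$ only, and \textit{dont\_box} is the negated form of the Boolean option \textit{box}.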
\opargboolorsmart{finitize}{type}{dont\_finitize} Specifies whether Nitpick should attempt to finitize an infinite datatype. The option can then take the following values: \begin{enum} \item[\labelitemi] \textbf{\textit{true}:} Finitize the datatype. Since this is unsound, counterexamples generated under these conditions are tagged as ``quasi genuine.'' \item[\labelitemi] \textbf{\textit{false}:} Don't attempt to finitize the datatype. \item[\labelitemi] \textbf{\textit{smart}:} If the datatype's constructors don't appear in the problem, perform a monotonicity analysis to detect whether the datatype can be soundly finitized; otherwise, don't finitize it. \end{enum} \nopagebreak {\small See also \textit{box} (\S\ref{scope-of-search}), \textit{mono} (\S\ref{scope-of-search}), \textit{verbose} (\S\ref{output-format}), and \textit{debug} (\S\ref{output-format}).} \opsmart{finitize}{dont\_finitize} Specifies the default finitization setting to use. This can be overridden on a per-type basis using the \textit{finitize}~\qty{type} option described above. \opargboolorsmart{mono}{type}{non\_mono} Specifies whether the given type should be considered monotonic when enumerating scopes and finitizing types. If the option is set to \textit{smart}, Nitpick performs a monotonicity check on the type. Setting this option to \textit{true} can reduce the number of scopes tried, but it can also diminish the chance of finding a counterexample, as demonstrated in \S\ref{scope-monotonicity}. The option is implicitly set to \textit{true} for automatic runs. \nopagebreak {\small See also \textit{card} (\S\ref{scope-of-search}), \textit{finitize} (\S\ref{scope-of-search}), \textit{merge\_type\_vars} (\S\ref{scope-of-search}), and \textit{verbose} (\S\ref{output-format}).} \opsmart{mono}{non\_mono} Specifies the default monotonicity setting to use. This can be overridden on a per-type basis using the \textit{mono}~\qty{type} option described above. \opfalse{merge\_type\_vars}{dont\_merge\_type\_vars} Specifies whether type variables with the same sort constraints should be merged. Setting this option to \textit{true} can reduce the number of scopes tried and the size of the generated Kodkod formulas, but it also diminishes the theoretical chance of finding a counterexample. {\small See also \textit{mono} (\S\ref{scope-of-search}).} \end{enum} \subsection{Output Format} \label{output-format} \begin{enum} \opfalse{verbose}{quiet} Specifies whether the \textbf{nitpick} command should explain what it does. This option is useful to determine which scopes are tried or which SAT solver is used. This option is implicitly disabled for automatic runs. \opfalse{debug}{no\_debug} Specifies whether Nitpick should display additional debugging information beyond what \textit{verbose} already displays. Enabling \textit{debug} also enables \textit{verbose} and \textit{show\_all} behind the scenes. The \textit{debug} option is implicitly disabled for automatic runs. \nopagebreak {\small See also \textit{spy} (\S\ref{mode-of-operation}), \textit{overlord} (\S\ref{mode-of-operation}), and \textit{batch\_size} (\S\ref{optimizations}).} \opfalse{show\_types}{hide\_types} Specifies whether the subsets used to approximate (co)in\-duc\-tive data\-types should be displayed as part of counterexamples. Such subsets are sometimes helpful when investigating whether a potentially spurious counterexample is genuine, but their potential for clutter is real. 
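Display options such as this one are typically toggled directly on an invocation; for example, this hypothetical call enables both the verbose mode and the display of type subsets:
\prew
\textbf{nitpick}~[\textit{verbose},\, \textit{show\_types}]
\postw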
\optrue{show\_skolems}{hide\_skolems}
Specifies whether the values of Skolem constants should be displayed as part of counterexamples. Skolem constants correspond to bound variables in the original formula and usually help us to understand why the counterexample falsifies the formula.
\opfalse{show\_consts}{hide\_consts}
Specifies whether the values of constants occurring in the formula (including its axioms) should be displayed along with any counterexample. These values are sometimes helpful when investigating why a counterexample is genuine, but they can clutter the output.
\opnodefault{show\_all}{bool}
Abbreviation for \textit{show\_types}, \textit{show\_skolems}, and \textit{show\_consts}.
\opdefault{max\_potential}{int}{\upshape 1}
Specifies the maximum number of potentially spurious counterexamples to display. Setting this option to 0 speeds up the search for a genuine counterexample. This option is implicitly set to 0 for automatic runs. If you set this option to a value greater than 1, you will need an incremental SAT solver, such as \textit{MiniSat\_JNI} (recommended) or \textit{SAT4J}. Be aware that many of the counterexamples may be identical.
\nopagebreak
{\small See also \textit{sat\_solver} (\S\ref{optimizations}).}
\opdefault{max\_genuine}{int}{\upshape 1}
Specifies the maximum number of genuine counterexamples to display. If you set this option to a value greater than 1, you will need an incremental SAT solver, such as \textit{MiniSat\_JNI} (recommended) or \textit{SAT4J}. Be aware that many of the counterexamples may be identical.
\nopagebreak
{\small See also \textit{sat\_solver} (\S\ref{optimizations}).}
\opnodefault{eval}{term\_list}
Specifies the list of terms whose values should be displayed along with counterexamples. This option suffers from an ``observer effect'': Nitpick might find different counterexamples for different values of this option.
\oparg{atoms}{type}{string\_list}
Specifies the names to use to refer to the atoms of the given type. By default, Nitpick generates names of the form $a_1, \ldots, a_n$, where $a$ is the first letter of the type's name.
\opnodefault{atoms}{string\_list}
Specifies the default names to use to refer to atoms of any type. For example, to call the three atoms of type ${'}a$ \textit{ichi}, \textit{ni}, and \textit{san} instead of $a_1$, $a_2$, $a_3$, specify the option ``\textit{atoms}~${'}a$ = \textit{ichi~ni~san}''. The default names can be overridden on a per-type basis using the \textit{atoms}~\qty{type} option described above.
\oparg{format}{term}{int\_seq}
Specifies how to uncurry the value displayed for a variable or constant. Uncurrying sometimes increases the readability of the output for high-arity functions. For example, given the variable $y \mathbin{\Colon} {'a}\Rightarrow {'b}\Rightarrow {'c}\Rightarrow {'d}\Rightarrow {'e}\Rightarrow {'f}\Rightarrow {'g}$, setting \textit{format}~$y$ = 3 tells Nitpick to group the last three arguments, as if the type had been ${'a}\Rightarrow {'b}\Rightarrow {'c}\Rightarrow {'d}\times {'e}\times {'f}\Rightarrow {'g}$. In general, a list of values $n_1,\ldots,n_k$ tells Nitpick to show the last $n_k$ arguments as an $n_k$-tuple, the previous $n_{k-1}$ arguments as an $n_{k-1}$-tuple, and so on; arguments that are not accounted for are left alone, as if the specification had been $1,\ldots,1,n_1,\ldots,n_k$.
\opdefault{format}{int\_seq}{\upshape 1}
Specifies the default format to use.
Irrespective of the default format, the extra arguments to a Skolem constant corresponding to the outer bound variables are kept separated from the remaining arguments, the \textbf{for} arguments of an inductive definition are kept separated from the remaining arguments, and the iteration counter of an unrolled inductive definition is shown alone. The default format can be overridden on a per-variable or per-constant basis using the \textit{format}~\qty{term} option described above.
\end{enum}
\subsection{Regression Testing}
\label{regression-testing}
\begin{enum}
\opnodefault{expect}{string}
Specifies the expected outcome, which must be one of the following:
\begin{enum}
\item[\labelitemi] \textbf{\textit{genuine}:} Nitpick found a genuine counterexample.
\item[\labelitemi] \textbf{\textit{quasi\_genuine}:} Nitpick found a ``quasi genuine'' counterexample (i.e., a counterexample that is genuine unless it contradicts a missing axiom or a dangerous option was used inappropriately).
\item[\labelitemi] \textbf{\textit{potential}:} Nitpick found a potentially spurious counterexample.
\item[\labelitemi] \textbf{\textit{none}:} Nitpick found no counterexample.
\item[\labelitemi] \textbf{\textit{unknown}:} Nitpick encountered some problem (e.g., Kodkod ran out of memory).
\end{enum}
Nitpick emits an error if the actual outcome differs from the expected outcome. This option is useful for regression testing.
\end{enum}
\subsection{Optimizations}
\label{optimizations}
\def\cpp{C\nobreak\raisebox{.1ex}{+}\nobreak\raisebox{.1ex}{+}}
\sloppy
\begin{enum}
\opdefault{sat\_solver}{string}{smart}
Specifies which SAT solver to use. SAT solvers implemented in C or \cpp{} tend to be faster than their Java counterparts, but they can be more difficult to install. Also, if you set the \textit{max\_potential} (\S\ref{output-format}) or \textit{max\_genuine} (\S\ref{output-format}) option to a value greater than 1, you will need an incremental SAT solver, such as \textit{MiniSat\_JNI} (recommended) or \textit{SAT4J}. The supported solvers are listed below:
\begin{enum}
\item[\labelitemi] \textbf{\textit{Lingeling\_JNI}:} Lingeling is an efficient solver written in C. The JNI (Java Native Interface) version of Lingeling is bundled with Kodkodi and is precompiled for Linux and Mac~OS~X. It is also available from the Kodkod web site \cite{kodkod-2009}.
\item[\labelitemi] \textbf{\textit{CryptoMiniSat}:} CryptoMiniSat is the winner of the 2010 SAT Race. To use CryptoMiniSat, set the environment variable \texttt{CRYPTO\-MINISAT\_}\discretionary{}{}{}\texttt{HOME} to the directory that contains the \texttt{crypto\-minisat} executable.%
\footnote{Important note for Cygwin users: The path must be specified using native Windows syntax. Make sure to escape backslashes properly.%
\label{cygwin-paths}}
The \cpp{} sources and executables for Crypto\-Mini\-Sat are available at \url{http://planete.inrialpes.fr/~soos/}\allowbreak\url{CryptoMiniSat2/index.php}. Nitpick has been tested with version 2.51.
\item[\labelitemi] \textbf{\textit{CryptoMiniSat\_JNI}:} The JNI (Java Native Interface) version of CryptoMiniSat is bundled with Kodkodi and is precompiled for Linux and Mac~OS~X. It is also available from the Kodkod web site \cite{kodkod-2009}.
\item[\labelitemi] \textbf{\textit{MiniSat}:} MiniSat is an efficient solver written in \cpp{}.
To use MiniSat, set the environment variable \texttt{MINISAT\_HOME} to the directory that contains the \texttt{minisat} executable.% \footref{cygwin-paths} The \cpp{} sources and executables for MiniSat are available at \url{http://minisat.se/MiniSat.html}. Nitpick has been tested with versions 1.14 and 2.2. \item[\labelitemi] \textbf{\textit{MiniSat\_JNI}:} The JNI version of MiniSat is bundled with Kodkodi and is precompiled for Linux, Mac~OS~X, and Windows (Cygwin). It is also available from the Kodkod web site \cite{kodkod-2009}. Unlike the standard version of MiniSat, the JNI version can be used incrementally. \item[\labelitemi] \textbf{\textit{Riss3g}:} Riss3g is an efficient solver written in \cpp{}. To use Riss3g, set the environment variable \texttt{RISS3G\_HOME} to the directory that contains the \texttt{riss3g} executable.% \footref{cygwin-paths} The \cpp{} sources for Riss3g are available at \url{http://tools.computational-logic.org/content/riss3g.php}. Nitpick has been tested with the SAT Competition 2013 version. \item[\labelitemi] \textbf{\textit{zChaff}:} zChaff is an older solver written in \cpp{}. To use zChaff, set the environment variable \texttt{ZCHAFF\_HOME} to the directory that contains the \texttt{zchaff} executable.% \footref{cygwin-paths} The \cpp{} sources and executables for zChaff are available at \url{http://www.princeton.edu/~chaff/zchaff.html}. Nitpick has been tested with versions 2004-05-13, 2004-11-15, and 2007-03-12. \item[\labelitemi] \textbf{\textit{RSat}:} RSat is an efficient solver written in \cpp{}. To use RSat, set the environment variable \texttt{RSAT\_HOME} to the directory that contains the \texttt{rsat} executable.% \footref{cygwin-paths} The \cpp{} sources for RSat are available at \url{http://reasoning.cs.ucla.edu/rsat/}. Nitpick has been tested with version 2.01. \item[\labelitemi] \textbf{\textit{BerkMin}:} BerkMin561 is an efficient solver written in C. To use BerkMin, set the environment variable \texttt{BERKMIN\_HOME} to the directory that contains the \texttt{BerkMin561} executable.\footref{cygwin-paths} The BerkMin executables are available at \url{http://eigold.tripod.com/BerkMin.html}. \item[\labelitemi] \textbf{\textit{BerkMin\_Alloy}:} Variant of BerkMin that is included with Alloy 4 and calls itself ``sat56'' in its banner text. To use this version of BerkMin, set the environment variable \texttt{BERKMINALLOY\_HOME} to the directory that contains the \texttt{berkmin} executable.% \footref{cygwin-paths} \item[\labelitemi] \textbf{\textit{SAT4J}:} SAT4J is a reasonably efficient solver written in Java that can be used incrementally. It is bundled with Kodkodi and requires no further installation or configuration steps. Do not attempt to install the official SAT4J packages, because their API is incompatible with Kodkod. \item[\labelitemi] \textbf{\textit{SAT4J\_Light}:} Variant of SAT4J that is optimized for small problems. It can also be used incrementally. \item[\labelitemi] \textbf{\textit{smart}:} If \textit{sat\_solver} is set to \textit{smart}, Nitpick selects the first solver among the above that is recognized by Isabelle. If \textit{verbose} (\S\ref{output-format}) is enabled, Nitpick displays which SAT solver was chosen. \end{enum} \fussy \opdefault{batch\_size}{smart\_int}{smart} Specifies the maximum number of Kodkod problems that should be lumped together when invoking Kodkodi. Each problem corresponds to one scope. 
Lumping problems together ensures that Kodkodi is launched less often, but it makes the verbose output less readable and is sometimes detrimental to performance. If \textit{batch\_size} is set to \textit{smart}, the actual value used is 1 if \textit{debug} (\S\ref{output-format}) is set and 50 otherwise.
\optrue{destroy\_constrs}{dont\_destroy\_constrs}
Specifies whether formulas involving (co)in\-duc\-tive datatype constructors should be rewritten to use (automatically generated) discriminators and destructors. This optimization can drastically reduce the size of the Boolean formulas given to the SAT solver.
\nopagebreak
{\small See also \textit{debug} (\S\ref{output-format}).}
\optrue{specialize}{dont\_specialize}
Specifies whether functions invoked with static arguments should be specialized. This optimization can drastically reduce the search space, especially for higher-order functions.
\nopagebreak
{\small See also \textit{debug} (\S\ref{output-format}) and \textit{show\_consts} (\S\ref{output-format}).}
\optrue{star\_linear\_preds}{dont\_star\_linear\_preds}
Specifies whether Nitpick should use Kodkod's transitive closure operator to encode non-well-founded ``linear inductive predicates,'' i.e., inductive predicates for which the predicate occurs in at most one assumption of each introduction rule. Using the reflexive transitive closure is in principle equivalent to setting \textit{iter} to the cardinality of the predicate's domain, but it is usually more efficient.
{\small See also \textit{wf} (\S\ref{scope-of-search}), \textit{debug} (\S\ref{output-format}), and \textit{iter} (\S\ref{scope-of-search}).}
\opnodefault{whack}{term\_list}
Specifies a list of atomic terms (usually constants, but also free and schematic variables) that should be taken as being $\unk$ (unknown). This can be useful to reduce the size of the Kodkod problem if you can guess in advance that a constant might not be needed to find a countermodel.
{\small See also \textit{debug} (\S\ref{output-format}).}
\opnodefault{need}{term\_list}
Specifies a list of datatype values (normally ground constructor terms) that should be part of the subterm-closed subsets used to approximate datatypes. If you know that a value must necessarily belong to the subset of representable values that approximates a datatype, specifying it can speed up the search, especially for high cardinalities.
%By default, Nitpick inspects the conjecture to infer needed datatype values.
\opsmart{total\_consts}{partial\_consts}
Specifies whether constants occurring in the problem other than constructors can be assumed to be total for the representable values that approximate a datatype. This option is highly incomplete; it should be used only for problems that do not construct datatype values explicitly. Since this option is (in rare cases) unsound, counterexamples generated under these conditions are tagged as ``quasi genuine.''
\opdefault{datatype\_sym\_break}{int}{\upshape 5}
Specifies an upper bound on the number of datatypes for which Nitpick generates symmetry breaking predicates. Symmetry breaking can speed up the SAT solver considerably, especially for unsatisfiable problems, but too much of it can slow it down.
\opdefault{kodkod\_sym\_break}{int}{\upshape 15}
Specifies an upper bound on the number of relations for which Kodkod generates symmetry breaking predicates. Symmetry breaking can speed up the SAT solver considerably, especially for unsatisfiable problems, but too much of it can slow it down.
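To make the interplay of these optimizations concrete, the following hypothetical invocation disables specialization, treats an arbitrarily chosen constant $f$ as unknown, and turns off datatype symmetry breaking:
\prew
\textbf{nitpick}~[\textit{dont\_specialize},\, \textit{whack}~= ``$f$'',\, \textit{datatype\_sym\_break}~= 0]
\postw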
\optrue{peephole\_optim}{no\_peephole\_optim}
Specifies whether Nitpick should simplify the generated Kodkod formulas using a peephole optimizer. These optimizations can make a significant difference. Unless you are tracking down a bug in Nitpick or distrust the peephole optimizer, you should leave this option enabled.
\opdefault{max\_threads}{int}{\upshape 0}
Specifies the maximum number of threads to use in Kodkod. If this option is set to 0, Kodkod will compute an appropriate value based on the number of processor cores available. The option is implicitly set to 1 for automatic runs.
\nopagebreak
{\small See also \textit{batch\_size} (\S\ref{optimizations}) and \textit{timeout} (\S\ref{timeouts}).}
\end{enum}
\subsection{Timeouts}
\label{timeouts}
\begin{enum}
\opdefault{timeout}{float}{\upshape 30}
Specifies the maximum number of seconds that the \textbf{nitpick} command should spend looking for a counterexample. Nitpick tries to honor this constraint as well as it can but offers no guarantees. For automatic runs, the ``Auto Time Limit'' option under ``Plugins > Plugin Options > Isabelle > General'' is used instead.
\nopagebreak
{\small See also \textit{max\_threads} (\S\ref{optimizations}).}
\opdefault{tac\_timeout}{float}{\upshape 0.5}
Specifies the maximum number of seconds that should be used by internal tactics---\textit{lexicographic\_order} and \textit{size\_change}, which check whether a (co)in\-duc\-tive predicate is well-founded---and by the monotonicity inference. Nitpick tries to honor this constraint but offers no guarantees.
\nopagebreak
{\small See also \textit{wf} (\S\ref{scope-of-search}) and \textit{mono} (\S\ref{scope-of-search}).}
\end{enum}
\section{Attribute Reference}
\label{attribute-reference}
Nitpick needs to consider the definitions of all constants occurring in a formula in order to falsify it. For constants introduced using the \textbf{definition} command, the definition is simply the associated \textit{\_def} axiom. In contrast, instead of using the internal representation of functions synthesized by Isabelle's \textbf{primrec}, \textbf{function}, and \textbf{nominal\_primrec} packages, Nitpick relies on the more natural equational specification entered by the user.
Behind the scenes, Isabelle's built-in packages and theories rely on the following attributes to affect Nitpick's behavior:
\begin{enum}
\flushitem{\textit{nitpick\_unfold}}
\nopagebreak
This attribute specifies an equation that Nitpick should use to expand a constant. The equation should be logically equivalent to the constant's actual definition and should be of the form
\qquad $c~{?}x_1~\ldots~{?}x_n \,=\, t$, or
\qquad $c~{?}x_1~\ldots~{?}x_n \,\equiv\, t$,
where ${?}x_1, \ldots, {?}x_n$ are distinct variables and $c$ does not occur in $t$. Each occurrence of $c$ in the problem is expanded to $\lambda x_1\,\ldots x_n.\; t$.
\flushitem{\textit{nitpick\_simp}}
\nopagebreak
This attribute specifies the equations that constitute the specification of a constant. The \textbf{primrec}, \textbf{function}, and \textbf{nominal\_\allowbreak primrec} packages automatically attach this attribute to their \textit{simps} rules. The equations must be of the form
\qquad $c~t_1~\ldots\ t_n \;\bigl[{=}\; u\bigr]$
or
\qquad $c~t_1~\ldots\ t_n \,\equiv\, u.$
\flushitem{\textit{nitpick\_psimp}}
\nopagebreak
This attribute specifies the equations that constitute the partial specification of a constant. The \textbf{function} package automatically attaches this attribute to its \textit{psimps} rules.
The conditional equations must be of the form
\qquad $\lbrakk P_1;\> \ldots;\> P_m\rbrakk \,\Longrightarrow\, c\ t_1\ \ldots\ t_n \;\bigl[{=}\; u\bigr]$
or
\qquad $\lbrakk P_1;\> \ldots;\> P_m\rbrakk \,\Longrightarrow\, c\ t_1\ \ldots\ t_n \,\equiv\, u$.
\flushitem{\textit{nitpick\_choice\_spec}}
\nopagebreak
This attribute specifies the (free-form) specification of a constant defined using the \textbf{specification} command.
\end{enum}
When faced with a constant, Nitpick proceeds as follows:
\begin{enum}
\item[1.] If the \textit{nitpick\_simp} set associated with the constant is not empty, Nitpick uses these rules as the specification of the constant.
\item[2.] Otherwise, if the \textit{nitpick\_psimp} set associated with the constant is not empty, it uses these rules as the specification of the constant.
\item[3.] Otherwise, if the constant was defined using the \allowbreak\textbf{specification} command and the \textit{nitpick\_choice\_spec} set associated with the constant is not empty, it uses these theorems as the specification of the constant.
\item[4.] Otherwise, it looks up the definition of the constant. If the \textit{nitpick\_unfold} set associated with the constant is not empty, it uses the latest rule added to the set as the definition of the constant; otherwise it uses the actual definition axiom.
\begin{enum}
\item[1.] If the definition is of the form
\qquad $c~{?}x_1~\ldots~{?}x_m \,\equiv\, \lambda y_1~\ldots~y_n.\; \textit{lfp}~(\lambda f.\; t)$
or
\qquad $c~{?}x_1~\ldots~{?}x_m \,\equiv\, \lambda y_1~\ldots~y_n.\; \textit{gfp}~(\lambda f.\; t),$
then Nitpick assumes that the definition was made using a (co)inductive package based on the user-specified introduction rules registered in Isabelle's internal \textit{Spec\_Rules} table. The tool uses the introduction rules to ascertain whether the definition is well-founded and the definition to generate a fixed-point equation or an unrolled equation.
\item[2.] If the definition is compact enough, the constant is \textsl{unfolded} wherever it appears; otherwise, it is defined equationally, as with the \textit{nitpick\_simp} attribute.
\end{enum}
\end{enum}
As an illustration, consider the inductive definition
\prew
\textbf{inductive}~\textit{odd}~\textbf{where} \\
``\textit{odd}~1'' $\,\mid$ \\
``\textit{odd}~$n\,\Longrightarrow\, \textit{odd}~(\textit{Suc}~(\textit{Suc}~n))$''
\postw
By default, Nitpick uses the \textit{lfp}-based definition in conjunction with the introduction rules. To override this, you can specify an alternative definition as follows:
\prew
\textbf{lemma} $\mathit{odd\_alt\_unfold}$ [\textit{nitpick\_unfold}]:\kern.4em ``$\textit{odd}~n \,\equiv\, n~\textrm{mod}~2 = 1$''
\postw
Nitpick then expands all occurrences of $\mathit{odd}~n$ to $n~\textrm{mod}~2 = 1$. Alternatively, you can specify an equational specification of the constant:
\prew
\textbf{lemma} $\mathit{odd\_simp}$ [\textit{nitpick\_simp}]:\kern.4em ``$\textit{odd}~n = (n~\textrm{mod}~2 = 1)$''
\postw
Such tweaks should be done with great care, because Nitpick will assume that the constant is completely defined by its equational specification. For example, if you make ``$\textit{odd}~(2 * k + 1)$'' a \textit{nitpick\_simp} rule and neglect to provide rules to handle the $2 * k$ case, Nitpick will define $\textit{odd}~n$ arbitrarily for even values of $n$. The \textit{debug} (\S\ref{output-format}) option is extremely useful to understand what is going on when experimenting with \textit{nitpick\_} attributes.
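Concretely, the problematic setup alluded to above would look as follows (the lemma name is hypothetical, and the specification is deliberately incomplete; do not imitate it):
\prew
\textbf{lemma} $\mathit{odd\_simp\_bad}$ [\textit{nitpick\_simp}]:\kern.4em ``$\textit{odd}~(2 * k + 1) = \textit{True\/}$''
\postw
Since no rule covers arguments of the form $2 * k$, Nitpick is free to assign arbitrary truth values to $\textit{odd}~n$ for even $n$.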
Because of its internal three-valued logic, Nitpick tends to lose a lot of precision in the presence of partially specified constants. For example,
\prew
\textbf{lemma} \textit{odd\_simp} [\textit{nitpick\_simp}]:\kern.4em ``$\textit{odd~x} = \lnot\, \textit{even}~x$''
\postw
is superior to
\prew
\textbf{lemma} \textit{odd\_psimps} [\textit{nitpick\_simp}]: \\
``$\textit{even~x} \,\Longrightarrow\, \textit{odd~x} = \textit{False\/}$'' \\
``$\lnot\, \textit{even~x} \,\Longrightarrow\, \textit{odd~x} = \textit{True\/}$''
\postw
Because Nitpick sometimes unfolds definitions but never simplification rules, you can ensure that a constant is defined explicitly using the \textit{nitpick\_simp} attribute. For example:
\prew
\textbf{definition}~\textit{optimum} \textbf{where} [\textit{nitpick\_simp}]: \\
``$\textit{optimum}~t = (\forall u.\; \textit{consistent}~u \mathrel{\land} \textit{alphabet}~t = \textit{alphabet}~u$ \\
\phantom{``$\textit{optimum}~t = (\forall u.\;$}${\mathrel{\land}}\; \textit{freq}~t = \textit{freq}~u \longrightarrow \textit{cost}~t \le \textit{cost}~u)$''
\postw
On some rare occasions, you might want to provide an inductive or coinductive view on top of an existing constant $c$. The easiest way to achieve this is to define a new constant $c'$ (co)inductively. Then prove that $c$ equals $c'$ and let Nitpick know about it:
\prew
\textbf{lemma} \textit{c\_alt\_unfold} [\textit{nitpick\_unfold}]:\kern.4em ``$c \equiv c'$\kern2pt ''
\postw
This ensures that Nitpick will substitute $c'$ for $c$ and use the (co)inductive definition.
\section{Standard ML Interface}
\label{standard-ml-interface}
Nitpick provides a rich Standard ML interface used mainly for internal purposes and debugging. Among the most interesting functions exported by Nitpick are those that let you invoke the tool programmatically and those that let you register and unregister custom term postprocessors as well as coinductive datatypes.
\subsection{Invoking Nitpick}
\label{invoking-nitpick}
The \textit{Nitpick} structure offers the following functions for invoking your favorite counterexample generator:
\prew
$\textbf{val}\,~\textit{pick\_nits\_in\_term} : \\
\hbox{}\quad\textit{Proof.state} \rightarrow \textit{params} \rightarrow \textit{mode} \rightarrow \textit{int} \rightarrow \textit{int} \rightarrow \textit{int}$ \\
$\hbox{}\quad{\rightarrow}\; (\textit{term} * \textit{term})~\textit{list} \rightarrow \textit{term~list} \rightarrow \textit{term} \rightarrow \textit{string} * \textit{Proof.state}$ \\
$\textbf{val}\,~\textit{pick\_nits\_in\_subgoal} : \\
\hbox{}\quad\textit{Proof.state} \rightarrow \textit{params} \rightarrow \textit{mode} \rightarrow \textit{int} \rightarrow \textit{int} \rightarrow \textit{string} * \textit{Proof.state}$
\postw
The return value is a new proof state paired with an outcome string (``genuine'', ``quasi\_genuine'', ``potential'', ``none'', or ``unknown''). The \textit{params} type is a large record that lets you set Nitpick's options. The current default options can be retrieved by calling the following function defined in the \textit{Nitpick\_Isar} structure:
\prew
$\textbf{val}\,~\textit{default\_params} :\, \textit{theory} \rightarrow (\textit{string} * \textit{string})~\textit{list} \rightarrow \textit{params}$
\postw
The second argument lets you override option values before they are parsed and put into a \textit{params} record.
Here is an example where Nitpick is invoked on subgoal $i$ of $n$ with no time limit: \prew $\textbf{val}\,~\textit{params} = \textit{Nitpick\_Isar.default\_params}~\textit{thy}~[(\textrm{``}\textrm{timeout\/}\textrm{''},\, \textrm{``}\textrm{none}\textrm{''})]$ \\ $\textbf{val}\,~(\textit{outcome},\, \textit{state}') = {}$ \\ $\hbox{}\quad\textit{Nitpick.pick\_nits\_in\_subgoal}~\textit{state}~\textit{params}~\textit{Nitpick.Normal}~\textit{i}~\textit{n}$ \postw \let\antiq=\textrm \subsection{Registering Term Postprocessors} \label{registering-term-postprocessors} It is possible to change the output of any term that Nitpick considers a datatype by registering a term postprocessor. The interface for registering and unregistering postprocessors consists of the following pair of functions defined in the \textit{Nitpick\_Model} structure: \prew $\textbf{type}\,~\textit{term\_postprocessor}\,~{=} {}$ \\ $\hbox{}\quad\textit{Proof.context} \rightarrow \textit{string} \rightarrow (\textit{typ} \rightarrow \textit{term~list\/}) \rightarrow \textit{typ} \rightarrow \textit{term} \rightarrow \textit{term}$ \\ $\textbf{val}\,~\textit{register\_term\_postprocessor} : {}$ \\ $\hbox{}\quad\textit{typ} \rightarrow \textit{term\_postprocessor} \rightarrow \textit{morphism} \rightarrow \textit{Context.generic}$ \\ $\hbox{}\quad{\rightarrow}\; \textit{Context.generic}$ \\ $\textbf{val}\,~\textit{unregister\_term\_postprocessor} : {}$ \\ $\hbox{}\quad\textit{typ} \rightarrow \textit{morphism} \rightarrow \textit{Context.generic} \rightarrow \textit{Context.generic}$ \postw \S\ref{typedefs-quotient-types-records-rationals-and-reals} and \texttt{src/HOL/Library/Multiset.thy} illustrate this feature in context. \subsection{Registering Coinductive Datatypes} \label{registering-coinductive-datatypes} Coinductive datatypes defined using the \textbf{codatatype} command that do not involve nested recursion through non-codatatypes are supported by Nitpick. If you have defined a custom coinductive datatype, you can tell Nitpick about it, so that it can use an efficient Kodkod axiomatization. The interface for registering and unregistering coinductive datatypes consists of the following pair of functions defined in the \textit{Nitpick\_HOL} structure: \prew $\textbf{val}\,~\textit{register\_codatatype\/} : {}$ \\ $\hbox{}\quad\textit{morphism} \rightarrow \textit{typ} \rightarrow \textit{string} \rightarrow (\textit{string} \times \textit{typ})\;\textit{list} \rightarrow \textit{Context.generic} {}$ \\ $\hbox{}\quad{\rightarrow}\; \textit{Context.generic}$ \\ $\textbf{val}\,~\textit{unregister\_codatatype\/} : {}$ \\ $\hbox{}\quad\textit{morphism} \rightarrow \textit{typ} \rightarrow \textit{Context.generic} \rightarrow \textit{Context.generic} {}$ \postw The type $'a~\textit{llist}$ of lazy lists is already registered; had it not been, you could have told Nitpick about it by adding the following line to your theory file: \prew $\textbf{declaration}~\,\{{*}$ \\ $\hbox{}\quad\textit{Nitpick\_HOL.register\_codatatype}~@\{\antiq{typ}~``\kern1pt'a~\textit{llist\/}\textrm{''}\}$ \\ $\hbox{}\qquad\quad @\{\antiq{const\_name}~ \textit{llist\_case}\}$ \\ $\hbox{}\qquad\quad (\textit{map}~\textit{dest\_Const}~[@\{\antiq{term}~\textit{LNil}\},\, @\{\antiq{term}~\textit{LCons}\}])$ \\ ${*}\}$ \postw The \textit{register\_codatatype} function takes a coinductive datatype, its case function, and the list of its constructors (in addition to the current morphism and generic proof context). 
The case function must take its arguments in the order that the constructors are listed. If no case function with the correct signature is available, simply pass the empty string. On the other hand, if your goal is to cripple Nitpick, add the following line to your theory file and try to check a few conjectures about lazy lists: \prew $\textbf{declaration}~\,\{{*}$ \\ $\hbox{}\quad\textit{Nitpick\_HOL.unregister\_codatatype}~@\{\antiq{typ}~``\kern1pt'a~\textit{llist\/}\textrm{''}\}$ \\ ${*}\}$ \postw Inductive datatypes can be registered as coinductive datatypes, given appropriate coinductive constructors. However, doing so precludes the use of the inductive constructors---Nitpick will generate an error if they are needed. \section{Known Bugs and Limitations} \label{known-bugs-and-limitations} Here are the known bugs and limitations in Nitpick at the time of writing: \begin{enum} \item[\labelitemi] Underspecified functions defined using the \textbf{primrec}, \textbf{function}, or \textbf{nominal\_\allowbreak primrec} packages can lead Nitpick to generate spurious counterexamples for theorems that refer to values for which the function is not defined. For example: \prew \textbf{primrec} \textit{prec} \textbf{where} \\ ``$\textit{prec}~(\textit{Suc}~n) = n$'' \\[2\smallskipamount] \textbf{lemma} ``$\textit{prec}~0 = \textit{undefined\/}$'' \\ \textbf{nitpick} \\[2\smallskipamount] \quad{\slshape Nitpick found a counterexample for \textit{card nat}~= 2: \nopagebreak \\[2\smallskipamount] \hbox{}\qquad Empty assignment} \nopagebreak\\[2\smallskipamount] \textbf{by}~(\textit{auto simp}:~\textit{prec\_def}) \postw Such theorems are generally considered bad style because they rely on the internal representation of functions synthesized by Isabelle, an implementation detail. \item[\labelitemi] Similarly, Nitpick might find spurious counterexamples for theorems that rely on the use of the indefinite description operator internally by \textbf{specification} and \textbf{quot\_type}. \item[\labelitemi] Axioms or definitions that restrict the possible values of the \textit{undefined} constant or other partially specified built-in Isabelle constants (e.g., \textit{Abs\_} and \textit{Rep\_} constants) are in general ignored. Again, such nonconservative extensions are generally considered bad style. \item[\labelitemi] Nitpick produces spurious counterexamples when invoked after a \textbf{guess} command in a structured proof. \item[\labelitemi] Datatypes defined using \textbf{datatype} and codatatypes defined using \textbf{codatatype} that involve nested (co)recursion through non-(co)datatypes are not properly supported and may result in spurious counterexamples. \item[\labelitemi] Types that are registered with several distinct sets of constructors, including \textit{enat} if the \textit{Coinductive} entry of the \textit{Archive of Formal Proofs} is loaded, can confuse Nitpick. \item[\labelitemi] The \textit{nitpick\_xxx} attributes and the \textit{Nitpick\_xxx.register\_yyy} functions can cause havoc if used improperly. \item[\labelitemi] Although this has never been observed, arbitrary theorem morphisms could possibly confuse Nitpick, resulting in spurious counterexamples. \item[\labelitemi] All constants, types, free variables, and schematic variables whose names start with \textit{Nitpick}{.} are reserved for internal use. \item[\labelitemi] Some users report technical issues with the default SAT solver on Windows. 
Setting the \textit{sat\_solver} option (\S\ref{optimizations}) to \textit{MiniSat\_JNI} should solve this. \end{enum} \let\em=\sl \bibliography{manual}{} \bibliographystyle{abbrv} \end{document} diff --git a/src/Doc/Prog_Prove/LaTeXsugar.thy b/src/Doc/Prog_Prove/LaTeXsugar.thy --- a/src/Doc/Prog_Prove/LaTeXsugar.thy +++ b/src/Doc/Prog_Prove/LaTeXsugar.thy @@ -1,56 +1,57 @@ (* Title: HOL/Library/LaTeXsugar.thy Author: Gerwin Klein, Tobias Nipkow, Norbert Schirmer Copyright 2005 NICTA and TUM *) (*<*) theory LaTeXsugar imports Main begin (* DUMMY *) consts DUMMY :: 'a ("\<^latex>\\\_\") (* THEOREMS *) notation (Rule output) Pure.imp ("\<^latex>\\\mbox{}\\inferrule{\\mbox{\_\<^latex>\}}\\<^latex>\{\\mbox{\_\<^latex>\}}\") syntax (Rule output) "_bigimpl" :: "asms \ prop \ prop" ("\<^latex>\\\mbox{}\\inferrule{\_\<^latex>\}\\<^latex>\{\\mbox{\_\<^latex>\}}\") "_asms" :: "prop \ asms \ asms" ("\<^latex>\\\mbox{\_\<^latex>\}\\\\\/ _") "_asm" :: "prop \ asms" ("\<^latex>\\\mbox{\_\<^latex>\}\") notation (Axiom output) "Trueprop" ("\<^latex>\\\mbox{}\\inferrule{\\mbox{}}{\\mbox{\_\<^latex>\}}\") notation (IfThen output) Pure.imp ("\<^latex>\{\\normalsize{}\If\<^latex>\\\,}\ _/ \<^latex>\{\\normalsize \\,\then\<^latex>\\\,}\/ _.") syntax (IfThen output) "_bigimpl" :: "asms \ prop \ prop" ("\<^latex>\{\\normalsize{}\If\<^latex>\\\,}\ _ /\<^latex>\{\\normalsize \\,\then\<^latex>\\\,}\/ _.") "_asms" :: "prop \ asms \ asms" ("\<^latex>\\\mbox{\_\<^latex>\}\ /\<^latex>\{\\normalsize \\,\and\<^latex>\\\,}\/ _") "_asm" :: "prop \ asms" ("\<^latex>\\\mbox{\_\<^latex>\}\") notation (IfThenNoBox output) Pure.imp ("\<^latex>\{\\normalsize{}\If\<^latex>\\\,}\ _/ \<^latex>\{\\normalsize \\,\then\<^latex>\\\,}\/ _.") syntax (IfThenNoBox output) "_bigimpl" :: "asms \ prop \ prop" ("\<^latex>\{\\normalsize{}\If\<^latex>\\\,}\ _ /\<^latex>\{\\normalsize \\,\then\<^latex>\\\,}\/ _.") "_asms" :: "prop \ asms \ asms" ("_ /\<^latex>\{\\normalsize \\,\and\<^latex>\\\,}\/ _") "_asm" :: "prop \ asms" ("_") setup \ - Thy_Output.antiquotation_pretty_source \<^binding>\const_typ\ (Scan.lift Args.embedded_inner_syntax) + Document_Output.antiquotation_pretty_source \<^binding>\const_typ\ + (Scan.lift Args.embedded_inner_syntax) (fn ctxt => fn c => let val tc = Proof_Context.read_const {proper = false, strict = false} ctxt c in - Pretty.block [Thy_Output.pretty_term ctxt tc, Pretty.str " ::", + Pretty.block [Document_Output.pretty_term ctxt tc, Pretty.str " ::", Pretty.brk 1, Syntax.pretty_typ ctxt (fastype_of tc)] end) \ end (*>*) diff --git a/src/Doc/Prog_Prove/document/build b/src/Doc/Prog_Prove/document/build deleted file mode 100755 --- a/src/Doc/Prog_Prove/document/build +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/env bash - -set -e - -FORMAT="$1" -VARIANT="$2" - -isabelle logo HOL -"$ISABELLE_HOME/src/Doc/prepare_document" "$FORMAT" - diff --git a/src/Doc/Prog_Prove/document/root.tex b/src/Doc/Prog_Prove/document/root.tex --- a/src/Doc/Prog_Prove/document/root.tex +++ b/src/Doc/Prog_Prove/document/root.tex @@ -1,52 +1,52 @@ \documentclass[envcountsame,envcountchap]{svmono} \input{prelude} \newif\ifsem \begin{document} \title{Programming and Proving in Isabelle/HOL} -\subtitle{\includegraphics[scale=.7]{isabelle_hol}} +\subtitle{\includegraphics[scale=.7]{isabelle_logo}} \author{Tobias Nipkow} \maketitle \frontmatter%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \setcounter{tocdepth}{1} \tableofcontents \mainmatter%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\part{Isabelle} \chapter{Introduction} 
\input{intro-isabelle.tex} \chapter{Programming and Proving} \label{sec:FP} \input{Basics.tex} \input{Bool_nat_list} \input{Types_and_funs} %\chapter{Case Study: IMP Expressions} %\label{sec:CaseStudyExp} %\input{../generated/Expressions} \chapter{Logic and Proof Beyond Equality} \label{ch:Logic} \input{Logic} \chapter{Isar: A Language for Structured Proofs} \label{ch:Isar} \input{Isar} \backmatter%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \bibliographystyle{plain} \bibliography{root} %\printindex \end{document} diff --git a/src/Doc/ROOT b/src/Doc/ROOT --- a/src/Doc/ROOT +++ b/src/Doc/ROOT @@ -1,514 +1,490 @@ chapter Doc session Classes (doc) in "Classes" = HOL + - options [document_variants = "classes", quick_and_dirty] + options [document_logo = "Isar", document_bibliography, + document_variants = "classes", quick_and_dirty] theories [document = false] Setup theories Classes document_files (in "..") - "prepare_document" "pdfsetup.sty" "iman.sty" "extra.sty" "isar.sty" "manual.bib" document_files - "build" "root.tex" "style.sty" session Codegen (doc) in "Codegen" = HOL + - options [document_variants = "codegen", print_mode = "no_brackets,iff"] + options [document_logo = "Isar", document_bibliography, document_variants = "codegen", + print_mode = "no_brackets,iff"] sessions "HOL-Library" theories [document = false] Setup theories Introduction Foundations Refinement Inductive_Predicate Evaluation Computations Adaptation Further document_files (in "..") - "prepare_document" "pdfsetup.sty" "iman.sty" "extra.sty" "isar.sty" "manual.bib" document_files - "build" "root.tex" "style.sty" session Corec (doc) in "Corec" = Datatypes + - options [document_variants = "corec"] + options [document_bibliography, document_variants = "corec"] theories Corec document_files (in "..") - "prepare_document" "pdfsetup.sty" "iman.sty" "extra.sty" "isar.sty" "manual.bib" document_files - "build" "root.tex" "style.sty" session Datatypes (doc) in "Datatypes" = HOL + - options [document_variants = "datatypes"] + options [document_bibliography, document_variants = "datatypes"] sessions "HOL-Library" theories [document = false] Setup theories Datatypes document_files (in "..") - "prepare_document" "pdfsetup.sty" "iman.sty" "extra.sty" "isar.sty" "manual.bib" document_files - "build" "root.tex" "style.sty" session Eisbach (doc) in "Eisbach" = HOL + - options [document_variants = "eisbach", quick_and_dirty, - print_mode = "no_brackets,iff", show_question_marks = false] + options [document_logo = "Eisbach", document_bibliography, document_variants = "eisbach", + quick_and_dirty, print_mode = "no_brackets,iff", show_question_marks = false] sessions "HOL-Eisbach" theories [document = false] Base theories Preface Manual document_files (in "..") - "prepare_document" "pdfsetup.sty" "iman.sty" "extra.sty" "isar.sty" "ttbox.sty" "underscore.sty" "manual.bib" document_files - "build" "root.tex" "style.sty" session Functions (doc) in "Functions" = HOL + - options [document_variants = "functions", skip_proofs = false, quick_and_dirty] + options [document_bibliography, document_variants = "functions", + skip_proofs = false, quick_and_dirty] theories Functions document_files (in "..") - "prepare_document" "pdfsetup.sty" "iman.sty" "extra.sty" "isar.sty" "manual.bib" document_files - "build" "conclusion.tex" "intro.tex" "root.tex" "style.sty" session How_to_Prove_it (no_doc) in "How_to_Prove_it" = HOL + options [document_variants = "how_to_prove_it", show_question_marks = false] theories How_to_Prove_it document_files 
"root.tex" "root.bib" "prelude.tex" session Intro (doc) in "Intro" = Pure + - options [document_variants = "intro"] + options [document_logo = "_", document_bibliography, document_build = "build", + document_variants = "intro"] document_files (in "..") - "prepare_document" "pdfsetup.sty" "iman.sty" "extra.sty" "ttbox.sty" "manual.bib" document_files "advanced.tex" "build" "foundations.tex" "getting.tex" "root.tex" session Implementation (doc) in "Implementation" = HOL + - options [document_variants = "implementation", quick_and_dirty] + options [document_logo = "Isar", document_bibliography, + document_variants = "implementation", quick_and_dirty] theories Eq Integration Isar Local_Theory "ML" Prelim Proof Syntax Tactic theories [parallel_proofs = 0] Logic document_files (in "..") - "prepare_document" "pdfsetup.sty" "iman.sty" "extra.sty" "isar.sty" "ttbox.sty" "underscore.sty" "manual.bib" document_files - "build" "root.tex" "style.sty" session Isar_Ref (doc) in "Isar_Ref" = HOL + - options [document_variants = "isar-ref", quick_and_dirty, thy_output_source] + options [document_logo = "Isar", document_bibliography, document_variants = "isar-ref", + quick_and_dirty, thy_output_source] sessions "HOL-Library" theories Preface Synopsis Framework First_Order_Logic Outer_Syntax Document_Preparation Spec Proof Proof_Script Inner_Syntax Generic HOL_Specific Quick_Reference Symbols document_files (in "..") - "prepare_document" "pdfsetup.sty" "iman.sty" "extra.sty" "isar.sty" "ttbox.sty" "underscore.sty" "manual.bib" document_files - "build" "isar-vm.pdf" "isar-vm.svg" "root.tex" - "showsymbols" "style.sty" session JEdit (doc) in "JEdit" = HOL + - options [document_variants = "jedit", thy_output_source] + options [document_logo = "jEdit", document_bibliography, document_variants = "jedit", + thy_output_source] theories JEdit document_files (in "..") "extra.sty" "iman.sty" "isar.sty" "manual.bib" "pdfsetup.sty" - "prepare_document" "ttbox.sty" "underscore.sty" document_files (in "../Isar_Ref/document") "style.sty" document_files "auto-tools.png" "bibtex-mode.png" - "build" "cite-completion.png" "isabelle-jedit.png" "markdown-document.png" "ml-debugger.png" "output-and-state.png" "output-including-state.png" "output.png" "popup1.png" "popup2.png" "query.png" "root.tex" "scope1.png" "scope2.png" "sidekick-document.png" "sidekick.png" "sledgehammer.png" "theories.png" session Sugar (doc) in "Sugar" = HOL + - options [document_variants = "sugar"] + options [document_bibliography, document_variants = "sugar"] sessions "HOL-Library" theories Sugar document_files (in "..") - "prepare_document" "pdfsetup.sty" document_files - "build" "root.bib" "root.tex" session Locales (doc) in "Locales" = HOL + - options [document_variants = "locales", thy_output_margin = 65, skip_proofs = false] + options [document_bibliography, document_variants = "locales", + thy_output_margin = 65, skip_proofs = false] theories Examples1 Examples2 Examples3 document_files (in "..") - "prepare_document" "pdfsetup.sty" document_files - "build" "root.bib" "root.tex" session Logics (doc) in "Logics" = Pure + - options [document_variants = "logics"] + options [document_logo = "_", document_bibliography, document_build = "build", + document_variants = "logics"] document_files (in "..") - "prepare_document" "pdfsetup.sty" "iman.sty" "extra.sty" "ttbox.sty" "manual.bib" + document_files (in "../Intro/document") + "build" document_files "CTT.tex" "HOL.tex" "LK.tex" "Sequents.tex" - "build" "preface.tex" "root.tex" "syntax.tex" session 
Logics_ZF (doc) in "Logics_ZF" = ZF + - options [document_variants = "logics-ZF", print_mode = "brackets", - thy_output_source] + options [document_logo = "ZF", document_bibliography, document_build = "build", + document_variants = "logics-ZF", print_mode = "brackets", thy_output_source] sessions FOL theories IFOL_examples FOL_examples ZF_examples If ZF_Isar document_files (in "..") - "prepare_document" "pdfsetup.sty" "isar.sty" "ttbox.sty" "manual.bib" + document_files (in "../Intro/document") + "build" document_files (in "../Logics/document") "syntax.tex" document_files "FOL.tex" "ZF.tex" - "build" "logics.sty" "root.tex" session Main (doc) in "Main" = HOL + options [document_variants = "main"] theories Main_Doc document_files (in "..") - "prepare_document" "pdfsetup.sty" document_files - "build" "root.tex" session Nitpick (doc) in "Nitpick" = Pure + - options [document_variants = "nitpick"] + options [document_logo = "Nitpick", document_bibliography, document_variants = "nitpick"] document_files (in "..") - "prepare_document" "pdfsetup.sty" "iman.sty" "manual.bib" document_files - "build" "root.tex" session Prog_Prove (doc) in "Prog_Prove" = HOL + - options [document_variants = "prog-prove", show_question_marks = false] + options [document_logo = "HOL", document_bibliography, document_variants = "prog-prove", + show_question_marks = false] theories Basics Bool_nat_list MyList Types_and_funs Logic Isar document_files (in ".") "MyList.thy" document_files (in "..") - "prepare_document" "pdfsetup.sty" document_files "bang.pdf" - "build" "intro-isabelle.tex" "prelude.tex" "root.bib" "root.tex" "svmono.cls" session Sledgehammer (doc) in "Sledgehammer" = Pure + - options [document_variants = "sledgehammer"] + options [document_logo = "S/H", document_bibliography, document_variants = "sledgehammer"] document_files (in "..") - "prepare_document" "pdfsetup.sty" "iman.sty" "manual.bib" document_files - "build" "root.tex" session System (doc) in "System" = Pure + - options [document_variants = "system", thy_output_source] + options [document_logo = "_", document_bibliography, document_variants = "system", + thy_output_source] sessions "HOL-Library" theories Environment Sessions Presentation Server Scala Phabricator Misc document_files (in "..") - "prepare_document" "pdfsetup.sty" "iman.sty" "extra.sty" "isar.sty" "ttbox.sty" "underscore.sty" "manual.bib" document_files (in "../Isar_Ref/document") "style.sty" document_files - "build" "root.tex" session Tutorial (doc) in "Tutorial" = HOL + - options [document_variants = "tutorial", print_mode = "brackets", skip_proofs = false] + options [document_logo = "HOL", document_bibliography, document_build = "build", + document_variants = "tutorial", print_mode = "brackets", skip_proofs = false] directories "Advanced" "CTL" "CodeGen" "Datatype" "Documents" "Fun" "Ifexpr" "Inductive" "Misc" "Protocol" "Rules" "Sets" "ToyList" "Trie" "Types" theories [document = false] Base theories [threads = 1] "ToyList/ToyList_Test" theories [thy_output_indent = 5] "ToyList/ToyList" "Ifexpr/Ifexpr" "CodeGen/CodeGen" "Trie/Trie" "Datatype/ABexpr" "Datatype/unfoldnested" "Datatype/Nested" "Datatype/Fundata" "Fun/fun0" "Advanced/simp2" "CTL/PDL" "CTL/CTL" "CTL/CTLind" "Inductive/Even" "Inductive/Mutual" "Inductive/Star" "Inductive/AB" "Inductive/Advanced" "Misc/Tree" "Misc/Tree2" "Misc/Plus" "Misc/case_exprs" "Misc/fakenat" "Misc/natsum" "Misc/pairs2" "Misc/Option2" "Misc/types" "Misc/prime_def" "Misc/simp" "Misc/Itrev" "Misc/AdvancedInd" "Misc/appendix" theories 
"Protocol/NS_Public" "Documents/Documents" theories [thy_output_margin = 64, thy_output_indent = 0] "Types/Numbers" "Types/Pairs" "Types/Records" "Types/Typedefs" "Types/Overloading" "Types/Axioms" "Rules/Basic" "Rules/Blast" "Rules/Force" theories [thy_output_margin = 64, thy_output_indent = 5] "Rules/TPrimes" "Rules/Forward" "Rules/Tacticals" "Rules/find2" "Sets/Examples" "Sets/Functions" "Sets/Relations" "Sets/Recur" document_files (in "ToyList") "ToyList1.txt" "ToyList2.txt" document_files (in "..") "pdfsetup.sty" "ttbox.sty" "manual.bib" document_files "advanced0.tex" "appendix0.tex" "basics.tex" "build" "cl2emono-modified.sty" "ctl0.tex" "documents0.tex" "fp.tex" "inductive0.tex" "isa-index" "Isa-logics.pdf" "numerics.tex" "pghead.pdf" "preface.tex" "protocol.tex" "root.tex" "rules.tex" "sets.tex" "tutorial.sty" "typedef.pdf" "types0.tex" session Typeclass_Hierarchy (doc) in "Typeclass_Hierarchy" = HOL + - options [document_variants = "typeclass_hierarchy"] + options [document_logo = "Isar", document_bibliography, document_variants = "typeclass_hierarchy"] sessions "HOL-Library" theories [document = false] Setup theories Typeclass_Hierarchy document_files (in "..") - "prepare_document" "pdfsetup.sty" "iman.sty" "extra.sty" "isar.sty" "manual.bib" document_files - "build" "root.tex" "style.sty" diff --git a/src/Doc/Sledgehammer/document/build b/src/Doc/Sledgehammer/document/build deleted file mode 100755 --- a/src/Doc/Sledgehammer/document/build +++ /dev/null @@ -1,9 +0,0 @@ -#!/usr/bin/env bash - -set -e - -FORMAT="$1" -VARIANT="$2" - -isabelle logo -o isabelle_sledgehammer.pdf "S/H" -"$ISABELLE_HOME/src/Doc/prepare_document" "$FORMAT" diff --git a/src/Doc/Sledgehammer/document/root.tex b/src/Doc/Sledgehammer/document/root.tex --- a/src/Doc/Sledgehammer/document/root.tex +++ b/src/Doc/Sledgehammer/document/root.tex @@ -1,1322 +1,1322 @@ \documentclass[a4paper,12pt]{article} \usepackage[T1]{fontenc} \usepackage{amsmath} \usepackage{amssymb} \usepackage{color} \usepackage{footmisc} \usepackage{graphicx} %\usepackage{mathpazo} \usepackage{multicol} \usepackage{stmaryrd} %\usepackage[scaled=.85]{beramono} \usepackage{isabelle,iman,pdfsetup} \newcommand\download{\url{https://isabelle.in.tum.de/components/}} \let\oldS=\S \def\S{\oldS\,} \def\qty#1{\ensuremath{\left<\mathit{#1\/}\right>}} \def\qtybf#1{$\mathbf{\left<\textbf{\textit{#1\/}}\right>}$} \newcommand\const[1]{\textsf{#1}} %\oddsidemargin=4.6mm %\evensidemargin=4.6mm %\textwidth=150mm %\topmargin=4.6mm %\headheight=0mm %\headsep=0mm %\textheight=234mm \def\Colon{\mathord{:\mkern-1.5mu:}} %\def\lbrakk{\mathopen{\lbrack\mkern-3.25mu\lbrack}} %\def\rbrakk{\mathclose{\rbrack\mkern-3.255mu\rbrack}} \def\lparr{\mathopen{(\mkern-4mu\mid}} \def\rparr{\mathclose{\mid\mkern-4mu)}} \def\unk{{?}} \def\undef{(\lambda x.\; \unk)} %\def\unr{\textit{others}} \def\unr{\ldots} \def\Abs#1{\hbox{\rm{\guillemetleft}}{\,#1\,}\hbox{\rm{\guillemetright}}} \def\Q{{\smash{\lower.2ex\hbox{$\scriptstyle?$}}}} \urlstyle{tt} \renewcommand\_{\hbox{\textunderscore\kern-.05ex}} \hyphenation{Isa-belle super-posi-tion zipper-posi-tion} \begin{document} %%% TYPESETTING %\renewcommand\labelitemi{$\bullet$} \renewcommand\labelitemi{\raise.065ex\hbox{\small\textbullet}} -\title{\includegraphics[scale=0.5]{isabelle_sledgehammer} \\[4ex] +\title{\includegraphics[scale=0.5]{isabelle_logo} \\[4ex] Hammering Away \\[\smallskipamount] \Large A User's Guide to Sledgehammer for Isabelle/HOL} \author{\hbox{} \\ Jasmin Blanchette \\ {\normalsize Institut f\"ur Informatik, 
Technische Universit\"at M\"unchen} \\[4\smallskipamount] {\normalsize with contributions from} \\[4\smallskipamount] Martin Desharnais \\ {\normalsize Forschungsinstitut CODE, Universit\"at der Bundeswehr M\"unchen} \\[4\smallskipamount] Lawrence C. Paulson \\ {\normalsize Computer Laboratory, University of Cambridge} \\ \hbox{}} \maketitle \tableofcontents \setlength{\parskip}{.7em plus .2em minus .1em} \setlength{\parindent}{0pt} \setlength{\abovedisplayskip}{\parskip} \setlength{\abovedisplayshortskip}{.9\parskip} \setlength{\belowdisplayskip}{\parskip} \setlength{\belowdisplayshortskip}{.9\parskip} % general-purpose enum environment with correct spacing \newenvironment{enum}% {\begin{list}{}{% \setlength{\topsep}{.1\parskip}% \setlength{\partopsep}{.1\parskip}% \setlength{\itemsep}{\parskip}% \advance\itemsep by-\parsep}} {\end{list}} \def\pre{\begingroup\vskip0pt plus1ex\advance\leftskip by\leftmargin \advance\rightskip by\leftmargin} \def\post{\vskip0pt plus1ex\endgroup} \def\prew{\pre\advance\rightskip by-\leftmargin} \def\postw{\post} \section{Introduction} \label{introduction} Sledgehammer is a tool that applies automatic theorem provers (ATPs) and satisfiability-modulo-theories (SMT) solvers on the current goal.% \footnote{The distinction between ATPs and SMT solvers is convenient but mostly historical.} % The supported ATPs include agsyHOL \cite{agsyHOL}, Alt-Ergo \cite{alt-ergo}, E \cite{schulz-2019}, iProver \cite{korovin-2009}, LEO-II \cite{leo2}, Leo-III \cite{leo3}, Satallax \cite{satallax}, SPASS \cite{weidenbach-et-al-2009}, Vampire \cite{riazanov-voronkov-2002}, Waldmeister \cite{waldmeister}, and Zipperposition \cite{cruanes-2014}. The ATPs are run either locally or remotely via the System\-On\-TPTP web service \cite{sutcliffe-2000}. The supported SMT solvers are CVC3 \cite{cvc3}, CVC4 \cite{cvc4}, veriT \cite{bouton-et-al-2009}, and Z3 \cite{de-moura-2008}. These are always run locally. The problem passed to the external provers (or solvers) consists of your current goal together with a heuristic selection of hundreds of facts (theorems) from the current theory context, filtered by relevance. The result of a successful proof search is some source text that typically reconstructs the proof within Isabelle. For ATPs, the reconstructed proof typically relies on the general-purpose \textit{metis} proof method, which integrates the Metis ATP in Isabelle/HOL with explicit inferences going through the kernel. Thus its results are correct by construction. For Isabelle/jEdit users, Sledgehammer provides an automatic mode that can be enabled via the ``Auto Sledgehammer'' option under ``Plugins > Plugin Options > Isabelle > General.'' In this mode, a reduced version of Sledgehammer is run on every newly entered theorem for a few seconds. \newbox\boxA \setbox\boxA=\hbox{\texttt{NOSPAM}} \newcommand\authoremail{\texttt{jasmin.blan{\color{white}NOSPAM}\kern-\wd\boxA{}chette@\allowbreak google.\allowbreak com}} To run Sledgehammer, you must make sure that the theory \textit{Sledgehammer} is imported---this is rarely a problem in practice since it is part of \textit{Main}. Examples of Sledgehammer use can be found in the \texttt{src/HOL/Metis\_Examples} directory. Comments and bug reports concerning Sledgehammer or this manual should be directed to the author at \authoremail. \section{Installation} \label{installation} Sledgehammer is part of Isabelle, so you do not need to install it. However, it relies on third-party automatic provers (ATPs and SMT solvers). 
Among the ATPs, agsyHOL, Alt-Ergo, E, LEO-II, Leo-III, Satallax, SPASS, Vampire, and Zipperposition can be run locally; in addition, agsyHOL, Alt-Ergo, E, iProver, LEO-II, Leo-III, Satallax, Vampire, Waldmeister, and Zipperposition are available remotely via System\-On\-TPTP \cite{sutcliffe-2000}. The SMT solvers CVC3, CVC4, veriT, and Z3 can be run locally. There are three main ways to install automatic provers on your machine: \begin{sloppy} \begin{enum} \item[\labelitemi] If you installed an official Isabelle package, it should already include properly set up executables for CVC4, E, SPASS, Vampire, veriT, and Z3, ready to use. To use Vampire, you must confirm that you are a noncommercial user, as indicated by the message that is displayed when Sledgehammer is invoked the first time. \item[\labelitemi] Alternatively, you can download the Isabelle-aware CVC3, CVC4, E, SPASS, Vampire, veriT, and Z3 binary packages from \download. Extract the archives, then add a line to your \texttt{\$ISABELLE\_HOME\_USER\slash etc\slash components}% \footnote{The variable \texttt{\$ISABELLE\_HOME\_USER} is set by Isabelle at startup. Its value can be retrieved by executing \texttt{isabelle} \texttt{getenv} \texttt{ISABELLE\_HOME\_USER} on the command line.} file with the absolute path to the prover. For example, if the \texttt{components} file does not exist yet and you extracted SPASS to \texttt{/usr/local/spass-3.8ds}, create it with the single line \prew \texttt{/usr/local/spass-3.8ds} \postw in it. \item[\labelitemi] If you prefer to build agsyHOL, Alt-Ergo, E, LEO-II, Leo-III, or Satallax manually, set the environment variable \texttt{AGSYHOL\_HOME}, \texttt{E\_HOME}, \texttt{LEO2\_HOME}, \texttt{LEO3\_HOME}, or \texttt{SATALLAX\_HOME} to the directory that contains the \texttt{agsyHOL}, \texttt{eprover} (and/or \texttt{eproof} or \texttt{eproof\_ram}), \texttt{leo}, \texttt{leo3}, or \texttt{satallax} executable; for Alt-Ergo, set the environment variable \texttt{WHY3\_HOME} to the directory that contains the \texttt{why3} executable. Sledgehammer has been tested with agsyHOL 1.0, Alt-Ergo 0.95.2, E 1.6 to 2.0, LEO-II 1.3.4, Leo-III 1.1, and Satallax 2.7. Since the ATPs' output formats are neither documented nor stable, other versions might not work well with Sledgehammer. Ideally, you should also set \texttt{E\_VERSION}, \texttt{LEO2\_VERSION}, \texttt{LEO3\_VERSION}, or \texttt{SATALLAX\_VERSION} to the prover's version number (e.g., ``2.7''); this might help Sledgehammer invoke the prover optimally. Similarly, if you want to install CVC3, CVC4, veriT, or Z3, set the environment variable \texttt{CVC3\_\allowbreak SOLVER}, \texttt{CVC4\_\allowbreak SOLVER}, \texttt{VERIT\_\allowbreak SOLVER}, or \texttt{Z3\_SOLVER} to the complete path of the executable, \emph{including the file name}. Sledgehammer has been tested with CVC3 2.2 and 2.4.1, CVC4 1.5-prerelease, veriT 2020.10-rmx, and Z3 4.3.2. Since Z3's output format is somewhat unstable, other versions of the solver might not work well with Sledgehammer. Ideally, also set \texttt{CVC3\_VERSION}, \texttt{CVC4\_VERSION}, \texttt{VERIT\_VERSION}, or \texttt{Z3\_VERSION} to the solver's version number (e.g., ``4.4.0''). \end{enum} \end{sloppy} To check whether the provers are successfully installed, try out the example in \S\ref{first-steps}. 
If the remote version of any of these provers is used (identified by the prefix ``\textit{remote\_\/}''), or if the local versions fail to solve the easy goal presented there, something must be wrong with the installation.

\section{First Steps}
\label{first-steps}

To illustrate Sledgehammer in context, let us start a theory file and attempt to prove a simple lemma:

\prew
\textbf{theory}~\textit{Scratch} \\
\textbf{imports}~\textit{Main} \\
\textbf{begin} \\[2\smallskipamount]
%
\textbf{lemma} ``$[a] = [b] \,\Longrightarrow\, a = b$'' \\
\textbf{sledgehammer}
\postw

Instead of issuing the \textbf{sledgehammer} command, you can also use the Sledgehammer panel in Isabelle/jEdit. Sledgehammer might produce something like the following output after a few seconds:

\prew
\slshape
Proof found\ldots \\
``\textit{e\/}'': Try this: \textbf{by} \textit{simp} (0.3 ms) \\ %
``\textit{cvc4\/}'': Try this: \textbf{by} \textit{simp} (0.4 ms) \\ %
``\textit{z3\/}'': Try this: \textbf{by} \textit{simp} (0.5 ms) \\ %
``\textit{spass\/}'': Try this: \textbf{by} \textit{simp} (0.3 ms) %
\postw

Sledgehammer ran CVC4, E, SPASS, and Z3 in parallel. Depending on which provers are installed and how many processor cores are available, some of the provers might be missing or present with a \textit{remote\_} prefix. For each successful prover, Sledgehammer gives a one-line Isabelle proof. Rough timings are shown in parentheses, indicating how fast the call is. You can click the proof to insert it into the theory text.

In addition, you can ask Sledgehammer for an Isar text proof by enabling the \textit{isar\_proofs} option (\S\ref{output-format}):

\prew
\textbf{sledgehammer} [\textit{isar\_proofs}]
\postw

When Isar proof construction is successful, it can yield proofs that are more readable and also faster than \textit{metis} or \textit{smt} one-line proofs. This feature is experimental.

\section{Hints}
\label{hints}

This section presents a few hints that should help you get the most out of Sledgehammer. Frequently asked questions are answered in \S\ref{frequently-asked-questions}.

%\newcommand\point[1]{\medskip\par{\sl\bfseries#1}\par\nopagebreak}
\newcommand\point[1]{\subsection{\emph{#1}}}

\point{Presimplify the goal}

For best results, first simplify your problem by calling \textit{auto} or at least \textit{safe} followed by \textit{simp\_all}. The SMT solvers provide arithmetic decision procedures, but the ATPs typically do not (or if they do, Sledgehammer does not use them yet). Apart from Waldmeister, they are not particularly good at heavy rewriting, but because they regard equations as undirected, they often prove theorems that require the reverse orientation of a \textit{simp} rule. Higher-order problems can be tackled, but the success rate is better for first-order problems. Hence, you may get better results if you first simplify the problem to remove higher-order features.

\point{Familiarize yourself with the main options}

Sledgehammer's options are fully documented in \S\ref{command-syntax}. Many of the options are very specialized, but serious users of the tool should at least familiarize themselves with the following options:

\begin{enum}
\item[\labelitemi] \textbf{\textit{provers}} (\S\ref{mode-of-operation}) specifies the automatic provers (ATPs and SMT solvers) that should be run whenever Sledgehammer is invoked (e.g., ``\textit{provers}~= \textit{cvc4 e spass vampire\/}'').
For convenience, you can omit ``\textit{provers}~='' and simply write the prover names as a space-separated list (e.g., ``\textit{cvc4 e spass vampire\/}'').

\item[\labelitemi] \textbf{\textit{max\_facts}} (\S\ref{relevance-filter}) specifies the maximum number of facts that should be passed to the provers. By default, the value is prover-dependent but varies between about 50 and 1000. If the provers time out, you can try lowering this value to, say, 25 or 50 and see if that helps.

\item[\labelitemi] \textbf{\textit{isar\_proofs}} (\S\ref{output-format}) specifies that Isar proofs should be generated, in addition to one-line \textit{metis} or \textit{smt} proofs. The length of the Isar proofs can be controlled by setting \textit{compress} (\S\ref{output-format}).

\item[\labelitemi] \textbf{\textit{timeout}} (\S\ref{timeouts}) controls the provers' time limit. It is set to 30 seconds by default.
\end{enum}

Options can be set globally using \textbf{sledgehammer\_params} (\S\ref{command-syntax}). The command also prints the list of all available options with their current value. Fact selection can be influenced by specifying ``$(\textit{add}{:}~\textit{my\_facts})$'' after the \textbf{sledgehammer} call to ensure that certain facts are included, or simply ``$(\textit{my\_facts})$'' to force Sledgehammer to run only with $\textit{my\_facts}$ (and any facts chained into the goal).

\section{Frequently Asked Questions}
\label{frequently-asked-questions}

This section answers frequently (and infrequently) asked questions about Sledgehammer. It is a good idea to skim over it now even if you do not have any questions at this stage. And if you have any further questions not listed here, send them to the author at \authoremail.

\point{Which facts are passed to the automatic provers?}

Sledgehammer heuristically selects a few hundred relevant lemmas from the currently loaded libraries. The component that performs this selection is called the \emph{relevance filter} (\S\ref{relevance-filter}).

\begin{enum}
\item[\labelitemi] The traditional relevance filter, \emph{MePo} (\underline{Me}ng--\underline{Pau}lson), assigns a score to every available fact (lemma, theorem, definition, or axiom) based upon how many constants that fact shares with the conjecture. This process iterates to include facts relevant to those just accepted. The constants are weighted to give unusual ones greater significance. MePo copes best when the conjecture contains some unusual constants; if all the constants are common, it is unable to discriminate among the hundreds of facts that are picked up. The filter is also memoryless: It has no information about how many times a particular fact has been used in a proof, and it cannot learn.

\item[\labelitemi] An alternative to MePo is \emph{MaSh} (\underline{Ma}chine Learner for \underline{S}ledge\underline{h}ammer). It applies machine learning to the problem of finding relevant facts.

\item[\labelitemi] The \emph{MeSh} filter combines MePo and MaSh. This is the default.
\end{enum}

The number of facts included in a problem varies from prover to prover, since some provers get overwhelmed more easily than others. You can show the number of facts given using the \textit{verbose} option (\S\ref{output-format}) and the actual facts using \textit{debug} (\S\ref{output-format}).

Sledgehammer is good at finding short proofs combining a handful of existing lemmas.
If you are looking for longer proofs, you must typically restrict the number of facts by setting the \textit{max\_facts} option (\S\ref{relevance-filter}) to, say, 25 or 50.

You can also influence which facts are actually selected in a number of ways. If you simply want to ensure that a fact is included, you can specify it using the ``$(\textit{add}{:}~\textit{my\_facts})$'' syntax. For example:
%
\prew
\textbf{sledgehammer} (\textit{add}: \textit{hd.simps} \textit{tl.simps})
\postw
%
The specified facts then replace the least relevant facts that would otherwise be included; the other selected facts remain the same. If you want to direct the selection in a particular direction, you can specify the facts via \textbf{using}:
%
\prew
\textbf{using} \textit{hd.simps} \textit{tl.simps} \\
\textbf{sledgehammer}
\postw
%
The facts are then more likely to be selected than otherwise, and if they are selected at iteration $j$ they also influence which facts are selected at iterations $j + 1$, $j + 2$, etc. To give them even more weight, try
%
\prew
\textbf{using} \textit{hd.simps} \textit{tl.simps} \\
\textbf{apply}~\textbf{--} \\
\textbf{sledgehammer}
\postw

\point{Why does Metis fail to reconstruct the proof?}

There are many reasons. If Metis runs seemingly forever, that is a sign that the proof is too difficult for it. Metis's search is complete for first-order logic with equality, so if the proof was found by a superposition-based ATP such as E, SPASS, or Vampire, Metis should eventually find it, but that is little consolation.

In some rare cases, \textit{metis} fails fairly quickly, and you get the error message ``One-line proof reconstruction failed.'' This indicates that Sledgehammer determined that the goal is provable, but the proof is, for technical reasons, beyond \textit{metis}'s power. You can then try again with the \textit{strict} option (\S\ref{problem-encoding}).

If the goal is actually unprovable and you did not specify an unsound encoding using \textit{type\_enc} (\S\ref{problem-encoding}), this is a bug, and you are strongly encouraged to report this to the author at \authoremail.

\point{What are the \textit{full\_types}, \textit{no\_types}, and \\ \textit{mono\_tags} arguments to Metis?}

The \textit{metis}~(\textit{full\_types}) proof method and its cousin \textit{metis}~(\textit{mono\_tags}) are fully typed versions of Metis. They are somewhat slower than \textit{metis}, but the proof search is fully typed, and they also include more powerful rules such as the axiom ``$x = \const{True} \mathrel{\lor} x = \const{False}$'' for reasoning in higher-order places (e.g., in set comprehensions). These methods are tried as a fallback when \textit{metis} fails, and they are sometimes generated by Sledgehammer instead of \textit{metis} if the proof obviously requires type information or if \textit{metis} failed when Sledgehammer preplayed the proof.
%
At the other end of the soundness spectrum, \textit{metis} (\textit{no\_types}) uses no type information at all during the proof search, which is more efficient but often fails. Calls to \textit{metis} (\textit{no\_types}) are occasionally generated by Sledgehammer.
%
See the \textit{type\_enc} option (\S\ref{problem-encoding}) for details.

Incidentally, if you ever see warnings such as

\prew
\slshape
Metis: Falling back on ``\textit{metis} (\textit{full\_types})''
\postw

for a successful \textit{metis} proof, you can advantageously pass the \textit{full\_types} option to \textit{metis} directly.
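For instance, if a suggested one-line proof of the form \textbf{by}~(\textit{metis}~\textit{my\_fact}) triggers this warning, you can adjust it to (the fact name \textit{my\_fact} is purely illustrative):

\prew
\textbf{by}~(\textit{metis}~(\textit{full\_types})~\textit{my\_fact})
\postw

This skips the untyped proof attempt during reconstruction.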
\point{And what are the \textit{lifting} and \textit{hide\_lams} \\ arguments to Metis?} Orthogonally to the encoding of types, it is important to choose an appropriate translation of $\lambda$-abstractions. Metis supports three translation schemes, in decreasing order of power: Curry combinators (the default), $\lambda$-lifting, and a ``hiding'' scheme that disables all reasoning under $\lambda$-abstractions. The more powerful schemes also give the automatic provers more rope to hang themselves. See the \textit{lam\_trans} option (\S\ref{problem-encoding}) for details. \point{Are the generated proofs minimal?} Automatic provers frequently use many more facts than are necessary. Sledgehammer includes a proof minimization tool that takes a set of facts returned by a given prover and repeatedly calls a prover or proof method with subsets of those facts to find a minimal set. Reducing the number of facts typically helps reconstruction, while decluttering the proof scripts. \point{A strange error occurred---what should I do?} Sledgehammer tries to give informative error messages. Please report any strange error to the author at \authoremail. \point{Auto can solve it---why not Sledgehammer?} Problems can be easy for \textit{auto} and difficult for automatic provers, but the reverse is also true, so do not be discouraged if your first attempts fail. Because the system refers to all theorems known to Isabelle, it is particularly suitable when your goal has a short proof but requires lemmas that you do not know about. \point{Why are there so many options?} Sledgehammer's philosophy is that it should work out of the box, without user guidance. Most of the options are meant to be used by the Sledgehammer developers for experiments. \section{Command Syntax} \label{command-syntax} \subsection{Sledgehammer} \label{sledgehammer} Sledgehammer can be invoked at any point when there is an open goal by entering the \textbf{sledgehammer} command in the theory file. Its general syntax is as follows: \prew \textbf{sledgehammer} \qty{subcommand}$^?$ \qty{options}$^?$ \qty{facts\_override}$^?$ \qty{num}$^?$ \postw In the general syntax, the \qty{subcommand} may be any of the following: \begin{enum} \item[\labelitemi] \textbf{\textit{run} (the default):} Runs Sledgehammer on subgoal number \qty{num} (1 by default), with the given options and facts. \item[\labelitemi] \textbf{\textit{supported\_provers}:} Prints the list of automatic provers supported by Sledgehammer. See \S\ref{installation} and \S\ref{mode-of-operation} for more information on how to install automatic provers. \item[\labelitemi] \textbf{\textit{refresh\_tptp}:} Refreshes the list of remote ATPs available at System\-On\-TPTP \cite{sutcliffe-2000}. \end{enum} In addition, the following subcommands provide finer control over machine learning with MaSh: \begin{enum} \item[\labelitemi] \textbf{\textit{unlearn}:} Resets MaSh, erasing any persistent state. \item[\labelitemi] \textbf{\textit{learn\_isar}:} Invokes MaSh on the current theory to process all the available facts, learning from their Isabelle/Isar proofs. This happens automatically at Sledgehammer invocations if the \textit{learn} option (\S\ref{relevance-filter}) is enabled. \item[\labelitemi] \textbf{\textit{learn\_prover}:} Invokes MaSh on the current theory to process all the available facts, learning from proofs generated by automatic provers. 
The prover to use and its timeout can be set using the \textit{prover} (\S\ref{mode-of-operation}) and \textit{timeout} (\S\ref{timeouts}) options. It is recommended to perform learning using a first-order ATP (such as E, SPASS, or Vampire) as opposed to a higher-order ATP or an SMT solver.

\item[\labelitemi] \textbf{\textit{relearn\_isar}:} Same as \textit{unlearn} followed by \textit{learn\_isar}.

\item[\labelitemi] \textbf{\textit{relearn\_prover}:} Same as \textit{unlearn} followed by \textit{learn\_prover}.
\end{enum}

Sledgehammer's behavior can be influenced by various \qty{options}, which can be specified in brackets after the \textbf{sledgehammer} command. The \qty{options} are a list of key--value pairs of the form ``[$k_1 = v_1, \ldots, k_n = v_n$]''. For Boolean options, ``= \textit{true\/}'' is optional. For example:

\prew
\textbf{sledgehammer} [\textit{isar\_proofs}, \,\textit{timeout} = 120]
\postw

Default values can be set using \textbf{sledgehammer\_\allowbreak params}:

\prew
\textbf{sledgehammer\_params} \qty{options}
\postw

The supported options are described in \S\ref{option-reference}.

The \qty{facts\_override} argument lets you alter the set of facts that go through the relevance filter. It may be of the form ``(\qty{facts})'', where \qty{facts} is a space-separated list of Isabelle facts (theorems, local assumptions, etc.), in which case the relevance filter is bypassed and the given facts are used. It may also be of the form ``(\textit{add}:\ \qty{facts\/_{\mathrm{1}}})'', ``(\textit{del}:\ \qty{facts\/_{\mathrm{2}}})'', or ``(\textit{add}:\ \qty{facts\/_{\mathrm{1}}}\ \textit{del}:\ \qty{facts\/_{\mathrm{2}}})'', where the relevance filter is instructed to proceed as usual except that it should consider \qty{facts\/_{\mathrm{1}}} highly relevant and \qty{facts\/_{\mathrm{2}}} fully irrelevant.

If you use Isabelle/jEdit, Sledgehammer also provides an automatic mode that can be enabled via the ``Auto Sledgehammer'' option under ``Plugins > Plugin Options > Isabelle > General.'' For automatic runs, only the first prover set using \textit{provers} (\S\ref{mode-of-operation}) is considered (typically E), \textit{slice} (\S\ref{mode-of-operation}) is disabled, fewer facts are passed to the prover, \textit{fact\_filter} (\S\ref{relevance-filter}) is set to \textit{mepo}, \textit{strict} (\S\ref{problem-encoding}) is enabled, \textit{verbose} (\S\ref{output-format}) and \textit{debug} (\S\ref{output-format}) are disabled, and \textit{timeout} (\S\ref{timeouts}) is superseded by the ``Auto Time Limit'' option in jEdit. Sledgehammer's output is also more concise.

\subsection{Metis}
\label{metis}

The \textit{metis} proof method has the syntax

\prew
\textbf{\textit{metis}}~(\qty{options})${}^?$~\qty{facts}${}^?$
\postw

where \qty{facts} is a list of arbitrary facts and \qty{options} is a comma-separated list consisting of at most one $\lambda$ translation scheme specification with the same semantics as Sledgehammer's \textit{lam\_trans} option (\S\ref{problem-encoding}) and at most one type encoding specification with the same semantics as Sledgehammer's \textit{type\_enc} option (\S\ref{problem-encoding}).
%
The supported $\lambda$ translation schemes are \textit{hide\_lams}, \textit{lifting}, and \textit{combs} (the default).
%
All the untyped type encodings listed in \S\ref{problem-encoding} are supported. For convenience, the following aliases are provided:

\begin{enum}
\item[\labelitemi] \textbf{\textit{full\_types}:} Alias for \textit{poly\_guards\_query}.
\item[\labelitemi] \textbf{\textit{partial\_types}:} Alias for \textit{poly\_args}. \item[\labelitemi] \textbf{\textit{no\_types}:} Alias for \textit{erased}. \end{enum} \section{Option Reference} \label{option-reference} \def\defl{\{} \def\defr{\}} \def\flushitem#1{\item[]\noindent\kern-\leftmargin \textbf{#1}} \def\optrueonly#1{\flushitem{\textit{#1} $\bigl[$= \textit{true}$\bigr]$\enskip}\nopagebreak\\[\parskip]} \def\optrue#1#2{\flushitem{\textit{#1} $\bigl[$= \qtybf{bool}$\bigr]$\enskip \defl\textit{true}\defr\hfill (neg.: \textit{#2})}\nopagebreak\\[\parskip]} \def\opfalse#1#2{\flushitem{\textit{#1} $\bigl[$= \qtybf{bool}$\bigr]$\enskip \defl\textit{false}\defr\hfill (neg.: \textit{#2})}\nopagebreak\\[\parskip]} \def\opsmart#1#2{\flushitem{\textit{#1} $\bigl[$= \qtybf{smart\_bool}$\bigr]$\enskip \defl\textit{smart}\defr\hfill (neg.: \textit{#2})}\nopagebreak\\[\parskip]} \def\opsmartx#1#2{\flushitem{\textit{#1} $\bigl[$= \qtybf{smart\_bool}$\bigr]$\enskip \defl\textit{smart}\defr\\\hbox{}\hfill (neg.: \textit{#2})}\nopagebreak\\[\parskip]} \def\opnodefault#1#2{\flushitem{\textit{#1} = \qtybf{#2}} \nopagebreak\\[\parskip]} \def\opnodefaultbrk#1#2{\flushitem{$\bigl[$\textit{#1} =$\bigr]$ \qtybf{#2}} \nopagebreak\\[\parskip]} \def\opdefault#1#2#3{\flushitem{\textit{#1} = \qtybf{#2}\enskip \defl\textit{#3}\defr} \nopagebreak\\[\parskip]} \def\oparg#1#2#3{\flushitem{\textit{#1} \qtybf{#2} = \qtybf{#3}} \nopagebreak\\[\parskip]} \def\opargbool#1#2#3{\flushitem{\textit{#1} \qtybf{#2} $\bigl[$= \qtybf{bool}$\bigr]$\hfill (neg.: \textit{#3})}\nopagebreak\\[\parskip]} \def\opargboolorsmart#1#2#3{\flushitem{\textit{#1} \qtybf{#2} $\bigl[$= \qtybf{smart\_bool}$\bigr]$\hfill (neg.: \textit{#3})}\nopagebreak\\[\parskip]} Sledgehammer's options are categorized as follows:\ mode of operation (\S\ref{mode-of-operation}), problem encoding (\S\ref{problem-encoding}), relevance filter (\S\ref{relevance-filter}), output format (\S\ref{output-format}), regression testing (\S\ref{regression-testing}), and timeouts (\S\ref{timeouts}). The descriptions below refer to the following syntactic quantities: \begin{enum} \item[\labelitemi] \qtybf{string}: A string. \item[\labelitemi] \qtybf{bool\/}: \textit{true} or \textit{false}. \item[\labelitemi] \qtybf{smart\_bool\/}: \textit{true}, \textit{false}, or \textit{smart}. \item[\labelitemi] \qtybf{int\/}: An integer. \item[\labelitemi] \qtybf{float}: A floating-point number (e.g., 2.5 or 60) expressing a number of seconds. \item[\labelitemi] \qtybf{float\_pair\/}: A pair of floating-point numbers (e.g., 0.6 0.95). \item[\labelitemi] \qtybf{smart\_int\/}: An integer or \textit{smart}. \end{enum} Default values are indicated in curly brackets (\textrm{\{\}}). Boolean options have a negative counterpart (e.g., \textit{minimize} vs.\ \textit{dont\_minimize}). When setting Boolean options or their negative counterparts, ``= \textit{true\/}'' may be omitted. \subsection{Mode of Operation} \label{mode-of-operation} \begin{enum} \opnodefaultbrk{provers}{string} Specifies the automatic provers to use as a space-separated list (e.g., ``\textit{cvc4}~\textit{e}~\textit{spass}~\textit{vampire\/}''). Provers can be run locally or remotely; see \S\ref{installation} for installation instructions. The following local provers are supported: \begin{sloppy} \begin{enum} \item[\labelitemi] \textbf{\textit{agsyhol}:} agsyHOL is an automatic higher-order prover developed by Fredrik Lindblad \cite{agsyHOL}. 
To use agsyHOL, set the environment variable \texttt{AGSYHOL\_HOME} to the directory that contains the \texttt{agsyHOL} executable. Sledgehammer has been tested with version 1.0. \item[\labelitemi] \textbf{\textit{alt\_ergo}:} Alt-Ergo is a polymorphic ATP developed by Bobot et al.\ \cite{alt-ergo}. It supports the TPTP polymorphic typed first-order format (TF1) via Why3 \cite{why3}. To use Alt-Ergo, set the environment variable \texttt{WHY3\_HOME} to the directory that contains the \texttt{why3} executable. Sledgehammer requires Alt-Ergo 0.95.2 and Why3 0.83. \item[\labelitemi] \textbf{\textit{cvc3}:} CVC3 is an SMT solver developed by Clark Barrett, Cesare Tinelli, and their colleagues \cite{cvc3}. To use CVC3, set the environment variable \texttt{CVC3\_SOLVER} to the complete path of the executable, including the file name, or install the prebuilt CVC3 package from \download. Sledgehammer has been tested with versions 2.2 and 2.4.1. \item[\labelitemi] \textbf{\textit{cvc4}:} CVC4 \cite{cvc4} is the successor to CVC3. To use CVC4, set the environment variable \texttt{CVC4\_SOLVER} to the complete path of the executable, including the file name, or install the prebuilt CVC4 package from \download. Sledgehammer has been tested with version 1.5-prerelease. \item[\labelitemi] \textbf{\textit{e}:} E is a first-order resolution prover developed by Stephan Schulz \cite{schulz-2019}. To use E, set the environment variable \texttt{E\_HOME} to the directory that contains the \texttt{eproof} executable and \texttt{E\_VERSION} to the version number (e.g., ``1.8''), or install the prebuilt E package from \download. Sledgehammer has been tested with versions 1.6 to 1.8. \item[\labelitemi] \textbf{\textit{iprover}:} iProver is a pure instantiation-based prover developed by Konstantin Korovin \cite{korovin-2009}. To use iProver, set the environment variable \texttt{IPROVER\_HOME} to the directory that contains the \texttt{iproveropt} executable. Sledgehammer has been tested with version 2.8. iProver depends on E to clausify problems, so make sure that E is installed as well. \item[\labelitemi] \textbf{\textit{leo2}:} LEO-II is an automatic higher-order prover developed by Christoph Benzm\"uller et al.\ \cite{leo2}, with support for the TPTP typed higher-order syntax (TH0). To use LEO-II, set the environment variable \texttt{LEO2\_HOME} to the directory that contains the \texttt{leo} executable. Sledgehammer has been tested with version 1.3.4. \item[\labelitemi] \textbf{\textit{leo3}:} Leo-III is an automatic higher-order prover developed by Alexander Steen, Max Wisniewski, Christoph Benzm\"uller et al.\ \cite{leo3}, with support for the TPTP typed higher-order syntax (TH0). To use Leo-III, set the environment variable \texttt{LEO3\_HOME} to the directory that contains the \texttt{leo3} executable. Sledgehammer has been tested with version 1.1. \item[\labelitemi] \textbf{\textit{satallax}:} Satallax is an automatic higher-order prover developed by Chad Brown et al.\ \cite{satallax}, with support for the TPTP typed higher-order syntax (TH0). To use Satallax, set the environment variable \texttt{SATALLAX\_HOME} to the directory that contains the \texttt{satallax} executable. Sledgehammer has been tested with version 2.2. \item[\labelitemi] \textbf{\textit{spass}:} SPASS is a first-order resolution prover developed by Christoph Weidenbach et al.\ \cite{weidenbach-et-al-2009}. 
To use SPASS, set the environment variable \texttt{SPASS\_HOME} to the directory that contains the \texttt{SPASS} executable and \texttt{SPASS\_VERSION} to the version number (e.g., ``3.8ds''), or install the prebuilt SPASS package from \download. Sledgehammer has been tested with version 3.8ds. \item[\labelitemi] \textbf{\textit{vampire}:} Vampire is a first-order resolution prover developed by Andrei Voronkov and his colleagues \cite{riazanov-voronkov-2002}. To use Vampire, set the environment variable \texttt{VAMPIRE\_HOME} to the directory that contains the \texttt{vampire} executable and \texttt{VAMPIRE\_VERSION} to the version number (e.g., ``4.2.2''). Sledgehammer has been tested with versions 1.8 to 4.2.2 (in the post-2010 numbering scheme). \item[\labelitemi] \textbf{\textit{verit}:} veriT \cite{bouton-et-al-2009} is an SMT solver developed by David D\'eharbe, Pascal Fontaine, and their colleagues. It is designed to produce detailed proofs for reconstruction in proof assistants. To use veriT, set the environment variable \texttt{VERIT\_SOLVER} to the complete path of the executable, including the file name. Sledgehammer has been tested with version 2020.10-rmx. \item[\labelitemi] \textbf{\textit{z3}:} Z3 is an SMT solver developed at Microsoft Research \cite{de-moura-2008}. To use Z3, set the environment variable \texttt{Z3\_SOLVER} to the complete path of the executable, including the file name. Sledgehammer has been tested with a pre-release version of 4.4.0. \item[\labelitemi] \textbf{\textit{z3\_tptp}:} This version of Z3 pretends to be an ATP, exploiting Z3's support for the TPTP typed first-order format (TF0). It is included for experimental purposes. Sledgehammer has been tested with version 4.3.1. To use it, set the environment variable \texttt{Z3\_TPTP\_HOME} to the directory that contains the \texttt{z3\_tptp} executable. \item[\labelitemi] \textbf{\textit{zipperposition}:} Zipperposition \cite{cruanes-2014} is a higher-order superposition prover developed by Simon Cruanes, Petar Vukmirovi\'c, and colleagues. To use Zipperposition, set the environment variable \texttt{ZIPPERPOSITION\_HOME} to the directory that contains the \texttt{zipperposition} executable and \texttt{ZIPPERPOSITION\_VERSION} to the version number (e.g., ``2.0.1''). Sledgehammer has been tested with version 2.0.1. \end{enum} \end{sloppy} Moreover, the following remote provers are supported: \begin{enum} \item[\labelitemi] \textbf{\textit{remote\_agsyhol}:} The remote version of agsyHOL runs on Geoff Sutcliffe's Miami servers \cite{sutcliffe-2000}. \item[\labelitemi] \textbf{\textit{remote\_alt\_ergo}:} The remote version of Alt-Ergo runs on Geoff Sutcliffe's Miami servers \cite{sutcliffe-2000}. \item[\labelitemi] \textbf{\textit{remote\_e}:} The remote version of E runs on Geoff Sutcliffe's Miami servers \cite{sutcliffe-2000}. \item[\labelitemi] \textbf{\textit{remote\_iprover}:} The remote version of iProver runs on Geoff Sutcliffe's Miami servers \cite{sutcliffe-2000}. \item[\labelitemi] \textbf{\textit{remote\_leo2}:} The remote version of LEO-II runs on Geoff Sutcliffe's Miami servers \cite{sutcliffe-2000}. \item[\labelitemi] \textbf{\textit{remote\_leo3}:} The remote version of Leo-III runs on Geoff Sutcliffe's Miami servers \cite{sutcliffe-2000}. \item[\labelitemi] \textbf{\textit{remote\_vampire}:} The remote version of Vampire runs on Geoff Sutcliffe's Miami servers. 
\item[\labelitemi] \textbf{\textit{remote\_waldmeister}:} Waldmeister is a unit equality prover developed by Hillenbrand et al.\ \cite{waldmeister}. It can be used to prove universally quantified equations using unconditional equations, corresponding to the TPTP CNF UEQ division. The remote version of Waldmeister runs on Geoff Sutcliffe's Miami servers.

\item[\labelitemi] \textbf{\textit{remote\_zipperposition}:} The remote version of Zipperposition runs on Geoff Sutcliffe's Miami servers.
\end{enum}

By default, Sledgehammer runs a subset of CVC4, E, SPASS, Vampire, veriT, and Z3 in parallel, either locally or remotely---depending on the number of processor cores available and on which provers are actually installed. It is generally desirable to run several provers in parallel.

\opnodefault{prover}{string}
Alias for \textit{provers}.

\optrue{slice}{dont\_slice}
Specifies whether the time allocated to a prover should be sliced into several segments, each of which has its own set of possibly prover-dependent options. For SPASS and Vampire, the first slice tries the fast but incomplete set-of-support (SOS) strategy, whereas the second slice runs without it. For E, up to three slices are tried, with different weighted search strategies and numbers of facts. For SMT solvers, several slices are tried with the same options each time but fewer and fewer facts. According to benchmarks with a timeout of 30 seconds, slicing is a valuable optimization, and you should probably leave it enabled unless you are conducting experiments.

\nopagebreak
{\small See also \textit{verbose} (\S\ref{output-format}).}

\optrue{minimize}{dont\_minimize}
Specifies whether the proof minimization tool should be invoked automatically after proof search.

\nopagebreak
{\small See also \textit{preplay\_timeout} (\S\ref{timeouts}) and \textit{dont\_preplay} (\S\ref{timeouts}).}

\opfalse{spy}{dont\_spy}
Specifies whether Sledgehammer should record statistics in \texttt{\$ISA\-BELLE\_\allowbreak HOME\_\allowbreak USER/\allowbreak spy\_\allowbreak sledgehammer}. These statistics can be useful to the developers of Sledgehammer. If you are willing to have your interactions recorded in the name of science, please enable this feature and send the statistics file every now and then to the author of this manual (\authoremail). To change the default value of this option globally, set the environment variable \texttt{SLEDGEHAMMER\_SPY} to \textit{yes}.

\nopagebreak
{\small See also \textit{debug} (\S\ref{output-format}).}

\opfalse{overlord}{no\_overlord}
Specifies whether Sledgehammer should put its temporary files in \texttt{\$ISA\-BELLE\_\allowbreak HOME\_\allowbreak USER}, which is useful for debugging Sledgehammer but also unsafe if several instances of the tool are run simultaneously. The files are identified by the prefixes \texttt{prob\_} and \texttt{mash\_}; you may safely remove them after Sledgehammer has run.

\textbf{Warning:} This option is not thread-safe. Use at your own risk.

\nopagebreak
{\small See also \textit{debug} (\S\ref{output-format}).}
\end{enum}

\subsection{Relevance Filter}
\label{relevance-filter}

\begin{enum}
\opdefault{fact\_filter}{string}{smart}
Specifies the relevance filter to use. The following filters are available:

\begin{enum}
\item[\labelitemi] \textbf{\textit{mepo}:} The traditional memoryless MePo relevance filter.

\item[\labelitemi] \textbf{\textit{mash}:} The MaSh machine learner.
Three learning algorithms are provided:

\begin{enum}
\item[\labelitemi] \textbf{\textit{nb}} is an implementation of naive Bayes.

\item[\labelitemi] \textbf{\textit{knn}} is an implementation of $k$-nearest neighbors.

\item[\labelitemi] \textbf{\textit{nb\_knn}} (also called \textbf{\textit{yes}} and \textbf{\textit{sml}}) is a combination of naive Bayes and $k$-nearest neighbors.
\end{enum}

In addition, the special value \textit{none} is used to disable machine learning by default (cf.\ \textit{smart} below).

The default algorithm is \textit{nb\_knn}. The algorithm can be selected by setting the ``MaSh'' option under ``Plugins > Plugin Options > Isabelle > General'' in Isabelle/jEdit. Persistent data for these algorithms is stored in the directory \texttt{\$ISABELLE\_\allowbreak HOME\_\allowbreak USER/\allowbreak mash}.

\item[\labelitemi] \textbf{\textit{mesh}:} The MeSh filter, which combines the rankings from MePo and MaSh.

\item[\labelitemi] \textbf{\textit{smart}:} A combination of MePo, MaSh, and MeSh. If the learning algorithm is set to \textit{none}, \textit{smart} behaves like MePo.
\end{enum}

\opdefault{max\_facts}{smart\_int}{smart}
Specifies the maximum number of facts that may be returned by the relevance filter. If the option is set to \textit{smart} (the default), it effectively takes a value that was empirically found to be appropriate for the prover. Typical values lie between 50 and 1000.

\opdefault{fact\_thresholds}{float\_pair}{\upshape 0.45~0.85}
Specifies the thresholds above which facts are considered relevant by the relevance filter. The first threshold is used for the first iteration of the relevance filter and the second threshold is used for the last iteration (if it is reached). The effective threshold is quadratically interpolated for the other iterations. Each threshold ranges from 0 to 1, where 0 means that all theorems are relevant and 1 means that only theorems that refer to previously seen constants are relevant.

\optrue{learn}{dont\_learn}
Specifies whether Sledgehammer invocations should run MaSh to learn the available theories (and hence provide more accurate results). Learning takes place only if MaSh is enabled.

\opdefault{max\_new\_mono\_instances}{int}{smart}
Specifies the maximum number of monomorphic instances to generate beyond \textit{max\_facts}. The higher this limit is, the more monomorphic instances are potentially generated. Whether monomorphization takes place depends on the type encoding used. If the option is set to \textit{smart} (the default), it takes a value that was empirically found to be appropriate for the prover. For most provers, this value is 100.

\nopagebreak
{\small See also \textit{type\_enc} (\S\ref{problem-encoding}).}

\opdefault{max\_mono\_iters}{int}{smart}
Specifies the maximum number of iterations for the monomorphization fixpoint construction. The higher this limit is, the more monomorphic instances are potentially generated. Whether monomorphization takes place depends on the type encoding used. If the option is set to \textit{smart} (the default), it takes a value that was empirically found to be appropriate for the prover. For most provers, this value is 3.

\nopagebreak
{\small See also \textit{type\_enc} (\S\ref{problem-encoding}).}
\end{enum}

\subsection{Problem Encoding}
\label{problem-encoding}

\newcommand\comb[1]{\const{#1}}

\begin{enum}
\opdefault{lam\_trans}{string}{smart}
Specifies the $\lambda$ translation scheme to use in ATP problems.
The supported translation schemes are listed below:

\begin{enum}
\item[\labelitemi] \textbf{\textit{hide\_lams}:} Hide the $\lambda$-abstractions by replacing them by unspecified fresh constants, effectively disabling all reasoning under $\lambda$-abstractions.

\item[\labelitemi] \textbf{\textit{lifting}:} Introduce a new supercombinator \const{c} for each cluster of $n$~$\lambda$-abstractions, defined using an equation $\const{c}~x_1~\ldots~x_n = t$ ($\lambda$-lifting).

\item[\labelitemi] \textbf{\textit{combs}:} Rewrite lambdas to the Curry combinators (\comb{I}, \comb{K}, \comb{S}, \comb{B}, \comb{C}). Combinators enable the ATPs to synthesize $\lambda$-terms but tend to yield bulkier formulas than $\lambda$-lifting: The translation is quadratic in the worst case, and the equational definitions of the combinators are very prolific in the context of resolution.

\item[\labelitemi] \textbf{\textit{combs\_and\_lifting}:} Introduce a new supercombinator \const{c} for each cluster of $\lambda$-abstractions and characterize it both using a lifted equation $\const{c}~x_1~\ldots~x_n = t$ and via Curry combinators.

\item[\labelitemi] \textbf{\textit{combs\_or\_lifting}:} For each cluster of $\lambda$-abstractions, heuristically choose between $\lambda$-lifting and Curry combinators.

\item[\labelitemi] \textbf{\textit{keep\_lams}:} Keep the $\lambda$-abstractions in the generated problems. This is available only with provers that support the TH0 syntax.

\item[\labelitemi] \textbf{\textit{smart}:} The actual translation scheme used depends on the ATP and should be the most efficient scheme for that ATP.
\end{enum}

For SMT solvers, the $\lambda$ translation scheme is always \textit{lifting}, irrespective of the value of this option.

\opsmartx{uncurried\_aliases}{no\_uncurried\_aliases}
Specifies whether fresh function symbols should be generated as aliases for applications of curried functions in ATP problems.

\opdefault{type\_enc}{string}{smart}
Specifies the type encoding to use in ATP problems. Some of the type encodings are unsound, meaning that they can give rise to spurious proofs (unreconstructible using \textit{metis}). The type encodings are listed below, with an indication of their soundness in parentheses. An asterisk (*) indicates that the encoding is slightly incomplete for reconstruction with \textit{metis}, unless the \textit{strict} option (described below) is enabled.

\begin{enum}
\item[\labelitemi] \textbf{\textit{erased} (unsound):} No type information is supplied to the ATP, not even to resolve overloading. Types are simply erased.

\item[\labelitemi] \textbf{\textit{poly\_guards} (sound):} Types are encoded using a predicate \const{g}$(\tau, t)$ that guards bound variables. Constants are annotated with their types, supplied as extra arguments, to resolve overloading.

\item[\labelitemi] \textbf{\textit{poly\_tags} (sound):} Each term and subterm is tagged with its type using a function $\const{t\/}(\tau, t)$.

\item[\labelitemi] \textbf{\textit{poly\_args} (unsound):} Like for \textit{poly\_guards}, constants are annotated with their types to resolve overloading, but otherwise no type information is encoded. This is the default encoding used by the \textit{metis} proof method.
\item[\labelitemi] \textbf{%
\textit{raw\_mono\_guards}, \textit{raw\_mono\_tags} (sound); \\
\textit{raw\_mono\_args} (unsound):} \\
Similar to \textit{poly\_guards}, \textit{poly\_tags}, and \textit{poly\_args}, respectively, but the problem is additionally monomorphized, meaning that type variables are instantiated with heuristically chosen ground types. Monomorphization can simplify reasoning but also leads to larger fact bases, which can slow down the ATPs.

\item[\labelitemi] \textbf{%
\textit{mono\_guards}, \textit{mono\_tags} (sound); \textit{mono\_args} \\
(unsound):} \\
Similar to \textit{raw\_mono\_guards}, \textit{raw\_mono\_tags}, and \textit{raw\_mono\_\allowbreak args}, respectively, but types are mangled in constant names instead of being supplied as ground term arguments. The binary predicate $\const{g}(\tau, t)$ becomes a unary predicate $\const{g\_}\tau(t)$, and the binary function $\const{t}(\tau, t)$ becomes a unary function $\const{t\_}\tau(t)$.

\item[\labelitemi] \textbf{\textit{mono\_native} (sound):} Exploits native first-order types if the prover supports the TF0, TF1, TH0, or TH1 syntax; otherwise, falls back on \textit{mono\_guards}. The problem is monomorphized.

\item[\labelitemi] \textbf{\textit{mono\_native\_fool} (sound):} Exploits native first-order types, including Booleans, if the prover supports the TFX0, TFX1, TH0, or TH1 syntax; otherwise, falls back on \textit{mono\_native}. The problem is monomorphized.

\item[\labelitemi] \textbf{\textit{mono\_native\_higher}, \textit{mono\_native\_higher\_fool} \\ (sound):} Exploits native higher-order types, including Booleans if ending with ``\textit{\_fool}'', if the prover supports the TH0 syntax; otherwise, falls back on \textit{mono\_native} or \textit{mono\_native\_fool}. The problem is monomorphized.

\item[\labelitemi] \textbf{\textit{poly\_native}, \textit{poly\_native\_fool}, \textit{poly\_native\_higher}, \\ \textit{poly\_native\_higher\_fool} (sound):} Exploits native first-order polymorphic types if the prover supports the TF1, TFX1, or TH1 syntax; otherwise, falls back on \textit{mono\_native}, \textit{mono\_native\_fool}, \textit{mono\_native\_higher}, or \textit{mono\_native\_higher\_fool}.

\item[\labelitemi] \textbf{%
\textit{poly\_guards}?, \textit{poly\_tags}?, \textit{raw\_mono\_guards}?, \\
\textit{raw\_mono\_tags}?, \textit{mono\_guards}?, \textit{mono\_tags}?, \\
\textit{mono\_native}? (sound*):} \\
The type encodings \textit{poly\_guards}, \textit{poly\_tags}, \textit{raw\_mono\_guards}, \textit{raw\_mono\_tags}, \textit{mono\_guards}, \textit{mono\_tags}, and \textit{mono\_native} are fully typed and sound. For each of these, Sledgehammer also provides a lighter variant identified by a question mark (`\hbox{?}')\ that detects and erases monotonic types, notably infinite types. (For \textit{mono\_native}, the types are not actually erased but rather replaced by a shared uniform type of individuals.) As argument to the \textit{metis} proof method, the question mark is replaced by a \hbox{``\textit{\_query\/}''} suffix.

\item[\labelitemi] \textbf{%
\textit{poly\_guards}??, \textit{poly\_tags}??, \textit{raw\_mono\_guards}??, \\
\textit{raw\_mono\_tags}??, \textit{mono\_guards}??, \textit{mono\_tags}?? \\
(sound*):} \\
Even lighter versions of the `\hbox{?}' encodings. As argument to the \textit{metis} proof method, the `\hbox{??}' suffix is replaced by \hbox{``\textit{\_query\_query\/}''}.
\item[\labelitemi] \textbf{%
\textit{poly\_guards}@, \textit{poly\_tags}@, \textit{raw\_mono\_guards}@, \\
\textit{raw\_mono\_tags}@ (sound*):} \\
Alternative versions of the `\hbox{??}' encodings. As argument to the \textit{metis} proof method, the `\hbox{@}' suffix is replaced by \hbox{``\textit{\_at\/}''}.

\item[\labelitemi] \textbf{\textit{poly\_args}?, \textit{raw\_mono\_args}? (unsound):} \\
Lighter versions of \textit{poly\_args} and \textit{raw\_mono\_args}.

\item[\labelitemi] \textbf{\textit{smart}:} The actual encoding used depends on the ATP and should be the most efficient sound encoding for that ATP.
\end{enum}

For SMT solvers, the type encoding is always \textit{mono\_native}, irrespective of the value of this option.

\nopagebreak
{\small See also \textit{max\_new\_mono\_instances} (\S\ref{relevance-filter}) and \textit{max\_mono\_iters} (\S\ref{relevance-filter}).}

\opfalse{strict}{non\_strict}
Specifies whether Sledgehammer should run in its strict mode. In that mode, sound type encodings marked with an asterisk (*) above are made complete for reconstruction with \textit{metis}, at the cost of some clutter in the generated problems. This option has no effect if \textit{type\_enc} is deliberately set to an unsound encoding.
\end{enum}

\subsection{Output Format}
\label{output-format}

\begin{enum}
\opfalse{verbose}{quiet}
Specifies whether the \textbf{sledgehammer} command should explain what it does.

\opfalse{debug}{no\_debug}
Specifies whether Sledgehammer should display additional debugging information beyond what \textit{verbose} already displays. Enabling \textit{debug} also enables \textit{verbose} behind the scenes.

\nopagebreak
{\small See also \textit{spy} (\S\ref{mode-of-operation}) and \textit{overlord} (\S\ref{mode-of-operation}).}

\opsmart{isar\_proofs}{no\_isar\_proofs}
Specifies whether Isar proofs should be output in addition to one-line proofs. The construction of Isar proofs is still experimental and may sometimes fail; however, when they succeed, they are usually faster and more intelligible than one-line proofs. If the option is set to \textit{smart} (the default), Isar proofs are only generated when no working one-line proof is available.

\opdefault{compress}{int}{smart}
Specifies the granularity of the generated Isar proofs if \textit{isar\_proofs} is explicitly enabled. A value of $n$ indicates that each Isar proof step should correspond to a group of up to $n$ consecutive proof steps in the ATP proof. If the option is set to \textit{smart} (the default), the compression factor is 10 if the \textit{isar\_proofs} option is explicitly enabled; otherwise, it is $\infty$.

\optrueonly{dont\_compress}
Alias for ``\textit{compress} = 1''.

\optrue{try0}{dont\_try0}
Specifies whether standard proof methods such as \textit{auto} and \textit{blast} should be tried as alternatives to \textit{metis} in Isar proofs. The collection of methods is roughly the same as for the \textbf{try0} command.

\optrue{smt\_proofs}{no\_smt\_proofs}
Specifies whether the \textit{smt} proof method should be tried in addition to Isabelle's built-in proof methods.
\end{enum}

\subsection{Regression Testing}
\label{regression-testing}

\begin{enum}
\opnodefault{expect}{string}
Specifies the expected outcome, which must be one of the following:

\begin{enum}
\item[\labelitemi] \textbf{\textit{some}:} Sledgehammer found a proof.

\item[\labelitemi] \textbf{\textit{none}:} Sledgehammer found no proof.

\item[\labelitemi] \textbf{\textit{timeout}:} Sledgehammer timed out.
\item[\labelitemi] \textbf{\textit{unknown}:} Sledgehammer encountered some problem.
\end{enum}

Sledgehammer emits an error if the actual outcome differs from the expected outcome. This option is useful for regression testing.

\nopagebreak
{\small See also \textit{timeout} (\S\ref{timeouts}).}
\end{enum}

\subsection{Timeouts}
\label{timeouts}

\begin{enum}
\opdefault{timeout}{float}{\upshape 30}
Specifies the maximum number of seconds that the automatic provers should spend searching for a proof. This excludes problem preparation and is a soft limit.

\opdefault{preplay\_timeout}{float}{\upshape 1}
Specifies the maximum number of seconds that \textit{metis} or other proof methods should spend trying to ``preplay'' the found proof. If this option is set to 0, no preplaying takes place, and no timing information is displayed next to the suggested proof method calls.

\nopagebreak
{\small See also \textit{minimize} (\S\ref{mode-of-operation}).}

\optrueonly{dont\_preplay}
Alias for ``\textit{preplay\_timeout} = 0''.
\end{enum}

\section{Mirabelle Testing Tool}
\label{mirabelle}

The \texttt{isabelle mirabelle} tool executes Sledgehammer or other advisory tools (e.g., Nitpick) or tactics (e.g., \textit{auto}) on all subgoals emerging in a theory. It is typically used to measure the success rate of a proof tool on some benchmark. Its command-line usage is as follows:

{\small
\begin{verbatim}
isabelle mirabelle [OPTIONS] ACTIONS FILES

  Options are:
    -L LOGIC     parent logic to use (default HOL)
    -O DIR       output directory for test data (default None)
    -S FILE      user-provided setup file (no actions required)
    -T THEORY    parent theory to use (default Main)
    -d DIR       include session directory
    -q           be quiet (suppress output of Isabelle process)
    -t TIMEOUT   timeout for each action in seconds (default 30)

  Apply the given actions at all proof steps in the given theory files.
\end{verbatim}
}

Option \texttt{-L LOGIC} specifies the parent session to use. This is often a logic (e.g., \texttt{Pure}, \texttt{HOL}) but may be any session (e.g., from the AFP). Using multiple sessions is not supported. If a theory \texttt{A} needs to import theories from multiple sessions, this limitation can be overcome as follows:

\begin{enumerate}
\item Define a custom session \texttt{S} with a single theory \texttt{B}.
\item Move all imports from \texttt{A} to \texttt{B}.
\item Build the heap image of \texttt{S}.
\item Import \texttt{S.B} from theory \texttt{A}.
\item Execute Mirabelle with \texttt{S} as parent logic (i.e., with \texttt{-L S}).
\end{enumerate}

Option \texttt{-O DIR} specifies the output directory, which is created if needed. In this directory, one log file per theory records the position of each tested subgoal and the result of executing the action.

Option \texttt{-t TIMEOUT} specifies a generic timeout that the actions may interpret differently.

More specific documentation about the \texttt{ACTIONS} and \texttt{FILES} parameters and their corresponding options can be found in the Isabelle tool usage by entering \texttt{isabelle mirabelle -?} on the command line.

\subsection{Example of Benchmarking Sledgehammer}

\begin{verbatim}
isabelle mirabelle -O output/ \
  sledgehammer[prover=e,prover_timeout=10] Huffman.thy
\end{verbatim}

This command specifies Sledgehammer as the action, using the E prover with a timeout of 10 seconds. The results are written to \texttt{output/Huffman.log}.
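The command-line options compose freely with action parameters. For example, the following variant (the theory name \texttt{Foo.thy} and its parent session are purely illustrative) runs Sledgehammer with Vampire on a theory whose imports come from \texttt{HOL-Library}:

\begin{verbatim}
isabelle mirabelle -L HOL-Library -O output/ -t 60 \
  sledgehammer[prover=vampire,prover_timeout=30] Foo.thy
\end{verbatim}

Here \texttt{-t} bounds each action invocation as a whole, while \texttt{prover\_timeout} bounds the individual prover run within it.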
\subsection{Example of Benchmarking Another Tool}

\begin{verbatim}
isabelle mirabelle -O output/ -t 10 try0 Huffman.thy
\end{verbatim}

This command specifies the \textbf{try0} command as the action, with a timeout of 10 seconds. The results are written to \texttt{output/Huffman.log}.

\subsection{Example of Generating TPTP Files}

\begin{verbatim}
isabelle mirabelle -O output/ \
  sledgehammer[prover=e,prover_timeout=1,keep=/tptp/files/] \
  Huffman.thy
\end{verbatim}

This command generates TPTP files using Sledgehammer. Since the file is generated at the very beginning of every Sledgehammer invocation, a timeout of one second, which makes the prover fail faster, speeds up processing of the theory. The results are written to the specified directory (\texttt{/tptp/files/}), which must exist beforehand. A TPTP file is generated for each subgoal.

\let\em=\sl
\bibliography{manual}{}
\bibliographystyle{abbrv}

\end{document}
diff --git a/src/Doc/Sugar/document/build b/src/Doc/Sugar/document/build
deleted file mode 100755
--- a/src/Doc/Sugar/document/build
+++ /dev/null
@@ -1,9 +0,0 @@
-#!/usr/bin/env bash
-
-set -e
-
-FORMAT="$1"
-VARIANT="$2"
-
-"$ISABELLE_HOME/src/Doc/prepare_document" "$FORMAT"
-
diff --git a/src/Doc/System/Environment.thy b/src/Doc/System/Environment.thy
--- a/src/Doc/System/Environment.thy
+++ b/src/Doc/System/Environment.thy
@@ -1,501 +1,502 @@
(*:maxLineLen=78:*)

theory Environment
imports Base
begin

chapter \The Isabelle system environment\

text \
  This manual describes Isabelle together with related tools as seen from a
  system-oriented view. See also the \<^emph>\Isabelle/Isar Reference Manual\
  @{cite "isabelle-isar-ref"} for the actual Isabelle input language and
  related concepts, and \<^emph>\The Isabelle/Isar Implementation Manual\
  @{cite "isabelle-implementation"} for the main concepts of the underlying
  implementation in Isabelle/ML.
\

section \Isabelle settings \label{sec:settings}\

text \
  Isabelle executables may depend on the \<^emph>\Isabelle settings\ within the
  process environment. This is a statically scoped collection of environment
  variables, such as @{setting ISABELLE_HOME}, @{setting ML_SYSTEM},
  @{setting ML_HOME}. These variables are \<^emph>\not\ intended to be set directly
  from the shell, but are provided by Isabelle \<^emph>\components\ via their
  \<^emph>\settings files\ as explained below.
\

subsection \Bootstrapping the environment \label{sec:boot}\

text \
  Isabelle executables need to be run within a proper settings environment.
  This is bootstrapped as described below, on the first invocation of one of
  the outer wrapper scripts (such as @{executable_ref isabelle}). This
  happens only once for each process tree, i.e.\ the environment is passed
  to subprocesses according to regular Unix conventions.

  \<^enum> The special variable @{setting_def ISABELLE_HOME} is determined
  automatically from the location of the binary that has been run.

  You should not try to set @{setting ISABELLE_HOME} manually. Also note
  that the Isabelle executables either have to be run from their original
  location in the distribution directory, or via the executable objects
  created by the @{tool install} tool. Symbolic links are admissible, but a
  plain copy of the \<^dir>\$ISABELLE_HOME/bin\ files will not work!

  \<^enum> The file \<^file>\$ISABELLE_HOME/etc/settings\ is run as a
  @{executable_ref bash} shell script with the auto-export option for
  variables enabled.

  This file holds a rather long list of shell variable assignments, thus
  providing the site-wide default settings.
The Isabelle distribution already contains a global settings file with
  sensible defaults for most variables. When installing the system, only a
  few of these may have to be adapted (probably @{setting ML_SYSTEM} etc.).

  \<^enum> The file \<^path>\$ISABELLE_HOME_USER/etc/settings\ (if it exists) is run in
  the same way as the site default settings. Note that the variable
  @{setting ISABELLE_HOME_USER} has already been set before --- usually to
  something like \<^verbatim>\$USER_HOME/.isabelle/Isabelle2021\.

  Thus individual users may override the site-wide defaults. Typically, a
  user settings file contains only a few lines, with some assignments that
  are actually changed. Never copy the central
  \<^file>\$ISABELLE_HOME/etc/settings\ file!

  Since settings files are regular GNU @{executable_def bash} scripts, one
  may use complex shell commands, such as \<^verbatim>\if\ or \<^verbatim>\case\ statements to
  set variables depending on the system architecture or other environment
  variables. Such advanced features should be added only with great care,
  though. In particular, external environment references should be kept at a
  minimum.

  \<^medskip> A few variables are somewhat special, e.g.\ @{setting_def
  ISABELLE_TOOL} is set automatically to the absolute path name of the
  @{executable isabelle} executable.

  \<^medskip> Note that the settings environment may be inspected with the
  @{tool getenv} tool. This might help to figure out the effect of complex
  settings scripts.
\

subsection \Common variables\

text \
  This is a reference of common Isabelle settings variables. Note that the
  list is somewhat open-ended. Third-party utilities or interfaces may add
  their own selection. Variables that are special in some sense are marked
  with \\<^sup>*\.

  \<^descr>[@{setting_def USER_HOME}\\<^sup>*\] Is the cross-platform user home
  directory. On Unix systems this is usually the same as @{setting HOME},
  but on Windows it is the regular home directory of the user, not the one
  within the Cygwin root file-system.\<^footnote>\Cygwin itself offers another choice
  whether its HOME should point to the \<^path>\/home\ directory tree or the
  Windows user home.\

  \<^descr>[@{setting_def ISABELLE_HOME}\\<^sup>*\] is the location of the top-level
  Isabelle distribution directory. This is automatically determined from the
  Isabelle executable that has been invoked. Do not attempt to set
  @{setting ISABELLE_HOME} yourself from the shell!

  \<^descr>[@{setting_def ISABELLE_HOME_USER}] is the user-specific counterpart of
  @{setting ISABELLE_HOME}. The default value is relative to
  \<^path>\$USER_HOME/.isabelle\; under rare circumstances this may be changed in
  the global setting file. Typically, the @{setting ISABELLE_HOME_USER}
  directory mimics @{setting ISABELLE_HOME} to some extent. In particular,
  site-wide defaults may be overridden by a private
  \<^verbatim>\$ISABELLE_HOME_USER/etc/settings\.

  \<^descr>[@{setting_def ISABELLE_PLATFORM_FAMILY}\\<^sup>*\] is automatically set to
  the general platform family (\<^verbatim>\linux\, \<^verbatim>\macos\, \<^verbatim>\windows\). Note that
  platform-dependent tools usually need to refer to the more specific
  identification according to @{setting ISABELLE_PLATFORM64}, @{setting
  ISABELLE_WINDOWS_PLATFORM64}, @{setting ISABELLE_APPLE_PLATFORM64}.

  \<^descr>[@{setting_def ISABELLE_PLATFORM64}\\<^sup>*\] indicates the standard Posix
  platform (\<^verbatim>\x86_64\, \<^verbatim>\arm64\), together with a symbolic name for the
  operating system (\<^verbatim>\linux\, \<^verbatim>\darwin\, \<^verbatim>\cygwin\).
\<^descr>[@{setting_def ISABELLE_WINDOWS_PLATFORM64}\\<^sup>*\, @{setting_def ISABELLE_WINDOWS_PLATFORM32}\\<^sup>*\] indicate the native Windows platform: both 64\,bit and 32\,bit executables are supported here.

In GNU bash scripts, a preference for native Windows platform variants may be specified like this (first 64 bit, second 32 bit):

@{verbatim [display] \"${ISABELLE_WINDOWS_PLATFORM64:-${ISABELLE_WINDOWS_PLATFORM32:- $ISABELLE_PLATFORM64}}"\}

\<^descr>[@{setting_def ISABELLE_APPLE_PLATFORM64}\\<^sup>*\] indicates the native Apple Silicon platform (\<^verbatim>\arm64-darwin\ if available), instead of Intel emulation via Rosetta (\<^verbatim>\ISABELLE_PLATFORM64=x86_64-darwin\).

\<^descr>[@{setting ISABELLE_TOOL}\\<^sup>*\] is automatically set to the full path name of the @{executable isabelle} executable.

\<^descr>[@{setting_def ISABELLE_IDENTIFIER}\\<^sup>*\] refers to the name of this Isabelle distribution, e.g.\ ``\<^verbatim>\Isabelle2021\''.

\<^descr>[@{setting_def ML_SYSTEM}, @{setting_def ML_HOME}, @{setting_def ML_OPTIONS}, @{setting_def ML_PLATFORM}, @{setting_def ML_IDENTIFIER}\\<^sup>*\] specify the underlying ML system to be used for Isabelle. There is only a fixed set of admissible @{setting ML_SYSTEM} names (see the \<^file>\$ISABELLE_HOME/etc/settings\ file of the distribution).

The actual compiler binary will be run from the directory @{setting ML_HOME}, with @{setting ML_OPTIONS} as first arguments on the command line. The optional @{setting ML_PLATFORM} may specify the binary format of ML heap images, which is useful for cross-platform installations. The value of @{setting ML_IDENTIFIER} is automatically obtained by composing the values of @{setting ML_SYSTEM}, @{setting ML_PLATFORM} and the Isabelle version values.

\<^descr>[@{setting_def ISABELLE_JDK_HOME}] points to a full JDK (Java Development Kit) installation with \<^verbatim>\javac\ and \<^verbatim>\jar\ executables. Note that conventional \<^verbatim>\JAVA_HOME\ points to the JRE (Java Runtime Environment), not the JDK.

\<^descr>[@{setting_def ISABELLE_JAVA_PLATFORM}] identifies the hardware and operating system platform for the Java installation of Isabelle. That is always the (native) 64 bit variant: \<^verbatim>\x86_64-linux\, \<^verbatim>\x86_64-darwin\, \<^verbatim>\x86_64-windows\.

\<^descr>[@{setting_def ISABELLE_BROWSER_INFO}] is the directory where HTML and PDF browser information is stored (see also \secref{sec:info}); its default is \<^path>\$ISABELLE_HOME_USER/browser_info\. For ``system build mode'' (see \secref{sec:tool-build}), @{setting_def ISABELLE_BROWSER_INFO_SYSTEM} is used instead; its default is \<^path>\$ISABELLE_HOME/browser_info\.

\<^descr>[@{setting_def ISABELLE_HEAPS}] is the directory where session heap images, log files, and build databases are stored; its default is \<^path>\$ISABELLE_HOME_USER/heaps\. If @{system_option system_heaps} is \<^verbatim>\true\, @{setting_def ISABELLE_HEAPS_SYSTEM} is used instead; its default is \<^path>\$ISABELLE_HOME/heaps\. See also \secref{sec:tool-build}.

\<^descr>[@{setting_def ISABELLE_LOGIC}] specifies the default logic to load if none is given explicitly by the user. The default value is \<^verbatim>\HOL\.

\<^descr>[@{setting_def ISABELLE_LINE_EDITOR}] specifies the line editor for the @{tool_ref console} interface.

- \<^descr>[@{setting_def ISABELLE_PDFLATEX}, @{setting_def ISABELLE_BIBTEX}] refer to - {\LaTeX} related tools for Isabelle document preparation (see also - \secref{sec:tool-latex}).
+ \<^descr>[@{setting_def ISABELLE_PDFLATEX}, @{setting_def ISABELLE_LUALATEX}, + @{setting_def ISABELLE_BIBTEX}, @{setting_def ISABELLE_MAKEINDEX}] refer to + {\LaTeX}-related tools for Isabelle document preparation (see also + \secref{sec:tool-document}). \<^descr>[@{setting_def ISABELLE_TOOLS}] is a colon separated list of directories that are scanned by @{executable isabelle} for external utility programs (see also \secref{sec:isabelle-tool}). \<^descr>[@{setting_def ISABELLE_DOCS}] is a colon separated list of directories with documentation files. \<^descr>[@{setting_def PDF_VIEWER}] specifies the program to be used for displaying \<^verbatim>\pdf\ files. \<^descr>[@{setting_def ISABELLE_TMP_PREFIX}\\<^sup>*\] is the prefix from which any running Isabelle ML process derives an individual directory for temporary files. \<^descr>[@{setting_def ISABELLE_TOOL_JAVA_OPTIONS}] is passed to the \<^verbatim>\java\ executable when running Isabelle tools (e.g.\ @{tool build}). This is occasionally helpful to provide more heap space, via additional options like \<^verbatim>\-Xms1g -Xmx4g\. \ subsection \Additional components \label{sec:components}\ text \ Any directory may be registered as an explicit \<^emph>\Isabelle component\. The general layout conventions are that of the main Isabelle distribution itself, and the following two files (both optional) have a special meaning: \<^item> \<^verbatim>\etc/settings\ holds additional settings that are initialized when bootstrapping the overall Isabelle environment, cf.\ \secref{sec:boot}. As usual, the content is interpreted as a GNU bash script. It may refer to the component's enclosing directory via the \<^verbatim>\COMPONENT\ shell variable. For example, the following setting allows to refer to files within the component later on, without having to hardwire absolute paths: @{verbatim [display] \MY_COMPONENT_HOME="$COMPONENT"\} Components can also add to existing Isabelle settings such as @{setting_def ISABELLE_TOOLS}, in order to provide component-specific tools that can be invoked by end-users. For example: @{verbatim [display] \ISABELLE_TOOLS="$ISABELLE_TOOLS:$COMPONENT/lib/Tools"\} \<^item> \<^verbatim>\etc/components\ holds a list of further sub-components of the same structure. The directory specifications given here can be either absolute (with leading \<^verbatim>\/\) or relative to the component's main directory. The root of component initialization is @{setting ISABELLE_HOME} itself. After initializing all of its sub-components recursively, @{setting ISABELLE_HOME_USER} is included in the same manner (if that directory exists). This allows to install private components via \<^path>\$ISABELLE_HOME_USER/etc/components\, although it is often more convenient to do that programmatically via the \<^bash_function>\init_component\ shell function in the \<^verbatim>\etc/settings\ script of \<^verbatim>\$ISABELLE_HOME_USER\ (or any other component directory). For example: @{verbatim [display] \init_component "$HOME/screwdriver-2.0"\} This is tolerant wrt.\ missing component directories, but might produce a warning. \<^medskip> More complex situations may be addressed by initializing components listed in a given catalog file, relatively to some base directory: @{verbatim [display] \init_components "$HOME/my_component_store" "some_catalog_file"\} The component directories listed in the catalog file are treated as relative to the given base directory. 
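Such a catalog file is plain text; assuming the usual line-by-line format, it might look like this (hypothetical component names):

@{verbatim [display] \screwdriver-2.0
hammer-1.5\}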
See also \secref{sec:tool-components} for some tool-support for resolving components that are formally initialized but not installed yet. \ section \The Isabelle tool wrapper \label{sec:isabelle-tool}\ text \ The main \<^emph>\Isabelle tool wrapper\ provides a generic startup environment for Isabelle-related utilities, user interfaces, add-on applications etc. Such tools automatically benefit from the settings mechanism (\secref{sec:settings}). Moreover, this is the standard way to invoke Isabelle/Scala functionality as a separate operating-system process. Isabelle command-line tools are run uniformly via a common wrapper --- @{executable_ref isabelle}: @{verbatim [display] \Usage: isabelle TOOL [ARGS ...] Start Isabelle TOOL with ARGS; pass "-?" for tool-specific help. Available tools: ...\} Tools may be implemented in Isabelle/Scala or as stand-alone executables (usually as GNU bash scripts). In the invocation of ``@{executable isabelle}~\tool\'', the named \tool\ is resolved as follows (and in the given order). \<^enum> An external tool found on the directories listed in the @{setting ISABELLE_TOOLS} settings variable (colon-separated list in standard POSIX notation). \<^enum> If a file ``\tool\\<^verbatim>\.scala\'' is found, the source needs to define some object that extends the class \<^verbatim>\Isabelle_Tool.Body\. The Scala compiler is invoked on the spot (which may take some time), and the body function is run with the command-line arguments as \<^verbatim>\List[String]\. \<^enum> If an executable file ``\tool\'' is found, it is invoked as stand-alone program with the command-line arguments provided as \<^verbatim>\argv\ array. \<^enum> An internal tool that is registered in \<^verbatim>\etc/settings\ via the shell function \<^bash_function>\isabelle_scala_service\, referring to a suitable instance of class \<^scala_type>\isabelle.Isabelle_Scala_Tools\. This is the preferred approach for non-trivial systems programming in Isabelle/Scala: instead of adhoc interpretation of \<^verbatim>\scala\ scripts, which is somewhat slow and only type-checked at runtime, there are properly compiled \<^verbatim>\jar\ modules (see also the shell function \<^bash_function>\classpath\ in \secref{sec:scala}). There are also various administrative tools that are available from a bare repository clone of Isabelle, but not in regular distributions. \ subsubsection \Examples\ text \ Show the list of available documentation of the Isabelle distribution: @{verbatim [display] \isabelle doc\} View a certain document as follows: @{verbatim [display] \isabelle doc system\} Query the Isabelle settings environment: @{verbatim [display] \isabelle getenv ISABELLE_HOME_USER\} \ section \The raw Isabelle ML process\ subsection \Batch mode \label{sec:tool-process}\ text \ The @{tool_def process} tool runs the raw ML process in batch mode: @{verbatim [display] \Usage: isabelle process [OPTIONS] Options are: -T THEORY load theory -d DIR include session directory -e ML_EXPR evaluate ML expression on startup -f ML_FILE evaluate ML file on startup -l NAME logic session name (default ISABELLE_LOGIC="HOL") -m MODE add print mode for output -o OPTION override Isabelle system OPTION (via NAME=VAL or NAME) Run the raw Isabelle ML process in batch mode.\} \<^medskip> Options \<^verbatim>\-e\ and \<^verbatim>\-f\ allow to evaluate ML code, before the ML process is started. The source is either given literally or taken from a file. Multiple \<^verbatim>\-e\ and \<^verbatim>\-f\ options are evaluated in the given order. 
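For example, ML definitions from a file can be combined with a literal expression that uses them (\<^verbatim>\defs.ML\ and \<^verbatim>\main\ are hypothetical names):

@{verbatim [display] \isabelle process -f defs.ML -e 'main ()'\}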
Errors lead to premature exit of the ML process with return code 1. \<^medskip> Option \<^verbatim>\-T\ loads a specified theory file. This is a wrapper for \<^verbatim>\-e\ with a suitable \<^ML>\use_thy\ invocation. \<^medskip> Option \<^verbatim>\-l\ specifies the logic session name. Option \<^verbatim>\-d\ specifies additional directories for session roots, see also \secref{sec:tool-build}. \<^medskip> The \<^verbatim>\-m\ option adds identifiers of print modes to be made active for this session. For example, \<^verbatim>\-m ASCII\ prefers ASCII replacement syntax over mathematical Isabelle symbols. \<^medskip> Option \<^verbatim>\-o\ allows to override Isabelle system options for this process, see also \secref{sec:system-options}. \ subsubsection \Examples\ text \ The subsequent example retrieves the \<^verbatim>\Main\ theory value from the theory loader within ML: @{verbatim [display] \isabelle process -e 'Thy_Info.get_theory "Main"'\} Observe the delicate quoting rules for the GNU bash shell vs.\ ML. The Isabelle/ML and Scala libraries provide functions for that, but here we need to do it manually. \<^medskip> This is how to invoke a function body with proper return code and printing of errors, and without printing of a redundant \<^verbatim>\val it = (): unit\ result: @{verbatim [display] \isabelle process -e 'Command_Line.tool (fn () => writeln "OK")'\} @{verbatim [display] \isabelle process -e 'Command_Line.tool (fn () => error "Bad")'\} \ subsection \Interactive mode\ text \ The @{tool_def console} tool runs the raw ML process with interactive console and line editor: @{verbatim [display] \Usage: isabelle console [OPTIONS] Options are: -d DIR include session directory -i NAME include session in name-space of theories -l NAME logic session name (default ISABELLE_LOGIC) -m MODE add print mode for output -n no build of session image on startup -o OPTION override Isabelle system OPTION (via NAME=VAL or NAME) -r bootstrap from raw Poly/ML Build a logic session image and run the raw Isabelle ML process in interactive mode, with line editor ISABELLE_LINE_EDITOR.\} \<^medskip> Option \<^verbatim>\-l\ specifies the logic session name. By default, its heap image is checked and built on demand, but the option \<^verbatim>\-n\ skips that. Option \<^verbatim>\-i\ includes additional sessions into the name-space of theories: multiple occurrences are possible. Option \<^verbatim>\-r\ indicates a bootstrap from the raw Poly/ML system, which is relevant for Isabelle/Pure development. \<^medskip> Options \<^verbatim>\-d\, \<^verbatim>\-m\, \<^verbatim>\-o\ have the same meaning as for @{tool process} (\secref{sec:tool-process}). \<^medskip> The Isabelle/ML process is run through the line editor that is specified via the settings variable @{setting ISABELLE_LINE_EDITOR} (e.g.\ @{executable_def rlwrap} for GNU readline); the fall-back is to use plain standard input/output. The user is connected to the raw ML toplevel loop: this is neither Isabelle/Isar nor Isabelle/ML within the usual formal context. The most relevant ML commands at this stage are \<^ML>\use\ (for ML files) and \<^ML>\use_thy\ (for theory files). \ section \The raw Isabelle Java process \label{sec:isabelle-java}\ text \ The @{executable_ref isabelle_java} executable allows to run a Java process within the name space of Java and Scala components that are bundled with Isabelle, but \<^emph>\without\ the Isabelle settings environment (\secref{sec:settings}). 
After such a JVM cold-start, the Isabelle environment can be accessed via \<^verbatim>\Isabelle_System.getenv\ as usual, but the underlying process environment remains clean. This is e.g.\ relevant when invoking other processes that should remain separate from the current Isabelle installation.

\<^medskip> Note that under normal circumstances, Isabelle command-line tools are run \<^emph>\within\ the settings environment, as provided by the @{executable isabelle} wrapper (\secref{sec:isabelle-tool} and \secref{sec:tool-java}). \

subsubsection \Example\

text \ The subsequent example creates a raw Java process on the command-line and invokes the main Isabelle application entry point: @{verbatim [display] \isabelle_java isabelle.Main\} \

section \YXML versus XML \label{sec:yxml-vs-xml}\

text \ Isabelle tools often use YXML, which is a simple and efficient syntax for untyped XML trees. The YXML format is defined as follows.

\<^enum> The encoding is always UTF-8.

\<^enum> Body text is represented verbatim (no escaping, no special treatment of white space, no named entities, no CDATA chunks, no comments).

\<^enum> Markup elements are represented via ASCII control characters \\<^bold>X = 5\ and \\<^bold>Y = 6\ as follows:

\begin{tabular}{ll} XML & YXML \\\hline \<^verbatim>\<\\name attribute\\<^verbatim>\=\\value \\\<^verbatim>\>\ & \\<^bold>X\<^bold>Yname\<^bold>Yattribute\\<^verbatim>\=\\value\\<^bold>X\ \\ \<^verbatim>\</\\name\\<^verbatim>\>\ & \\<^bold>X\<^bold>Y\<^bold>X\ \\ \end{tabular}

There is no special case for empty body text, i.e.\ \<^verbatim>\<foo></foo>\ is treated like \<^verbatim>\<foo/>\. Also note that \\<^bold>X\ and \\<^bold>Y\ may never occur in well-formed XML documents.

Parsing YXML is pretty straightforward: split the text into chunks separated by \\<^bold>X\, then split each chunk into sub-chunks separated by \\<^bold>Y\. Markup chunks start with an empty sub-chunk, and a second empty sub-chunk indicates close of an element. Any other non-empty chunk consists of plain text. For example, see \<^file>\~~/src/Pure/PIDE/yxml.ML\ or \<^file>\~~/src/Pure/PIDE/yxml.scala\.

YXML documents may be detected quickly by checking that the first two characters are \\<^bold>X\<^bold>Y\. \

end

diff --git a/src/Doc/System/Presentation.thy b/src/Doc/System/Presentation.thy --- a/src/Doc/System/Presentation.thy +++ b/src/Doc/System/Presentation.thy @@ -1,223 +1,187 @@ (*:maxLineLen=78:*) theory Presentation imports Base begin

chapter \Presenting theories \label{ch:present}\

text \ Isabelle provides several ways to present the outcome of formal developments, including WWW-based browsable libraries or actual printable documents.

Presentation is centered around the concept of \<^emph>\sessions\ (\chref{ch:session}). The global session structure is that of a tree, with Isabelle Pure at its root, further object-logics derived (e.g.\ HOLCF from HOL, and HOL from Pure), and application sessions further on in the hierarchy.

The command-line tools @{tool_ref mkroot} and @{tool_ref build} provide the primary means for managing Isabelle sessions, including options for presentation: ``\<^verbatim>\document=pdf\'' generates PDF output from the theory session, and ``\<^verbatim>\document_output=dir\'' emits a copy of the document sources with the PDF into the given directory (relative to the session directory).

Alternatively, @{tool_ref document} may be used to turn the generated - {\LaTeX} sources of a session (exports from its build database) into PDF, - using suitable invocations of @{tool_ref latex}.
+ {\LaTeX} sources of a session (exports from its build database) into PDF. \

section \Generating HTML browser information \label{sec:info}\

text \ As a side-effect of building sessions, Isabelle is able to generate theory browsing information, including HTML documents that show the theory sources and the relationship with its ancestors and descendants. Besides the HTML file that is generated for every theory, Isabelle stores links to all theories of a session in an index file. As a second hierarchy, groups of sessions are organized as \<^emph>\chapters\, with a separate index. Note that the implicit tree structure of the session build hierarchy is \<^emph>\not\ relevant for the presentation.

\<^medskip> To generate theory browsing information for an existing session, just invoke @{tool build} with suitable options:

@{verbatim [display] \isabelle build -o browser_info -v -c FOL\}

The presentation output will appear in \<^verbatim>\$ISABELLE_BROWSER_INFO/FOL/FOL\ as reported by the above verbose invocation of the build process.

Many Isabelle sessions (such as \<^session>\HOL-Library\ in \<^dir>\~~/src/HOL/Library\) also provide printable documents in PDF. These are prepared automatically as well if enabled like this:

@{verbatim [display] \isabelle build -o browser_info -o document=pdf -v -c HOL-Library\}

Enabling both browser info and document preparation simultaneously causes an appropriate ``document'' link to be included in the HTML index. Documents may be generated independently of browser information as well, see \secref{sec:tool-document} for further details.

\<^bigskip> The theory browsing information is stored in a sub-directory determined by the @{setting_ref ISABELLE_BROWSER_INFO} setting plus a prefix corresponding to the session chapter and identifier. In order to present Isabelle applications on the web, the corresponding subdirectory from @{setting ISABELLE_BROWSER_INFO} can be put on a WWW server. \

section \Preparing session root directories \label{sec:tool-mkroot}\

text \ The @{tool_def mkroot} tool configures a given directory as session root, with some \<^verbatim>\ROOT\ file and optional document source directory. Its usage is: @{verbatim [display] \Usage: isabelle mkroot [OPTIONS] [DIRECTORY]

Options are: -A LATEX provide author in LaTeX notation (default: user name) -I init Mercurial repository and add generated files -T LATEX provide title in LaTeX notation (default: session name) -n NAME alternative session name (default: directory base name)

Prepare session root directory (default: current directory). \}

The results are placed in the given directory \dir\, which refers to the current directory by default. The @{tool mkroot} tool is conservative in the sense that it does not overwrite existing files or directories. Earlier attempts to generate a session root need to be deleted manually.

The generated session template will be accompanied by a formal document, with \DIRECTORY\\<^verbatim>\/document/root.tex\ as its {\LaTeX} entry point (see also \chref{ch:present}).

Options \<^verbatim>\-T\ and \<^verbatim>\-A\ specify the document title and author explicitly, using {\LaTeX} source notation.

Option \<^verbatim>\-I\ initializes a Mercurial repository in the target directory, and adds all generated files (without commit).

Option \<^verbatim>\-n\ specifies an alternative session name; otherwise the base name of the given directory is used.

\<^medskip> The implicit Isabelle settings variable @{setting ISABELLE_LOGIC} specifies the parent session.
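Since the user settings file is processed after the site-wide defaults (\secref{sec:boot}), a different parent session may be chosen by overriding that variable there, e.g.\ (a schematic sketch):

@{verbatim [display] \ISABELLE_LOGIC="HOL-Analysis"\}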
\ subsubsection \Examples\ text \ Produce session \<^verbatim>\Test\ within a separate directory of the same name: @{verbatim [display] \isabelle mkroot Test && isabelle build -D Test\} \<^medskip> Upgrade the current directory into a session ROOT with document preparation, and build it: @{verbatim [display] \isabelle mkroot && isabelle build -D .\} \ section \Preparing Isabelle session documents \label{sec:tool-document}\ text \ The @{tool_def document} tool prepares logic session documents. Its usage is: @{verbatim [display] \Usage: isabelle document [OPTIONS] SESSION Options are: -O DIR output directory for LaTeX sources and resulting PDF -P DIR output directory for resulting PDF -S DIR output directory for LaTeX sources -V verbose latex -d DIR include session directory -o OPTION override Isabelle system OPTION (via NAME=VAL or NAME) -v verbose build Prepare the theory document of a session.\} Generated {\LaTeX} sources are taken from the session build database: @{tool_ref build} is invoked beforehand to ensure that it is up-to-date. Further files are generated on the spot, notably essential Isabelle style files, and \<^verbatim>\session.tex\ to input all theory sources from the session (excluding imports from other sessions). \<^medskip> Options \<^verbatim>\-d\, \<^verbatim>\-o\, \<^verbatim>\-v\ have the same meaning as for @{tool build}. \<^medskip> Option \<^verbatim>\-V\ prints full output of {\LaTeX} tools. \<^medskip> Option \<^verbatim>\-O\~\dir\ specifies the output directory for generated {\LaTeX} sources and the result PDF file. Options \<^verbatim>\-P\ and \<^verbatim>\-S\ only refer to the PDF and sources, respectively. For example, for output directory ``\<^verbatim>\output\'' and the default document variant ``\<^verbatim>\document\'', the generated document sources are placed into the subdirectory \<^verbatim>\output/document/\ and the resulting PDF into \<^verbatim>\output/document.pdf\. \<^medskip> Isabelle is usually smart enough to create the PDF from the given \<^verbatim>\root.tex\ and optional \<^verbatim>\root.bib\ (bibliography) and \<^verbatim>\root.idx\ (index) - using standard {\LaTeX} tools. Alternatively, \isakeyword{document\_files} - in the session \<^verbatim>\ROOT\ may include an executable \<^verbatim>\build\ script to take - care of that. It is invoked with command-line arguments for the document - format (\<^verbatim>\pdf\) and the document variant name. The script needs to produce - corresponding output files, e.g.\ \<^verbatim>\root.pdf\ for default document variants - (the main work can be delegated to @{tool latex}). \ + using standard {\LaTeX} tools. Actual command-lines are given by settings + @{setting_ref ISABELLE_PDFLATEX}, @{setting_ref ISABELLE_LUALATEX}, + @{setting_ref ISABELLE_BIBTEX}, @{setting_ref ISABELLE_MAKEINDEX}: these + variables are used without quoting in shell scripts, and thus may contain + additional options. + + Alternatively, the session \<^verbatim>\ROOT\ may include an option + \<^verbatim>\document_build=build\ together with an executable \<^verbatim>\build\ script in + \isakeyword{document\_files}: it is invoked with command-line arguments for + the document format (\<^verbatim>\pdf\) and the document variant name. The script needs + to produce corresponding output files, e.g.\ \<^verbatim>\root.pdf\ for default + document variants. 
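+ For example, a minimal \<^verbatim>\build\ script might look like this --- a sketch
+ that merely chains the standard tools (roughly what the default engines do
+ anyway); the command-line arguments are bound but not further used here:
+
+ @{verbatim [display] \#!/usr/bin/env bash
+set -e
+FORMAT="$1"; VARIANT="$2"   # document format (pdf) and variant name
+$ISABELLE_LUALATEX root
+$ISABELLE_BIBTEX root
+$ISABELLE_LUALATEX root
+$ISABELLE_LUALATEX root\}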
+\ + subsubsection \Examples\

text \ Produce the document from session \<^verbatim>\FOL\ with full verbosity, and a copy in the current directory (subdirectory \<^verbatim>\document\ and file \<^verbatim>\document.pdf\): @{verbatim [display] \isabelle document -v -V -O. FOL\} \

- -section \Running {\LaTeX} within the Isabelle environment - \label{sec:tool-latex}\ - -text \ - The @{tool_def latex} tool provides the basic interface for Isabelle - document preparation. Its usage is: - @{verbatim [display] -\Usage: isabelle latex [OPTIONS] [FILE] - - Options are: - -o FORMAT specify output format: pdf (default), bbl, idx, sty - - Run LaTeX (and related tools) on FILE (default root.tex), - producing the specified output format.\} - - Appropriate {\LaTeX}-related programs are run on the input file, according - to the given output format: @{executable pdflatex}, @{executable bibtex} - (for \<^verbatim>\bbl\), and @{executable makeindex} (for \<^verbatim>\idx\). The actual commands - are determined from the settings environment (@{setting ISABELLE_PDFLATEX} - etc.). - - The \<^verbatim>\sty\ output format causes the Isabelle style files to be updated from - the distribution. This is useful in special situations where the document - sources are to be processed another time by separate tools. -\ - - subsubsection \Examples\ - -text \ - Invoking @{tool latex} by hand may be occasionally useful when debugging - failed attempts of the automatic document preparation stage of batch-mode - Isabelle. The abortive process leaves the sources at a certain place within - @{setting ISABELLE_BROWSER_INFO}, see the runtime error message for details. - This enables users to inspect {\LaTeX} runs in further detail, e.g.\ like - this: - - @{verbatim [display] -\cd "$(isabelle getenv -b ISABELLE_BROWSER_INFO)/Unsorted/Test/document" -isabelle latex -o pdf\} -\ - - end

diff --git a/src/Doc/System/Sessions.thy b/src/Doc/System/Sessions.thy --- a/src/Doc/System/Sessions.thy +++ b/src/Doc/System/Sessions.thy @@ -1,836 +1,846 @@ (*:maxLineLen=78:*) theory Sessions imports Base begin

chapter \Isabelle sessions and build management \label{ch:session}\

text \ An Isabelle \<^emph>\session\ consists of a collection of related theories that may be associated with formal documents (\chref{ch:present}). There is also a notion of \<^emph>\persistent heap\ image to capture the state of a session, similar to object-code in compiled programming languages. Thus the concept of session resembles that of a ``project'' in common IDE environments, but the specific name emphasizes the connection to interactive theorem proving: the session wraps-up the results of user-interaction with the prover in a persistent form.

Application sessions are built on a given parent session, which may be built recursively on other parents. Following this path in the hierarchy eventually leads to some major object-logic session like \HOL\, which itself is based on \Pure\ as the common root of all sessions.

Processing sessions may take considerable time. Isabelle build management helps to organize this efficiently. This includes support for parallel build jobs, in addition to the multithreaded theory and proof checking that is already provided by the prover process itself. \

section \Session ROOT specifications \label{sec:session-root}\

text \ Session specifications reside in files called \<^verbatim>\ROOT\ within certain directories, such as the home locations of registered Isabelle components or additional project directories given by the user.
The ROOT file format follows the lexical conventions of the \<^emph>\outer syntax\ of Isabelle/Isar, see also @{cite "isabelle-isar-ref"}. This defines common forms like identifiers, names, quoted strings, verbatim text, nested comments etc. The grammar for @{syntax session_chapter} and @{syntax session_entry} is given as a syntax diagram below; each ROOT file may contain multiple specifications like this. Chapters help to organize browser info (\secref{sec:info}), but have no formal meaning. The default chapter is ``\Unsorted\''.

Isabelle/jEdit @{cite "isabelle-jedit"} includes a simple editing mode \<^verbatim>\isabelle-root\ for session ROOT files, which is enabled by default for any file of that name.

\<^rail>\ @{syntax_def session_chapter}: @'chapter' @{syntax name} ; @{syntax_def session_entry}: @'session' @{syntax system_name} groups? dir? '=' \ (@{syntax system_name} '+')? description? options? \ sessions? directories? (theories*) \ (document_theories?) (document_files*) \ (export_files*) ; groups: '(' (@{syntax name} +) ')' ; dir: @'in' @{syntax embedded} ; description: @'description' @{syntax text} ; options: @'options' opts ; opts: '[' ( (@{syntax name} '=' value | @{syntax name}) + ',' ) ']' ; value: @{syntax name} | @{syntax real} ; sessions: @'sessions' (@{syntax system_name}+) ; directories: @'directories' (dir+) ; theories: @'theories' opts? (theory_entry+) ; theory_entry: @{syntax system_name} ('(' @'global' ')')? ; document_theories: @'document_theories' (@{syntax name}+) ; document_files: @'document_files' ('(' dir ')')? (@{syntax embedded}+) ; export_files: @'export_files' ('(' dir ')')? ('[' nat ']')? \ (@{syntax embedded}+) \

\<^descr> \isakeyword{session}~\A = B + body\ defines a new session \A\ based on parent session \B\, with its content given in \body\ (imported sessions and theories). Note that a parent (like \HOL\) is mandatory in practical applications: only Isabelle/Pure can bootstrap itself from nothing.

All such session specifications together describe a hierarchy (graph) of sessions, with globally unique names. The new session name \A\ should be sufficiently long and descriptive to stand on its own in a potentially large library.

\<^descr> \isakeyword{session}~\A (groups)\ indicates a collection of groups where the new session is a member. Group names are uninterpreted and merely follow certain conventions. For example, the Isabelle distribution tags some important sessions by the group name called ``\main\''. Other projects may invent their own conventions, but this requires some care to avoid clashes within this unchecked name space.

\<^descr> \isakeyword{session}~\A\~\isakeyword{in}~\dir\ specifies an explicit directory for this session; by default this is the current directory of the \<^verbatim>\ROOT\ file. All theory files are located relative to the session directory, and the prover process uses that directory as its current working directory.

\<^descr> \isakeyword{description}~\text\ is a free-form annotation for this session.

\<^descr> \isakeyword{options}~\[x = a, y = b, z]\ defines separate options (\secref{sec:system-options}) that are used when processing this session, but \<^emph>\without\ propagation to child sessions. Note that \z\ abbreviates \z = true\ for Boolean options.

\<^descr> \isakeyword{sessions}~\names\ specifies sessions that are \<^emph>\imported\ into the current name space of theories.
This allows to refer to a theory \A\ from session \B\ by the qualified name \B.A\ --- although it is loaded again into the current ML process, which is in contrast to a theory that is already present in the \<^emph>\parent\ session. Theories that are imported from other sessions are excluded from the current session document. \<^descr> \isakeyword{directories}~\dirs\ specifies additional directories for import of theory files via \isakeyword{theories} within \<^verbatim>\ROOT\ or \<^theory_text>\imports\ within a theory; \dirs\ are relative to the main session directory (cf.\ \isakeyword{session} \dots \isakeyword{in}~\dir\). These directories need to be exclusively assigned to a unique session, without implicit sharing of file-system locations. \<^descr> \isakeyword{theories}~\options names\ specifies a block of theories that are processed within an environment that is augmented by the given options, in addition to the global session options given before. Any number of blocks of \isakeyword{theories} may be given. Options are only active for each \isakeyword{theories} block separately. A theory name that is followed by \(\\isakeyword{global}\)\ is treated literally in other session specifications or theory imports --- the normal situation is to qualify theory names by the session name; this ensures globally unique names in big session graphs. Global theories are usually the entry points to major logic sessions: \Pure\, \Main\, \Complex_Main\, \HOLCF\, \IFOL\, \FOL\, \ZF\, \ZFC\ etc. Regular Isabelle applications should not claim any global theory names. \<^descr> \isakeyword{document_theories}~\names\ specifies theories from other sessions that should be included in the generated document source directory. These theories need to be explicit imports in the current session, or implicit imports from the underlying hierarchy of parent sessions. The generated \<^verbatim>\session.tex\ file is not affected: the session's {\LaTeX} setup needs to \<^verbatim>\\input{\\\\\<^verbatim>\}\ generated \<^verbatim>\.tex\ files separately. \<^descr> \isakeyword{document_files}~\(\\isakeyword{in}~\base_dir) files\ lists source files for document preparation, typically \<^verbatim>\.tex\ and \<^verbatim>\.sty\ for {\LaTeX}. Only these explicitly given files are copied from the base directory to the document output directory, before formal document processing is started (see also \secref{sec:tool-document}). The local path structure of the \files\ is preserved, which allows to reconstruct the original directory hierarchy of \base_dir\. The default \base_dir\ is \<^verbatim>\document\ within the session root directory. \<^descr> \isakeyword{export_files}~\(\\isakeyword{in}~\target_dir) [number] patterns\ specifies theory exports that may get written to the file-system, e.g. via @{tool_ref build} with option \<^verbatim>\-e\ (\secref{sec:tool-build}). The \target_dir\ specification is relative to the session root directory; its default is \<^verbatim>\export\. Exports are selected via \patterns\ as in @{tool_ref export} (\secref{sec:tool-export}). The number given in brackets (default: 0) specifies elements that should be pruned from each name: it allows to reduce the resulting directory hierarchy at the danger of overwriting files due to loss of uniqueness. \ subsubsection \Examples\ text \ See \<^file>\~~/src/HOL/ROOT\ for a diversity of practically relevant situations, although it uses relatively complex quasi-hierarchic naming conventions like \<^verbatim>\HOL-SPARK\, \<^verbatim>\HOL-SPARK-Examples\. 
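For a small self-contained sketch (hypothetical session and theory names), a project \<^verbatim>\ROOT\ might look like this:

@{verbatim [display] \session Example_Project = HOL +
  description "A hypothetical example project."
  options [document = pdf, document_output = "output"]
  sessions
    "HOL-Library"
  theories
    Example_Theory
  document_files
    "root.tex"\}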
An alternative is to use unqualified names that are relatively long and descriptive, as in the Archive of Formal Proofs (\<^url>\https://isa-afp.org\), for example. \

section \System build options \label{sec:system-options}\

text \ See \<^file>\~~/etc/options\ for the main defaults provided by the Isabelle distribution. Isabelle/jEdit @{cite "isabelle-jedit"} includes a simple editing mode \<^verbatim>\isabelle-options\ for this file-format.

The following options are particularly relevant to build Isabelle sessions, in particular with document preparation (\chref{ch:present}).

\<^item> @{system_option_def "browser_info"} controls output of HTML browser info, see also \secref{sec:info}.

\<^item> @{system_option_def "document"} controls document output for a particular session or theory; \<^verbatim>\document=pdf\ means enabled, \<^verbatim>\document=false\ means disabled (especially for particular theories).

\<^item> @{system_option_def "document_output"} specifies an alternative directory for generated output of the document preparation system; the default is within the @{setting "ISABELLE_BROWSER_INFO"} hierarchy as explained in \secref{sec:info}. See also @{tool mkroot}, which generates a default configuration with output readily available to the author of the document.

\<^item> @{system_option_def "document_variants"} specifies document variants as a colon-separated list of \name=tags\ entries. The default is the single name \<^verbatim>\document\, without additional tags.

Tags are specified as a comma separated list of modifier/name pairs and tell {\LaTeX} how to interpret certain Isabelle command regions: ``\<^verbatim>\+\\foo\'' (or just ``\foo\'') means to keep, ``\<^verbatim>\-\\foo\'' to drop, and ``\<^verbatim>\/\\foo\'' to fold text tagged as \foo\. The builtin default is equivalent to the tag specification ``\<^verbatim>\+document,+theory,+proof,+ML,+visible,-invisible,+important,+unimportant\''; see also the {\LaTeX} macros \<^verbatim>\\isakeeptag\, \<^verbatim>\\isadroptag\, and \<^verbatim>\\isafoldtag\, in \<^file>\~~/lib/texinputs/isabelle.sty\.

In contrast, \<^verbatim>\document_variants=document:outline=/proof,/ML\ indicates two documents: the one called \<^verbatim>\document\ with default tags, and the other called \<^verbatim>\outline\ where proofs and ML sections are folded.

Document variant names are just a matter of conventions. It is also possible to use different document variant names (without tags) for different document root entries, see also \secref{sec:tool-document}.

\<^item> @{system_option_def "document_tags"} specifies alternative command tags as a comma-separated list of items: either ``\command\\<^verbatim>\%\\tag\'' for a specific command, or ``\<^verbatim>\%\\tag\'' as default for all other commands. This is occasionally useful to control the global visibility of commands via session options (e.g.\ in \<^verbatim>\ROOT\).

+ \<^item> @{system_option_def "document_bibliography"} explicitly enables the use + of \<^verbatim>\bibtex\; the default is to check the presence of \<^verbatim>\root.bib\, but it + could have a different name.

+ \<^item> @{system_option_def "document_preprocessor"} specifies the name of an + executable that is run within the document output directory, after + preparing the document sources and before the actual build process. This + allows to apply adhoc patches, without requiring a separate \<^verbatim>\build\ + script.
+ \<^item> @{system_option_def "threads"} determines the number of worker threads for parallel checking of theories and proofs. The default \0\ means that a sensible maximum value is determined by the underlying hardware. For machines with many cores or with hyperthreading, this is often requires manual adjustment (on the command-line or within personal settings or preferences, not within a session \<^verbatim>\ROOT\). \<^item> @{system_option_def "condition"} specifies a comma-separated list of process environment variables (or Isabelle settings) that are required for the subsequent theories to be processed. Conditions are considered ``true'' if the corresponding environment value is defined and non-empty. \<^item> @{system_option_def "timeout"} and @{system_option_def "timeout_scale"} specify a real wall-clock timeout for the session as a whole: the two values are multiplied and taken as the number of seconds. Typically, @{system_option "timeout"} is given for individual sessions, and @{system_option "timeout_scale"} as global adjustment to overall hardware performance. The timer is controlled outside the ML process by the JVM that runs Isabelle/Scala. Thus it is relatively reliable in canceling processes that get out of control, even if there is a deadlock without CPU time usage. \<^item> @{system_option_def "profiling"} specifies a mode for global ML profiling. Possible values are the empty string (disabled), \<^verbatim>\time\ for \<^ML>\profile_time\ and \<^verbatim>\allocations\ for \<^ML>\profile_allocations\. Results appear near the bottom of the session log file. \<^item> @{system_option_def "system_heaps"} determines the directories for session heap images: \<^path>\$ISABELLE_HEAPS\ is the user directory and \<^path>\$ISABELLE_HEAPS_SYSTEM\ the system directory (usually within the Isabelle application). For \<^verbatim>\system_heaps=false\, heaps are stored in the user directory and may be loaded from both directories. For \<^verbatim>\system_heaps=true\, store and load happens only in the system directory. The @{tool_def options} tool prints Isabelle system options. Its command-line usage is: @{verbatim [display] \Usage: isabelle options [OPTIONS] [MORE_OPTIONS ...] Options are: -b include $ISABELLE_BUILD_OPTIONS -g OPTION get value of OPTION -l list options -x FILE export to FILE in YXML format Report Isabelle system options, augmented by MORE_OPTIONS given as arguments NAME=VAL or NAME.\} The command line arguments provide additional system options of the form \name\\<^verbatim>\=\\value\ or \name\ for Boolean options. Option \<^verbatim>\-b\ augments the implicit environment of system options by the ones of @{setting ISABELLE_BUILD_OPTIONS}, cf.\ \secref{sec:tool-build}. Option \<^verbatim>\-g\ prints the value of the given option. Option \<^verbatim>\-l\ lists all options with their declaration and current value. Option \<^verbatim>\-x\ specifies a file to export the result in YXML format, instead of printing it in human-readable form. \ section \Invoking the build process \label{sec:tool-build}\ text \ The @{tool_def build} tool invokes the build process for Isabelle sessions. It manages dependencies between sessions, related sources of theories and auxiliary files, and target heap images. Accordingly, it runs instances of the prover process with optional document preparation. Its command-line usage is:\<^footnote>\Isabelle/Scala provides the same functionality via \<^scala_method>\isabelle.Build.build\.\ @{verbatim [display] \Usage: isabelle build [OPTIONS] [SESSIONS ...] 
Options are: -B NAME include session NAME and all descendants -D DIR include session directory and select its sessions -N cyclic shuffling of NUMA CPU nodes (performance tuning) -P DIR enable HTML/PDF presentation in directory (":" for default) -R refer to requirements of selected sessions -S soft build: only observe changes of sources, not heap images -X NAME exclude sessions from group NAME and all descendants -a select all sessions -b build heap images -c clean build -d DIR include session directory -e export files from session specification into file-system -f fresh build -g NAME select session group NAME -j INT maximum number of parallel jobs (default 1) -k KEYWORD check theory sources for conflicts with proposed keywords -l list session source files -n no build -- test dependencies only -o OPTION override Isabelle system OPTION (via NAME=VAL or NAME) -v verbose -x NAME exclude session NAME and all descendants

Build and manage Isabelle sessions, depending on implicit settings:

ISABELLE_TOOL_JAVA_OPTIONS="..." ISABELLE_BUILD_OPTIONS="..."

ML_PLATFORM="..." ML_HOME="..." ML_SYSTEM="..." ML_OPTIONS="..."\}

\<^medskip> Isabelle sessions are defined via session ROOT files as described in \secref{sec:session-root}. The totality of sessions is determined by collecting such specifications from all Isabelle component directories (\secref{sec:components}), augmented by more directories given via options \<^verbatim>\-d\~\DIR\ on the command line. Each such directory may contain a session \<^verbatim>\ROOT\ file with several session specifications.

Any session root directory may refer recursively to further directories of the same kind, by listing them in a catalog file \<^verbatim>\ROOTS\ line-by-line. This helps to organize large collections of session specifications, or to make \<^verbatim>\-d\ command line options persistent (e.g.\ in \<^verbatim>\$ISABELLE_HOME_USER/ROOTS\).

\<^medskip> The subset of sessions to be managed is determined via individual \SESSIONS\ given as command-line arguments, or session groups that are given via one or more options \<^verbatim>\-g\~\NAME\. Option \<^verbatim>\-a\ selects all sessions. The build tool takes session dependencies into account: the set of selected sessions is completed by including all ancestors.

\<^medskip> One or more options \<^verbatim>\-B\~\NAME\ specify base sessions to be included (all descendants wrt.\ the session parent or import graph).

\<^medskip> One or more options \<^verbatim>\-x\~\NAME\ specify sessions to be excluded (all descendants wrt.\ the session parent or import graph). Option \<^verbatim>\-X\ is analogous to this, but excluded sessions are specified by session group membership.

\<^medskip> Option \<^verbatim>\-R\ reverses the selection in the sense that it refers to its requirements: all ancestor sessions excluding the original selection. This allows to prepare the stage for some build process with different options, before running the main build itself (without option \<^verbatim>\-R\).

\<^medskip> Option \<^verbatim>\-D\ is similar to \<^verbatim>\-d\, but selects all sessions that are defined in the given directories.

\<^medskip> Option \<^verbatim>\-S\ indicates a ``soft build'': the selection is restricted to those sessions that have changed sources (according to actually imported theories). The status of heap images is ignored.

\<^medskip> The build process depends on additional options (\secref{sec:system-options}) that are passed to the prover eventually.
The settings variable @{setting_ref ISABELLE_BUILD_OPTIONS} allows to provide additional defaults, e.g.\ \<^verbatim>\ISABELLE_BUILD_OPTIONS="document=pdf threads=4"\. Moreover, the environment of system build options may be augmented on the command line via \<^verbatim>\-o\~\name\\<^verbatim>\=\\value\ or \<^verbatim>\-o\~\name\, which abbreviates \<^verbatim>\-o\~\name\\<^verbatim>\=true\ for Boolean options. Multiple occurrences of \<^verbatim>\-o\ on the command-line are applied in the given order. \<^medskip> Option \<^verbatim>\-P\ enables PDF/HTML presentation in the given directory, where ``\<^verbatim>\-P:\'' refers to the default @{setting_ref ISABELLE_BROWSER_INFO} (or @{setting_ref ISABELLE_BROWSER_INFO_SYSTEM}). This applies only to explicitly selected sessions; note that option \-R\ allows to select all requirements separately. \<^medskip> Option \<^verbatim>\-b\ ensures that heap images are produced for all selected sessions. By default, images are only saved for inner nodes of the hierarchy of sessions, as required for other sessions to continue later on. \<^medskip> Option \<^verbatim>\-c\ cleans the selected sessions (all descendants wrt.\ the session parent or import graph) before performing the specified build operation. \<^medskip> Option \<^verbatim>\-e\ executes the \isakeyword{export_files} directives from the ROOT specification of all explicitly selected sessions: the status of the session build database needs to be OK, but the session could have been built earlier. Using \isakeyword{export_files}, a session may serve as abstract interface for add-on build artefacts, but these are only materialized on explicit request: without option \<^verbatim>\-e\ there is no effect on the physical file-system yet. \<^medskip> Option \<^verbatim>\-f\ forces a fresh build of all selected sessions and their requirements. \<^medskip> Option \<^verbatim>\-n\ omits the actual build process after the preparatory stage (including optional cleanup). Note that the return code always indicates the status of the set of selected sessions. \<^medskip> Option \<^verbatim>\-j\ specifies the maximum number of parallel build jobs (prover processes). Each prover process is subject to a separate limit of parallel worker threads, cf.\ system option @{system_option_ref threads}. \<^medskip> Option \<^verbatim>\-N\ enables cyclic shuffling of NUMA CPU nodes. This may help performance tuning on Linux servers with separate CPU/memory modules. \<^medskip> Option \<^verbatim>\-v\ increases the general level of verbosity. Option \<^verbatim>\-l\ lists the source files that contribute to a session. \<^medskip> Option \<^verbatim>\-k\ specifies a newly proposed keyword for outer syntax (multiple uses allowed). The theory sources are checked for conflicts wrt.\ this hypothetical change of syntax, e.g.\ to reveal occurrences of identifiers that need to be quoted. 
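For example, the impact of a hypothetical new keyword \<^verbatim>\aliasdef\ can be checked against all sessions without building anything:

@{verbatim [display] \isabelle build -a -n -k aliasdef\}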
\ subsubsection \Examples\

text \ Build a specific logic image: @{verbatim [display] \isabelle build -b HOLCF\}

\<^smallskip> Build the main group of logic images: @{verbatim [display] \isabelle build -b -g main\}

\<^smallskip> Build all descendants (and requirements) of \<^verbatim>\FOL\ and \<^verbatim>\ZF\: @{verbatim [display] \isabelle build -B FOL -B ZF\}

\<^smallskip> Build all sessions where sources have changed (ignoring heaps): @{verbatim [display] \isabelle build -a -S\}

\<^smallskip> Provide a general overview of the status of all Isabelle sessions, without building anything: @{verbatim [display] \isabelle build -a -n -v\}

\<^smallskip> Build all sessions with HTML browser info and PDF document preparation: @{verbatim [display] \isabelle build -a -o browser_info -o document=pdf\}

\<^smallskip> Build all sessions with a maximum of 8 parallel prover processes and 4 worker threads each (on a machine with many cores): @{verbatim [display] \isabelle build -a -j8 -o threads=4\}

\<^smallskip> Build some session images with cleanup of their descendants, while retaining their ancestry: @{verbatim [display] \isabelle build -b -c HOL-Library HOL-Algebra\}

\<^smallskip> Clean all sessions without building anything: @{verbatim [display] \isabelle build -a -n -c\}

\<^smallskip> Build all sessions from some other directory hierarchy, according to the settings variable \<^verbatim>\AFP\ that happens to be defined inside the Isabelle environment: @{verbatim [display] \isabelle build -D '$AFP'\}

\<^smallskip> Inform about the status of all sessions required for AFP, without building anything yet: @{verbatim [display] \isabelle build -D '$AFP' -R -v -n\} \

section \Print messages from build database \label{sec:tool-log}\

text \ The @{tool_def "log"} tool prints prover messages from the build database of the given session. Its command-line usage is:

@{verbatim [display] \Usage: isabelle log [OPTIONS] SESSION

Options are: -T NAME restrict to given theories (multiple options possible) -U output Unicode symbols -m MARGIN margin for pretty printing (default: 76.0) -o OPTION override Isabelle system OPTION (via NAME=VAL or NAME)

Print messages from the build database of the given session, without any checks against current sources: results from a failed build can be printed as well.\}

The specified session database is taken as is, independently of the current session structure and theory sources. The order of messages follows the source positions of source files; thus the erratic evaluation of parallel processing rarely matters. There is \<^emph>\no\ implicit build process involved, so it is possible to retrieve error messages from a failed session as well.

\<^medskip> Option \<^verbatim>\-o\ allows to change system options, as in @{tool build} (\secref{sec:tool-build}). This may affect the storage space for the build database, notably via @{system_option system_heaps}, or @{system_option build_database_server} and its relatives.

\<^medskip> Option \<^verbatim>\-T\ restricts output to given theories: multiple entries are possible by repeating this option on the command-line. The default is to refer to \<^emph>\all\ theories that were used in the original session build process.

\<^medskip> Options \<^verbatim>\-m\ and \<^verbatim>\-U\ modify pretty printing and output of Isabelle symbols. The default is for an old-fashioned ASCII terminal at 80 characters per line (76 + 4 characters to prefix warnings or errors).
\<^medskip> Option \<^verbatim>\-v\ prints all messages from the session database, including extra information and tracing messages etc. \ subsubsection \Examples\ text \ Print messages from theory \<^verbatim>\HOL.Nat\ of session \<^verbatim>\HOL\, using Unicode rendering of Isabelle symbols and a margin of 100 characters: @{verbatim [display] \isabelle log -T HOL.Nat -U -m 100 HOL\} \ section \Retrieve theory exports \label{sec:tool-export}\ text \ The @{tool_def "export"} tool retrieves theory exports from the session database. Its command-line usage is: @{verbatim [display] \Usage: isabelle export [OPTIONS] SESSION Options are: -O DIR output directory for exported files (default: "export") -d DIR include session directory -l list exports -n no build of session -o OPTION override Isabelle system OPTION (via NAME=VAL or NAME) -p NUM prune path of exported files by NUM elements -x PATTERN extract files matching pattern (e.g.\ "*:**" for all) List or export theory exports for SESSION: named blobs produced by isabelle build. Option -l or -x is required; option -x may be repeated. The PATTERN language resembles glob patterns in the shell, with ? and * (both excluding ":" and "/"), ** (excluding ":"), and [abc] or [^abc], and variants {pattern1,pattern2,pattern3}.\} \<^medskip> The specified session is updated via @{tool build} (\secref{sec:tool-build}), with the same options \<^verbatim>\-d\, \<^verbatim>\-o\. The option \<^verbatim>\-n\ suppresses the implicit build process: it means that a potentially outdated session database is used! \<^medskip> Option \<^verbatim>\-l\ lists all stored exports, with compound names \theory\\<^verbatim>\:\\name\. \<^medskip> Option \<^verbatim>\-x\ extracts stored exports whose compound name matches the given pattern. Note that wild cards ``\<^verbatim>\?\'' and ``\<^verbatim>\*\'' do not match the separators ``\<^verbatim>\:\'' and ``\<^verbatim>\/\''; the wild card \<^verbatim>\**\ matches over directory name hierarchies separated by ``\<^verbatim>\/\''. Thus the pattern ``\<^verbatim>\*:**\'' matches \<^emph>\all\ theory exports. Multiple options \<^verbatim>\-x\ refer to the union of all specified patterns. Option \<^verbatim>\-O\ specifies an alternative output directory for option \<^verbatim>\-x\: the default is \<^verbatim>\export\ within the current directory. Each theory creates its own sub-directory hierarchy, using the session-qualified theory name. Option \<^verbatim>\-p\ specifies the number of elements that should be pruned from each name: it allows to reduce the resulting directory hierarchy at the danger of overwriting files due to loss of uniqueness. \ section \Dump PIDE session database \label{sec:tool-dump}\ text \ The @{tool_def "dump"} tool dumps information from the cumulative PIDE session database (which is processed on the spot). Its command-line usage is: @{verbatim [display] \Usage: isabelle dump [OPTIONS] [SESSIONS ...] Options are: -A NAMES dump named aspects (default: ...) 
-B NAME include session NAME and all descendants -D DIR include session directory and select its sessions -O DIR output directory for dumped files (default: "dump") -R refer to requirements of selected sessions -X NAME exclude sessions from group NAME and all descendants -a select all sessions -b NAME base logic image (default "Pure") -d DIR include session directory -g NAME select session group NAME -o OPTION override Isabelle system OPTION (via NAME=VAL or NAME) -v verbose -x NAME exclude session NAME and all descendants Dump cumulative PIDE session database, with the following aspects: ...\} \<^medskip> Options \<^verbatim>\-B\, \<^verbatim>\-D\, \<^verbatim>\-R\, \<^verbatim>\-X\, \<^verbatim>\-a\, \<^verbatim>\-d\, \<^verbatim>\-g\, \<^verbatim>\-x\ and the remaining command-line arguments specify sessions as in @{tool build} (\secref{sec:tool-build}): the cumulative PIDE database of all their loaded theories is dumped to the output directory of option \<^verbatim>\-O\ (default: \<^verbatim>\dump\ in the current directory). \<^medskip> Option \<^verbatim>\-b\ specifies an optional base logic image, for improved scalability of the PIDE session. Its theories are only processed if it is included in the overall session selection. \<^medskip> Option \<^verbatim>\-o\ overrides Isabelle system options as for @{tool build} (\secref{sec:tool-build}). \<^medskip> Option \<^verbatim>\-v\ increases the general level of verbosity. \<^medskip> Option \<^verbatim>\-A\ specifies named aspects of the dump, as a comma-separated list. The default is to dump all known aspects, as given in the command-line usage of the tool. The underlying Isabelle/Scala operation \<^scala_method>\isabelle.Dump.dump\ takes aspects as user-defined operations on the final PIDE state and document version. This allows to imitate Prover IDE rendering under program control. \ subsubsection \Examples\ text \ Dump all Isabelle/ZF sessions (which are rather small): @{verbatim [display] \isabelle dump -v -B ZF\} \<^smallskip> Dump the quite substantial \<^verbatim>\HOL-Analysis\ session, with full bootstrap from Isabelle/Pure: @{verbatim [display] \isabelle dump -v HOL-Analysis\} \<^smallskip> Dump all sessions connected to HOL-Analysis, using main Isabelle/HOL as basis: @{verbatim [display] \isabelle dump -v -b HOL -B HOL-Analysis\} This results in uniform PIDE markup for everything, except for the Isabelle/Pure bootstrap process itself. Producing that on the spot requires several GB of heap space, both for the Isabelle/Scala and Isabelle/ML process (in 64bit mode). Here are some relevant settings (\secref{sec:boot}) for such ambitious applications: @{verbatim [display] \ISABELLE_TOOL_JAVA_OPTIONS="-Xms4g -Xmx32g -Xss16m" ML_OPTIONS="--minheap 4G --maxheap 32G" \} \ section \Update theory sources based on PIDE markup \label{sec:tool-update}\ text \ The @{tool_def "update"} tool updates theory sources based on markup that is produced from a running PIDE session (similar to @{tool dump} \secref{sec:tool-dump}). Its command-line usage is: @{verbatim [display] \Usage: isabelle update [OPTIONS] [SESSIONS ...] 
Options are: -B NAME include session NAME and all descendants -D DIR include session directory and select its sessions -R refer to requirements of selected sessions -X NAME exclude sessions from group NAME and all descendants -a select all sessions -b NAME base logic image (default "Pure") -d DIR include session directory -g NAME select session group NAME -o OPTION override Isabelle system OPTION (via NAME=VAL or NAME) -u OPT override update option: shortcut for "-o update_OPT" -v verbose -x NAME exclude session NAME and all descendants Update theory sources based on PIDE markup.\} \<^medskip> Options \<^verbatim>\-B\, \<^verbatim>\-D\, \<^verbatim>\-R\, \<^verbatim>\-X\, \<^verbatim>\-a\, \<^verbatim>\-d\, \<^verbatim>\-g\, \<^verbatim>\-x\ and the remaining command-line arguments specify sessions as in @{tool build} (\secref{sec:tool-build}) or @{tool dump} (\secref{sec:tool-dump}). \<^medskip> Option \<^verbatim>\-b\ specifies an optional base logic image, for improved scalability of the PIDE session. Its theories are only processed if it is included in the overall session selection. \<^medskip> Option \<^verbatim>\-v\ increases the general level of verbosity. \<^medskip> Option \<^verbatim>\-o\ overrides Isabelle system options as for @{tool build} (\secref{sec:tool-build}). Option \<^verbatim>\-u\ refers to specific \<^verbatim>\update\ options, by relying on naming convention: ``\<^verbatim>\-u\~\OPT\'' is a shortcut for ``\<^verbatim>\-o\~\<^verbatim>\update_\\OPT\''. \<^medskip> The following update options are supported: \<^item> @{system_option update_inner_syntax_cartouches} to update inner syntax (types, terms, etc.)~to use cartouches, instead of double-quoted strings or atomic identifiers. For example, ``\<^theory_text>\lemma "x = x"\'' is replaced by ``\<^theory_text>\lemma \x = x\\'', and ``\<^theory_text>\assume A\'' is replaced by ``\<^theory_text>\assume \A\\''. \<^item> @{system_option update_mixfix_cartouches} to update mixfix templates to use cartouches instead of double-quoted strings. For example, ``\<^theory_text>\(infixl "+" 65)\'' is replaced by ``\<^theory_text>\(infixl \+\ 65)\''. \<^item> @{system_option update_control_cartouches} to update antiquotations to use the compact form with control symbol and cartouche argument. For example, ``\@{term \x + y\}\'' is replaced by ``\\<^term>\x + y\\'' (the control symbol is literally \<^verbatim>\\<^term>\.) \<^item> @{system_option update_path_cartouches} to update file-system paths to use cartouches: this depends on language markup provided by semantic processing of parsed input. It is also possible to produce custom updates in Isabelle/ML, by reporting \<^ML>\Markup.update\ with the precise source position and a replacement text. This operation should be made conditional on specific system options, similar to the ones above. Searching the above option names in ML sources of \<^dir>\$ISABELLE_HOME/src/Pure\ provides some examples. Updates can be in conflict by producing nested or overlapping edits: this may require to run @{tool update} multiple times.
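\<^medskip> As an illustration, here is a minimal Isabelle/ML sketch of such a custom update; the option name \<^verbatim>\update_foo\ is hypothetical (it merely follows the naming convention above), and \pos\ and \replacement\ stand for the precise source position and the new source text:

@{verbatim [display] \val _ =
  if Options.default_bool "update_foo"
  then Position.report_text pos Markup.update replacement
  else ();\}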
\ subsubsection \Examples\ text \ Update some cartouche notation in all theory sources required for session \<^verbatim>\HOL-Analysis\ (and ancestors): @{verbatim [display] \isabelle update -u mixfix_cartouches HOL-Analysis\} \<^smallskip> Update the same for all application sessions based on \<^verbatim>\HOL-Analysis\ --- using its image as the starting point (for reduced resource requirements): @{verbatim [display] \isabelle update -u mixfix_cartouches -b HOL-Analysis -B HOL-Analysis\} \<^smallskip> Update sessions that build on \<^verbatim>\HOL-Proofs\, which need to be run separately with special options as follows: @{verbatim [display] \isabelle update -u mixfix_cartouches -l HOL-Proofs -B HOL-Proofs -o record_proofs=2\} \<^smallskip> See also the end of \secref{sec:tool-dump} for hints on increasing Isabelle/ML heap sizes for very big PIDE processes that include many sessions, notably from the Archive of Formal Proofs. \ section \Explore sessions structure\ text \ The @{tool_def "sessions"} tool explores the sessions structure. Its command-line usage is: @{verbatim [display] \Usage: isabelle sessions [OPTIONS] [SESSIONS ...] Options are: -B NAME include session NAME and all descendants -D DIR include session directory and select its sessions -R refer to requirements of selected sessions -X NAME exclude sessions from group NAME and all descendants -a select all sessions -d DIR include session directory -g NAME select session group NAME -x NAME exclude session NAME and all descendants Explore the structure of Isabelle sessions and print result names in topological order (on stdout).\} Arguments and options for session selection resemble @{tool build} (\secref{sec:tool-build}). \ subsubsection \Examples\ text \ All sessions of the Isabelle distribution: @{verbatim [display] \isabelle sessions -a\} \<^medskip> Sessions that are based on \<^verbatim>\ZF\ (and required by it): @{verbatim [display] \isabelle sessions -B ZF\} \<^medskip> All sessions of Isabelle/AFP (based in directory \<^path>\AFP\): @{verbatim [display] \isabelle sessions -D AFP/thys\} \<^medskip> Sessions required by Isabelle/AFP (based in directory \<^path>\AFP\): @{verbatim [display] \isabelle sessions -R -D AFP/thys\} \ end diff --git a/src/Doc/System/document/build b/src/Doc/System/document/build deleted file mode 100755 --- a/src/Doc/System/document/build +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/env bash - -set -e - -FORMAT="$1" -VARIANT="$2" - -isabelle logo -"$ISABELLE_HOME/src/Doc/prepare_document" "$FORMAT" - diff --git a/src/Doc/System/document/root.tex b/src/Doc/System/document/root.tex --- a/src/Doc/System/document/root.tex +++ b/src/Doc/System/document/root.tex @@ -1,49 +1,49 @@ \documentclass[12pt,a4paper]{report} \usepackage[T1]{fontenc} \usepackage{supertabular} \usepackage{graphicx} \usepackage{iman,extra,isar} \usepackage[nohyphen,strings]{underscore} \usepackage{isabelle,isabellesym} \usepackage{railsetup} \usepackage{style} \usepackage{pdfsetup} \hyphenation{Isabelle} \hyphenation{Isar} \isadroptag{theory} \isabellestyle{literal} \def\isastylett{\footnotesize\tt} -\title{\includegraphics[scale=0.5]{isabelle} \\[4ex] The Isabelle System Manual} +\title{\includegraphics[scale=0.5]{isabelle_logo} \\[4ex] The Isabelle System Manual} \author{\emph{Makarius Wenzel}} \makeindex \begin{document} \maketitle \pagenumbering{roman} \tableofcontents \clearfirst \input{Environment.tex} \input{Sessions.tex} \input{Presentation.tex} \input{Server.tex} \input{Scala.tex} \input{Phabricator.tex} \input{Misc.tex}
\begingroup \tocentry{\bibname} \bibliographystyle{abbrv} \small\raggedright\frenchspacing \bibliography{manual} \endgroup \tocentry{\indexname} \printindex \end{document} diff --git a/src/Doc/Tutorial/document/build b/src/Doc/Tutorial/document/build --- a/src/Doc/Tutorial/document/build +++ b/src/Doc/Tutorial/document/build @@ -1,14 +1,10 @@ #!/usr/bin/env bash set -e -FORMAT="$1" -VARIANT="$2" - -isabelle logo HOL -isabelle latex -o "$FORMAT" -isabelle latex -o bbl +$ISABELLE_LUALATEX root +$ISABELLE_BIBTEX root +$ISABELLE_LUALATEX root +$ISABELLE_LUALATEX root ./isa-index root -isabelle latex -o "$FORMAT" -[ -f root.out ] && "$ISABELLE_HOME/src/Doc/fixbookmarks" root.out -isabelle latex -o "$FORMAT" +$ISABELLE_LUALATEX root diff --git a/src/Doc/Tutorial/document/isa-index b/src/Doc/Tutorial/document/isa-index --- a/src/Doc/Tutorial/document/isa-index +++ b/src/Doc/Tutorial/document/isa-index @@ -1,23 +1,23 @@ #! /bin/sh # #sedindex - shell script to create indexes, preprocessing LaTeX's .idx file # # puts strings prefixed by * into \tt font # terminator characters for strings are |!@{} # # a space terminates the \tt part to allow \index{*notE theorem}, etc. # # note that makeindex uses a double quote (") to delimit special characters. # # change *"X"Y"Z"W to "X"Y"Z"W@{\tt "X"Y"Z"W} # change *"X"Y"Z to "X"Y"Z@{\tt "X"Y"Z} # change *"X"Y to "X"Y@{\tt "X"Y} # change *"X to "X@{\tt "X} # change *IDENT to IDENT@{\tt IDENT} # where IDENT is any string not containing | ! or @ # FOUR backslashes: to escape the shell AND sed sed -e "s~\*\(\".\".\".\".\)~\1@\\\\isa {\1}~g s~\*\(\".\".\".\)~\1@\\\\isa {\1}~g s~\*\(\".\".\)~\1@\\\\isa {\1}~g s~\*\(\".\)~\1@\\\\isa {\1}~g -s~\*\([^ |!@{}][^ |!@{}]*\)~\1@\\\\isa {\1}~g" $1.idx | makeindex -c -q -o $1.ind +s~\*\([^ |!@{}][^ |!@{}]*\)~\1@\\\\isa {\1}~g" $1.idx | $ISABELLE_MAKEINDEX -o $1.ind diff --git a/src/Doc/Tutorial/document/root.tex b/src/Doc/Tutorial/document/root.tex --- a/src/Doc/Tutorial/document/root.tex +++ b/src/Doc/Tutorial/document/root.tex @@ -1,97 +1,97 @@ \documentclass{article} \usepackage{cl2emono-modified,isabelle,isabellesym} \usepackage{proof,amsmath,amsfonts,amssymb} \usepackage{wasysym,verbatim,graphicx,tutorial,ttbox,comment} \usepackage{eurosym} \usepackage{pdfsetup} %last package! \remarkstrue %TRUE causes remarks to be displayed (as marginal notes) %\remarksfalse \makeindex \index{conditional expressions|see{\isa{if} expressions}} \index{primitive recursion|see{recursion, primitive}} \index{product type|see{pairs and tuples}} \index{structural induction|see{induction, structural}} \index{termination|see{functions, total}} \index{tuples|see{pairs and tuples}} \index{*<*lex*>|see{lexicographic product}} \underscoreoff \setcounter{secnumdepth}{2} \setcounter{tocdepth}{2} %% {secnumdepth}{2}??? \pagestyle{headings} \begin{document} \title{ \begin{center} -\includegraphics[scale=.8]{isabelle_hol} +\includegraphics[scale=.8]{isabelle_logo} \\ \vspace{0.5cm} A Proof Assistant for Higher-Order Logic \end{center}} \author{Tobias Nipkow \quad Lawrence C.
Paulson \quad Markus Wenzel%\\[1ex] %Technische Universit{\"a}t M{\"u}nchen \\ %Institut f{\"u}r Informatik \\[1ex] %University of Cambridge\\ %Computer Laboratory } \pagenumbering{roman} \maketitle \newpage %\setcounter{page}{5} %\vspace*{\fill} %\begin{center} %\LARGE In memoriam \\[1ex] %{\sc Annette Schumann}\\[1ex] %1959 -- 2001 %\end{center} %\vspace*{\fill} %\vspace*{\fill} %\newpage \input{preface} \tableofcontents \cleardoublepage\pagenumbering{arabic} \part{Elementary Techniques} \input{basics} \input{fp} \input{documents0} \part{Logic and Sets} \input{rules} \input{sets} \input{inductive0} \part{Advanced Material} \input{types0} \input{advanced0} \input{protocol} \markboth{}{} \cleardoublepage \vspace*{\fill} \begin{flushright} \begin{tabular}{l} {\large\sf\slshape You know my methods. Apply them!}\\[1ex] Sherlock Holmes \end{tabular} \end{flushright} \vspace*{\fill} \vspace*{\fill} \underscoreoff \input{appendix0} \bibliographystyle{plain} \bibliography{manual} \underscoreoff \printindex \end{document} diff --git a/src/Doc/Tutorial/document/tutorial.sty b/src/Doc/Tutorial/document/tutorial.sty --- a/src/Doc/Tutorial/document/tutorial.sty +++ b/src/Doc/Tutorial/document/tutorial.sty @@ -1,191 +1,189 @@ % tutorial.sty : Isabelle Tutorial Page Layout % \typeout{Document Style tutorial. Released 9 July 2001} \hyphenation{Isa-belle man-u-script man-u-scripts ap-pen-dix mut-u-al-ly} \hyphenation{data-type data-types co-data-type co-data-types } %usage: \iflabelundefined{LABEL}{if not defined}{if defined} \newcommand{\iflabelundefined}[1]{\@ifundefined{r@#1}} %%%INDEXING use isa-index to process the index \newcommand\seealso[2]{\emph{see also} #1} \usepackage{makeidx} %index, putting page numbers of definitions in boldface \def\bold#1{\textbf{#1}} \newcommand\fnote[1]{#1n} \newcommand\indexbold[1]{\index{#1|bold}} % The alternative to \protect\isa in the indexing macros is % \noexpand\noexpand \noexpand\isa % need TWO levels of \noexpand to delay the expansion of \isa: % the \noexpand\noexpand will leave one \noexpand, to be given to the % (still unexpanded) \isa token. See TeX by Topic, page 122. %%%% for indexing constants, symbols, theorems, ... \newcommand\cdx[1]{\isa{#1}\index{#1@\protect\isa{#1} (constant)}} \newcommand\sdx[1]{\isa{#1}\index{#1@\protect\isa{#1} (symbol)}} \newcommand\sdxpos[2]{\isa{#1}\index{#2@\protect\isa{#1} (symbol)}} \newcommand\tdx[1]{\isa{#1}\index{#1@\protect\isa{#1} (theorem)}} \newcommand\tdxbold[1]{\isa{#1}\index{#1@\protect\isa{#1} (theorem)|bold}} \newcommand\cldx[1]{\isa{#1}\index{#1@\protect\isa{#1} (class)}} \newcommand\tydx[1]{\isa{#1}\index{#1@\protect\isa{#1} (type)}} \newcommand\tcdx[1]{\isa{#1}\index{#1@\protect\isa{#1} (type class)}} \newcommand\thydx[1]{\isa{#1}\index{#1@\protect\isa{#1} (theory)}} \newcommand\attrdx[1]{\isa{#1}\index{#1@\protect\isa{#1} (attribute)}} \newcommand\cmmdx[1]{\index{#1@\protect\isacommand{#1} (command)}} \newcommand\commdx[1]{\isacommand{#1}\index{#1@\protect\isacommand{#1} (command)}} \newcommand\methdx[1]{\isa{#1}\index{#1@\protect\isa{#1} (method)}} -\newcommand\tooldx[1]{\isa{#1}\index{#1@\protect\isa{#1} (tool)}} -\newcommand\settdx[1]{\isa{#1}\index{#1@\protect\isa{#1} (setting)}} \newcommand\pgdx[1]{\pgmenu{#1}\index{#1@\protect\pgmenu{#1} (Proof General)}} %set argument in \bf font and index in ROMAN font (for definitions in text!) 
\newcommand\bfindex[1]{{\bf#1}\index{#1|bold}\@} \newcommand\rmindex[1]{{#1}\index{#1}\@} \newcommand\ttindex[1]{\texttt{#1}\index{#1@\texttt{#1}}\@} \newcommand\ttindexbold[1]{\texttt{#1}\index{#1@\texttt{#1}|bold}\@} \newcommand{\isadxpos}[2]{\isa{#1}\index{#2@\protect\isa{#1}}\@} \newcommand{\isadxboldpos}[2]{\isa{#1}\index{#2@\protect\isa{#1}|bold}\@} %Commented-out the original versions to see what the index looks like without them. % In any event, they need to use \isa or \protect\isa rather than \texttt. %%\newcommand{\indexboldpos}[2]{#1\index{#2@#1|bold}\@} %%\newcommand{\ttindexboldpos}[2]{\texttt{#1}\index{#2@\texttt{#1}|bold}\@} \newcommand{\indexboldpos}[2]{#1\@} \newcommand{\ttindexboldpos}[2]{\isa{#1}\@} %\newtheorem{theorem}{Theorem}[section] \newtheorem{Exercise}{Exercise}[section] \newenvironment{exercise}{\begin{Exercise}\rm}{\end{Exercise}} \newcommand{\ttlbr}{\texttt{[|}} \newcommand{\ttrbr}{\texttt{|]}} \newcommand{\ttor}{\texttt{|}} \newcommand{\ttall}{\texttt{!}} \newcommand{\ttuniquex}{\texttt{?!}} \newcommand{\ttEXU}{\texttt{EX!}} \newcommand{\ttAnd}{\texttt{!!}} \newcommand{\isasymignore}{} \newcommand{\isasymimp}{\isasymlongrightarrow} \newcommand{\isasymImp}{\isasymLongrightarrow} \newcommand{\isasymFun}{\isasymRightarrow} \newcommand{\isasymuniqex}{\isamath{\exists!\,}} \renewcommand{\S}{Sect.\ts} \renewenvironment{isamarkuptxt}{\begin{isamarkuptext}}{\end{isamarkuptext}} \newif\ifremarks \newcommand{\REMARK}[1]{\ifremarks\marginpar{\raggedright\footnotesize#1}\fi} %names of Isabelle rules \newcommand{\rulename}[1]{\hfill(#1)} \newcommand{\rulenamedx}[1]{\hfill(#1\index{#1@\protect\isa{#1} (theorem)|bold})} %%%% meta-logical connectives \let\Forall=\bigwedge \let\Imp=\Longrightarrow \let\To=\Rightarrow \newcommand{\Var}[1]{{?\!#1}} %%% underscores as ordinary characters, not for subscripting %% use @ or \sb for subscripting; use \at for @ %% only works in \tt font %% must not make _ an active char; would make \ttindex fail! \gdef\underscoreoff{\catcode`\@=8\catcode`\_=\other} \gdef\underscoreon{\catcode`\_=8\makeatother} \chardef\other=12 \chardef\at=`\@ % alternative underscore \def\_{\leavevmode\kern.06em\vbox{\hrule height.2ex width.3em}\hskip0.1em} %%%% ``WARNING'' environment: 2 ! 
characters separated by negative thin space \def\warnbang{\vtop to 0pt{\vss\hbox{\Huge\bf!\!!}\vss}} \newenvironment{warn}{\medskip\medbreak\begingroup \clubpenalty=10000 \small %%WAS\baselineskip=0.9\baselineskip \noindent \hangindent\parindent \hangafter=-2 \hbox to0pt{\hskip-\hangindent\warnbang\hfill}\ignorespaces}% {\par\endgroup\medbreak} %%%% ``PROOF GENERAL'' environment \def\pghead{\lower3pt\vbox to 0pt{\vss\hbox{\includegraphics[width=12pt]{pghead}}\vss}} \newenvironment{pgnote}{\medskip\medbreak\begingroup \clubpenalty=10000 \small \noindent \hangindent\parindent \hangafter=-2 \hbox to0pt{\hskip-\hangindent \pghead\hfill}\ignorespaces}% {\par\endgroup\medbreak} \newcommand{\pgmenu}[1]{\textsf{#1}} %%%% Standard logical symbols \let\turn=\vdash \let\conj=\wedge \let\disj=\vee \let\imp=\rightarrow \let\bimp=\leftrightarrow \newcommand\all[1]{\forall#1.} %quantification \newcommand\ex[1]{\exists#1.} \newcommand{\pair}[1]{\langle#1\rangle} \newcommand{\lparr}{\mathopen{(\!|}} \newcommand{\rparr}{\mathclose{|\!)}} \newcommand{\fs}{\mathpunct{,\,}} \newcommand{\ty}{\mathrel{::}} \newcommand{\asn}{\mathrel{:=}} \newcommand{\more}{\ldots} \newcommand{\record}[1]{\lparr #1 \rparr} \newcommand{\dtt}{\mathord.} \newcommand\lbrakk{\mathopen{[\![}} \newcommand\rbrakk{\mathclose{]\!]}} \newcommand\List[1]{\lbrakk#1\rbrakk} %was \obj \newcommand\vpile[1]{\begin{array}{c}#1\end{array}} \newenvironment{matharray}[1]{\[\begin{array}{#1}}{\end{array}\]} \newcommand{\Text}[1]{\mbox{#1}} \DeclareMathSymbol{\dshsym}{\mathalpha}{letters}{"2D} \newcommand{\dsh}{\mathit{\dshsym}} \let\int=\cap \let\un=\cup \let\inter=\bigcap \let\union=\bigcup \def\ML{{\sc ml}} \def\AST{{\sc ast}} %macros to change the treatment of symbols \def\relsemicolon{\mathcode`\;="303B} %treat ; like a relation \def\binperiod{\mathcode`\.="213A} %treat . like a binary operator \def\binvert{\mathcode`\|="226A} %treat | like a binary operator %redefinition of \sloppy and \fussy to use \emergencystretch \def\sloppy{\tolerance2000 \hfuzz.5pt \vfuzz.5pt \emergencystretch=15pt} \def\fussy{\tolerance200 \hfuzz.1pt \vfuzz.1pt \emergencystretch=0pt} %non-bf version of description \def\descrlabel#1{\hspace\labelsep #1} \def\descr{\list{}{\labelwidth\z@ \itemindent-\leftmargin\let\makelabel\descrlabel}} \let\enddescr\endlist % The mathcodes for the letters A, ..., Z, a, ..., z are changed to % generate text italic rather than math italic by default. This makes % multi-letter identifiers look better. The mathcode for character c % is set to |"7000| (variable family) + |"400| (text italic) + |c|. 
% \DeclareSymbolFont{italics}{\encodingdefault}{\rmdefault}{m}{it}% \def\@setmcodes#1#2#3{{\count0=#1 \count1=#3 \loop \global\mathcode\count0=\count1 \ifnum \count0<#2 \advance\count0 by1 \advance\count1 by1 \repeat}} \@setmcodes{`A}{`Z}{"7\hexnumber@\symitalics41} \@setmcodes{`a}{`z}{"7\hexnumber@\symitalics61} diff --git a/src/Doc/Typeclass_Hierarchy/document/build b/src/Doc/Typeclass_Hierarchy/document/build deleted file mode 100755 --- a/src/Doc/Typeclass_Hierarchy/document/build +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/env bash - -set -e - -FORMAT="$1" -VARIANT="$2" - -"$ISABELLE_TOOL" logo Isar -"$ISABELLE_HOME/src/Doc/prepare_document" "$FORMAT" - diff --git a/src/Doc/Typeclass_Hierarchy/document/root.tex b/src/Doc/Typeclass_Hierarchy/document/root.tex --- a/src/Doc/Typeclass_Hierarchy/document/root.tex +++ b/src/Doc/Typeclass_Hierarchy/document/root.tex @@ -1,38 +1,38 @@ \documentclass[12pt,a4paper,fleqn]{article} \usepackage{latexsym,graphicx} \usepackage{iman,extra,isar} \usepackage{isabelle,isabellesym} \usepackage{style} \usepackage{pdfsetup} \hyphenation{Isabelle} \hyphenation{Isar} \isadroptag{theory} -\title{\includegraphics[scale=0.5]{isabelle_isar} +\title{\includegraphics[scale=0.5]{isabelle_logo} \\[4ex] The {Isabelle/HOL} type-class hierarchy} \author{\emph{Florian Haftmann}} \begin{document} \maketitle \begin{abstract} \noindent This primer introduces cornerstones of the {Isabelle/HOL} type-class hierarchy and gives some insights into its internal organization. \end{abstract} \thispagestyle{empty}\clearpage \pagenumbering{roman} \clearfirst \input{Typeclass_Hierarchy.tex} \begingroup \bibliographystyle{plain} \small\raggedright\frenchspacing \bibliography{manual} \endgroup \end{document} diff --git a/src/Doc/antiquote_setup.ML b/src/Doc/antiquote_setup.ML --- a/src/Doc/antiquote_setup.ML +++ b/src/Doc/antiquote_setup.ML @@ -1,210 +1,158 @@ (* Title: Doc/antiquote_setup.ML Author: Makarius Auxiliary antiquotations for the Isabelle manuals. *) structure Antiquote_Setup: sig end = struct (* misc utils *) fun translate f = Symbol.explode #> map f #> implode; val clean_string = translate (fn "_" => "\\_" | "#" => "\\#" | "$" => "\\$" | "%" => "\\%" | "<" => "$<$" | ">" => "$>$" | "{" => "\\{" | "|" => "$\\mid$" | "}" => "\\}" | "\" => "-" | c => c); fun clean_name "\" = "dots" | clean_name ".." = "ddot" | clean_name "."
= "dot" | clean_name "_" = "underscore" | clean_name "{" = "braceleft" | clean_name "}" = "braceright" | clean_name s = s |> translate (fn "_" => "-" | "\" => "-" | c => c); -(* ML text *) - -local - -fun ml_val (toks1, []) = ML_Lex.read "fn _ => (" @ toks1 @ ML_Lex.read ");" - | ml_val (toks1, toks2) = - ML_Lex.read "fn _ => (" @ toks1 @ ML_Lex.read " : " @ toks2 @ ML_Lex.read ");"; - -fun ml_op (toks1, []) = ML_Lex.read "fn _ => (op " @ toks1 @ ML_Lex.read ");" - | ml_op (toks1, toks2) = - ML_Lex.read "fn _ => (op " @ toks1 @ ML_Lex.read " : " @ toks2 @ ML_Lex.read ");"; - -fun ml_type (toks1, []) = ML_Lex.read "val _ = NONE : (" @ toks1 @ ML_Lex.read ") option;" - | ml_type (toks1, toks2) = - ML_Lex.read "val _ = [NONE : (" @ toks1 @ ML_Lex.read ") option, NONE : (" @ - toks2 @ ML_Lex.read ") option];"; - -fun ml_exception (toks1, []) = ML_Lex.read "fn _ => (" @ toks1 @ ML_Lex.read " : exn);" - | ml_exception (toks1, toks2) = - ML_Lex.read "fn _ => (" @ toks1 @ ML_Lex.read " : " @ toks2 @ ML_Lex.read " -> exn);"; - -fun ml_structure (toks, _) = - ML_Lex.read "functor XXX() = struct structure XX = " @ toks @ ML_Lex.read " end;"; - -fun ml_functor (Antiquote.Text tok :: _, _) = - ML_Lex.read "ML_Env.check_functor " @ - ML_Lex.read (ML_Syntax.print_string (ML_Lex.content_of tok)) - | ml_functor _ = raise Fail "Bad ML functor specification"; - -val is_name = - ML_Lex.kind_of #> (fn kind => kind = ML_Lex.Ident orelse kind = ML_Lex.Long_Ident); - -fun ml_name txt = - (case filter is_name (ML_Lex.tokenize txt) of - toks as [_] => ML_Lex.flatten toks - | _ => error ("Single ML name expected in input: " ^ quote txt)); - -fun prep_ml source = - (#1 (Input.source_content source), ML_Lex.read_source source); - -fun index_ml name kind ml = Thy_Output.antiquotation_raw name - (Scan.lift (Args.text_input -- Scan.option (Args.colon |-- Args.text_input))) - (fn ctxt => fn (source1, opt_source2) => - let - val (txt1, toks1) = prep_ml source1; - val (txt2, toks2) = - (case opt_source2 of - SOME source => prep_ml source - | NONE => ("", [])); - - val txt = - if txt2 = "" then txt1 - else if kind = "type" then txt1 ^ " = " ^ txt2 - else if kind = "exception" then txt1 ^ " of " ^ txt2 - else if Symbol_Pos.is_identifier (Long_Name.base_name (ml_name txt1)) - then txt1 ^ ": " ^ txt2 - else txt1 ^ " : " ^ txt2; - val txt' = if kind = "" then txt else kind ^ " " ^ txt; - - val pos = Input.pos_of source1; - val _ = - ML_Context.eval_in (SOME ctxt) ML_Compiler.flags pos (ml (toks1, toks2)) - handle ERROR msg => error (msg ^ Position.here pos); - val kind' = if kind = "" then "ML" else "ML " ^ kind; - in - Latex.block - [Latex.string ("\\indexdef{}{" ^ kind' ^ "}{" ^ clean_string (ml_name txt1) ^ "}"), - Thy_Output.verbatim ctxt txt'] - end); - -in - -val _ = - Theory.setup - (index_ml \<^binding>\index_ML\ "" ml_val #> - index_ml \<^binding>\index_ML_op\ "infix" ml_op #> - index_ml \<^binding>\index_ML_type\ "type" ml_type #> - index_ml \<^binding>\index_ML_exception\ "exception" ml_exception #> - index_ml \<^binding>\index_ML_structure\ "structure" ml_structure #> - index_ml \<^binding>\index_ML_functor\ "functor" ml_functor); - -end; - - (* named theorems *) val _ = - Theory.setup (Thy_Output.antiquotation_raw \<^binding>\named_thms\ + Theory.setup (Document_Output.antiquotation_raw \<^binding>\named_thms\ (Scan.repeat (Attrib.thm -- Scan.lift (Args.parens Args.name))) (fn ctxt => map (fn (thm, name) => Output.output (Document_Antiquotation.format ctxt - (Document_Antiquotation.delimit ctxt 
(Thy_Output.pretty_thm ctxt thm))) ^ + (Document_Antiquotation.delimit ctxt (Document_Output.pretty_thm ctxt thm))) ^ enclose "\\rulename{" "}" (Output.output name)) #> space_implode "\\par\\smallskip%\n" #> Latex.string #> single - #> Thy_Output.isabelle ctxt)); + #> Document_Output.isabelle ctxt)); (* Isabelle/Isar entities (with index) *) local fun no_check (_: Proof.context) (name, _: Position.T) = name; fun check_keyword ctxt (name, pos) = if Keyword.is_keyword (Thy_Header.get_keywords' ctxt) name then name else error ("Bad outer syntax keyword " ^ quote name ^ Position.here pos); fun check_system_option ctxt arg = (Completion.check_option (Options.default ()) ctxt arg; true) handle ERROR _ => false; val arg = enclose "{" "}" o clean_string; fun entity check markup binding index = - Thy_Output.antiquotation_raw + Document_Output.antiquotation_raw (binding |> Binding.map_name (fn name => name ^ (case index of NONE => "" | SOME true => "_def" | SOME false => "_ref"))) (Scan.lift (Scan.optional (Args.parens Args.name) "" -- Args.name_position)) (fn ctxt => fn (logic, (name, pos)) => let val kind = translate (fn "_" => " " | c => c) (Binding.name_of binding); val hyper_name = "{" ^ Long_Name.append kind (Long_Name.append logic (clean_name name)) ^ "}"; val hyper = enclose ("\\hyperlink" ^ hyper_name ^ "{") "}" #> index = SOME true ? enclose ("\\hypertarget" ^ hyper_name ^ "{") "}"; val idx = (case index of NONE => "" | SOME is_def => "\\index" ^ (if is_def then "def" else "ref") ^ arg logic ^ arg kind ^ arg name); val _ = if Context_Position.is_reported ctxt pos then ignore (check ctxt (name, pos)) else (); val latex = idx ^ (Output.output name |> (if markup = "" then I else enclose ("\\" ^ markup ^ "{") "}") |> hyper o enclose "\\mbox{\\isa{" "}}"); in Latex.string latex end); fun entity_antiqs check markup kind = entity check markup kind NONE #> entity check markup kind (SOME true) #> entity check markup kind (SOME false); in val _ = Theory.setup (entity_antiqs no_check "" \<^binding>\syntax\ #> entity_antiqs Outer_Syntax.check_command "isacommand" \<^binding>\command\ #> entity_antiqs check_keyword "isakeyword" \<^binding>\keyword\ #> entity_antiqs check_keyword "isakeyword" \<^binding>\element\ #> entity_antiqs Method.check_name "" \<^binding>\method\ #> entity_antiqs Attrib.check_name "" \<^binding>\attribute\ #> entity_antiqs no_check "" \<^binding>\fact\ #> entity_antiqs no_check "" \<^binding>\variable\ #> entity_antiqs no_check "" \<^binding>\case\ #> entity_antiqs Document_Antiquotation.check "" \<^binding>\antiquotation\ #> entity_antiqs Document_Antiquotation.check_option "" \<^binding>\antiquotation_option\ #> entity_antiqs Document_Marker.check "" \<^binding>\document_marker\ #> entity_antiqs no_check "isasystem" \<^binding>\setting\ #> entity_antiqs check_system_option "isasystem" \<^binding>\system_option\ #> entity_antiqs no_check "" \<^binding>\inference\ #> entity_antiqs no_check "isasystem" \<^binding>\executable\ #> entity_antiqs Isabelle_Tool.check "isatool" \<^binding>\tool\ #> entity_antiqs ML_Context.check_antiquotation "" \<^binding>\ML_antiquotation\ #> entity_antiqs (K JEdit.check_action) "isasystem" \<^binding>\action\); end; + +(* show symbols *) + +val _ = + Theory.setup (Document_Output.antiquotation_raw \<^binding>\show_symbols\ (Scan.succeed ()) + (fn _ => fn _ => + let + val symbol_name = + unprefix "\\newcommand{\\isasym" + #> raw_explode + #> take_prefix Symbol.is_ascii_letter + #> implode; + + val symbols = + File.read 
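(* each relevant line of the style file has the form \newcommand{\isasymNAME}{...}; collect NAME and render the corresponding symbol in a table row *)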
\<^file>\~~/lib/texinputs/isabellesym.sty\ + |> split_lines + |> map_filter (fn line => + (case try symbol_name line of + NONE => NONE + | SOME "" => NONE + | SOME name => SOME ("\\verb,\\" ^ "<" ^ name ^ ">, & {\\isasym" ^ name ^ "}"))); + + val eol = "\\\\\n"; + fun table (a :: b :: rest) = a ^ " & " ^ b ^ eol :: table rest + | table [a] = [a ^ eol] + | table [] = []; + in + Latex.string + ("\\begin{supertabular}{ll@{\\qquad}ll}\n" ^ implode (table symbols) ^ + "\\end{supertabular}\n") + end)) + end; diff --git a/src/Doc/fixbookmarks b/src/Doc/fixbookmarks deleted file mode 100755 --- a/src/Doc/fixbookmarks +++ /dev/null @@ -1,3 +0,0 @@ -#!/usr/bin/env bash - -perl -pi -e 's/\\([a-zA-Z]+)\s*/$1/g; s/\$//g; s/^BOOKMARK/\\BOOKMARK/g;' "$@" diff --git a/src/Doc/iman.sty b/src/Doc/iman.sty --- a/src/Doc/iman.sty +++ b/src/Doc/iman.sty @@ -1,153 +1,147 @@ % iman.sty : Isabelle Manual Page Layout % \typeout{Document Style iman. Released 17 February 1994} \hyphenation{Isa-belle man-u-script man-u-scripts ap-pen-dix mut-u-al-ly} \hyphenation{data-type data-types co-data-type co-data-types } \let\ts=\thinspace %usage: \iflabelundefined{LABEL}{if not defined}{if defined} \newcommand{\iflabelundefined}[1]{\@ifundefined{r@#1}} %%%INDEXING use sedindex to process the index \newcommand\seealso[2]{\emph{see also} #1} \usepackage{makeidx} %index, putting page numbers of definitions in boldface \def\bold#1{\textbf{#1}} \newcommand\fnote[1]{#1n} \newcommand\indexbold[1]{\index{#1|bold}} %for indexing constants, symbols, theorems, ... \newcommand\cdx[1]{{\tt#1}\index{#1@{\tt#1} constant}} \newcommand\sdx[1]{{\tt#1}\index{#1@{\tt#1} symbol}} \newcommand\tdx[1]{{\tt#1}\index{#1@{\tt#1} theorem}} \newcommand\tdxbold[1]{{\tt#1}\index{#1@{\tt#1} theorem|bold}} \newcommand\mltydx[1]{{\tt#1}\index{#1@{\tt#1} ML type}} \newcommand\xdx[1]{{\tt#1}\index{#1@{\tt#1} exception}} -\newcommand\ndx[1]{{\tt#1}\index{#1@{\tt#1} nonterminal}} -\newcommand\ndxbold[1]{{\tt#1}\index{#1@{\tt#1} nonterminal|bold}} - \newcommand\cldx[1]{{\tt#1}\index{#1@{\tt#1} class}} \newcommand\tydx[1]{\textit{#1}\index{#1@{\textit{#1}} type}} \newcommand\thydx[1]{{\tt#1}\index{#1@{\tt#1} theory}} -\newcommand\tooldx[1]{{\tt#1}\index{#1@{\tt#1} tool}} -\newcommand\settdx[1]{{\tt#1}\index{#1@{\tt#1} setting}} - %set argument in \tt font; at the same time, index using * prefix \newcommand\rmindex[1]{{#1}\index{#1}\@} \newcommand\ttindex[1]{{\tt#1}\index{*#1}\@} \newcommand\ttindexbold[1]{{\tt#1}\index{*#1|bold}\@} %set argument in \bf font and index in ROMAN font (for definitions in text!) \newcommand\bfindex[1]{{\bf#1}\index{#1|bold}\@} %%% underscores as ordinary characters, not for subscripting %% use @ or \sb for subscripting; use \at for @ %% only works in \tt font %% must not make _ an active char; would make \ttindex fail! \gdef\underscoreoff{\catcode`\@=8\catcode`\_=\other} \gdef\underscoreon{\catcode`\_=8\makeatother} \chardef\other=12 \chardef\at=`\@ % alternative underscore \def\_{\leavevmode\kern.06em\vbox{\hrule height.2ex width.3em}\hskip0.1em} %%% \dquotes permits usage of "..." 
for \hbox{...} -- also taken from under.sty {\catcode`\"=\active \gdef\dquotes{\catcode`\"=\active \let"=\@mathText}% \gdef\@mathText#1"{\hbox{\mathTextFont #1\/}}} \def\mathTextFont{\frenchspacing\tt} \def\dquotesoff{\catcode`\"=\other} %%%% meta-logical connectives \let\Forall=\bigwedge \let\Imp=\Longrightarrow \let\To=\Rightarrow \newcommand{\PROP}{\mathop{\mathrm{PROP}}} \newcommand{\Var}[1]{{?\!#1}} \newcommand{\All}[1]{\Forall#1.} %quantification %%%% ``WARNING'' environment \def\dbend{\vtop to 0pt{\vss\hbox{\Huge\bf!}\vss}} \newenvironment{warn}{\medskip\medbreak\begingroup \clubpenalty=10000 \small %%WAS\baselineskip=0.9\baselineskip \noindent \ifdim\parindent > 0pt\hangindent\parindent\else\hangindent1.5em\fi \hangafter=-2 \hbox to0pt{\hskip-\hangindent\dbend\hfill}\ignorespaces}% {\par\endgroup\medbreak} %%%% Standard logical symbols \let\turn=\vdash \let\conj=\wedge \let\disj=\vee \let\imp=\rightarrow \let\bimp=\leftrightarrow \newcommand\all[1]{\forall#1.} %quantification \newcommand\ex[1]{\exists#1.} \newcommand{\pair}[1]{\langle#1\rangle} \newcommand{\lparr}{\mathopen{(\!|}} \newcommand{\rparr}{\mathclose{|\!)}} \newcommand{\fs}{\mathpunct{,\,}} \newcommand{\ty}{\mathrel{::}} \newcommand{\asn}{\mathrel{:=}} \newcommand{\more}{\ldots} \newcommand{\record}[1]{\lparr #1 \rparr} \newcommand{\dtt}{\mathord.} \newcommand\lbrakk{\mathopen{[\![}} \newcommand\rbrakk{\mathclose{]\!]}} \newcommand\List[1]{\lbrakk#1\rbrakk} %was \obj \newcommand\vpile[1]{\begin{array}{c}#1\end{array}} \newenvironment{matharray}[1]{\[\begin{array}{#1}}{\end{array}\]} \newcommand{\Text}[1]{\mbox{#1}} \DeclareMathSymbol{\dshsym}{\mathalpha}{letters}{"2D} \newcommand{\dsh}{\mathit{\dshsym}} \let\int=\cap \let\un=\cup \let\inter=\bigcap \let\union=\bigcup \def\ML{{\sc ml}} \def\OBJ{{\sc obj}} \def\AST{{\sc ast}} %macros to change the treatment of symbols \def\relsemicolon{\mathcode`\;="303B} %treat ; like a relation \def\binperiod{\mathcode`\.="213A} %treat . like a binary operator \def\binvert{\mathcode`\|="226A} %treat | like a binary operator %redefinition of \sloppy and \fussy to use \emergencystretch \def\sloppy{\tolerance2000 \hfuzz.5pt \vfuzz.5pt \emergencystretch=15pt} \def\fussy{\tolerance200 \hfuzz.1pt \vfuzz.1pt \emergencystretch=0pt} %non-bf version of description \def\descrlabel#1{\hspace\labelsep #1} \def\descr{\list{}{\labelwidth\z@ \itemindent-\leftmargin\let\makelabel\descrlabel}} \let\enddescr\endlist % The mathcodes for the letters A, ..., Z, a, ..., z are changed to % generate text italic rather than math italic by default. This makes % multi-letter identifiers look better. The mathcode for character c % is set to |"7000| (variable family) + |"400| (text italic) + |c|. 
% \DeclareSymbolFont{italics}{\encodingdefault}{\rmdefault}{m}{it}% \def\@setmcodes#1#2#3{{\count0=#1 \count1=#3 \loop \global\mathcode\count0=\count1 \ifnum \count0<#2 \advance\count0 by1 \advance\count1 by1 \repeat}} \@setmcodes{`A}{`Z}{"7\hexnumber@\symitalics41} \@setmcodes{`a}{`z}{"7\hexnumber@\symitalics61} diff --git a/src/Doc/isar.sty b/src/Doc/isar.sty --- a/src/Doc/isar.sty +++ b/src/Doc/isar.sty @@ -1,29 +1,25 @@ \usepackage{ifthen} \newcommand{\indexdef}[3]% {\ifthenelse{\equal{}{#1}}{\index{#3 (#2)|bold}}{\index{#3 (#1\ #2)|bold}}} \newcommand{\indexref}[3]{\ifthenelse{\equal{}{#1}}{\index{#3 (#2)}}{\index{#3 (#1\ #2)}}} \newcommand{\isadigitreset}{\def\isadigit##1{##1}} \newcommand{\isasystem}[1]{{\def\isacharminus{-}\def\isacharunderscore{\_}\isadigitreset\tt #1}} \newcommand{\isatool}[1]{{\def\isacharminus{-}\def\isacharunderscore{\_}\isadigitreset\tt isabelle #1}} -\newcommand{\indexoutertoken}[1]{\indexdef{}{syntax}{#1}} -\newcommand{\indexouternonterm}[1]{\indexdef{}{syntax}{#1}} -\newcommand{\indexisarelem}[1]{\indexdef{}{element}{#1}} - \newcommand{\isasymIF}{\isakeyword{if}} \newcommand{\isasymFOR}{\isakeyword{for}} \newcommand{\isasymAND}{\isakeyword{and}} \newcommand{\isasymIS}{\isakeyword{is}} \newcommand{\isasymWHERE}{\isakeyword{where}} \newcommand{\isasymBEGIN}{\isakeyword{begin}} \newcommand{\isasymIMPORTS}{\isakeyword{imports}} \newcommand{\isasymIN}{\isakeyword{in}} \newcommand{\isasymFIXES}{\isakeyword{fixes}} \newcommand{\isasymASSUMES}{\isakeyword{assumes}} \newcommand{\isasymSHOWS}{\isakeyword{shows}} \newcommand{\isasymOBTAINS}{\isakeyword{obtains}} \newcommand{\isasymASSM}{\isacommand{assm}} diff --git a/src/Doc/more_antiquote.ML b/src/Doc/more_antiquote.ML --- a/src/Doc/more_antiquote.ML +++ b/src/Doc/more_antiquote.ML @@ -1,38 +1,38 @@ (* Title: Doc/more_antiquote.ML Author: Florian Haftmann, TU Muenchen More antiquotations (partly depending on Isabelle/HOL). *) structure More_Antiquote : sig end = struct (* class specifications *) val _ = - Theory.setup (Thy_Output.antiquotation_pretty \<^binding>\class_spec\ (Scan.lift Args.name) + Theory.setup (Document_Output.antiquotation_pretty \<^binding>\class_spec\ (Scan.lift Args.name) (fn ctxt => fn s => let val thy = Proof_Context.theory_of ctxt; val class = Sign.intern_class thy s; in Pretty.chunks (Class.pretty_specification thy class) end)); (* code theorem antiquotation *) val _ = - Theory.setup (Thy_Output.antiquotation_pretty \<^binding>\code_thms\ Args.term + Theory.setup (Document_Output.antiquotation_pretty \<^binding>\code_thms\ Args.term (fn ctxt => fn raw_const => let val thy = Proof_Context.theory_of ctxt; val const = Code.check_const thy raw_const; val { eqngr, ... } = Code_Preproc.obtain true { ctxt = ctxt, consts = [const], terms = [] }; val thms = Code_Preproc.cert eqngr const |> Code.equations_of_cert thy |> snd |> these |> map_filter (fn (_, (some_thm, proper)) => if proper then some_thm else NONE) |> map (HOLogic.mk_obj_eq o Variable.import_vars ctxt o Axclass.overload ctxt); - in Pretty.chunks (map (Thy_Output.pretty_thm ctxt) thms) end)); + in Pretty.chunks (map (Document_Output.pretty_thm ctxt) thms) end)); end; diff --git a/src/Doc/prepare_document b/src/Doc/prepare_document deleted file mode 100755 --- a/src/Doc/prepare_document +++ /dev/null @@ -1,16 +0,0 @@ -#!/usr/bin/env bash - -set -e - -FORMAT="$1" - -isabelle latex -o sty -cp "$ISABELLE_HOME/src/Doc/pdfsetup.sty" . 
- -isabelle latex -o "$FORMAT" -isabelle latex -o bbl -[ -f root.idx ] && "$ISABELLE_HOME/src/Doc/sedindex" root -isabelle latex -o "$FORMAT" -[ -f root.out ] && "$ISABELLE_HOME/src/Doc/fixbookmarks" root.out -isabelle latex -o "$FORMAT" - diff --git a/src/Doc/sedindex b/src/Doc/sedindex --- a/src/Doc/sedindex +++ b/src/Doc/sedindex @@ -1,21 +1,21 @@ #! /bin/sh # #sedindex - shell script to create indexes, preprocessing LaTeX's .idx file # # puts strings prefixed by * into \tt font # terminator characters for strings are |!@{} # # a space terminates the \tt part to allow \index{*notE theorem}, etc. # # change *"X"Y"Z"W to "X"Y"Z"W@{\tt "X"Y"Z"W} # change *"X"Y"Z to "X"Y"Z@{\tt "X"Y"Z} # change *"X"Y to "X"Y@{\tt "X"Y} # change *"X to "X@{\tt "X} # change *IDENT to IDENT@{\tt IDENT} # where IDENT is any string not containing | ! or @ # FOUR backslashes: to escape the shell AND sed sed -e "s~\*\(\".\".\".\".\)~\1@{\\\\tt \1}~g s~\*\(\".\".\".\)~\1@{\\\\tt \1}~g s~\*\(\".\".\)~\1@{\\\\tt \1}~g s~\*\(\".\)~\1@{\\\\tt \1}~g -s~\*\([^ |!@{}][^ |!@{}]*\)~\1@{\\\\tt \1}~g" $1.idx | makeindex -c -q -o $1.ind +s~\*\([^ |!@{}][^ |!@{}]*\)~\1@{\\\\tt \1}~g" $1.idx | $ISABELLE_MAKEINDEX -o $1.ind diff --git a/src/HOL/Library/Code_Lazy.thy b/src/HOL/Library/Code_Lazy.thy --- a/src/HOL/Library/Code_Lazy.thy +++ b/src/HOL/Library/Code_Lazy.thy @@ -1,238 +1,238 @@ (* Author: Pascal Stoop, ETH Zurich Author: Andreas Lochbihler, Digital Asset *) section \Lazy types in generated code\ theory Code_Lazy imports Case_Converter keywords "code_lazy_type" "activate_lazy_type" "deactivate_lazy_type" "activate_lazy_types" "deactivate_lazy_types" "print_lazy_types" :: thy_decl begin text \ This theory and the CodeLazy tool are described in @{cite "LochbihlerStoop2018"}. It hooks into Isabelle's code generator such that the generated code evaluates a user-specified set of type constructors lazily, even in target languages with eager evaluation. The lazy type must be algebraic, i.e., values must be built from constructors and a corresponding case operator decomposes them. Every datatype and codatatype is algebraic and thus eligible for lazification. \ subsection \The type \lazy\\ typedef 'a lazy = "UNIV :: 'a set" .. setup_lifting type_definition_lazy lift_definition delay :: "(unit \ 'a) \ 'a lazy" is "\f. f ()" . lift_definition force :: "'a lazy \ 'a" is "\x. x" . code_datatype delay lemma force_delay [code]: "force (delay f) = f ()" by transfer (rule refl) lemma delay_force: "delay (\_. force s) = s" by transfer (rule refl) definition termify_lazy2 :: "'a :: typerep lazy \ term" where "termify_lazy2 x = Code_Evaluation.App (Code_Evaluation.Const (STR ''Code_Lazy.delay'') (TYPEREP((unit \ 'a) \ 'a lazy))) (Code_Evaluation.Const (STR ''Pure.dummy_pattern'') (TYPEREP((unit \ 'a))))" definition termify_lazy :: "(String.literal \ 'typerep \ 'term) \ ('term \ 'term \ 'term) \ (String.literal \ 'typerep \ 'term \ 'term) \ 'typerep \ ('typerep \ 'typerep \ 'typerep) \ ('typerep \ 'typerep) \ ('a \ 'term) \ 'typerep \ 'a :: typerep lazy \ 'term \ term" where "termify_lazy _ _ _ _ _ _ _ _ x _ = termify_lazy2 x" declare [[code drop: "Code_Evaluation.term_of :: _ lazy \ _"]] lemma term_of_lazy_code [code]: "Code_Evaluation.term_of x \ termify_lazy Code_Evaluation.Const Code_Evaluation.App Code_Evaluation.Abs TYPEREP(unit) (\T U. typerep.Typerep (STR ''fun'') [T, U]) (\T.
typerep.Typerep (STR ''Code_Lazy.lazy'') [T]) Code_Evaluation.term_of TYPEREP('a) x (Code_Evaluation.Const (STR '''') (TYPEREP(unit)))" for x :: "'a :: {typerep, term_of} lazy" by (rule term_of_anything) text \ The implementations of \<^typ>\_ lazy\ using language primitives cache forced values. Term reconstruction for lazy looks into the lazy value and reconstructs it to the depth to which it has been evaluated. This is not done for Haskell as we do not know of any portable way to inspect whether a lazy value has been evaluated or not. \ code_printing code_module Lazy \ (SML) \signature LAZY = sig type 'a lazy; val lazy : (unit -> 'a) -> 'a lazy; val force : 'a lazy -> 'a; val peek : 'a lazy -> 'a option val termify_lazy : (string -> 'typerep -> 'term) -> ('term -> 'term -> 'term) -> (string -> 'typerep -> 'term -> 'term) -> 'typerep -> ('typerep -> 'typerep -> 'typerep) -> ('typerep -> 'typerep) -> ('a -> 'term) -> 'typerep -> 'a lazy -> 'term -> 'term; end; structure Lazy : LAZY = struct datatype 'a content = Delay of unit -> 'a | Value of 'a | Exn of exn; datatype 'a lazy = Lazy of 'a content ref; fun lazy f = Lazy (ref (Delay f)); fun force (Lazy x) = case !x of Delay f => ( let val res = f (); val _ = x := Value res; in res end handle exn => (x := Exn exn; raise exn)) | Value x => x | Exn exn => raise exn; fun peek (Lazy x) = case !x of Value x => SOME x | _ => NONE; fun termify_lazy const app abs unitT funT lazyT term_of T x _ = app (const "Code_Lazy.delay" (funT (funT unitT T) (lazyT T))) (case peek x of SOME y => abs "_" unitT (term_of y) | _ => const "Pure.dummy_pattern" (funT unitT T)); end;\ for type_constructor lazy constant delay force termify_lazy | type_constructor lazy \ (SML) "_ Lazy.lazy" | constant delay \ (SML) "Lazy.lazy" | constant force \ (SML) "Lazy.force" | constant termify_lazy \ (SML) "Lazy.termify'_lazy" code_reserved SML Lazy code_printing \ \For code generation within the Isabelle environment, we reuse the thread-safe implementation of lazy from \<^file>\~~/src/Pure/Concurrent/lazy.ML\\ code_module Lazy \ (Eval) \\ for constant undefined | type_constructor lazy \ (Eval) "_ Lazy.lazy" | constant delay \ (Eval) "Lazy.lazy" | constant force \ (Eval) "Lazy.force" | code_module Termify_Lazy \ (Eval) \structure Termify_Lazy = struct fun termify_lazy (_: string -> typ -> term) (_: term -> term -> term) (_: string -> typ -> term -> term) (_: typ) (_: typ -> typ -> typ) (_: typ -> typ) (term_of: 'a -> term) (T: typ) (x: 'a Lazy.lazy) (_: term) = Const ("Code_Lazy.delay", (HOLogic.unitT --> T) --> Type ("Code_Lazy.lazy", [T])) $ (case Lazy.peek x of SOME (Exn.Res x) => absdummy HOLogic.unitT (term_of x) | _ => Const ("Pure.dummy_pattern", HOLogic.unitT --> T)); end;\ for constant termify_lazy | constant termify_lazy \ (Eval) "Termify'_Lazy.termify'_lazy" code_reserved Eval Termify_Lazy code_printing type_constructor lazy \ (OCaml) "_ Lazy.t" | constant delay \ (OCaml) "Lazy.from'_fun" | constant force \ (OCaml) "Lazy.force" | code_module Termify_Lazy \ (OCaml) \module Termify_Lazy : sig val termify_lazy : (string -> 'typerep -> 'term) -> ('term -> 'term -> 'term) -> (string -> 'typerep -> 'term -> 'term) -> 'typerep -> ('typerep -> 'typerep -> 'typerep) -> ('typerep -> 'typerep) -> ('a -> 'term) -> 'typerep -> 'a Lazy.t -> 'term -> 'term end = struct let termify_lazy const app abs unitT funT lazyT term_of ty x _ = app (const "Code_Lazy.delay" (funT (funT unitT ty) (lazyT ty))) (if Lazy.is_val x then abs "_" unitT (term_of (Lazy.force x)) else const "Pure.dummy_pattern" (funT
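(* not yet evaluated: reconstruct as a dummy pattern instead of forcing x *)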
unitT ty));; end;;\ for constant termify_lazy | constant termify_lazy \ (OCaml) "Termify'_Lazy.termify'_lazy" code_reserved OCaml Lazy Termify_Lazy code_printing code_module Lazy \ (Haskell) \ module Lazy(Lazy, delay, force) where newtype Lazy a = Lazy a delay f = Lazy (f ()) force (Lazy x) = x\ for type_constructor lazy constant delay force | type_constructor lazy \ (Haskell) "Lazy.Lazy _" | constant delay \ (Haskell) "Lazy.delay" | constant force \ (Haskell) "Lazy.force" code_reserved Haskell Lazy code_printing code_module Lazy \ (Scala) \object Lazy { final class Lazy[A] (f: Unit => A) { var evaluated = false; - lazy val x: A = f () + lazy val x: A = f(()) def get() : A = { evaluated = true; return x } } def force[A] (x: Lazy[A]) : A = { return x.get() } def delay[A] (f: Unit => A) : Lazy[A] = { return new Lazy[A] (f) } def termify_lazy[Typerep, Term, A] ( const: String => Typerep => Term, app: Term => Term => Term, abs: String => Typerep => Term => Term, unitT: Typerep, funT: Typerep => Typerep => Typerep, lazyT: Typerep => Typerep, term_of: A => Term, ty: Typerep, x: Lazy[A], dummy: Term) : Term = { if (x.evaluated) - app(const("Code_Lazy.delay")(funT(funT(unitT)(ty))(lazyT(ty))))(abs("_")(unitT)(term_of(x.get))) + app(const("Code_Lazy.delay")(funT(funT(unitT)(ty))(lazyT(ty))))(abs("_")(unitT)(term_of(x.get()))) else app(const("Code_Lazy.delay")(funT(funT(unitT)(ty))(lazyT(ty))))(const("Pure.dummy_pattern")(funT(unitT)(ty))) } }\ for type_constructor lazy constant delay force termify_lazy | type_constructor lazy \ (Scala) "Lazy.Lazy[_]" | constant delay \ (Scala) "Lazy.delay" | constant force \ (Scala) "Lazy.force" | constant termify_lazy \ (Scala) "Lazy.termify'_lazy" code_reserved Scala Lazy text \Make evaluation with the simplifier respect \<^term>\delay\s.\ lemma delay_lazy_cong: "delay f = delay f" by simp setup \Code_Simp.map_ss (Simplifier.add_cong @{thm delay_lazy_cong})\ subsection \Implementation\ ML_file \code_lazy.ML\ setup \ Code_Preproc.add_functrans ("lazy_datatype", Code_Lazy.transform_code_eqs) \ end diff --git a/src/HOL/Library/LaTeXsugar.thy b/src/HOL/Library/LaTeXsugar.thy --- a/src/HOL/Library/LaTeXsugar.thy +++ b/src/HOL/Library/LaTeXsugar.thy @@ -1,152 +1,152 @@ (* Title: HOL/Library/LaTeXsugar.thy Author: Gerwin Klein, Tobias Nipkow, Norbert Schirmer Copyright 2005 NICTA and TUM *) (*<*) theory LaTeXsugar imports Main begin (* LOGIC *) notation (latex output) If ("(\<^latex>\\\textsf{\if\<^latex>\}\ (_)/ \<^latex>\\\textsf{\then\<^latex>\}\ (_)/ \<^latex>\\\textsf{\else\<^latex>\}\ (_))" 10) syntax (latex output) "_Let" :: "[letbinds, 'a] => 'a" ("(\<^latex>\\\textsf{\let\<^latex>\}\ (_)/ \<^latex>\\\textsf{\in\<^latex>\}\ (_))" 10) "_case_syntax":: "['a, cases_syn] => 'b" ("(\<^latex>\\\textsf{\case\<^latex>\}\ _ \<^latex>\\\textsf{\of\<^latex>\}\/ _)" 10) (* SETS *) (* empty set *) notation (latex) "Set.empty" ("\") (* insert *) translations "{x} \ A" <= "CONST insert x A" "{x,y}" <= "{x} \ {y}" "{x,y} \ A" <= "{x} \ ({y} \ A)" "{x}" <= "{x} \ \" (* set comprehension *) syntax (latex output) "_Collect" :: "pttrn => bool => 'a set" ("(1{_ | _})") "_CollectIn" :: "pttrn => 'a set => bool => 'a set" ("(1{_ \ _ | _})") translations "_Collect p P" <= "{p. P}" "_Collect p P" <= "{p|xs. P}" "_CollectIn p A P" <= "{p : A. 
P}" (* card *) notation (latex output) card ("|_|") (* LISTS *) (* Cons *) notation (latex) Cons ("_ \/ _" [66,65] 65) (* length *) notation (latex output) length ("|_|") (* nth *) notation (latex output) nth ("_\<^latex>\\\ensuremath{_{[\\mathit{\_\<^latex>\}]}}\" [1000,0] 1000) (* DUMMY *) consts DUMMY :: 'a ("\<^latex>\\\_\") (* THEOREMS *) notation (Rule output) Pure.imp ("\<^latex>\\\mbox{}\\inferrule{\\mbox{\_\<^latex>\}}\\<^latex>\{\\mbox{\_\<^latex>\}}\") syntax (Rule output) "_bigimpl" :: "asms \ prop \ prop" ("\<^latex>\\\mbox{}\\inferrule{\_\<^latex>\}\\<^latex>\{\\mbox{\_\<^latex>\}}\") "_asms" :: "prop \ asms \ asms" ("\<^latex>\\\mbox{\_\<^latex>\}\\\\\/ _") "_asm" :: "prop \ asms" ("\<^latex>\\\mbox{\_\<^latex>\}\") notation (Axiom output) "Trueprop" ("\<^latex>\\\mbox{}\\inferrule{\\mbox{}}{\\mbox{\_\<^latex>\}}\") notation (IfThen output) Pure.imp ("\<^latex>\{\\normalsize{}\If\<^latex>\\\,}\ _/ \<^latex>\{\\normalsize \\,\then\<^latex>\\\,}\/ _.") syntax (IfThen output) "_bigimpl" :: "asms \ prop \ prop" ("\<^latex>\{\\normalsize{}\If\<^latex>\\\,}\ _ /\<^latex>\{\\normalsize \\,\then\<^latex>\\\,}\/ _.") "_asms" :: "prop \ asms \ asms" ("\<^latex>\\\mbox{\_\<^latex>\}\ /\<^latex>\{\\normalsize \\,\and\<^latex>\\\,}\/ _") "_asm" :: "prop \ asms" ("\<^latex>\\\mbox{\_\<^latex>\}\") notation (IfThenNoBox output) Pure.imp ("\<^latex>\{\\normalsize{}\If\<^latex>\\\,}\ _/ \<^latex>\{\\normalsize \\,\then\<^latex>\\\,}\/ _.") syntax (IfThenNoBox output) "_bigimpl" :: "asms \ prop \ prop" ("\<^latex>\{\\normalsize{}\If\<^latex>\\\,}\ _ /\<^latex>\{\\normalsize \\,\then\<^latex>\\\,}\/ _.") "_asms" :: "prop \ asms \ asms" ("_ /\<^latex>\{\\normalsize \\,\and\<^latex>\\\,}\/ _") "_asm" :: "prop \ asms" ("_") setup \ - Thy_Output.antiquotation_pretty_source_embedded \<^binding>\const_typ\ + Document_Output.antiquotation_pretty_source_embedded \<^binding>\const_typ\ (Scan.lift Args.embedded_inner_syntax) (fn ctxt => fn c => let val tc = Proof_Context.read_const {proper = false, strict = false} ctxt c in - Pretty.block [Thy_Output.pretty_term ctxt tc, Pretty.str " ::", + Pretty.block [Document_Output.pretty_term ctxt tc, Pretty.str " ::", Pretty.brk 1, Syntax.pretty_typ ctxt (fastype_of tc)] end) \ setup\ let fun dummy_pats (wrap $ (eq $ lhs $ rhs)) = let val rhs_vars = Term.add_vars rhs []; fun dummy (v as Var (ixn as (_, T))) = if member ((=) ) rhs_vars ixn then v else Const (\<^const_name>\DUMMY\, T) | dummy (t $ u) = dummy t $ dummy u | dummy (Abs (n, T, b)) = Abs (n, T, dummy b) | dummy t = t; in wrap $ (eq $ dummy lhs $ rhs) end in Term_Style.setup \<^binding>\dummy_pats\ (Scan.succeed (K dummy_pats)) end \ setup \ let fun eta_expand Ts t xs = case t of Abs(x,T,t) => let val (t', xs') = eta_expand (T::Ts) t xs in (Abs (x, T, t'), xs') end | _ => let val (a,ts) = strip_comb t (* assume a atomic *) val (ts',xs') = fold_map (eta_expand Ts) ts xs val t' = list_comb (a, ts'); val Bs = binder_types (fastype_of1 (Ts,t)); val n = Int.min (length Bs, length xs'); val bs = map Bound ((n - 1) downto 0); val xBs = ListPair.zip (xs',Bs); val xs'' = drop n xs'; val t'' = fold_rev Term.abs xBs (list_comb(t', bs)) in (t'', xs'') end val style_eta_expand = (Scan.repeat Args.name) >> (fn xs => fn ctxt => fn t => fst (eta_expand [] t xs)) in Term_Style.setup \<^binding>\eta_expand\ style_eta_expand end \ end (*>*) diff --git a/src/HOL/Tools/Ctr_Sugar/ctr_sugar.ML b/src/HOL/Tools/Ctr_Sugar/ctr_sugar.ML --- a/src/HOL/Tools/Ctr_Sugar/ctr_sugar.ML +++ b/src/HOL/Tools/Ctr_Sugar/ctr_sugar.ML @@ -1,1270 
+1,1270 @@ (* Title: HOL/Tools/Ctr_Sugar/ctr_sugar.ML Author: Jasmin Blanchette, TU Muenchen Author: Martin Desharnais, TU Muenchen Copyright 2012, 2013 Wrapping existing freely generated type's constructors. *) signature CTR_SUGAR = sig datatype ctr_sugar_kind = Datatype | Codatatype | Record | Unknown type ctr_sugar = {kind: ctr_sugar_kind, T: typ, ctrs: term list, casex: term, discs: term list, selss: term list list, exhaust: thm, nchotomy: thm, injects: thm list, distincts: thm list, case_thms: thm list, case_cong: thm, case_cong_weak: thm, case_distribs: thm list, split: thm, split_asm: thm, disc_defs: thm list, disc_thmss: thm list list, discIs: thm list, disc_eq_cases: thm list, sel_defs: thm list, sel_thmss: thm list list, distinct_discsss: thm list list list, exhaust_discs: thm list, exhaust_sels: thm list, collapses: thm list, expands: thm list, split_sels: thm list, split_sel_asms: thm list, case_eq_ifs: thm list}; val morph_ctr_sugar: morphism -> ctr_sugar -> ctr_sugar val transfer_ctr_sugar: theory -> ctr_sugar -> ctr_sugar val ctr_sugar_of: Proof.context -> string -> ctr_sugar option val ctr_sugar_of_global: theory -> string -> ctr_sugar option val ctr_sugars_of: Proof.context -> ctr_sugar list val ctr_sugars_of_global: theory -> ctr_sugar list val ctr_sugar_of_case: Proof.context -> string -> ctr_sugar option val ctr_sugar_of_case_global: theory -> string -> ctr_sugar option val ctr_sugar_interpretation: string -> (ctr_sugar -> local_theory -> local_theory) -> theory -> theory val interpret_ctr_sugar: (string -> bool) -> ctr_sugar -> local_theory -> local_theory val register_ctr_sugar_raw: ctr_sugar -> local_theory -> local_theory val register_ctr_sugar: (string -> bool) -> ctr_sugar -> local_theory -> local_theory val default_register_ctr_sugar_global: (string -> bool) -> ctr_sugar -> theory -> theory val mk_half_pairss: 'a list * 'a list -> ('a * 'a) list list val join_halves: int -> 'a list list -> 'a list list -> 'a list * 'a list list list val mk_ctr: typ list -> term -> term val mk_case: typ list -> typ -> term -> term val mk_disc_or_sel: typ list -> term -> term val name_of_ctr: term -> string val name_of_disc: term -> string val dest_ctr: Proof.context -> string -> term -> term * term list val dest_case: Proof.context -> string -> typ list -> term -> (ctr_sugar * term list * term list) option type ('c, 'a) ctr_spec = (binding * 'c) * 'a list val disc_of_ctr_spec: ('c, 'a) ctr_spec -> binding val ctr_of_ctr_spec: ('c, 'a) ctr_spec -> 'c val args_of_ctr_spec: ('c, 'a) ctr_spec -> 'a list val code_plugin: string type ctr_options = (string -> bool) * bool type ctr_options_cmd = (Proof.context -> string -> bool) * bool val fake_local_theory_for_sel_defaults: (binding * typ) list -> Proof.context -> Proof.context val free_constructors: ctr_sugar_kind -> ({prems: thm list, context: Proof.context} -> tactic) list list -> ((ctr_options * binding) * (term, binding) ctr_spec list) * term list -> local_theory -> ctr_sugar * local_theory val free_constructors_cmd: ctr_sugar_kind -> ((((Proof.context -> Plugin_Name.filter) * bool) * binding) * ((binding * string) * binding list) list) * string list -> Proof.context -> Proof.state val default_ctr_options: ctr_options val default_ctr_options_cmd: ctr_options_cmd val parse_bound_term: (binding * string) parser val parse_ctr_options: ctr_options_cmd parser val parse_ctr_spec: 'c parser -> 'a parser -> ('c, 'a) ctr_spec parser val parse_sel_default_eqs: string list parser end; structure Ctr_Sugar : CTR_SUGAR = struct open 
Ctr_Sugar_Util open Ctr_Sugar_Tactics open Ctr_Sugar_Code datatype ctr_sugar_kind = Datatype | Codatatype | Record | Unknown; type ctr_sugar = {kind: ctr_sugar_kind, T: typ, ctrs: term list, casex: term, discs: term list, selss: term list list, exhaust: thm, nchotomy: thm, injects: thm list, distincts: thm list, case_thms: thm list, case_cong: thm, case_cong_weak: thm, case_distribs: thm list, split: thm, split_asm: thm, disc_defs: thm list, disc_thmss: thm list list, discIs: thm list, disc_eq_cases: thm list, sel_defs: thm list, sel_thmss: thm list list, distinct_discsss: thm list list list, exhaust_discs: thm list, exhaust_sels: thm list, collapses: thm list, expands: thm list, split_sels: thm list, split_sel_asms: thm list, case_eq_ifs: thm list}; fun morph_ctr_sugar phi ({kind, T, ctrs, casex, discs, selss, exhaust, nchotomy, injects, distincts, case_thms, case_cong, case_cong_weak, case_distribs, split, split_asm, disc_defs, disc_thmss, discIs, disc_eq_cases, sel_defs, sel_thmss, distinct_discsss, exhaust_discs, exhaust_sels, collapses, expands, split_sels, split_sel_asms, case_eq_ifs} : ctr_sugar) = {kind = kind, T = Morphism.typ phi T, ctrs = map (Morphism.term phi) ctrs, casex = Morphism.term phi casex, discs = map (Morphism.term phi) discs, selss = map (map (Morphism.term phi)) selss, exhaust = Morphism.thm phi exhaust, nchotomy = Morphism.thm phi nchotomy, injects = map (Morphism.thm phi) injects, distincts = map (Morphism.thm phi) distincts, case_thms = map (Morphism.thm phi) case_thms, case_cong = Morphism.thm phi case_cong, case_cong_weak = Morphism.thm phi case_cong_weak, case_distribs = map (Morphism.thm phi) case_distribs, split = Morphism.thm phi split, split_asm = Morphism.thm phi split_asm, disc_defs = map (Morphism.thm phi) disc_defs, disc_thmss = map (map (Morphism.thm phi)) disc_thmss, discIs = map (Morphism.thm phi) discIs, disc_eq_cases = map (Morphism.thm phi) disc_eq_cases, sel_defs = map (Morphism.thm phi) sel_defs, sel_thmss = map (map (Morphism.thm phi)) sel_thmss, distinct_discsss = map (map (map (Morphism.thm phi))) distinct_discsss, exhaust_discs = map (Morphism.thm phi) exhaust_discs, exhaust_sels = map (Morphism.thm phi) exhaust_sels, collapses = map (Morphism.thm phi) collapses, expands = map (Morphism.thm phi) expands, split_sels = map (Morphism.thm phi) split_sels, split_sel_asms = map (Morphism.thm phi) split_sel_asms, case_eq_ifs = map (Morphism.thm phi) case_eq_ifs}; val transfer_ctr_sugar = morph_ctr_sugar o Morphism.transfer_morphism; structure Data = Generic_Data ( type T = (Position.T * ctr_sugar) Symtab.table; val empty = Symtab.empty; val extend = I; fun merge data : T = Symtab.merge (K true) data; ); fun ctr_sugar_of_generic context = Option.map (transfer_ctr_sugar (Context.theory_of context) o #2) o Symtab.lookup (Data.get context); fun ctr_sugars_of_generic context = Symtab.fold (cons o transfer_ctr_sugar (Context.theory_of context) o #2 o #2) (Data.get context) []; fun ctr_sugar_of_case_generic context s = find_first (fn {casex = Const (s', _), ...} => s' = s | _ => false) (ctr_sugars_of_generic context); val ctr_sugar_of = ctr_sugar_of_generic o Context.Proof; val ctr_sugar_of_global = ctr_sugar_of_generic o Context.Theory; val ctr_sugars_of = ctr_sugars_of_generic o Context.Proof; val ctr_sugars_of_global = ctr_sugars_of_generic o Context.Theory; val ctr_sugar_of_case = ctr_sugar_of_case_generic o Context.Proof; val ctr_sugar_of_case_global = ctr_sugar_of_case_generic o Context.Theory; structure Ctr_Sugar_Plugin = Plugin(type T = 
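(* plugin infrastructure: interpretations registered via ctr_sugar_interpretation are invoked on each new value of this type *)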
ctr_sugar); fun ctr_sugar_interpretation name f = Ctr_Sugar_Plugin.interpretation name (fn ctr_sugar => fn lthy => f (transfer_ctr_sugar (Proof_Context.theory_of lthy) ctr_sugar) lthy); val interpret_ctr_sugar = Ctr_Sugar_Plugin.data; fun register_ctr_sugar_raw (ctr_sugar as {T = Type (name, _), ...}) = Local_Theory.declaration {syntax = false, pervasive = true} (fn phi => fn context => let val pos = Position.thread_data () in Data.map (Symtab.update (name, (pos, morph_ctr_sugar phi ctr_sugar))) context end); fun register_ctr_sugar plugins ctr_sugar = register_ctr_sugar_raw ctr_sugar #> interpret_ctr_sugar plugins ctr_sugar; fun default_register_ctr_sugar_global plugins (ctr_sugar as {T = Type (name, _), ...}) thy = let val tab = Data.get (Context.Theory thy); val pos = Position.thread_data (); in if Symtab.defined tab name then thy else thy |> Context.theory_map (Data.put (Symtab.update_new (name, (pos, ctr_sugar)) tab)) |> Named_Target.theory_map (Ctr_Sugar_Plugin.data plugins ctr_sugar) end; val is_prefix = "is_"; val un_prefix = "un_"; val not_prefix = "not_"; fun mk_unN 1 1 suf = un_prefix ^ suf | mk_unN _ l suf = un_prefix ^ suf ^ string_of_int l; val caseN = "case"; val case_congN = "case_cong"; val case_eq_ifN = "case_eq_if"; val collapseN = "collapse"; val discN = "disc"; val disc_eq_caseN = "disc_eq_case"; val discIN = "discI"; val distinctN = "distinct"; val distinct_discN = "distinct_disc"; val exhaustN = "exhaust"; val exhaust_discN = "exhaust_disc"; val expandN = "expand"; val injectN = "inject"; val nchotomyN = "nchotomy"; val selN = "sel"; val exhaust_selN = "exhaust_sel"; val splitN = "split"; val split_asmN = "split_asm"; val split_selN = "split_sel"; val split_sel_asmN = "split_sel_asm"; val splitsN = "splits"; val split_selsN = "split_sels"; val case_cong_weak_thmsN = "case_cong_weak"; val case_distribN = "case_distrib"; val cong_attrs = @{attributes [cong]}; val dest_attrs = @{attributes [dest]}; val safe_elim_attrs = @{attributes [elim!]}; val iff_attrs = @{attributes [iff]}; val inductsimp_attrs = @{attributes [induct_simp]}; val nitpicksimp_attrs = @{attributes [nitpick_simp]}; val simp_attrs = @{attributes [simp]}; fun unflat_lookup eq xs ys = map (fn xs' => permute_like_unique eq xs xs' ys); fun mk_half_pairss' _ ([], []) = [] | mk_half_pairss' indent (x :: xs, _ :: ys) = indent @ fold_rev (cons o single o pair x) ys (mk_half_pairss' ([] :: indent) (xs, ys)); fun mk_half_pairss p = mk_half_pairss' [[]] p; fun join_halves n half_xss other_half_xss = (splice (flat half_xss) (flat other_half_xss), map2 (map2 append) (Library.chop_groups n half_xss) (transpose (Library.chop_groups n other_half_xss))); fun mk_undefined T = Const (\<^const_name>\undefined\, T); fun mk_ctr Ts t = let val Type (_, Ts0) = body_type (fastype_of t) in subst_nonatomic_types (Ts0 ~~ Ts) t end; fun mk_case Ts T t = let val (Type (_, Ts0), body) = strip_type (fastype_of t) |>> List.last in subst_nonatomic_types ((body, T) :: (Ts0 ~~ Ts)) t end; fun mk_disc_or_sel Ts t = subst_nonatomic_types (snd (Term.dest_Type (domain_type (fastype_of t))) ~~ Ts) t; val name_of_ctr = name_of_const "constructor" body_type; fun name_of_disc t = (case head_of t of Abs (_, _, \<^const>\Not\ $ (t' $ Bound 0)) => Long_Name.map_base_name (prefix not_prefix) (name_of_disc t') | Abs (_, _, Const (\<^const_name>\HOL.eq\, _) $ Bound 0 $ t') => Long_Name.map_base_name (prefix is_prefix) (name_of_disc t') | Abs (_, _, \<^const>\Not\ $ (Const (\<^const_name>\HOL.eq\, _) $ Bound 0 $ t')) => Long_Name.map_base_name (prefix 
(not_prefix ^ is_prefix)) (name_of_disc t') | t' => name_of_const "discriminator" (perhaps (try domain_type)) t'); val base_name_of_ctr = Long_Name.base_name o name_of_ctr; fun dest_ctr ctxt s t = let val (f, args) = Term.strip_comb t in (case ctr_sugar_of ctxt s of SOME {ctrs, ...} => (case find_first (can (fo_match ctxt f)) ctrs of SOME f' => (f', args) | NONE => raise Fail "dest_ctr") | NONE => raise Fail "dest_ctr") end; fun dest_case ctxt s Ts t = (case Term.strip_comb t of (Const (c, _), args as _ :: _) => (case ctr_sugar_of ctxt s of SOME (ctr_sugar as {casex = Const (case_name, _), discs = discs0, selss = selss0, ...}) => if case_name = c then let val n = length discs0 in if n < length args then let val (branches, obj :: leftovers) = chop n args; val discs = map (mk_disc_or_sel Ts) discs0; val selss = map (map (mk_disc_or_sel Ts)) selss0; val conds = map (rapp obj) discs; val branch_argss = map (fn sels => map (rapp obj) sels @ leftovers) selss; val branches' = map2 (curry Term.betapplys) branches branch_argss; in SOME (ctr_sugar, conds, branches') end else NONE end else NONE | _ => NONE) | _ => NONE); fun const_or_free_name (Const (s, _)) = Long_Name.base_name s | const_or_free_name (Free (s, _)) = s | const_or_free_name t = raise TERM ("const_or_free_name", [t]) fun extract_sel_default ctxt t = let fun malformed () = error ("Malformed selector default value equation: " ^ Syntax.string_of_term ctxt t); val ((sel, (ctr, vars)), rhs) = fst (Term.replace_dummy_patterns (Syntax.check_term ctxt t) 0) |> HOLogic.dest_eq |>> (Term.dest_comb #>> const_or_free_name ##> (Term.strip_comb #>> (Term.dest_Const #> fst))) handle TERM _ => malformed (); in if forall (is_Free orf is_Var) vars andalso not (has_duplicates (op aconv) vars) then ((ctr, sel), fold_rev Term.lambda vars rhs) else malformed () end; (* Ideally, we would enrich the context with constants rather than free variables. 
*) fun fake_local_theory_for_sel_defaults sel_bTs = Proof_Context.allow_dummies #> Proof_Context.add_fixes (map (fn (b, T) => (b, SOME T, NoSyn)) sel_bTs) #> snd; type ('c, 'a) ctr_spec = (binding * 'c) * 'a list; fun disc_of_ctr_spec ((disc, _), _) = disc; fun ctr_of_ctr_spec ((_, ctr), _) = ctr; fun args_of_ctr_spec (_, args) = args; val code_plugin = Plugin_Name.declare_setup \<^binding>\code\; fun prepare_free_constructors kind prep_plugins prep_term ((((raw_plugins, discs_sels), raw_case_binding), ctr_specs), sel_default_eqs) no_defs_lthy = let val plugins = prep_plugins no_defs_lthy raw_plugins; (* TODO: sanity checks on arguments *) val raw_ctrs = map ctr_of_ctr_spec ctr_specs; val raw_disc_bindings = map disc_of_ctr_spec ctr_specs; val raw_sel_bindingss = map args_of_ctr_spec ctr_specs; val n = length raw_ctrs; val ks = 1 upto n; val _ = n > 0 orelse error "No constructors specified"; val ctrs0 = map (prep_term no_defs_lthy) raw_ctrs; val (fcT_name, As0) = (case body_type (fastype_of (hd ctrs0)) of Type T' => T' | _ => error "Expected type constructor in body type of constructor"); val _ = forall ((fn Type (T_name, _) => T_name = fcT_name | _ => false) o body_type o fastype_of) (tl ctrs0) orelse error "Constructors not constructing same type"; val fc_b_name = Long_Name.base_name fcT_name; val fc_b = Binding.name fc_b_name; fun qualify mandatory = Binding.qualify mandatory fc_b_name; val (unsorted_As, [B, C]) = no_defs_lthy |> variant_tfrees (map (fst o dest_TFree_or_TVar) As0) ||> fst o mk_TFrees 2; val As = map2 (resort_tfree_or_tvar o snd o dest_TFree_or_TVar) As0 unsorted_As; val fcT = Type (fcT_name, As); val ctrs = map (mk_ctr As) ctrs0; val ctr_Tss = map (binder_types o fastype_of) ctrs; val ms = map length ctr_Tss; fun can_definitely_rely_on_disc k = not (Binding.is_empty (nth raw_disc_bindings (k - 1))) orelse nth ms (k - 1) = 0; fun can_rely_on_disc k = can_definitely_rely_on_disc k orelse (k = 1 andalso not (can_definitely_rely_on_disc 2)); fun should_omit_disc_binding k = n = 1 orelse (n = 2 andalso can_rely_on_disc (3 - k)); val equal_binding = \<^binding>\=\; fun is_disc_binding_valid b = not (Binding.is_empty b orelse Binding.eq_name (b, equal_binding)); val standard_disc_binding = Binding.name o prefix is_prefix o base_name_of_ctr; val disc_bindings = raw_disc_bindings |> @{map 4} (fn k => fn m => fn ctr => fn disc => qualify false (if Binding.is_empty disc then if m = 0 then equal_binding else if should_omit_disc_binding k then disc else standard_disc_binding ctr else if Binding.eq_name (disc, standard_binding) then standard_disc_binding ctr else disc)) ks ms ctrs0; fun standard_sel_binding m l = Binding.name o mk_unN m l o base_name_of_ctr; val sel_bindingss = @{map 3} (fn ctr => fn m => map2 (fn l => fn sel => qualify false (if Binding.is_empty sel orelse Binding.eq_name (sel, standard_binding) then standard_sel_binding m l ctr else sel)) (1 upto m) o pad_list Binding.empty m) ctrs0 ms raw_sel_bindingss; val add_bindings = Variable.add_fixes (distinct (op =) (filter Symbol_Pos.is_identifier (map Binding.name_of (disc_bindings @ flat sel_bindingss)))) #> snd; val case_Ts = map (fn Ts => Ts ---> B) ctr_Tss; val (((((((((u, exh_y), xss), yss), fs), gs), w), (p, p'))), _) = no_defs_lthy |> add_bindings |> yield_singleton (mk_Frees fc_b_name) fcT ||>> yield_singleton (mk_Frees "y") fcT (* for compatibility with "datatype_realizer.ML" *) ||>> mk_Freess "x" ctr_Tss ||>> mk_Freess "y" ctr_Tss ||>> mk_Frees "f" case_Ts ||>> mk_Frees "g" case_Ts ||>> yield_singleton 
(mk_Frees "z") B ||>> yield_singleton (apfst (op ~~) oo mk_Frees' "P") HOLogic.boolT; val q = Free (fst p', mk_pred1T B); val xctrs = map2 (curry Term.list_comb) ctrs xss; val yctrs = map2 (curry Term.list_comb) ctrs yss; val xfs = map2 (curry Term.list_comb) fs xss; val xgs = map2 (curry Term.list_comb) gs xss; (* TODO: Eta-expansion is for compatibility with the old datatype package (but it also provides nicer names). Consider removing. *) val eta_fs = map2 (fold_rev Term.lambda) xss xfs; val eta_gs = map2 (fold_rev Term.lambda) xss xgs; val case_binding = qualify false (if Binding.is_empty raw_case_binding orelse Binding.eq_name (raw_case_binding, standard_binding) then Binding.prefix_name (caseN ^ "_") fc_b else raw_case_binding); fun mk_case_disj xctr xf xs = list_exists_free xs (HOLogic.mk_conj (HOLogic.mk_eq (u, xctr), HOLogic.mk_eq (w, xf))); val case_rhs = fold_rev (fold_rev Term.lambda) [fs, [u]] (Const (\<^const_name>\The\, (B --> HOLogic.boolT) --> B) $ Term.lambda w (Library.foldr1 HOLogic.mk_disj (@{map 3} mk_case_disj xctrs xfs xss))); val ((raw_case, (_, raw_case_def)), (lthy, lthy_old)) = no_defs_lthy |> (snd o Local_Theory.begin_nested) |> Local_Theory.define ((case_binding, NoSyn), ((Binding.concealed (Thm.def_binding case_binding), []), case_rhs)) ||> `Local_Theory.end_nested; val phi = Proof_Context.export_morphism lthy_old lthy; val case_def = Morphism.thm phi raw_case_def; val case0 = Morphism.term phi raw_case; val casex = mk_case As B case0; val casexC = mk_case As C case0; val casexBool = mk_case As HOLogic.boolT case0; fun mk_uu_eq () = HOLogic.mk_eq (u, u); val exist_xs_u_eq_ctrs = map2 (fn xctr => fn xs => list_exists_free xs (HOLogic.mk_eq (u, xctr))) xctrs xss; val unique_disc_no_def = TrueI; (*arbitrary marker*) val alternate_disc_no_def = FalseE; (*arbitrary marker*) fun alternate_disc_lhs get_udisc k = HOLogic.mk_not (let val b = nth disc_bindings (k - 1) in if is_disc_binding_valid b then get_udisc b (k - 1) else nth exist_xs_u_eq_ctrs (k - 1) end); val no_discs_sels = not discs_sels andalso forall (forall Binding.is_empty) (raw_disc_bindings :: raw_sel_bindingss) andalso null sel_default_eqs; val (all_sels_distinct, discs, selss, disc_defs, sel_defs, sel_defss, lthy) = if no_discs_sels then (true, [], [], [], [], [], lthy) else let val all_sel_bindings = flat sel_bindingss; val num_all_sel_bindings = length all_sel_bindings; val uniq_sel_bindings = distinct Binding.eq_name all_sel_bindings; val all_sels_distinct = (length uniq_sel_bindings = num_all_sel_bindings); val sel_binding_index = if all_sels_distinct then 1 upto num_all_sel_bindings else map (fn b => find_index (curry Binding.eq_name b) uniq_sel_bindings) all_sel_bindings; val all_proto_sels = flat (@{map 3} (fn k => fn xs => map (pair k o pair xs)) ks xss xss); val sel_infos = AList.group (op =) (sel_binding_index ~~ all_proto_sels) |> sort (int_ord o apply2 fst) |> map snd |> curry (op ~~) uniq_sel_bindings; val sel_bindings = map fst sel_infos; val sel_defaults = if null sel_default_eqs then [] else let val sel_Ts = map (curry (op -->) fcT o fastype_of o snd o snd o hd o snd) sel_infos; val fake_lthy = fake_local_theory_for_sel_defaults (sel_bindings ~~ sel_Ts) no_defs_lthy; in map (extract_sel_default fake_lthy o prep_term fake_lthy) sel_default_eqs end; fun disc_free b = Free (Binding.name_of b, mk_pred1T fcT); fun disc_spec b exist_xs_u_eq_ctr = mk_Trueprop_eq (disc_free b $ u, exist_xs_u_eq_ctr); fun alternate_disc k = Term.lambda u (alternate_disc_lhs (K o rapp u o disc_free) (3 - k));
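(* Illustrative sketch (hypothetical example, not part of this change): for a two-constructor type such as "'a list" where only "null" is registered as the discriminator of "Nil", "disc_spec" yields the genuine definition "null u = (u = Nil)", whereas the "Cons" discriminator need not be defined at all: "alternate_disc 2" characterizes it as the negation of the other one, i.e. "%u. ~ null u". *)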
fun mk_sel_case_args b proto_sels T = @{map 3} (fn Const (c, _) => fn Ts => fn k => (case AList.lookup (op =) proto_sels k of NONE => (case filter (curry (op =) (c, Binding.name_of b) o fst) sel_defaults of [] => fold_rev (Term.lambda o curry Free Name.uu) Ts (mk_undefined T) | [(_, t)] => t | _ => error "Multiple default values for selector/constructor pair") | SOME (xs, x) => fold_rev Term.lambda xs x)) ctrs ctr_Tss ks; fun sel_spec b proto_sels = let val _ = (case duplicates (op =) (map fst proto_sels) of k :: _ => error ("Duplicate selector name " ^ quote (Binding.name_of b) ^ " for constructor " ^ quote (Syntax.string_of_term lthy (nth ctrs (k - 1)))) | [] => ()) val T = (case distinct (op =) (map (fastype_of o snd o snd) proto_sels) of [T] => T | T :: T' :: _ => error ("Inconsistent range type for selector " ^ quote (Binding.name_of b) ^ ": " ^ quote (Syntax.string_of_typ lthy T) ^ " vs. " ^ quote (Syntax.string_of_typ lthy T'))); in mk_Trueprop_eq (Free (Binding.name_of b, fcT --> T) $ u, Term.list_comb (mk_case As T case0, mk_sel_case_args b proto_sels T) $ u) end; fun unflat_selss xs = unflat_lookup Binding.eq_name sel_bindings xs sel_bindingss; val (((raw_discs, raw_disc_defs), (raw_sels, raw_sel_defs)), (lthy', lthy)) = lthy |> (snd o Local_Theory.begin_nested) |> apfst split_list o @{fold_map 3} (fn k => fn exist_xs_u_eq_ctr => fn b => if Binding.is_empty b then if n = 1 then pair (Term.lambda u (mk_uu_eq ()), unique_disc_no_def) else pair (alternate_disc k, alternate_disc_no_def) else if Binding.eq_name (b, equal_binding) then pair (Term.lambda u exist_xs_u_eq_ctr, refl) else Specification.definition (SOME (b, NONE, NoSyn)) [] [] ((Thm.def_binding b, []), disc_spec b exist_xs_u_eq_ctr) #>> apsnd snd) ks exist_xs_u_eq_ctrs disc_bindings ||>> apfst split_list o fold_map (fn (b, proto_sels) => Specification.definition (SOME (b, NONE, NoSyn)) [] [] ((Thm.def_binding b, []), sel_spec b proto_sels) #>> apsnd snd) sel_infos ||> `Local_Theory.end_nested; val phi = Proof_Context.export_morphism lthy lthy'; val disc_defs = map (Morphism.thm phi) raw_disc_defs; val sel_defs = map (Morphism.thm phi) raw_sel_defs; val sel_defss = unflat_selss sel_defs; val discs0 = map (Morphism.term phi) raw_discs; val selss0 = unflat_selss (map (Morphism.term phi) raw_sels); val discs = map (mk_disc_or_sel As) discs0; val selss = map (map (mk_disc_or_sel As)) selss0; in (all_sels_distinct, discs, selss, disc_defs, sel_defs, sel_defss, lthy') end; fun mk_imp_p Qs = Logic.list_implies (Qs, HOLogic.mk_Trueprop p); val exhaust_goal = let fun mk_prem xctr xs = fold_rev Logic.all xs (mk_imp_p [mk_Trueprop_eq (exh_y, xctr)]) in fold_rev Logic.all [p, exh_y] (mk_imp_p (map2 mk_prem xctrs xss)) end; val inject_goalss = let fun mk_goal _ _ [] [] = [] | mk_goal xctr yctr xs ys = [fold_rev Logic.all (xs @ ys) (mk_Trueprop_eq (HOLogic.mk_eq (xctr, yctr), Library.foldr1 HOLogic.mk_conj (map2 (curry HOLogic.mk_eq) xs ys)))]; in @{map 4} mk_goal xctrs yctrs xss yss end; val half_distinct_goalss = let fun mk_goal ((xs, xc), (xs', xc')) = fold_rev Logic.all (xs @ xs') (HOLogic.mk_Trueprop (HOLogic.mk_not (HOLogic.mk_eq (xc, xc')))); in map (map mk_goal) (mk_half_pairss (`I (xss ~~ xctrs))) end; val goalss = [exhaust_goal] :: inject_goalss @ half_distinct_goalss; fun after_qed ([exhaust_thm] :: thmss) lthy = let val ((((((((u, u'), (xss, xss')), fs), gs), h), v), p), _) = lthy |> add_bindings |> yield_singleton (apfst (op ~~) oo mk_Frees' fc_b_name) fcT ||>> mk_Freess' "x" ctr_Tss ||>> mk_Frees "f" case_Ts ||>> mk_Frees 
"g" case_Ts ||>> yield_singleton (mk_Frees "h") (B --> C) ||>> yield_singleton (mk_Frees (fc_b_name ^ "'")) fcT ||>> yield_singleton (mk_Frees "P") HOLogic.boolT; val xfs = map2 (curry Term.list_comb) fs xss; val xgs = map2 (curry Term.list_comb) gs xss; val fcase = Term.list_comb (casex, fs); val ufcase = fcase $ u; val vfcase = fcase $ v; val eta_fcase = Term.list_comb (casex, eta_fs); val eta_gcase = Term.list_comb (casex, eta_gs); val eta_ufcase = eta_fcase $ u; val eta_vgcase = eta_gcase $ v; fun mk_uu_eq () = HOLogic.mk_eq (u, u); val uv_eq = mk_Trueprop_eq (u, v); val ((inject_thms, inject_thmss), half_distinct_thmss) = chop n thmss |>> `flat; val rho_As = map (fn (T, U) => (dest_TVar T, Thm.ctyp_of lthy U)) (map Logic.varifyT_global As ~~ As); fun inst_thm t thm = Thm.instantiate' [] [SOME (Thm.cterm_of lthy t)] (Thm.instantiate (rho_As, []) (Drule.zero_var_indexes thm)); val uexhaust_thm = inst_thm u exhaust_thm; val exhaust_cases = map base_name_of_ctr ctrs; val other_half_distinct_thmss = map (map (fn thm => thm RS not_sym)) half_distinct_thmss; val (distinct_thms, (distinct_thmsss', distinct_thmsss)) = join_halves n half_distinct_thmss other_half_distinct_thmss ||> `transpose; val nchotomy_thm = let val goal = HOLogic.mk_Trueprop (HOLogic.mk_all (fst u', snd u', Library.foldr1 HOLogic.mk_disj exist_xs_u_eq_ctrs)); in Goal.prove_sorry lthy [] [] goal (fn {context = ctxt, prems = _} => mk_nchotomy_tac ctxt n exhaust_thm) |> Thm.close_derivation \<^here> end; val case_thms = let val goals = @{map 3} (fn xctr => fn xf => fn xs => fold_rev Logic.all (fs @ xs) (mk_Trueprop_eq (fcase $ xctr, xf))) xctrs xfs xss; in @{map 4} (fn k => fn goal => fn injects => fn distinctss => Goal.prove_sorry lthy [] [] goal (fn {context = ctxt, ...} => mk_case_tac ctxt n k case_def injects distinctss) |> Thm.close_derivation \<^here>) ks goals inject_thmss distinct_thmsss end; val (case_cong_thm, case_cong_weak_thm) = let fun mk_prem xctr xs xf xg = fold_rev Logic.all xs (Logic.mk_implies (mk_Trueprop_eq (v, xctr), mk_Trueprop_eq (xf, xg))); val goal = Logic.list_implies (uv_eq :: @{map 4} mk_prem xctrs xss xfs xgs, mk_Trueprop_eq (eta_ufcase, eta_vgcase)); val weak_goal = Logic.mk_implies (uv_eq, mk_Trueprop_eq (ufcase, vfcase)); val vars = Variable.add_free_names lthy goal []; val weak_vars = Variable.add_free_names lthy weak_goal []; in (Goal.prove_sorry lthy vars [] goal (fn {context = ctxt, prems = _} => mk_case_cong_tac ctxt uexhaust_thm case_thms), Goal.prove_sorry lthy weak_vars [] weak_goal (fn {context = ctxt, prems = _} => etac ctxt arg_cong 1)) |> apply2 (Thm.close_derivation \<^here>) end; val split_lhs = q $ ufcase; fun mk_split_conjunct xctr xs f_xs = list_all_free xs (HOLogic.mk_imp (HOLogic.mk_eq (u, xctr), q $ f_xs)); fun mk_split_disjunct xctr xs f_xs = list_exists_free xs (HOLogic.mk_conj (HOLogic.mk_eq (u, xctr), HOLogic.mk_not (q $ f_xs))); fun mk_split_goal xctrs xss xfs = mk_Trueprop_eq (split_lhs, Library.foldr1 HOLogic.mk_conj (@{map 3} mk_split_conjunct xctrs xss xfs)); fun mk_split_asm_goal xctrs xss xfs = mk_Trueprop_eq (split_lhs, HOLogic.mk_not (Library.foldr1 HOLogic.mk_disj (@{map 3} mk_split_disjunct xctrs xss xfs))); fun prove_split selss goal = Variable.add_free_names lthy goal [] |> (fn vars => Goal.prove_sorry lthy vars [] goal (fn {context = ctxt, prems = _} => mk_split_tac ctxt uexhaust_thm case_thms selss inject_thmss distinct_thmsss)) |> Thm.close_derivation \<^here>; fun prove_split_asm asm_goal split_thm = Variable.add_free_names lthy asm_goal [] |> (fn vars 
=> Goal.prove_sorry lthy vars [] asm_goal (fn {context = ctxt, ...} => mk_split_asm_tac ctxt split_thm)) |> Thm.close_derivation \<^here>; val (split_thm, split_asm_thm) = let val goal = mk_split_goal xctrs xss xfs; val asm_goal = mk_split_asm_goal xctrs xss xfs; val thm = prove_split (replicate n []) goal; val asm_thm = prove_split_asm asm_goal thm; in (thm, asm_thm) end; val (sel_defs, all_sel_thms, sel_thmss, nontriv_disc_defs, disc_thmss, nontriv_disc_thmss, discI_thms, nontriv_discI_thms, distinct_disc_thms, distinct_disc_thmsss, exhaust_disc_thms, exhaust_sel_thms, all_collapse_thms, safe_collapse_thms, expand_thms, split_sel_thms, split_sel_asm_thms, case_eq_if_thms, disc_eq_case_thms) = if no_discs_sels then ([], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], []) else let val udiscs = map (rapp u) discs; val uselss = map (map (rapp u)) selss; val usel_ctrs = map2 (curry Term.list_comb) ctrs uselss; val usel_fs = map2 (curry Term.list_comb) fs uselss; val vdiscs = map (rapp v) discs; val vselss = map (map (rapp v)) selss; fun make_sel_thm xs' case_thm sel_def = zero_var_indexes (Variable.gen_all lthy (Drule.rename_bvars' (map (SOME o fst) xs') (Drule.forall_intr_vars (case_thm RS (sel_def RS trans))))); val sel_thmss = @{map 3} (map oo make_sel_thm) xss' case_thms sel_defss; fun has_undefined_rhs thm = (case snd (HOLogic.dest_eq (HOLogic.dest_Trueprop (Thm.prop_of thm))) of Const (\<^const_name>\undefined\, _) => true | _ => false); val all_sel_thms = (if all_sels_distinct andalso null sel_default_eqs then flat sel_thmss else map_product (fn s => fn (xs', c) => make_sel_thm xs' c s) sel_defs (xss' ~~ case_thms)) |> filter_out has_undefined_rhs; fun mk_unique_disc_def () = let val m = the_single ms; val goal = mk_Trueprop_eq (mk_uu_eq (), the_single exist_xs_u_eq_ctrs); val vars = Variable.add_free_names lthy goal []; in Goal.prove_sorry lthy vars [] goal (fn {context = ctxt, prems = _} => mk_unique_disc_def_tac ctxt m uexhaust_thm) |> Thm.close_derivation \<^here> end; fun mk_alternate_disc_def k = let val goal = mk_Trueprop_eq (alternate_disc_lhs (K (nth udiscs)) (3 - k), nth exist_xs_u_eq_ctrs (k - 1)); val vars = Variable.add_free_names lthy goal []; in Goal.prove_sorry lthy vars [] goal (fn {context = ctxt, ...} => mk_alternate_disc_def_tac ctxt k (nth disc_defs (2 - k)) (nth distinct_thms (2 - k)) uexhaust_thm) |> Thm.close_derivation \<^here> end; val has_alternate_disc_def = exists (fn def => Thm.eq_thm_prop (def, alternate_disc_no_def)) disc_defs; val nontriv_disc_defs = disc_defs |> filter_out (member Thm.eq_thm_prop [unique_disc_no_def, alternate_disc_no_def, refl]); val disc_defs' = map2 (fn k => fn def => if Thm.eq_thm_prop (def, unique_disc_no_def) then mk_unique_disc_def () else if Thm.eq_thm_prop (def, alternate_disc_no_def) then mk_alternate_disc_def k else def) ks disc_defs; val discD_thms = map (fn def => def RS iffD1) disc_defs'; val discI_thms = map2 (fn m => fn def => funpow m (fn thm => exI RS thm) (def RS iffD2)) ms disc_defs'; val not_discI_thms = map2 (fn m => fn def => funpow m (fn thm => allI RS thm) (unfold_thms lthy @{thms not_ex} (def RS @{thm ssubst[of _ _ Not]}))) ms disc_defs'; val (disc_thmss', disc_thmss) = let fun mk_thm discI _ [] = refl RS discI | mk_thm _ not_discI [distinct] = distinct RS not_discI; fun mk_thms discI not_discI distinctss = map (mk_thm discI not_discI) distinctss; in @{map 3} mk_thms discI_thms not_discI_thms distinct_thmsss' |> `transpose end; val nontriv_disc_thmss = map2 (fn b => if is_disc_binding_valid 
b then I else K []) disc_bindings disc_thmss; fun is_discI_triv b = (n = 1 andalso Binding.is_empty b) orelse Binding.eq_name (b, equal_binding); val nontriv_discI_thms = flat (map2 (fn b => if is_discI_triv b then K [] else single) disc_bindings discI_thms); val (distinct_disc_thms, (distinct_disc_thmsss', distinct_disc_thmsss)) = let fun mk_goal [] = [] | mk_goal [((_, udisc), (_, udisc'))] = [Logic.all u (Logic.mk_implies (HOLogic.mk_Trueprop udisc, HOLogic.mk_Trueprop (HOLogic.mk_not udisc')))]; fun prove tac goal = Goal.prove_sorry lthy [] [] goal (fn {context = ctxt, prems = _} => tac ctxt) |> Thm.close_derivation \<^here>; val half_pairss = mk_half_pairss (`I (ms ~~ discD_thms ~~ udiscs)); val half_goalss = map mk_goal half_pairss; val half_thmss = @{map 3} (fn [] => K (K []) | [goal] => fn [(((m, discD), _), _)] => fn disc_thm => [prove (fn ctxt => mk_half_distinct_disc_tac ctxt m discD disc_thm) goal]) half_goalss half_pairss (flat disc_thmss'); val other_half_goalss = map (mk_goal o map swap) half_pairss; val other_half_thmss = map2 (map2 (fn thm => prove (fn ctxt => mk_other_half_distinct_disc_tac ctxt thm))) half_thmss other_half_goalss; in join_halves n half_thmss other_half_thmss ||> `transpose |>> has_alternate_disc_def ? K [] end; val exhaust_disc_thm = let fun mk_prem udisc = mk_imp_p [HOLogic.mk_Trueprop udisc]; val goal = fold_rev Logic.all [p, u] (mk_imp_p (map mk_prem udiscs)); in Goal.prove_sorry lthy [] [] goal (fn {context = ctxt, prems = _} => mk_exhaust_disc_tac ctxt n exhaust_thm discI_thms) |> Thm.close_derivation \<^here> end; val (safe_collapse_thms, all_collapse_thms) = let fun mk_goal m udisc usel_ctr = let val prem = HOLogic.mk_Trueprop udisc; val concl = mk_Trueprop_eq ((usel_ctr, u) |> m = 0 ? swap); in (prem aconv concl, Logic.all u (Logic.mk_implies (prem, concl))) end; val (trivs, goals) = @{map 3} mk_goal ms udiscs usel_ctrs |> split_list; val thms = @{map 5} (fn m => fn discD => fn sel_thms => fn triv => fn goal => Goal.prove_sorry lthy [] [] goal (fn {context = ctxt, ...} => mk_collapse_tac ctxt m discD sel_thms ORELSE HEADGOAL (assume_tac ctxt)) |> Thm.close_derivation \<^here> |> not triv ? 
perhaps (try (fn thm => refl RS thm))) ms discD_thms sel_thmss trivs goals; in (map_filter (fn (true, _) => NONE | (false, thm) => SOME thm) (trivs ~~ thms), thms) end; val swapped_all_collapse_thms = map2 (fn m => fn thm => if m = 0 then thm else thm RS sym) ms all_collapse_thms; val exhaust_sel_thm = let fun mk_prem usel_ctr = mk_imp_p [mk_Trueprop_eq (u, usel_ctr)]; val goal = fold_rev Logic.all [p, u] (mk_imp_p (map mk_prem usel_ctrs)); in Goal.prove_sorry lthy [] [] goal (fn {context = ctxt, prems = _} => mk_exhaust_sel_tac ctxt n exhaust_disc_thm swapped_all_collapse_thms) |> Thm.close_derivation \<^here> end; val expand_thm = let fun mk_prems k udisc usels vdisc vsels = (if k = n then [] else [mk_Trueprop_eq (udisc, vdisc)]) @ (if null usels then [] else [Logic.list_implies (if n = 1 then [] else map HOLogic.mk_Trueprop [udisc, vdisc], HOLogic.mk_Trueprop (Library.foldr1 HOLogic.mk_conj (map2 (curry HOLogic.mk_eq) usels vsels)))]); val goal = Library.foldr Logic.list_implies (@{map 5} mk_prems ks udiscs uselss vdiscs vselss, uv_eq); val uncollapse_thms = map2 (fn thm => fn [] => thm | _ => thm RS sym) all_collapse_thms uselss; val vars = Variable.add_free_names lthy goal []; in Goal.prove_sorry lthy vars [] goal (fn {context = ctxt, prems = _} => mk_expand_tac ctxt n ms (inst_thm u exhaust_disc_thm) (inst_thm v exhaust_disc_thm) uncollapse_thms distinct_disc_thmsss distinct_disc_thmsss') |> Thm.close_derivation \<^here> end; val (split_sel_thm, split_sel_asm_thm) = let val zss = map (K []) xss; val goal = mk_split_goal usel_ctrs zss usel_fs; val asm_goal = mk_split_asm_goal usel_ctrs zss usel_fs; val thm = prove_split sel_thmss goal; val asm_thm = prove_split_asm asm_goal thm; in (thm, asm_thm) end; val case_eq_if_thm = let val goal = mk_Trueprop_eq (ufcase, mk_IfN B udiscs usel_fs); val vars = Variable.add_free_names lthy goal []; in Goal.prove_sorry lthy vars [] goal (fn {context = ctxt, ...} => mk_case_eq_if_tac ctxt n uexhaust_thm case_thms disc_thmss' sel_thmss) |> Thm.close_derivation \<^here> end; val disc_eq_case_thms = let fun const_of_bool b = if b then \<^const>\True\ else \<^const>\False\; fun mk_case_args n = map_index (fn (k, argTs) => fold_rev Term.absdummy argTs (const_of_bool (n = k))) ctr_Tss; val goals = map_index (fn (n, udisc) => mk_Trueprop_eq (udisc, list_comb (casexBool, mk_case_args n) $ u)) udiscs; val goal = Logic.mk_conjunction_balanced goals; val vars = Variable.add_free_names lthy goal []; in Goal.prove_sorry lthy vars [] goal (fn {context = ctxt, ...} => mk_disc_eq_case_tac ctxt (Thm.cterm_of ctxt u) exhaust_thm (flat nontriv_disc_thmss) distinct_thms case_thms) |> Thm.close_derivation \<^here> |> Conjunction.elim_balanced (length goals) end; in (sel_defs, all_sel_thms, sel_thmss, nontriv_disc_defs, disc_thmss, nontriv_disc_thmss, discI_thms, nontriv_discI_thms, distinct_disc_thms, distinct_disc_thmsss, [exhaust_disc_thm], [exhaust_sel_thm], all_collapse_thms, safe_collapse_thms, [expand_thm], [split_sel_thm], [split_sel_asm_thm], [case_eq_if_thm], disc_eq_case_thms) end; val case_distrib_thm = let val args = @{map 2} (fn f => fn argTs => let val (args, _) = mk_Frees "x" argTs lthy in fold_rev Term.lambda args (h $ list_comb (f, args)) end) fs ctr_Tss; val goal = mk_Trueprop_eq (h $ ufcase, list_comb (casexC, args) $ u); val vars = Variable.add_free_names lthy goal []; in Goal.prove_sorry lthy vars [] goal (fn {context = ctxt, ...} => mk_case_distrib_tac ctxt (Thm.cterm_of ctxt u) exhaust_thm case_thms) |> Thm.close_derivation \<^here> end; val 
exhaust_case_names_attr = Attrib.internal (K (Rule_Cases.case_names exhaust_cases)); val cases_type_attr = Attrib.internal (K (Induct.cases_type fcT_name)); val nontriv_disc_eq_thmss = map (map (fn th => th RS @{thm eq_False[THEN iffD2]} handle THM _ => th RS @{thm eq_True[THEN iffD2]})) nontriv_disc_thmss; val anonymous_notes = [(map (fn th => th RS notE) distinct_thms, safe_elim_attrs), (flat nontriv_disc_eq_thmss, nitpicksimp_attrs)] |> map (fn (thms, attrs) => ((Binding.empty, attrs), [(thms, [])])); val notes = [(caseN, case_thms, nitpicksimp_attrs @ simp_attrs), (case_congN, [case_cong_thm], []), (case_cong_weak_thmsN, [case_cong_weak_thm], cong_attrs), (case_distribN, [case_distrib_thm], []), (case_eq_ifN, case_eq_if_thms, []), (collapseN, safe_collapse_thms, if ms = [0] then [] else simp_attrs), (discN, flat nontriv_disc_thmss, simp_attrs), (disc_eq_caseN, disc_eq_case_thms, []), (discIN, nontriv_discI_thms, []), (distinctN, distinct_thms, simp_attrs @ inductsimp_attrs), (distinct_discN, distinct_disc_thms, dest_attrs), (exhaustN, [exhaust_thm], [exhaust_case_names_attr, cases_type_attr]), (exhaust_discN, exhaust_disc_thms, [exhaust_case_names_attr]), (exhaust_selN, exhaust_sel_thms, [exhaust_case_names_attr]), (expandN, expand_thms, []), (injectN, inject_thms, iff_attrs @ inductsimp_attrs), (nchotomyN, [nchotomy_thm], []), (selN, all_sel_thms, nitpicksimp_attrs @ simp_attrs), (splitN, [split_thm], []), (split_asmN, [split_asm_thm], []), (split_selN, split_sel_thms, []), (split_sel_asmN, split_sel_asm_thms, []), (split_selsN, split_sel_thms @ split_sel_asm_thms, []), (splitsN, [split_thm, split_asm_thm], [])] |> filter_out (null o #2) |> map (fn (thmN, thms, attrs) => ((qualify true (Binding.name thmN), attrs), [(thms, [])])); val (noted, lthy') = lthy |> Spec_Rules.add Binding.empty Spec_Rules.equational [casex] case_thms |> fold (uncurry (Spec_Rules.add Binding.empty Spec_Rules.equational)) (AList.group (eq_list (op aconv)) (map (`(single o lhs_head_of)) all_sel_thms)) |> fold (uncurry (Spec_Rules.add Binding.empty Spec_Rules.equational)) (filter_out (null o snd) (map single discs ~~ nontriv_disc_eq_thmss)) |> Local_Theory.declaration {syntax = false, pervasive = true} (fn phi => Case_Translation.register (Morphism.term phi casex) (map (Morphism.term phi) ctrs)) |> plugins code_plugin ? 
(Code.declare_default_eqns (map (rpair true) (flat nontriv_disc_eq_thmss @ case_thms @ all_sel_thms)) #> Local_Theory.declaration {syntax = false, pervasive = false} (fn phi => Context.mapping (add_ctr_code fcT_name (map (Morphism.typ phi) As) (map (dest_Const o Morphism.term phi) ctrs) (Morphism.fact phi inject_thms) (Morphism.fact phi distinct_thms) (Morphism.fact phi case_thms)) I)) |> Local_Theory.notes (anonymous_notes @ notes) (* for "datatype_realizer.ML": *) |>> name_noted_thms fcT_name exhaustN; val ctr_sugar = {kind = kind, T = fcT, ctrs = ctrs, casex = casex, discs = discs, selss = selss, exhaust = exhaust_thm, nchotomy = nchotomy_thm, injects = inject_thms, distincts = distinct_thms, case_thms = case_thms, case_cong = case_cong_thm, case_cong_weak = case_cong_weak_thm, case_distribs = [case_distrib_thm], split = split_thm, split_asm = split_asm_thm, disc_defs = nontriv_disc_defs, disc_thmss = disc_thmss, discIs = discI_thms, disc_eq_cases = disc_eq_case_thms, sel_defs = sel_defs, sel_thmss = sel_thmss, distinct_discsss = distinct_disc_thmsss, exhaust_discs = exhaust_disc_thms, exhaust_sels = exhaust_sel_thms, collapses = all_collapse_thms, expands = expand_thms, split_sels = split_sel_thms, split_sel_asms = split_sel_asm_thms, case_eq_ifs = case_eq_if_thms} |> morph_ctr_sugar (substitute_noted_thm noted); in (ctr_sugar, lthy' |> register_ctr_sugar plugins ctr_sugar) end; in (goalss, after_qed, lthy) end; fun free_constructors kind tacss = (fn (goalss, after_qed, lthy) => map2 (map2 (Thm.close_derivation \<^here> oo Goal.prove_sorry lthy [] [])) goalss tacss |> (fn thms => after_qed thms lthy)) oo prepare_free_constructors kind (K I) (K I); fun free_constructors_cmd kind = (fn (goalss, after_qed, lthy) => Proof.theorem NONE (snd oo after_qed) (map (map (rpair [])) goalss) lthy) oo prepare_free_constructors kind Plugin_Name.make_filter Syntax.read_term; val parse_bound_term = Parse.binding --| \<^keyword>\:\ -- Parse.term; type ctr_options = Plugin_Name.filter * bool; type ctr_options_cmd = (Proof.context -> Plugin_Name.filter) * bool; val default_ctr_options : ctr_options = (Plugin_Name.default_filter, false); val default_ctr_options_cmd : ctr_options_cmd = (K Plugin_Name.default_filter, false); val parse_ctr_options = Scan.optional (\<^keyword>\(\ |-- Parse.list1 (Plugin_Name.parse_filter >> (apfst o K) || Parse.reserved "discs_sels" >> (apsnd o K o K true)) --| \<^keyword>\)\ >> (fn fs => fold I fs default_ctr_options_cmd)) default_ctr_options_cmd; fun parse_ctr_spec parse_ctr parse_arg = parse_opt_binding_colon -- parse_ctr -- Scan.repeat parse_arg; val parse_ctr_specs = Parse.enum1 "|" (parse_ctr_spec Parse.term Parse.binding); val parse_sel_default_eqs = Scan.optional (\<^keyword>\where\ |-- Parse.enum1 "|" Parse.prop) []; val _ = Outer_Syntax.local_theory_to_proof \<^command_keyword>\free_constructors\ "register an existing freely generated type's constructors" (parse_ctr_options -- Parse.binding --| \<^keyword>\for\ -- parse_ctr_specs -- parse_sel_default_eqs >> free_constructors_cmd Unknown); (** external views **) (* document antiquotations *) local fun antiquote_setup binding co = - Thy_Output.antiquotation_pretty_source_embedded binding + Document_Output.antiquotation_pretty_source_embedded binding ((Scan.ahead (Scan.lift Parse.not_eof) >> Token.pos_of) -- Args.type_name {proper = true, strict = true}) (fn ctxt => fn (pos, type_name) => let fun err () = error ("Bad " ^ Binding.name_of binding ^ ": " ^ quote type_name ^ Position.here pos); in (case ctr_sugar_of ctxt 
type_name of NONE => err () | SOME {kind, T = T0, ctrs = ctrs0, ...} => let val _ = if co = (kind = Codatatype) then () else err (); val T = Logic.unvarifyT_global T0; val ctrs = map Logic.unvarify_global ctrs0; val pretty_typ_bracket = Syntax.pretty_typ (Config.put pretty_priority 1001 ctxt); fun pretty_ctr ctr = Pretty.block (Pretty.breaks (Syntax.pretty_term ctxt ctr :: map pretty_typ_bracket (binder_types (fastype_of ctr)))); in Pretty.block (Pretty.keyword1 (Binding.name_of binding) :: Pretty.brk 1 :: Syntax.pretty_typ ctxt T :: Pretty.str " =" :: Pretty.brk 1 :: flat (separate [Pretty.brk 1, Pretty.str "| "] (map (single o pretty_ctr) ctrs))) end) end); in val _ = Theory.setup (antiquote_setup \<^binding>\datatype\ false #> antiquote_setup \<^binding>\codatatype\ true); end; (* theory export *) val _ = Export_Theory.setup_presentation (fn context => fn thy => let val parents = map (Data.get o Context.Theory) (Theory.parents_of thy); val datatypes = (Data.get (Context.Theory thy), []) |-> Symtab.fold (fn (name, (pos, {kind, T, ctrs, ...})) => if kind = Record orelse exists (fn tab => Symtab.defined tab name) parents then I else let val pos_properties = Thy_Info.adjust_pos_properties context pos; val typ = Logic.unvarifyT_global T; val constrs = map Logic.unvarify_global ctrs; val typargs = rev (fold Term.add_tfrees (Logic.mk_type typ :: constrs) []); val constructors = map (fn t => (t, Term.type_of t)) constrs; in cons (pos_properties, (name, (kind = Codatatype, (typargs, (typ, constructors))))) end); in if null datatypes then () else Export_Theory.export_body thy "datatypes" let open XML.Encode Term_XML.Encode in list (pair properties (pair string (pair bool (pair (list (pair string sort)) (pair typ (list (pair (term (Sign.consts_of thy)) typ))))))) datatypes end end); end; diff --git a/src/HOL/Tools/value_command.ML b/src/HOL/Tools/value_command.ML --- a/src/HOL/Tools/value_command.ML +++ b/src/HOL/Tools/value_command.ML @@ -1,84 +1,84 @@ (* Title: HOL/Tools/value_command.ML Author: Florian Haftmann, TU Muenchen Generic value command for arbitrary evaluators, with default using nbe or SML. 
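Example usage (illustrative; "simp", "nbe" and "code" are the evaluators registered at the end of this file):

  value "rev [1::nat, 2, 3]"
  value [nbe] "rev [1::nat, 2, 3]"
  value [code] "rev [1::nat, 2, 3]"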
*) signature VALUE_COMMAND = sig val value: Proof.context -> term -> term val value_select: string -> Proof.context -> term -> term val value_cmd: xstring -> string list -> string -> Toplevel.state -> unit val add_evaluator: binding * (Proof.context -> term -> term) -> theory -> string * theory end; structure Value_Command : VALUE_COMMAND = struct structure Evaluators = Theory_Data ( type T = (Proof.context -> term -> term) Name_Space.table; val empty = Name_Space.empty_table "evaluator"; val extend = I; val merge = Name_Space.merge_tables; ) fun add_evaluator (b, evaluator) thy = let val (name, tab') = Name_Space.define (Context.Theory thy) true (b, evaluator) (Evaluators.get thy); val thy' = Evaluators.put tab' thy; in (name, thy') end; fun intern_evaluator ctxt raw_name = if raw_name = "" then "" else Name_Space.intern (Name_Space.space_of_table (Evaluators.get (Proof_Context.theory_of ctxt))) raw_name; fun default_value ctxt t = if null (Term.add_frees t []) then Code_Evaluation.dynamic_value_strict ctxt t else Nbe.dynamic_value ctxt t; fun value_select name ctxt = if name = "" then default_value ctxt else Name_Space.get (Evaluators.get (Proof_Context.theory_of ctxt)) name ctxt; val value = value_select ""; fun value_cmd raw_name modes raw_t state = let val ctxt = Toplevel.context_of state; val name = intern_evaluator ctxt raw_name; val t = Syntax.read_term ctxt raw_t; val t' = value_select name ctxt t; val ty' = Term.type_of t'; val ctxt' = Proof_Context.augment t' ctxt; val p = Print_Mode.with_modes modes (fn () => Pretty.block [Pretty.quote (Syntax.pretty_term ctxt' t'), Pretty.fbrk, Pretty.str "::", Pretty.brk 1, Pretty.quote (Syntax.pretty_typ ctxt' ty')]) (); in Pretty.writeln p end; val opt_modes = Scan.optional (\<^keyword>\(\ |-- Parse.!!! (Scan.repeat1 Parse.name --| \<^keyword>\)\)) []; val opt_evaluator = Scan.optional (\<^keyword>\[\ |-- Parse.name --| \<^keyword>\]\) ""; val _ = Outer_Syntax.command \<^command_keyword>\value\ "evaluate and print term" (opt_evaluator -- opt_modes -- Parse.term >> (fn ((name, modes), t) => Toplevel.keep (value_cmd name modes t))); val _ = Theory.setup - (Thy_Output.antiquotation_pretty_source_embedded \<^binding>\value\ + (Document_Output.antiquotation_pretty_source_embedded \<^binding>\value\ (Scan.lift opt_evaluator -- Term_Style.parse -- Args.term) (fn ctxt => fn ((name, style), t) => - Thy_Output.pretty_term ctxt (style (value_select name ctxt t))) + Document_Output.pretty_term ctxt (style (value_select name ctxt t))) #> add_evaluator (\<^binding>\simp\, Code_Simp.dynamic_value) #> snd #> add_evaluator (\<^binding>\nbe\, Nbe.dynamic_value) #> snd #> add_evaluator (\<^binding>\code\, Code_Evaluation.dynamic_value_strict) #> snd); end; diff --git a/src/Pure/Admin/build_csdp.scala b/src/Pure/Admin/build_csdp.scala --- a/src/Pure/Admin/build_csdp.scala +++ b/src/Pure/Admin/build_csdp.scala @@ -1,207 +1,207 @@ /* Title: Pure/Admin/build_csdp.scala Author: Makarius Build Isabelle CSDP component from official download. 
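Example invocation (illustrative; the options are those defined by the tool wrapper below, the target directory is hypothetical):

  isabelle build_csdp -D /tmp/components -v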
*/ package isabelle object Build_CSDP { // Note: version 6.2.0 does not quite work for the "sos" proof method val default_download_url = "https://github.com/coin-or/Csdp/archive/releases/6.1.1.tar.gz" /* flags */ sealed case class Flags(platform: String, CFLAGS: String = "", LIBS: String = "") { val changed: List[(String, String)] = List("CFLAGS" -> CFLAGS, "LIBS" -> LIBS).filter(p => p._2.nonEmpty) def print: Option[String] = if (changed.isEmpty) None else - Some(" * " + platform + ":\n" + changed.map(p => " " + p._1 + "=" + p._2) + Some(" * " + platform + ":\n" + changed.map(p => " " + Properties.Eq(p)) .mkString("\n")) def change(path: Path): Unit = { - def change_line(line: String, entry: (String, String)): String = - line.replaceAll(entry._1 + "=.*", entry._1 + "=" + entry._2) + def change_line(line: String, p: (String, String)): String = + line.replaceAll(p._1 + "=.*", Properties.Eq(p)) File.change(path, s => split_lines(s).map(line => changed.foldLeft(line)(change_line)).mkString("\n")) } } val build_flags: List[Flags] = List( Flags("arm64-linux", CFLAGS = "-O3 -ansi -Wall -DNOSHORTS -DBIT64 -DUSESIGTERM -DUSEGETTIME -I../include", LIBS = "-static -L../lib -lsdp -llapack -lblas -lgfortran -lm"), Flags("x86_64-linux", CFLAGS = "-O3 -ansi -Wall -DNOSHORTS -DBIT64 -DUSESIGTERM -DUSEGETTIME -I../include", LIBS = "-static -L../lib -lsdp -llapack -lblas -lgfortran -lquadmath -lm"), Flags("x86_64-darwin", CFLAGS = "-O3 -Wall -DNOSHORTS -DBIT64 -DUSESIGTERM -DUSEGETTIME -I../include", LIBS = "-L../lib -lsdp -llapack -lblas -lm"), Flags("x86_64-windows")) /* build CSDP */ def build_csdp( download_url: String = default_download_url, verbose: Boolean = false, progress: Progress = new Progress, target_dir: Path = Path.current, mingw: MinGW = MinGW.none): Unit = { mingw.check Isabelle_System.with_tmp_dir("build")(tmp_dir => { /* component */ val Archive_Name = """^.*?([^/]+)$""".r val Version = """^[^0-9]*([0-9].*)\.tar.gz$""".r val archive_name = download_url match { case Archive_Name(name) => name case _ => error("Failed to determine source archive name from " + quote(download_url)) } val version = archive_name match { case Version(version) => version case _ => error("Failed to determine component version from " + quote(archive_name)) } val component_name = "csdp-" + version val component_dir = Isabelle_System.new_directory(target_dir + Path.basic(component_name)) progress.echo("Component " + component_dir) /* platform */ val platform_name = proper_string(Isabelle_System.getenv("ISABELLE_WINDOWS_PLATFORM64")) orElse proper_string(Isabelle_System.getenv("ISABELLE_PLATFORM64")) getOrElse error("No 64bit platform") val platform_dir = Isabelle_System.make_directory(component_dir + Path.basic(platform_name)) /* download source */ val archive_path = tmp_dir + Path.basic(archive_name) Isabelle_System.download_file(download_url, archive_path, progress = progress) Isabelle_System.bash("tar xzf " + File.bash_path(archive_path), cwd = tmp_dir.file).check val source_name = File.get_dir(tmp_dir) Isabelle_System.bash( "tar xzf " + archive_path + " && mv " + Bash.string(source_name) + " src", cwd = component_dir.file).check /* build */ progress.echo("Building CSDP for " + platform_name + " ...") val build_dir = tmp_dir + Path.basic(source_name) build_flags.find(flags => flags.platform == platform_name) match { case None => error("No build flags for platform " + quote(platform_name)) case Some(flags) => File.find_files(build_dir.file, pred = file => file.getName == "Makefile"). 
foreach(file => flags.change(File.path(file))) } progress.bash(mingw.bash_script("make"), cwd = build_dir.file, echo = verbose).check /* install */ Isabelle_System.copy_file(build_dir + Path.explode("LICENSE"), component_dir) Isabelle_System.copy_file(build_dir + Path.explode("solver/csdp").platform_exe, platform_dir) if (Platform.is_windows) { Executable.libraries_closure(platform_dir + Path.explode("csdp.exe"), mingw = mingw, filter = Set("libblas", "liblapack", "libgfortran", "libgcc_s_seh", "libquadmath", "libwinpthread")) } /* settings */ val etc_dir = Isabelle_System.make_directory(component_dir + Path.basic("etc")) File.write(etc_dir + Path.basic("settings"), """# -*- shell-script -*- :mode=shellscript: ISABELLE_CSDP="$COMPONENT/${ISABELLE_WINDOWS_PLATFORM64:-$ISABELLE_PLATFORM64}/csdp" """) /* README */ File.write(component_dir + Path.basic("README"), """This is CSDP """ + version + """ from """ + download_url + """ Makefile flags have been changed for various platforms as follows: """ + build_flags.flatMap(_.print).mkString("\n\n") + """ The distribution has been built like this: cd src && make Only the bare "solver/csdp" program is used for Isabelle. Makarius """ + Date.Format.date(Date.now()) + "\n") }) } /* Isabelle tool wrapper */ val isabelle_tool = Isabelle_Tool("build_csdp", "build prover component from official download", Scala_Project.here, args => { var target_dir = Path.current var mingw = MinGW.none var download_url = default_download_url var verbose = false val getopts = Getopts(""" Usage: isabelle build_csdp [OPTIONS] Options are: -D DIR target directory (default ".") -M DIR msys/mingw root specification for Windows -U URL download URL (default: """" + default_download_url + """") -v verbose Build prover component from official download. """, "D:" -> (arg => target_dir = Path.explode(arg)), "M:" -> (arg => mingw = MinGW(Path.explode(arg))), "U:" -> (arg => download_url = arg), "v" -> (_ => verbose = true)) val more_args = getopts(args) if (more_args.nonEmpty) getopts.usage() val progress = new Console_Progress() build_csdp(download_url = download_url, verbose = verbose, progress = progress, target_dir = target_dir, mingw = mingw) }) } diff --git a/src/Pure/Admin/build_doc.scala b/src/Pure/Admin/build_doc.scala --- a/src/Pure/Admin/build_doc.scala +++ b/src/Pure/Admin/build_doc.scala @@ -1,112 +1,112 @@ /* Title: Pure/Admin/build_doc.scala Author: Makarius Build Isabelle documentation. 
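Example invocation (illustrative; assumes "system" names a documentation session with a suitable document_variants entry):

  isabelle build_doc -j 2 system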
*/ package isabelle object Build_Doc { /* build_doc */ def build_doc( options: Options, progress: Progress = new Progress, all_docs: Boolean = false, max_jobs: Int = 1, sequential: Boolean = false, docs: List[String] = Nil): Unit = { val store = Sessions.store(options) val sessions_structure = Sessions.load_structure(options) val selected = for { session <- sessions_structure.build_topological_order info = sessions_structure(session) if info.groups.contains("doc") doc = info.options.string("document_variants") if all_docs || docs.contains(doc) } yield (doc, session) val documents = selected.map(_._1) val selection = Sessions.Selection(sessions = selected.map(_._2)) docs.filter(doc => !documents.contains(doc)) match { case Nil => case bad => error("No documentation session for " + commas_quote(bad)) } progress.echo("Build started for sessions " + commas_quote(selection.sessions)) Build.build(options, selection = selection, progress = progress, max_jobs = max_jobs).ok || error("Build failed") progress.echo("Build started for documentation " + commas_quote(documents)) val doc_options = options + "document=pdf" val deps = Sessions.load_structure(doc_options).selection_deps(selection) val errs = Par_List.map[(String, String), Option[String]]( { case (doc, session) => try { progress.echo("Documentation " + quote(doc) + " ...") using(store.open_database_context())(db_context => - Presentation.build_documents(session, deps, db_context, + Document_Build.build_documents(Document_Build.context(session, deps, db_context), output_pdf = Some(Path.explode("~~/doc")))) None } catch { case Exn.Interrupt.ERROR(msg) => val sep = if (msg.contains('\n')) "\n" else " " Some("Documentation " + quote(doc) + " failed:" + sep + msg) } }, selected, sequential = sequential).flatten if (errs.nonEmpty) error(cat_lines(errs)) } /* Isabelle tool wrapper */ val isabelle_tool = Isabelle_Tool("build_doc", "build Isabelle documentation", Scala_Project.here, args => { var all_docs = false var max_jobs = 1 var sequential = false var options = Options.init() val getopts = Getopts(""" Usage: isabelle build_doc [OPTIONS] [DOCS ...] Options are: -a select all documentation sessions -j INT maximum number of parallel jobs (default 1) -o OPTION override Isabelle system OPTION (via NAME=VAL or NAME) -s sequential LaTeX jobs Build Isabelle documentation from documentation sessions with suitable document_variants entry. """, "a" -> (_ => all_docs = true), "j:" -> (arg => max_jobs = Value.Int.parse(arg)), "o:" -> (arg => options = options + arg), "s" -> (_ => sequential = true)) val docs = getopts(args) if (!all_docs && docs.isEmpty) getopts.usage() val progress = new Console_Progress() progress.interrupt_handler { build_doc(options, progress = progress, all_docs = all_docs, max_jobs = max_jobs, sequential = sequential, docs = docs) } }) } diff --git a/src/Pure/Admin/build_log.scala b/src/Pure/Admin/build_log.scala --- a/src/Pure/Admin/build_log.scala +++ b/src/Pure/Admin/build_log.scala @@ -1,1175 +1,1167 @@ /* Title: Pure/Admin/build_log.scala Author: Makarius Management of build log files and database storage. 
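Illustrative sketch of the Settings.Entry round-trip used below (assuming Properties.Eq splits/joins "a=b" lines at the first "="):

  Settings.Entry.unapply("ML_OPTIONS=\"-H 500\"")   // Some(("ML_OPTIONS", "-H 500"))
  Settings.Entry.getenv("ML_OPTIONS")               // e.g. "ML_OPTIONS=\"-H 500\""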
*/ package isabelle import java.io.{File => JFile} import java.time.format.{DateTimeFormatter, DateTimeParseException} import java.util.Locale import scala.collection.immutable.SortedMap import scala.collection.mutable import scala.util.matching.Regex object Build_Log { /** content **/ /* properties */ object Prop { val build_tags = SQL.Column.string("build_tags") // lines val build_args = SQL.Column.string("build_args") // lines val build_group_id = SQL.Column.string("build_group_id") val build_id = SQL.Column.string("build_id") val build_engine = SQL.Column.string("build_engine") val build_host = SQL.Column.string("build_host") val build_start = SQL.Column.date("build_start") val build_end = SQL.Column.date("build_end") val isabelle_version = SQL.Column.string("isabelle_version") val afp_version = SQL.Column.string("afp_version") val all_props: List[SQL.Column] = List(build_tags, build_args, build_group_id, build_id, build_engine, build_host, build_start, build_end, isabelle_version, afp_version) } /* settings */ object Settings { val ISABELLE_BUILD_OPTIONS = SQL.Column.string("ISABELLE_BUILD_OPTIONS") val ML_PLATFORM = SQL.Column.string("ML_PLATFORM") val ML_HOME = SQL.Column.string("ML_HOME") val ML_SYSTEM = SQL.Column.string("ML_SYSTEM") val ML_OPTIONS = SQL.Column.string("ML_OPTIONS") val ml_settings = List(ML_PLATFORM, ML_HOME, ML_SYSTEM, ML_OPTIONS) val all_settings = ISABELLE_BUILD_OPTIONS :: ml_settings type Entry = (String, String) type T = List[Entry] object Entry { def unapply(s: String): Option[Entry] = - s.indexOf('=') match { - case -1 => None - case i => - val a = s.substring(0, i) - val b = Library.perhaps_unquote(s.substring(i + 1)) - Some((a, b)) - } - def apply(a: String, b: String): String = a + "=" + quote(b) - def getenv(a: String): String = apply(a, Isabelle_System.getenv(a)) + for { (a, b) <- Properties.Eq.unapply(s) } + yield (a, Library.perhaps_unquote(b)) + def getenv(a: String): String = + Properties.Eq(a, quote(Isabelle_System.getenv(a))) } def show(): String = cat_lines( List(Entry.getenv("ISABELLE_TOOL_JAVA_OPTIONS"), Entry.getenv(ISABELLE_BUILD_OPTIONS.name), "") ::: ml_settings.map(c => Entry.getenv(c.name))) } /* file names */ def log_date(date: Date): String = String.format(Locale.ROOT, "%s.%05d", DateTimeFormatter.ofPattern("yyyy-MM-dd").format(date.rep), java.lang.Long.valueOf((date.time - date.midnight.time).ms / 1000)) def log_subdir(date: Date): Path = Path.explode("log") + Path.explode(date.rep.getYear.toString) def log_filename(engine: String, date: Date, more: List[String] = Nil): Path = Path.explode((engine :: log_date(date) :: more).mkString("", "_", ".log")) /** log file **/ def print_date(date: Date): String = Log_File.Date_Format(date) object Log_File { /* log file */ def plain_name(name: String): String = { List(".log", ".log.gz", ".log.xz", ".gz", ".xz").find(name.endsWith) match { case Some(s) => Library.try_unsuffix(s, name).get case None => name } } def apply(name: String, lines: List[String]): Log_File = new Log_File(plain_name(name), lines.map(Library.trim_line)) def apply(name: String, text: String): Log_File = new Log_File(plain_name(name), Library.trim_split_lines(text)) def apply(file: JFile): Log_File = { val name = file.getName val text = if (name.endsWith(".gz")) File.read_gzip(file) else if (name.endsWith(".xz")) File.read_xz(file) else File.read(file) apply(name, text) } def apply(path: Path): Log_File = apply(path.file) /* log file collections */ def is_log(file: JFile, prefixes: List[String] = 
List(Build_History.log_prefix, Identify.log_prefix, Identify.log_prefix2, Isatest.log_prefix, AFP_Test.log_prefix, Jenkins.log_prefix), suffixes: List[String] = List(".log", ".log.gz", ".log.xz")): Boolean = { val name = file.getName prefixes.exists(name.startsWith) && suffixes.exists(name.endsWith) && name != "isatest.log" && name != "afp-test.log" && name != "main.log" } /* date format */ val Date_Format = { val fmts = Date.Formatter.variants( List("EEE MMM d HH:mm:ss O yyyy", "EEE MMM d HH:mm:ss VV yyyy"), List(Locale.ENGLISH, Locale.GERMAN)) ::: List( DateTimeFormatter.RFC_1123_DATE_TIME, Date.Formatter.pattern("EEE MMM d HH:mm:ss yyyy").withZone(Date.timezone_berlin)) def tune_timezone(s: String): String = s match { case "CET" | "MET" => "GMT+1" case "CEST" | "MEST" => "GMT+2" case "EST" => "Europe/Berlin" case _ => s } def tune_weekday(s: String): String = s match { case "Die" => "Di" case "Mit" => "Mi" case "Don" => "Do" case "Fre" => "Fr" case "Sam" => "Sa" case "Son" => "So" case _ => s } def tune(s: String): String = Word.implode( Word.explode(s) match { case a :: "M\uFFFDr" :: bs => tune_weekday(a) :: "Mär" :: bs.map(tune_timezone) case a :: bs => tune_weekday(a) :: bs.map(tune_timezone) case Nil => Nil } ) Date.Format.make(fmts, tune) } } class Log_File private(val name: String, val lines: List[String]) { log_file => override def toString: String = name def text: String = cat_lines(lines) def err(msg: String): Nothing = error("Error in log file " + quote(name) + ": " + msg) /* date format */ object Strict_Date { def unapply(s: String): Some[Date] = try { Some(Log_File.Date_Format.parse(s)) } catch { case exn: DateTimeParseException => log_file.err(exn.getMessage) } } /* inlined text */ def filter(Marker: Protocol_Message.Marker): List[String] = for (Marker(text) <- lines) yield text def find(Marker: Protocol_Message.Marker): Option[String] = lines.collectFirst({ case Marker(text) => text }) def find_match(regexes: List[Regex]): Option[String] = regexes match { case Nil => None case regex :: rest => lines.iterator.map(regex.unapplySeq(_)).find(res => res.isDefined && res.get.length == 1). 
map(res => res.get.head) orElse find_match(rest) } /* settings */ - def get_setting(a: String): Option[Settings.Entry] = - lines.find(_.startsWith(a + "=")) match { - case Some(line) => Settings.Entry.unapply(line) - case None => None - } + def get_setting(name: String): Option[Settings.Entry] = + lines.collectFirst({ case Settings.Entry(a, b) if a == name => a -> b }) def get_all_settings: Settings.T = for { c <- Settings.all_settings; entry <- get_setting(c.name) } yield entry /* properties (YXML) */ val cache: XML.Cache = XML.Cache.make() def parse_props(text: String): Properties.T = try { cache.props(XML.Decode.properties(YXML.parse_body(text))) } catch { case _: XML.Error => log_file.err("malformed properties") } def filter_props(marker: Protocol_Message.Marker): List[Properties.T] = for (text <- filter(marker) if YXML.detect(text)) yield parse_props(text) def find_props(marker: Protocol_Message.Marker): Option[Properties.T] = for (text <- find(marker) if YXML.detect(text)) yield parse_props(text) /* parse various formats */ def parse_meta_info(): Meta_Info = Build_Log.parse_meta_info(log_file) def parse_build_info(ml_statistics: Boolean = false): Build_Info = Build_Log.parse_build_info(log_file, ml_statistics) def parse_session_info( command_timings: Boolean = false, theory_timings: Boolean = false, ml_statistics: Boolean = false, task_statistics: Boolean = false): Session_Info = Build_Log.parse_session_info( log_file, command_timings, theory_timings, ml_statistics, task_statistics) } /** digested meta info: produced by Admin/build_history in log.xz file **/ object Meta_Info { val empty: Meta_Info = Meta_Info(Nil, Nil) } sealed case class Meta_Info(props: Properties.T, settings: Settings.T) { def is_empty: Boolean = props.isEmpty && settings.isEmpty def get(c: SQL.Column): Option[String] = Properties.get(props, c.name) orElse Properties.get(settings, c.name) def get_date(c: SQL.Column): Option[Date] = get(c).map(Log_File.Date_Format.parse) } object Identify { val log_prefix = "isabelle_identify_" val log_prefix2 = "plain_identify_" def engine(log_file: Log_File): String = if (log_file.name.startsWith(Jenkins.log_prefix)) "jenkins_identify" else if (log_file.name.startsWith(log_prefix2)) "plain_identify" else "identify" def content(date: Date, isabelle_version: Option[String], afp_version: Option[String]): String = terminate_lines( List("isabelle_identify: " + Build_Log.print_date(date), "") ::: isabelle_version.map("Isabelle version: " + _).toList ::: afp_version.map("AFP version: " + _).toList) val Start = new Regex("""^isabelle_identify: (.+)$""") val No_End = new Regex("""$.""") val Isabelle_Version = List(new Regex("""^Isabelle version: (\S+)$""")) val AFP_Version = List(new Regex("""^AFP version: (\S+)$""")) } object Isatest { val log_prefix = "isatest-makeall-" val engine = "isatest" val Start = new Regex("""^------------------- starting test --- (.+) --- (.+)$""") val End = new Regex("""^------------------- test (?:successful|FAILED) --- (.+) --- .*$""") val Isabelle_Version = List(new Regex("""^Isabelle version: (\S+)$""")) } object AFP_Test { val log_prefix = "afp-test-devel-" val engine = "afp-test" val Start = new Regex("""^Start test(?: for .+)? at ([^,]+), (.*)$""") val Start_Old = new Regex("""^Start test(?: for .+)? 
at ([^,]+)$""") val End = new Regex("""^End test on (.+), .+, elapsed time:.*$""") val Isabelle_Version = List(new Regex("""^Isabelle version: .* -- hg id (\S+)$""")) val AFP_Version = List(new Regex("""^AFP version: .* -- hg id (\S+)$""")) val Bad_Init = new Regex("""^cp:.*: Disc quota exceeded$""") } object Jenkins { val log_prefix = "jenkins_" val engine = "jenkins" val Host = new Regex("""^Building remotely on (\S+) \((\S+)\).*$""") val Start = new Regex("""^(?:Started by an SCM change|Started from command line by admin|).*$""") val Start_Date = new Regex("""^Build started at (.+)$""") val No_End = new Regex("""$.""") val Isabelle_Version = List(new Regex("""^(?:Build for Isabelle id|Isabelle id) (\w+).*$"""), new Regex("""^ISABELLE_CI_REPO_ID="(\w+)".*$"""), new Regex("""^(\w{12}) tip.*$""")) val AFP_Version = List(new Regex("""^(?:Build for AFP id|AFP id) (\w+).*$"""), new Regex("""^ISABELLE_CI_AFP_ID="(\w+)".*$""")) val CONFIGURATION = "=== CONFIGURATION ===" val BUILD = "=== BUILD ===" } private def parse_meta_info(log_file: Log_File): Meta_Info = { def parse(engine: String, host: String, start: Date, End: Regex, Isabelle_Version: List[Regex], AFP_Version: List[Regex]): Meta_Info = { val build_id = { val prefix = proper_string(host) orElse proper_string(engine) getOrElse "build" prefix + ":" + start.time.ms } val build_engine = if (engine == "") Nil else List(Prop.build_engine.name -> engine) val build_host = if (host == "") Nil else List(Prop.build_host.name -> host) val start_date = List(Prop.build_start.name -> print_date(start)) val end_date = log_file.lines.last match { case End(log_file.Strict_Date(end_date)) => List(Prop.build_end.name -> print_date(end_date)) case _ => Nil } val isabelle_version = log_file.find_match(Isabelle_Version).map(Prop.isabelle_version.name -> _) val afp_version = log_file.find_match(AFP_Version).map(Prop.afp_version.name -> _) Meta_Info((Prop.build_id.name -> build_id) :: build_engine ::: build_host ::: start_date ::: end_date ::: isabelle_version.toList ::: afp_version.toList, log_file.get_all_settings) } log_file.lines match { case line :: _ if Protocol.Meta_Info_Marker.test_yxml(line) => Meta_Info(log_file.find_props(Protocol.Meta_Info_Marker).get, log_file.get_all_settings) case Identify.Start(log_file.Strict_Date(start)) :: _ => parse(Identify.engine(log_file), "", start, Identify.No_End, Identify.Isabelle_Version, Identify.AFP_Version) case Isatest.Start(log_file.Strict_Date(start), host) :: _ => parse(Isatest.engine, host, start, Isatest.End, Isatest.Isabelle_Version, Nil) case AFP_Test.Start(log_file.Strict_Date(start), host) :: _ => parse(AFP_Test.engine, host, start, AFP_Test.End, AFP_Test.Isabelle_Version, AFP_Test.AFP_Version) case AFP_Test.Start_Old(log_file.Strict_Date(start)) :: _ => parse(AFP_Test.engine, "", start, AFP_Test.End, AFP_Test.Isabelle_Version, AFP_Test.AFP_Version) case Jenkins.Start() :: _ => log_file.lines.dropWhile(_ != Jenkins.BUILD) match { case Jenkins.BUILD :: _ :: Jenkins.Start_Date(log_file.Strict_Date(start)) :: _ => val host = log_file.lines.takeWhile(_ != Jenkins.CONFIGURATION).collectFirst({ case Jenkins.Host(a, b) => a + "." 
+ b }).getOrElse("") parse(Jenkins.engine, host, start.to(Date.timezone_berlin), Jenkins.No_End, Jenkins.Isabelle_Version, Jenkins.AFP_Version) case _ => Meta_Info.empty } case line :: _ if line.startsWith("\u0000") => Meta_Info.empty case List(Isatest.End(_)) => Meta_Info.empty case _ :: AFP_Test.Bad_Init() :: _ => Meta_Info.empty case Nil => Meta_Info.empty case _ => log_file.err("cannot detect log file format") } } /** build info: toplevel output of isabelle build or Admin/build_history **/ val SESSION_NAME = "session_name" object Session_Status extends Enumeration { val existing, finished, failed, cancelled = Value } sealed case class Session_Entry( chapter: String = "", groups: List[String] = Nil, threads: Option[Int] = None, timing: Timing = Timing.zero, ml_timing: Timing = Timing.zero, sources: Option[String] = None, heap_size: Option[Long] = None, status: Option[Session_Status.Value] = None, errors: List[String] = Nil, theory_timings: Map[String, Timing] = Map.empty, ml_statistics: List[Properties.T] = Nil) { def proper_groups: Option[String] = if (groups.isEmpty) None else Some(cat_lines(groups)) def finished: Boolean = status == Some(Session_Status.finished) def failed: Boolean = status == Some(Session_Status.failed) } object Build_Info { val sessions_dummy: Map[String, Session_Entry] = Map("" -> Session_Entry(theory_timings = Map("" -> Timing.zero))) } sealed case class Build_Info(sessions: Map[String, Session_Entry]) { def finished_sessions: List[String] = for ((a, b) <- sessions.toList if b.finished) yield a def failed_sessions: List[String] = for ((a, b) <- sessions.toList if b.failed) yield a } private def parse_build_info(log_file: Log_File, parse_ml_statistics: Boolean): Build_Info = { object Chapter_Name { def unapply(s: String): Some[(String, String)] = space_explode('/', s) match { case List(chapter, name) => Some((chapter, name)) case _ => Some(("", s)) } } val Session_No_Groups = new Regex("""^Session (\S+)$""") val Session_Groups = new Regex("""^Session (\S+) \((.*)\)$""") val Session_Finished1 = new Regex("""^Finished (\S+) \((\d+):(\d+):(\d+) elapsed time, (\d+):(\d+):(\d+) cpu time.*$""") val Session_Finished2 = new Regex("""^Finished ([^\s/]+) \((\d+):(\d+):(\d+) elapsed time.*$""") val Session_Timing = new Regex("""^Timing (\S+) \((\d+) threads, (\d+\.\d+)s elapsed time, (\d+\.\d+)s cpu time, (\d+\.\d+)s GC time.*$""") val Session_Started = new Regex("""^(?:Running|Building) (\S+) \.\.\.$""") val Sources = new Regex("""^Sources (\S+) (\S{""" + SHA1.digest_length + """})$""") val Heap = new Regex("""^Heap (\S+) \((\d+) bytes\)$""") object Theory_Timing { def unapply(line: String): Option[(String, (String, Timing))] = Protocol.Theory_Timing_Marker.unapply(line.replace('~', '-')).map(log_file.parse_props) match { case Some((SESSION_NAME, session) :: props) => for (theory <- Markup.Name.unapply(props)) yield (session, theory -> Markup.Timing_Properties.parse(props)) case _ => None } } var chapter = Map.empty[String, String] var groups = Map.empty[String, List[String]] var threads = Map.empty[String, Int] var timing = Map.empty[String, Timing] var ml_timing = Map.empty[String, Timing] var started = Set.empty[String] var sources = Map.empty[String, String] var heap_sizes = Map.empty[String, Long] var theory_timings = Map.empty[String, Map[String, Timing]] var ml_statistics = Map.empty[String, List[Properties.T]] var errors = Map.empty[String, List[String]] def all_sessions: Set[String] = chapter.keySet ++ groups.keySet ++ threads.keySet ++ timing.keySet ++ 
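// union of all key sets: a session is reported as soon as the line
// loop below has recorded any kind of data for it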
ml_timing.keySet ++ started ++ sources.keySet ++ heap_sizes.keySet ++ theory_timings.keySet ++ ml_statistics.keySet for (line <- log_file.lines) { line match { case Session_No_Groups(Chapter_Name(chapt, name)) => chapter += (name -> chapt) groups += (name -> Nil) case Session_Groups(Chapter_Name(chapt, name), grps) => chapter += (name -> chapt) groups += (name -> Word.explode(grps)) case Session_Started(name) => started += name case Session_Finished1(name, Value.Int(e1), Value.Int(e2), Value.Int(e3), Value.Int(c1), Value.Int(c2), Value.Int(c3)) => val elapsed = Time.hms(e1, e2, e3) val cpu = Time.hms(c1, c2, c3) timing += (name -> Timing(elapsed, cpu, Time.zero)) case Session_Finished2(name, Value.Int(e1), Value.Int(e2), Value.Int(e3)) => val elapsed = Time.hms(e1, e2, e3) timing += (name -> Timing(elapsed, Time.zero, Time.zero)) case Session_Timing(name, Value.Int(t), Value.Double(e), Value.Double(c), Value.Double(g)) => val elapsed = Time.seconds(e) val cpu = Time.seconds(c) val gc = Time.seconds(g) ml_timing += (name -> Timing(elapsed, cpu, gc)) threads += (name -> t) case Sources(name, s) => sources += (name -> s) case Heap(name, Value.Long(size)) => heap_sizes += (name -> size) case _ if Protocol.Theory_Timing_Marker.test_yxml(line) => line match { case Theory_Timing(name, theory_timing) => theory_timings += (name -> (theory_timings.getOrElse(name, Map.empty) + theory_timing)) case _ => log_file.err("malformed theory_timing " + quote(line)) } case _ if parse_ml_statistics && Protocol.ML_Statistics_Marker.test_yxml(line) => Protocol.ML_Statistics_Marker.unapply(line).map(log_file.parse_props) match { case Some((SESSION_NAME, name) :: props) => ml_statistics += (name -> (props :: ml_statistics.getOrElse(name, Nil))) case _ => log_file.err("malformed ML_statistics " + quote(line)) } case _ if Protocol.Error_Message_Marker.test_yxml(line) => Protocol.Error_Message_Marker.unapply(line).map(log_file.parse_props) match { case Some(List((SESSION_NAME, name), (Markup.CONTENT, msg))) => errors += (name -> (msg :: errors.getOrElse(name, Nil))) case _ => log_file.err("malformed error message " + quote(line)) } case _ => } } val sessions = Map( (for (name <- all_sessions.toList) yield { val status = if (timing.isDefinedAt(name) || ml_timing.isDefinedAt(name)) Session_Status.finished else if (started(name)) Session_Status.failed else Session_Status.existing val entry = Session_Entry( chapter = chapter.getOrElse(name, ""), groups = groups.getOrElse(name, Nil), threads = threads.get(name), timing = timing.getOrElse(name, Timing.zero), ml_timing = ml_timing.getOrElse(name, Timing.zero), sources = sources.get(name), heap_size = heap_sizes.get(name), status = Some(status), errors = errors.getOrElse(name, Nil).reverse, theory_timings = theory_timings.getOrElse(name, Map.empty), ml_statistics = ml_statistics.getOrElse(name, Nil).reverse) (name -> entry) }):_*) Build_Info(sessions) } /** session info: produced by isabelle build as session database **/ sealed case class Session_Info( session_timing: Properties.T, command_timings: List[Properties.T], theory_timings: List[Properties.T], ml_statistics: List[Properties.T], task_statistics: List[Properties.T], errors: List[String]) { def error(s: String): Session_Info = copy(errors = errors ::: List(s)) } private def parse_session_info( log_file: Log_File, command_timings: Boolean, theory_timings: Boolean, ml_statistics: Boolean, task_statistics: Boolean): Session_Info = { Session_Info( session_timing = log_file.find_props(Protocol.Session_Timing_Marker) 
getOrElse Nil, command_timings = if (command_timings) log_file.filter_props(Protocol.Command_Timing_Marker) else Nil, theory_timings = if (theory_timings) log_file.filter_props(Protocol.Theory_Timing_Marker) else Nil, ml_statistics = if (ml_statistics) log_file.filter_props(Protocol.ML_Statistics_Marker) else Nil, task_statistics = if (task_statistics) log_file.filter_props(Protocol.Task_Statistics_Marker) else Nil, errors = log_file.filter(Protocol.Error_Message_Marker)) } def compress_errors(errors: List[String], cache: XZ.Cache = XZ.Cache()): Option[Bytes] = if (errors.isEmpty) None else { Some(Bytes(YXML.string_of_body(XML.Encode.list(XML.Encode.string)(errors))). compress(cache = cache)) } def uncompress_errors(bytes: Bytes, cache: XML.Cache = XML.Cache.make()): List[String] = if (bytes.is_empty) Nil else { XML.Decode.list(YXML.string_of_body)( YXML.parse_body(bytes.uncompress(cache = cache.xz).text, cache = cache)) } /** persistent store **/ /* SQL data model */ object Data { def build_log_table(name: String, columns: List[SQL.Column], body: String = ""): SQL.Table = SQL.Table("isabelle_build_log_" + name, columns, body) /* main content */ val log_name = SQL.Column.string("log_name").make_primary_key val session_name = SQL.Column.string("session_name").make_primary_key val theory_name = SQL.Column.string("theory_name").make_primary_key val chapter = SQL.Column.string("chapter") val groups = SQL.Column.string("groups") val threads = SQL.Column.int("threads") val timing_elapsed = SQL.Column.long("timing_elapsed") val timing_cpu = SQL.Column.long("timing_cpu") val timing_gc = SQL.Column.long("timing_gc") val timing_factor = SQL.Column.double("timing_factor") val ml_timing_elapsed = SQL.Column.long("ml_timing_elapsed") val ml_timing_cpu = SQL.Column.long("ml_timing_cpu") val ml_timing_gc = SQL.Column.long("ml_timing_gc") val ml_timing_factor = SQL.Column.double("ml_timing_factor") val theory_timing_elapsed = SQL.Column.long("theory_timing_elapsed") val theory_timing_cpu = SQL.Column.long("theory_timing_cpu") val theory_timing_gc = SQL.Column.long("theory_timing_gc") val heap_size = SQL.Column.long("heap_size") val status = SQL.Column.string("status") val errors = SQL.Column.bytes("errors") val sources = SQL.Column.string("sources") val ml_statistics = SQL.Column.bytes("ml_statistics") val known = SQL.Column.bool("known") val meta_info_table = build_log_table("meta_info", log_name :: Prop.all_props ::: Settings.all_settings) val sessions_table = build_log_table("sessions", List(log_name, session_name, chapter, groups, threads, timing_elapsed, timing_cpu, timing_gc, timing_factor, ml_timing_elapsed, ml_timing_cpu, ml_timing_gc, ml_timing_factor, heap_size, status, errors, sources)) val theories_table = build_log_table("theories", List(log_name, session_name, theory_name, theory_timing_elapsed, theory_timing_cpu, theory_timing_gc)) val ml_statistics_table = build_log_table("ml_statistics", List(log_name, session_name, ml_statistics)) /* AFP versions */ val isabelle_afp_versions_table: SQL.Table = { val version1 = Prop.isabelle_version val version2 = Prop.afp_version build_log_table("isabelle_afp_versions", List(version1.make_primary_key, version2), SQL.select(List(version1, version2), distinct = true) + meta_info_table + " WHERE " + version1.defined + " AND " + version2.defined) } /* earliest pull date for repository version (PostgreSQL queries) */ def pull_date(afp: Boolean = false): SQL.Column = if (afp) SQL.Column.date("afp_pull_date") else SQL.Column.date("pull_date") def 
pull_date_table(afp: Boolean = false): SQL.Table = { val (name, versions) = if (afp) ("afp_pull_date", List(Prop.isabelle_version, Prop.afp_version)) else ("pull_date", List(Prop.isabelle_version)) build_log_table(name, versions.map(_.make_primary_key) ::: List(pull_date(afp)), "SELECT " + versions.mkString(", ") + ", min(" + Prop.build_start + ") AS " + pull_date(afp) + " FROM " + meta_info_table + " WHERE " + (versions ::: List(Prop.build_start)).map(_.defined).mkString(" AND ") + " GROUP BY " + versions.mkString(", ")) } /* recent entries */ def recent_time(days: Int): SQL.Source = "now() - INTERVAL '" + days.max(0) + " days'" def recent_pull_date_table( days: Int, rev: String = "", afp_rev: Option[String] = None): SQL.Table = { val afp = afp_rev.isDefined val rev2 = afp_rev.getOrElse("") val table = pull_date_table(afp) val version1 = Prop.isabelle_version val version2 = Prop.afp_version val eq1 = version1(table).toString + " = " + SQL.string(rev) val eq2 = version2(table).toString + " = " + SQL.string(rev2) SQL.Table("recent_pull_date", table.columns, table.select(table.columns, "WHERE " + pull_date(afp)(table) + " > " + recent_time(days) + (if (rev != "" && rev2 == "") " OR " + eq1 else if (rev == "" && rev2 != "") " OR " + eq2 else if (rev != "" && rev2 != "") " OR (" + eq1 + " AND " + eq2 + ")" else ""))) } def select_recent_log_names(days: Int): SQL.Source = { val table1 = meta_info_table val table2 = recent_pull_date_table(days) table1.select(List(log_name), distinct = true) + SQL.join_inner + table2.query_named + " ON " + Prop.isabelle_version(table1) + " = " + Prop.isabelle_version(table2) } def select_recent_versions(days: Int, rev: String = "", afp_rev: Option[String] = None, sql: SQL.Source = ""): SQL.Source = { val afp = afp_rev.isDefined val version = Prop.isabelle_version val table1 = recent_pull_date_table(days, rev = rev, afp_rev = afp_rev) val table2 = meta_info_table val aux_table = SQL.Table("aux", table2.columns, table2.select(sql = sql)) val columns = table1.columns.map(c => c(table1)) ::: List(known.copy(expr = log_name(aux_table).defined)) SQL.select(columns, distinct = true) + table1.query_named + SQL.join_outer + aux_table.query_named + " ON " + version(table1) + " = " + version(aux_table) + " ORDER BY " + pull_date(afp)(table1) + " DESC" } /* universal view on main data */ val universal_table: SQL.Table = { val afp_pull_date = pull_date(afp = true) val version1 = Prop.isabelle_version val version2 = Prop.afp_version val table1 = meta_info_table val table2 = pull_date_table(afp = true) val table3 = pull_date_table() val a_columns = log_name :: afp_pull_date :: table1.columns.tail val a_table = SQL.Table("a", a_columns, SQL.select(List(log_name, afp_pull_date) ::: table1.columns.tail.map(_.apply(table1))) + table1 + SQL.join_outer + table2 + " ON " + version1(table1) + " = " + version1(table2) + " AND " + version2(table1) + " = " + version2(table2)) val b_columns = log_name :: pull_date() :: a_columns.tail val b_table = SQL.Table("b", b_columns, SQL.select( List(log_name(a_table), pull_date()(table3)) ::: a_columns.tail.map(_.apply(a_table))) + a_table.query_named + SQL.join_outer + table3 + " ON " + version1(a_table) + " = " + version1(table3)) val c_columns = b_columns ::: sessions_table.columns.tail val c_table = SQL.Table("c", c_columns, SQL.select(log_name(b_table) :: c_columns.tail) + b_table.query_named + SQL.join_inner + sessions_table + " ON " + log_name(b_table) + " = " + log_name(sessions_table)) SQL.Table("isabelle_build_log", c_columns ::: 
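// finally attach the (compressed) ml_statistics column via an outer
// join, so that sessions without statistics are preserved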
List(ml_statistics), { SQL.select(c_columns.map(_.apply(c_table)) ::: List(ml_statistics)) + c_table.query_named + SQL.join_outer + ml_statistics_table + " ON " + log_name(c_table) + " = " + log_name(ml_statistics_table) + " AND " + session_name(c_table) + " = " + session_name(ml_statistics_table) }) } } /* database access */ def store(options: Options, cache: XML.Cache = XML.Cache.make()): Store = new Store(options, cache) class Store private[Build_Log](options: Options, val cache: XML.Cache) { def open_database( user: String = options.string("build_log_database_user"), password: String = options.string("build_log_database_password"), database: String = options.string("build_log_database_name"), host: String = options.string("build_log_database_host"), port: Int = options.int("build_log_database_port"), ssh_host: String = options.string("build_log_ssh_host"), ssh_user: String = options.string("build_log_ssh_user"), ssh_port: Int = options.int("build_log_ssh_port")): PostgreSQL.Database = { PostgreSQL.open_database( user = user, password = password, database = database, host = host, port = port, ssh = if (ssh_host == "") None else Some(SSH.open_session(options, host = ssh_host, user = ssh_user, port = ssh_port)), ssh_close = true) } def update_database( db: PostgreSQL.Database, dirs: List[Path], ml_statistics: Boolean = false): Unit = { val log_files = dirs.flatMap(dir => File.find_files(dir.file, pred = Log_File.is_log(_), follow_links = true)) write_info(db, log_files, ml_statistics = ml_statistics) db.create_view(Data.pull_date_table()) db.create_view(Data.pull_date_table(afp = true)) db.create_view(Data.universal_table) } def snapshot_database(db: PostgreSQL.Database, sqlite_database: Path, days: Int = 100, ml_statistics: Boolean = false): Unit = { Isabelle_System.make_directory(sqlite_database.dir) sqlite_database.file.delete using(SQLite.open_database(sqlite_database))(db2 => { db.transaction { db2.transaction { // main content db2.create_table(Data.meta_info_table) db2.create_table(Data.sessions_table) db2.create_table(Data.theories_table) db2.create_table(Data.ml_statistics_table) val recent_log_names = db.using_statement(Data.select_recent_log_names(days))(stmt => stmt.execute_query().iterator(_.string(Data.log_name)).toList) for (log_name <- recent_log_names) { read_meta_info(db, log_name).foreach(meta_info => update_meta_info(db2, log_name, meta_info)) update_sessions(db2, log_name, read_build_info(db, log_name)) if (ml_statistics) { update_ml_statistics(db2, log_name, read_build_info(db, log_name, ml_statistics = true)) } } // pull_date for (afp <- List(false, true)) { val afp_rev = if (afp) Some("") else None val table = Data.pull_date_table(afp) db2.create_table(table) db2.using_statement(table.insert())(stmt2 => { db.using_statement( Data.recent_pull_date_table(days, afp_rev = afp_rev).query)(stmt => { val res = stmt.execute_query() while (res.next()) { for ((c, i) <- table.columns.zipWithIndex) { stmt2.string(i + 1) = res.get_string(c) } stmt2.execute() } }) }) } // full view db2.create_view(Data.universal_table) } } db2.rebuild }) } def domain(db: SQL.Database, table: SQL.Table, column: SQL.Column): Set[String] = db.using_statement(table.select(List(column), distinct = true))(stmt => stmt.execute_query().iterator(_.string(column)).toSet) def update_meta_info(db: SQL.Database, log_name: String, meta_info: Meta_Info): Unit = { val table = Data.meta_info_table db.using_statement(db.insert_permissive(table))(stmt => { stmt.string(1) = log_name for ((c, i) <- 
table.columns.tail.zipWithIndex) { if (c.T == SQL.Type.Date) stmt.date(i + 2) = meta_info.get_date(c) else stmt.string(i + 2) = meta_info.get(c) } stmt.execute() }) } def update_sessions(db: SQL.Database, log_name: String, build_info: Build_Info): Unit = { val table = Data.sessions_table db.using_statement(db.insert_permissive(table))(stmt => { val sessions = if (build_info.sessions.isEmpty) Build_Info.sessions_dummy else build_info.sessions for ((session_name, session) <- sessions) { stmt.string(1) = log_name stmt.string(2) = session_name stmt.string(3) = proper_string(session.chapter) stmt.string(4) = session.proper_groups stmt.int(5) = session.threads stmt.long(6) = session.timing.elapsed.proper_ms stmt.long(7) = session.timing.cpu.proper_ms stmt.long(8) = session.timing.gc.proper_ms stmt.double(9) = session.timing.factor stmt.long(10) = session.ml_timing.elapsed.proper_ms stmt.long(11) = session.ml_timing.cpu.proper_ms stmt.long(12) = session.ml_timing.gc.proper_ms stmt.double(13) = session.ml_timing.factor stmt.long(14) = session.heap_size stmt.string(15) = session.status.map(_.toString) stmt.bytes(16) = compress_errors(session.errors, cache = cache.xz) stmt.string(17) = session.sources stmt.execute() } }) } def update_theories(db: SQL.Database, log_name: String, build_info: Build_Info): Unit = { val table = Data.theories_table db.using_statement(db.insert_permissive(table))(stmt => { val sessions = if (build_info.sessions.forall({ case (_, session) => session.theory_timings.isEmpty })) Build_Info.sessions_dummy else build_info.sessions for { (session_name, session) <- sessions (theory_name, timing) <- session.theory_timings } { stmt.string(1) = log_name stmt.string(2) = session_name stmt.string(3) = theory_name stmt.long(4) = timing.elapsed.ms stmt.long(5) = timing.cpu.ms stmt.long(6) = timing.gc.ms stmt.execute() } }) } def update_ml_statistics(db: SQL.Database, log_name: String, build_info: Build_Info): Unit = { val table = Data.ml_statistics_table db.using_statement(db.insert_permissive(table))(stmt => { val ml_stats: List[(String, Option[Bytes])] = Par_List.map[(String, Session_Entry), (String, Option[Bytes])]( { case (a, b) => (a, Properties.compress(b.ml_statistics, cache = cache.xz).proper) }, build_info.sessions.iterator.filter(p => p._2.ml_statistics.nonEmpty).toList) val entries = if (ml_stats.nonEmpty) ml_stats else List("" -> None) for ((session_name, ml_statistics) <- entries) { stmt.string(1) = log_name stmt.string(2) = session_name stmt.bytes(3) = ml_statistics stmt.execute() } }) } def write_info(db: SQL.Database, files: List[JFile], ml_statistics: Boolean = false): Unit = { abstract class Table_Status(table: SQL.Table) { db.create_table(table) private var known: Set[String] = domain(db, table, Data.log_name) def required(file: JFile): Boolean = !known(Log_File.plain_name(file.getName)) def update_db(db: SQL.Database, log_file: Log_File): Unit def update(log_file: Log_File): Unit = { if (!known(log_file.name)) { update_db(db, log_file) known += log_file.name } } } val status = List( new Table_Status(Data.meta_info_table) { override def update_db(db: SQL.Database, log_file: Log_File): Unit = update_meta_info(db, log_file.name, log_file.parse_meta_info()) }, new Table_Status(Data.sessions_table) { override def update_db(db: SQL.Database, log_file: Log_File): Unit = update_sessions(db, log_file.name, log_file.parse_build_info()) }, new Table_Status(Data.theories_table) { override def update_db(db: SQL.Database, log_file: Log_File): Unit = update_theories(db, 
log_file.name, log_file.parse_build_info()) }, new Table_Status(Data.ml_statistics_table) { override def update_db(db: SQL.Database, log_file: Log_File): Unit = if (ml_statistics) { update_ml_statistics(db, log_file.name, log_file.parse_build_info(ml_statistics = true)) } }) for (file_group <- files.filter(file => status.exists(_.required(file))). grouped(options.int("build_log_transaction_size") max 1)) { val log_files = Par_List.map[JFile, Log_File](Log_File.apply, file_group) db.transaction { log_files.foreach(log_file => status.foreach(_.update(log_file))) } } } def read_meta_info(db: SQL.Database, log_name: String): Option[Meta_Info] = { val table = Data.meta_info_table val columns = table.columns.tail db.using_statement(table.select(columns, Data.log_name.where_equal(log_name)))(stmt => { val res = stmt.execute_query() if (!res.next()) None else { val results = columns.map(c => c.name -> (if (c.T == SQL.Type.Date) res.get_date(c).map(Log_File.Date_Format(_)) else res.get_string(c))) val n = Prop.all_props.length val props = for ((x, Some(y)) <- results.take(n)) yield (x, y) val settings = for ((x, Some(y)) <- results.drop(n)) yield (x, y) Some(Meta_Info(props, settings)) } }) } def read_build_info( db: SQL.Database, log_name: String, session_names: List[String] = Nil, ml_statistics: Boolean = false): Build_Info = { val table1 = Data.sessions_table val table2 = Data.ml_statistics_table val where_log_name = Data.log_name(table1).where_equal(log_name) + " AND " + Data.session_name(table1) + " <> ''" val where = if (session_names.isEmpty) where_log_name else where_log_name + " AND " + SQL.member(Data.session_name(table1).ident, session_names) val columns1 = table1.columns.tail.map(_.apply(table1)) val (columns, from) = if (ml_statistics) { val columns = columns1 ::: List(Data.ml_statistics(table2)) val join = table1.toString + SQL.join_outer + table2 + " ON " + Data.log_name(table1) + " = " + Data.log_name(table2) + " AND " + Data.session_name(table1) + " = " + Data.session_name(table2) (columns, SQL.enclose(join)) } else (columns1, table1.ident) val sessions = db.using_statement(SQL.select(columns) + from + " " + where)(stmt => { stmt.execute_query().iterator(res => { val session_name = res.string(Data.session_name) val session_entry = Session_Entry( chapter = res.string(Data.chapter), groups = split_lines(res.string(Data.groups)), threads = res.get_int(Data.threads), timing = res.timing(Data.timing_elapsed, Data.timing_cpu, Data.timing_gc), ml_timing = res.timing(Data.ml_timing_elapsed, Data.ml_timing_cpu, Data.ml_timing_gc), sources = res.get_string(Data.sources), heap_size = res.get_long(Data.heap_size), status = res.get_string(Data.status).map(Session_Status.withName), errors = uncompress_errors(res.bytes(Data.errors), cache = cache), ml_statistics = if (ml_statistics) { Properties.uncompress(res.bytes(Data.ml_statistics), cache = cache) } else Nil) session_name -> session_entry }).toMap }) Build_Info(sessions) } } } diff --git a/src/Pure/Admin/components.scala b/src/Pure/Admin/components.scala --- a/src/Pure/Admin/components.scala +++ b/src/Pure/Admin/components.scala @@ -1,356 +1,356 @@ /* Title: Pure/Admin/components.scala Author: Makarius Isabelle system components. 
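A component is distributed as a .tar.gz archive whose top-level directory
provides etc/settings and/or etc/components (see check_dir below).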
*/ package isabelle import java.io.{File => JFile} object Components { /* archive name */ object Archive { val suffix: String = ".tar.gz" def apply(name: String): String = if (name == "") error("Bad component name: " + quote(name)) else name + suffix def unapply(archive: String): Option[String] = { for { name0 <- Library.try_unsuffix(suffix, archive) name <- proper_string(name0) } yield name } def get_name(archive: String): String = unapply(archive) getOrElse error("Bad component archive name (expecting .tar.gz): " + quote(archive)) } /* component collections */ def default_component_repository: String = Isabelle_System.getenv("ISABELLE_COMPONENT_REPOSITORY") val default_components_base: Path = Path.explode("$ISABELLE_COMPONENTS_BASE") def admin(dir: Path): Path = dir + Path.explode("Admin/components") def contrib(dir: Path = Path.current, name: String = ""): Path = dir + Path.explode("contrib") + Path.explode(name) def unpack(dir: Path, archive: Path, progress: Progress = new Progress): String = { val name = Archive.get_name(archive.file_name) progress.echo("Unpacking " + name) Isabelle_System.gnutar("-xzf " + File.bash_path(archive), dir = dir).check name } def resolve(base_dir: Path, names: List[String], target_dir: Option[Path] = None, copy_dir: Option[Path] = None, progress: Progress = new Progress): Unit = { Isabelle_System.make_directory(base_dir) for (name <- names) { val archive_name = Archive(name) val archive = base_dir + Path.explode(archive_name) if (!archive.is_file) { val remote = Components.default_component_repository + "/" + archive_name Isabelle_System.download_file(remote, archive, progress = progress) } for (dir <- copy_dir) { Isabelle_System.make_directory(dir) Isabelle_System.copy_file(archive, dir) } unpack(target_dir getOrElse base_dir, archive, progress = progress) } } private val platforms_family: Map[Platform.Family.Value, Set[String]] = Map( Platform.Family.linux_arm -> Set("arm64-linux", "arm64_32-linux"), Platform.Family.linux -> Set("x86_64-linux", "x86_64_32-linux"), Platform.Family.macos -> Set("arm64-darwin", "arm64_32-darwin", "x86_64-darwin", "x86_64_32-darwin"), Platform.Family.windows -> Set("x86_64-cygwin", "x86_64-windows", "x86_64_32-windows", "x86-windows")) private val platforms_all: Set[String] = Set("x86-linux", "x86-cygwin") ++ platforms_family.iterator.flatMap(_._2) def purge(dir: Path, platform: Platform.Family.Value): Unit = { val purge_set = platforms_all -- platforms_family(platform) File.find_files(dir.file, (file: JFile) => file.isDirectory && purge_set(file.getName), include_dirs = true).foreach(Isabelle_System.rm_tree) } /* component directory content */ def settings(dir: Path = Path.current): Path = dir + Path.explode("etc/settings") def components(dir: Path = Path.current): Path = dir + Path.explode("etc/components") def check_dir(dir: Path): Boolean = settings(dir).is_file || components(dir).is_file def read_components(dir: Path): List[String] = split_lines(File.read(components(dir))).filter(_.nonEmpty) def write_components(dir: Path, lines: List[String]): Unit = File.write(components(dir), terminate_lines(lines)) /* component repository content */ val components_sha1: Path = Path.explode("~~/Admin/components/components.sha1") sealed case class SHA1_Digest(sha1: String, file_name: String) { override def toString: String = sha1 + " " + file_name } def read_components_sha1(lines: List[String] = Nil): List[SHA1_Digest] = (proper_list(lines) getOrElse split_lines(File.read(components_sha1))).flatMap(line => Word.explode(line) match { 
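// each line follows the "sha1sum" output convention: <digest> <file name>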
case Nil => None case List(sha1, name) => Some(SHA1_Digest(sha1, name)) case _ => error("Bad components.sha1 entry: " + quote(line)) }) def write_components_sha1(entries: List[SHA1_Digest]): Unit = File.write(components_sha1, entries.sortBy(_.file_name).mkString("", "\n", "\n")) /** manage user components **/ val components_path = Path.explode("$ISABELLE_HOME_USER/etc/components") def read_components(): List[String] = if (components_path.is_file) Library.trim_split_lines(File.read(components_path)) else Nil def write_components(lines: List[String]): Unit = { Isabelle_System.make_directory(components_path.dir) File.write(components_path, Library.terminate_lines(lines)) } def update_components(add: Boolean, path0: Path, progress: Progress = new Progress): Unit = { val path = path0.expand.absolute if (!(path + Path.explode("etc/settings")).is_file && !(path + Path.explode("etc/components")).is_file) error("Bad component directory: " + path) val lines1 = read_components() val lines2 = lines1.filter(line => line.isEmpty || line.startsWith("#") || !File.eq(Path.explode(line), path)) val lines3 = if (add) lines2 ::: List(path.implode) else lines2 if (lines1 != lines3) write_components(lines3) val prefix = if (lines1 == lines3) "Unchanged" else if (add) "Added" else "Removed" progress.echo(prefix + " component " + path) } /* main entry point */ def main(args: Array[String]): Unit = { Command_Line.tool { for (arg <- args) { val add = if (arg.startsWith("+")) true else if (arg.startsWith("-")) false else error("Bad argument: " + quote(arg)) val path = Path.explode(arg.substring(1)) update_components(add, path, progress = new Console_Progress) } } } /** build and publish components **/ def build_components( options: Options, components: List[Path], progress: Progress = new Progress, publish: Boolean = false, force: Boolean = false, update_components_sha1: Boolean = false): Unit = { val archives: List[Path] = for (path <- components) yield { path.file_name match { case Archive(_) => path case name => if (!path.is_dir) error("Bad component directory: " + path) else if (!check_dir(path)) { error("Malformed component directory: " + path + "\n (requires " + settings() + " or " + Components.components() + ")") } else { val component_path = path.expand val archive_dir = component_path.dir val archive_name = Archive(name) val archive = archive_dir + Path.explode(archive_name) if (archive.is_file && !force) { error("Component archive already exists: " + archive) } progress.echo("Packaging " + archive_name) Isabelle_System.gnutar("-czf " + File.bash_path(archive) + " " + Bash.string(name), dir = archive_dir).check archive } } } if ((publish && archives.nonEmpty) || update_components_sha1) { options.string("isabelle_components_server") match { case SSH.Target(user, host) => using(SSH.open_session(options, host = host, user = user))(ssh => { val components_dir = Path.explode(options.string("isabelle_components_dir")) val contrib_dir = Path.explode(options.string("isabelle_components_contrib_dir")) for (dir <- List(components_dir, contrib_dir) if !ssh.is_dir(dir)) { error("Bad remote directory: " + dir) } if (publish) { for (archive <- archives) { val archive_name = archive.file_name val name = Archive.get_name(archive_name) val remote_component = components_dir + archive.base val remote_contrib = contrib_dir + Path.explode(name) // component archive if (ssh.is_file(remote_component) && !force) { error("Remote component archive already exists: " + remote_component) } progress.echo("Uploading " + archive_name) 
ssh.write_file(remote_component, archive) // contrib directory val is_standard_component = Isabelle_System.with_tmp_dir("component")(tmp_dir => { Isabelle_System.gnutar("-xzf " + File.bash_path(archive), dir = tmp_dir).check check_dir(tmp_dir + Path.explode(name)) }) if (is_standard_component) { if (ssh.is_dir(remote_contrib)) { if (force) ssh.rm_tree(remote_contrib) else error("Remote component directory already exists: " + remote_contrib) } progress.echo("Unpacking remote " + archive_name) ssh.execute("tar -C " + ssh.bash_path(contrib_dir) + " -xzf " + ssh.bash_path(remote_component)).check } else { progress.echo_warning("No unpacking of non-standard component: " + archive_name) } } } // remote SHA1 digests if (update_components_sha1) { val lines = for { entry <- ssh.read_dir(components_dir) if entry.is_file && entry.name.endsWith(Archive.suffix) } yield { progress.echo("Digesting remote " + entry.name) ssh.execute("cd " + ssh.bash_path(components_dir) + "; sha1sum " + Bash.string(entry.name)).check.out } write_components_sha1(read_components_sha1(lines)) } }) case s => error("Bad isabelle_components_server: " + quote(s)) } } // local SHA1 digests { val new_entries = for (archive <- archives) yield { val file_name = archive.file_name progress.echo("Digesting local " + file_name) val sha1 = SHA1.digest(archive).rep SHA1_Digest(sha1, file_name) } val new_names = new_entries.map(_.file_name).toSet write_components_sha1( new_entries ::: read_components_sha1().filterNot(entry => new_names.contains(entry.file_name))) } } /* Isabelle tool wrapper */ private val relevant_options = List("isabelle_components_server", "isabelle_components_dir", "isabelle_components_contrib_dir") val isabelle_tool = Isabelle_Tool("build_components", "build and publish Isabelle components", Scala_Project.here, args => { var publish = false var update_components_sha1 = false var force = false var options = Options.init() def show_options: String = cat_lines(relevant_options.map(name => options.options(name).print)) val getopts = Getopts(""" Usage: isabelle build_components [OPTIONS] ARCHIVES... DIRS... Options are: -P publish on SSH server (see options below) -f force: overwrite existing component archives and directories -o OPTION override Isabelle system OPTION (via NAME=VAL or NAME) -u update all SHA1 keys in Isabelle repository Admin/components Build and publish Isabelle components as .tar.gz archives on SSH server, depending on system options: -""" + Library.prefix_lines(" ", show_options) + "\n", +""" + Library.indent_lines(2, show_options) + "\n", "P" -> (_ => publish = true), "f" -> (_ => force = true), "o:" -> (arg => options = options + arg), "u" -> (_ => update_components_sha1 = true)) val more_args = getopts(args) if (more_args.isEmpty && !update_components_sha1) getopts.usage() val progress = new Console_Progress build_components(options, more_args.map(Path.explode), progress = progress, publish = publish, force = force, update_components_sha1 = update_components_sha1) }) } diff --git a/src/Pure/General/exn.scala b/src/Pure/General/exn.scala --- a/src/Pure/General/exn.scala +++ b/src/Pure/General/exn.scala @@ -1,147 +1,146 @@ /* Title: Pure/General/exn.scala Author: Makarius Support for exceptions (arbitrary throwables). 
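Evaluation results are represented as values: "capture" wraps a normal
outcome as Res and a caught throwable as Exn, while "release" is the
inverse operation that re-throws; interrupts receive special treatment
throughout.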
*/ package isabelle object Exn { /* user errors */ class User_Error(message: String) extends RuntimeException(message) { override def equals(that: Any): Boolean = that match { case other: User_Error => message == other.getMessage case _ => false } override def hashCode: Int = message.hashCode override def toString: String = "\n" + Output.error_message_text(message) } object ERROR { def apply(message: String): User_Error = new User_Error(message) def unapply(exn: Throwable): Option[String] = user_message(exn) } def error(message: String): Nothing = throw ERROR(message) def cat_message(msgs: String*): String = cat_lines(msgs.iterator.filterNot(_ == "")) def cat_error(msgs: String*): Nothing = error(cat_message(msgs:_*)) /* exceptions as values */ sealed abstract class Result[A] { def user_error: Result[A] = this match { case Exn(ERROR(msg)) => Exn(ERROR(msg)) case _ => this } def map[B](f: A => B): Result[B] = this match { case Res(res) => Res(f(res)) case Exn(exn) => Exn(exn) } } case class Res[A](res: A) extends Result[A] case class Exn[A](exn: Throwable) extends Result[A] def capture[A](e: => A): Result[A] = try { Res(e) } catch { case exn: Throwable => Exn[A](exn) } def release[A](result: Result[A]): A = result match { case Res(x) => x case Exn(exn) => throw exn } def release_first[A](results: List[Result[A]]): List[A] = results.find({ case Exn(exn) => !is_interrupt(exn) case _ => false }) match { case Some(Exn(exn)) => throw exn case _ => results.map(release) } /* interrupts */ def is_interrupt(exn: Throwable): Boolean = { var found_interrupt = false var e = exn while (!found_interrupt && e != null) { found_interrupt |= e.isInstanceOf[InterruptedException] e = e.getCause } found_interrupt } def interruptible_capture[A](e: => A): Result[A] = try { Res(e) } catch { case exn: Throwable if !is_interrupt(exn) => Exn[A](exn) } object Interrupt { object ERROR { def unapply(exn: Throwable): Option[String] = if (is_interrupt(exn)) Some(message(exn)) else user_message(exn) } def apply(): Throwable = new InterruptedException("Interrupt") def unapply(exn: Throwable): Boolean = is_interrupt(exn) def dispose(): Unit = Thread.interrupted() def expose(): Unit = if (Thread.interrupted()) throw apply() def impose(): Unit = Thread.currentThread.interrupt() val return_code = 130 } /* POSIX return code */ def return_code(exn: Throwable, rc: Int): Int = if (is_interrupt(exn)) Interrupt.return_code else rc /* message */ def user_message(exn: Throwable): Option[String] = - if (exn.getClass == classOf[RuntimeException] || - exn.getClass == classOf[User_Error]) + if (exn.isInstanceOf[User_Error] || exn.getClass == classOf[RuntimeException]) { Some(proper_string(exn.getMessage) getOrElse "Error") } else if (exn.isInstanceOf[java.sql.SQLException]) { Some(proper_string(exn.getMessage) getOrElse "SQL error") } else if (exn.isInstanceOf[java.io.IOException]) { val msg = exn.getMessage Some(if (msg == null || msg == "") "I/O error" else "I/O error: " + msg) } else if (exn.isInstanceOf[RuntimeException]) Some(exn.toString) else None def message(exn: Throwable): String = user_message(exn) getOrElse (if (is_interrupt(exn)) "Interrupt" else exn.toString) /* trace */ def trace(exn: Throwable): String = exn.getStackTrace.mkString("\n") } diff --git a/src/Pure/General/http.scala b/src/Pure/General/http.scala --- a/src/Pure/General/http.scala +++ b/src/Pure/General/http.scala @@ -1,304 +1,304 @@ /* Title: Pure/General/http.scala Author: Makarius HTTP client and server support. 
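The client side wraps java.net.HttpURLConnection for GET and POST requests
(including multipart/form-data uploads); the server side wraps
com.sun.net.httpserver.HttpServer with simple Handler combinators.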
*/ package isabelle import java.io.{File => JFile} import java.net.{InetSocketAddress, URI, URL, URLConnection, HttpURLConnection} import com.sun.net.httpserver.{HttpExchange, HttpHandler, HttpServer} object HTTP { /** content **/ val mime_type_bytes: String = "application/octet-stream" val mime_type_text: String = "text/plain; charset=utf-8" val mime_type_html: String = "text/html; charset=utf-8" val default_mime_type: String = mime_type_bytes val default_encoding: String = UTF8.charset_name sealed case class Content( bytes: Bytes, file_name: String = "", mime_type: String = default_mime_type, encoding: String = default_encoding, elapsed_time: Time = Time.zero) { def text: String = new String(bytes.array, encoding) } def read_content(file: JFile): Content = { val bytes = Bytes.read(file) val file_name = file.getName val mime_type = Option(URLConnection.guessContentTypeFromName(file_name)).getOrElse(default_mime_type) Content(bytes, file_name = file_name, mime_type = mime_type) } def read_content(path: Path): Content = read_content(path.file) /** client **/ val NEWLINE: String = "\r\n" object Client { val default_timeout: Time = Time.seconds(180) def open_connection(url: URL, timeout: Time = default_timeout, user_agent: String = ""): HttpURLConnection = { url.openConnection match { case connection: HttpURLConnection => if (0 < timeout.ms && timeout.ms <= Integer.MAX_VALUE) { val ms = timeout.ms.toInt connection.setConnectTimeout(ms) connection.setReadTimeout(ms) } proper_string(user_agent).foreach(s => connection.setRequestProperty("User-Agent", s)) connection case _ => error("Bad URL (not HTTP): " + quote(url.toString)) } } def get_content(connection: HttpURLConnection): Content = { val Charset = """.*\bcharset="?([\S^"]+)"?.*""".r val start = Time.now() using(connection.getInputStream)(stream => { val bytes = Bytes.read_stream(stream, hint = connection.getContentLength) val stop = Time.now() val file_name = Url.file_name(connection.getURL) val mime_type = Option(connection.getContentType).getOrElse(default_mime_type) val encoding = (connection.getContentEncoding, mime_type) match { case (enc, _) if enc != null => enc case (_, Charset(enc)) => enc case _ => default_encoding } Content(bytes, file_name = file_name, mime_type = mime_type, encoding = encoding, elapsed_time = stop - start) }) } def get(url: URL, timeout: Time = default_timeout, user_agent: String = ""): Content = get_content(open_connection(url, timeout = timeout, user_agent = user_agent)) def post(url: URL, parameters: List[(String, Any)], timeout: Time = default_timeout, user_agent: String = ""): Content = { val connection = open_connection(url, timeout = timeout, user_agent = user_agent) connection.setRequestMethod("POST") connection.setDoOutput(true) val boundary = UUID.random_string() connection.setRequestProperty( "Content-Type", "multipart/form-data; boundary=" + quote(boundary)) using(connection.getOutputStream)(out => { def output(s: String): Unit = out.write(UTF8.bytes(s)) def output_newline(n: Int = 1): Unit = (1 to n).foreach(_ => output(NEWLINE)) def output_boundary(end: Boolean = false): Unit = output("--" + boundary + (if (end) "--" else "") + NEWLINE) def output_name(name: String): Unit = output("Content-Disposition: form-data; name=" + quote(name)) def output_value(value: Any): Unit = { output_newline(2) output(value.toString) } def output_content(content: Content): Unit = { proper_string(content.file_name).foreach(s => output("; filename=" + quote(s))) output_newline() 
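// an optional Content-Type header follows; the subsequent blank line
// separates the part headers from the raw payload bytes, as required by
// the multipart/form-data format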
proper_string(content.mime_type).foreach(s => output("Content-Type: " + s)) output_newline(2) content.bytes.write_stream(out) } output_newline(2) for { (name, value) <- parameters } { output_boundary() output_name(name) value match { case content: Content => output_content(content) case file: JFile => output_content(read_content(file)) case path: Path => output_content(read_content(path)) case _ => output_value(value) } output_newline() } output_boundary(end = true) out.flush() }) get_content(connection) } } /** server **/ /* response */ object Response { def apply( bytes: Bytes = Bytes.empty, content_type: String = mime_type_bytes): Response = new Response(bytes, content_type) val empty: Response = apply() def text(s: String): Response = apply(Bytes(s), mime_type_text) def html(s: String): Response = apply(Bytes(s), mime_type_html) } class Response private[HTTP](val bytes: Bytes, val content_type: String) { override def toString: String = bytes.toString } /* exchange */ class Exchange private[HTTP](val http_exchange: HttpExchange) { def request_method: String = http_exchange.getRequestMethod def request_uri: URI = http_exchange.getRequestURI def read_request(): Bytes = using(http_exchange.getRequestBody)(Bytes.read_stream(_)) def write_response(code: Int, response: Response): Unit = { http_exchange.getResponseHeaders.set("Content-Type", response.content_type) http_exchange.sendResponseHeaders(code, response.bytes.length.toLong) using(http_exchange.getResponseBody)(response.bytes.write_stream) } } /* handler for request method */ sealed case class Arg(method: String, uri: URI, request: Bytes) { def decode_properties: Properties.T = - space_explode('&', request.text).map(s => - space_explode('=', s) match { - case List(a, b) => Url.decode(a) -> Url.decode(b) - case _ => error("Malformed key-value pair in HTTP/POST: " + quote(s)) + space_explode('&', request.text).map( + { + case Properties.Eq(a, b) => Url.decode(a) -> Url.decode(b) + case s => error("Malformed key-value pair in HTTP/POST: " + quote(s)) }) } object Handler { def apply(root: String, body: Exchange => Unit): Handler = new Handler(root, (x: HttpExchange) => body(new Exchange(x))) def method(name: String, root: String, body: Arg => Option[Response]): Handler = apply(root, http => { val request = http.read_request() if (http.request_method == name) { val arg = Arg(name, http.request_uri, request) Exn.capture(body(arg)) match { case Exn.Res(Some(response)) => http.write_response(200, response) case Exn.Res(None) => http.write_response(404, Response.empty) case Exn.Exn(ERROR(msg)) => http.write_response(500, Response.text(Output.error_message_text(msg))) case Exn.Exn(exn) => throw exn } } else http.write_response(400, Response.empty) }) def get(root: String, body: Arg => Option[Response]): Handler = method("GET", root, body) def post(root: String, body: Arg => Option[Response]): Handler = method("POST", root, body) } class Handler private(val root: String, val handler: HttpHandler) { override def toString: String = root } /* server */ class Server private[HTTP](val http_server: HttpServer) { def += (handler: Handler): Unit = http_server.createContext(handler.root, handler.handler) def -= (handler: Handler): Unit = http_server.removeContext(handler.root) def start(): Unit = http_server.start() def stop(): Unit = http_server.stop(0) def address: InetSocketAddress = http_server.getAddress def url: String = "http://" + address.getHostName + ":" + address.getPort override def toString: String = url } def server(handlers: List[Handler] = 
isabelle_resources): Server = { val http_server = HttpServer.create(new InetSocketAddress(isabelle.Server.localhost, 0), 0) http_server.setExecutor(null) val server = new Server(http_server) for (handler <- handlers) server += handler server } /** Isabelle resources **/ lazy val isabelle_resources: List[Handler] = List(welcome(), fonts()) /* welcome */ def welcome(root: String = "/"): Handler = Handler.get(root, arg => if (arg.uri.toString == root) { Some(Response.text("Welcome to " + Isabelle_System.identification())) } else None) /* fonts */ private lazy val html_fonts: List[Isabelle_Fonts.Entry] = Isabelle_Fonts.fonts(hidden = true) def fonts(root: String = "/fonts"): Handler = Handler.get(root, arg => { val uri_name = arg.uri.toString if (uri_name == root) { Some(Response.text(cat_lines(html_fonts.map(entry => entry.path.file_name)))) } else { html_fonts.collectFirst( { case entry if uri_name == root + "/" + entry.path.file_name => Response(entry.bytes) }) } }) } diff --git a/src/Pure/General/path.scala b/src/Pure/General/path.scala --- a/src/Pure/General/path.scala +++ b/src/Pure/General/path.scala @@ -1,313 +1,313 @@ /* Title: Pure/General/path.scala Author: Makarius Algebra of file-system paths: basic POSIX notation, extended by named roots (e.g. //foo) and variables (e.g. $BAR). */ package isabelle import java.io.{File => JFile} import scala.util.matching.Regex object Path { /* path elements */ sealed abstract class Elem private case class Root(name: String) extends Elem private case class Basic(name: String) extends Elem private case class Variable(name: String) extends Elem private case object Parent extends Elem private def err_elem(msg: String, s: String): Nothing = error(msg + " path element " + quote(s)) private val illegal_elem = Set("", "~", "~~", ".", "..") private val illegal_char = "/\\$:\"'<>|?*" private def check_elem(s: String): String = if (illegal_elem.contains(s)) err_elem("Illegal", s) else { for (c <- s) { if (c.toInt < 32) err_elem("Illegal control character " + c.toInt + " in", s) if (illegal_char.contains(c)) err_elem("Illegal character " + quote(c.toString) + " in", s) } s } private def root_elem(s: String): Elem = Root(check_elem(s)) private def basic_elem(s: String): Elem = Basic(check_elem(s)) private def variable_elem(s: String): Elem = Variable(check_elem(s)) private def apply_elem(y: Elem, xs: List[Elem]): List[Elem] = (y, xs) match { case (Root(_), _) => List(y) case (Parent, Root(_) :: _) => xs case (Parent, Basic(_) :: rest) => rest case _ => y :: xs } private def norm_elems(elems: List[Elem]): List[Elem] = elems.foldRight(List.empty[Elem])(apply_elem) private def implode_elem(elem: Elem, short: Boolean): String = elem match { case Root("") => "" case Root(s) => "//" + s case Basic(s) => s case Variable("USER_HOME") if short => "~" case Variable("ISABELLE_HOME") if short => "~~" case Variable(s) => "$" + s case Parent => ".." 
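// short form: "~" and "~~" abbreviate the settings variables USER_HOME
// and ISABELLE_HOME, in accordance with "explode" below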
} private def squash_elem(elem: Elem): String = elem match { case Root("") => "ROOT" case Root(s) => "SERVER_" + s case Basic(s) => s case Variable(s) => s case Parent => "PARENT" } /* path constructors */ val current: Path = new Path(Nil) val root: Path = new Path(List(Root(""))) def named_root(s: String): Path = new Path(List(root_elem(s))) def make(elems: List[String]): Path = new Path(elems.reverse.map(basic_elem)) def basic(s: String): Path = new Path(List(basic_elem(s))) def variable(s: String): Path = new Path(List(variable_elem(s))) val parent: Path = new Path(List(Parent)) val USER_HOME: Path = variable("USER_HOME") val ISABELLE_HOME: Path = variable("ISABELLE_HOME") /* explode */ def explode(str: String): Path = { def explode_elem(s: String): Elem = try { if (s == "..") Parent else if (s == "~") Variable("USER_HOME") else if (s == "~~") Variable("ISABELLE_HOME") else if (s.startsWith("$")) variable_elem(s.substring(1)) else basic_elem(s) } catch { case ERROR(msg) => cat_error(msg, "The error(s) above occurred in " + quote(str)) } val ss = space_explode('/', str) val r = ss.takeWhile(_.isEmpty).length val es = ss.dropWhile(_.isEmpty) val (roots, raw_elems) = if (r == 0) (Nil, es) else if (r == 1) (List(Root("")), es) else if (es.isEmpty) (List(Root("")), Nil) else (List(root_elem(es.head)), es.tail) val elems = raw_elems.filterNot(s => s.isEmpty || s == ".").map(explode_elem) new Path(norm_elems(elems reverse_::: roots)) } def is_wellformed(str: String): Boolean = try { explode(str); true } catch { case ERROR(_) => false } def is_valid(str: String): Boolean = try { explode(str).expand; true } catch { case ERROR(_) => false } def split(str: String): List[Path] = space_explode(':', str).filterNot(_.isEmpty).map(explode) /* encode */ val encode: XML.Encode.T[Path] = (path => XML.Encode.string(path.implode)) /* reserved names */ private val reserved_windows: Set[String] = Set("CON", "PRN", "AUX", "NUL", "COM1", "COM2", "COM3", "COM4", "COM5", "COM6", "COM7", "COM8", "COM9", "LPT1", "LPT2", "LPT3", "LPT4", "LPT5", "LPT6", "LPT7", "LPT8", "LPT9") def is_reserved(name: String): Boolean = Long_Name.explode(name).exists(a => reserved_windows.contains(Word.uppercase(a))) /* case-insensitive names */ def check_case_insensitive(paths: List[Path]): Unit = { val table = paths.foldLeft(Multi_Map.empty[String, String]) { case (tab, path) => val name = path.expand.implode tab.insert(Word.lowercase(name), name) } val collisions = (for { (_, coll) <- table.iterator_list if coll.length > 1 } yield coll).toList.flatten if (collisions.nonEmpty) { error(("Collision of file names due to case-insensitivity:" :: collisions).mkString("\n ")) } } } final class Path private(protected val elems: List[Path.Elem]) // reversed elements { override def hashCode: Int = elems.hashCode override def equals(that: Any): Boolean = that match { case other: Path => elems == other.elems case _ => false } def is_current: Boolean = elems.isEmpty def is_absolute: Boolean = elems.nonEmpty && elems.last.isInstanceOf[Path.Root] def is_root: Boolean = elems match { case List(Path.Root(_)) => true case _ => false } def is_basic: Boolean = elems match { case List(Path.Basic(_)) => true case _ => false } def starts_basic: Boolean = elems.nonEmpty && elems.last.isInstanceOf[Path.Basic] def +(other: Path): Path = new Path(other.elems.foldRight(elems)(Path.apply_elem)) /* implode */ private def gen_implode(short: Boolean): String = elems match { case Nil => "."
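// Nil denotes the current directory "."; note that elems are kept in
// reverse order (see the class comment above)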
case List(Path.Root("")) => "/" case _ => elems.map(Path.implode_elem(_, short)).reverse.mkString("/") } def implode: String = gen_implode(false) def implode_short: String = gen_implode(true) override def toString: String = quote(implode) /* base element */ private def split_path: (Path, String) = elems match { case Path.Basic(s) :: xs => (new Path(xs), s) case _ => error("Cannot split path into dir/base: " + toString) } def dir: Path = split_path._1 def base: Path = new Path(List(Path.Basic(split_path._2))) def ext(e: String): Path = if (e == "") this else { val (prfx, s) = split_path prfx + Path.basic(s + "." + e) } def xz: Path = ext("xz") def xml: Path = ext("xml") def html: Path = ext("html") def tex: Path = ext("tex") def pdf: Path = ext("pdf") def thy: Path = ext("thy") def tar: Path = ext("tar") def gz: Path = ext("gz") def log: Path = ext("log") def backup: Path = { val (prfx, s) = split_path prfx + Path.basic(s + "~") } def backup2: Path = { val (prfx, s) = split_path prfx + Path.basic(s + "~~") } def platform_exe: Path = if (Platform.is_windows) ext("exe") else this private val Ext = new Regex("(.*)\\.([^.]*)") def split_ext: (Path, String) = { val (prefix, base) = split_path base match { case Ext(b, e) => (prefix + Path.basic(b), e) case _ => (prefix + Path.basic(base), "") } } def drop_ext: Path = split_ext._1 def get_ext: String = split_ext._2 def squash: Path = new Path(elems.map(elem => Path.Basic(Path.squash_elem(elem)))) /* expand */ def expand_env(env: Map[String, String]): Path = { def eval(elem: Path.Elem): List[Path.Elem] = elem match { case Path.Variable(s) => val path = Path.explode(Isabelle_System.getenv_strict(s, env)) if (path.elems.exists(_.isInstanceOf[Path.Variable])) - error("Illegal path variable nesting: " + s + "=" + path.toString) + error("Illegal path variable nesting: " + Properties.Eq(s, path.toString)) else path.elems case x => List(x) } new Path(Path.norm_elems(elems.flatMap(eval))) } def expand: Path = expand_env(Isabelle_System.settings()) def file_name: String = expand.base.implode /* implode wrt. given directories */ def implode_symbolic: String = { val directories = Library.space_explode(':', Isabelle_System.getenv("ISABELLE_DIRECTORIES")).reverse val full_name = expand.implode directories.view.flatMap(a => try { val b = Path.explode(a).expand.implode if (full_name == b) Some(a) else { Library.try_unprefix(b + "/", full_name) match { case Some(name) => Some(a + "/" + name) case None => None } } } catch { case ERROR(_) => None }).headOption.getOrElse(implode) } def position: Position.T = Position.File(implode_symbolic) /* platform files */ def file: JFile = File.platform_file(this) def is_file: Boolean = file.isFile def is_dir: Boolean = file.isDirectory def absolute_file: JFile = File.absolute(file) def canonical_file: JFile = File.canonical(file) def absolute: Path = File.path(absolute_file) def canonical: Path = File.path(canonical_file) } diff --git a/src/Pure/General/properties.scala b/src/Pure/General/properties.scala --- a/src/Pure/General/properties.scala +++ b/src/Pure/General/properties.scala @@ -1,128 +1,140 @@ /* Title: Pure/General/properties.scala Author: Makarius Property lists. 
*/ package isabelle object Properties { /* entries */ type Entry = (java.lang.String, java.lang.String) type T = List[Entry] + object Eq + { + def apply(a: java.lang.String, b: java.lang.String): java.lang.String = a + "=" + b + def apply(entry: Entry): java.lang.String = apply(entry._1, entry._2) + + def unapply(str: java.lang.String): Option[Entry] = + { + val i = str.indexOf('=') + if (i <= 0) None else Some((str.substring(0, i), str.substring(i + 1))) + } + } + def defined(props: T, name: java.lang.String): java.lang.Boolean = props.exists({ case (x, _) => x == name }) def get(props: T, name: java.lang.String): Option[java.lang.String] = props.collectFirst({ case (x, y) if x == name => y }) def put(props: T, entry: Entry): T = { val (x, y) = entry def update(ps: T): T = ps match { case (p @ (x1, _)) :: rest => if (x1 == x) (x1, y) :: rest else p :: update(rest) case Nil => Nil } if (defined(props, x)) update(props) else entry :: props } /* external storage */ def encode(ps: T): Bytes = Bytes(YXML.string_of_body(XML.Encode.properties(ps))) def decode(bs: Bytes, cache: XML.Cache = XML.Cache.none): T = cache.props(XML.Decode.properties(YXML.parse_body(bs.text))) def compress(ps: List[T], options: XZ.Options = XZ.options(), cache: XZ.Cache = XZ.Cache()): Bytes = { if (ps.isEmpty) Bytes.empty else { Bytes(YXML.string_of_body(XML.Encode.list(XML.Encode.properties)(ps))). compress(options = options, cache = cache) } } def uncompress(bs: Bytes, cache: XML.Cache = XML.Cache.none): List[T] = { if (bs.is_empty) Nil else { val ps = XML.Decode.list(XML.Decode.properties)( YXML.parse_body(bs.uncompress(cache = cache.xz).text)) if (cache.no_cache) ps else ps.map(cache.props) } } /* multi-line entries */ def encode_lines(props: T): T = props.map({ case (a, b) => (a, Library.encode_lines(b)) }) def decode_lines(props: T): T = props.map({ case (a, b) => (a, Library.decode_lines(b)) }) def lines_nonempty(x: java.lang.String, ys: List[java.lang.String]): Properties.T = if (ys.isEmpty) Nil else List((x, cat_lines(ys))) /* entry types */ class String(val name: java.lang.String) { def apply(value: java.lang.String): T = List((name, value)) def unapply(props: T): Option[java.lang.String] = props.find(_._1 == name).map(_._2) def get(props: T): java.lang.String = unapply(props).getOrElse("") } class Boolean(val name: java.lang.String) { def apply(value: scala.Boolean): T = List((name, Value.Boolean(value))) def unapply(props: T): Option[scala.Boolean] = props.find(_._1 == name) match { case None => None case Some((_, value)) => Value.Boolean.unapply(value) } def get(props: T): scala.Boolean = unapply(props).getOrElse(false) } class Int(val name: java.lang.String) { def apply(value: scala.Int): T = List((name, Value.Int(value))) def unapply(props: T): Option[scala.Int] = props.find(_._1 == name) match { case None => None case Some((_, value)) => Value.Int.unapply(value) } def get(props: T): scala.Int = unapply(props).getOrElse(0) } class Long(val name: java.lang.String) { def apply(value: scala.Long): T = List((name, Value.Long(value))) def unapply(props: T): Option[scala.Long] = props.find(_._1 == name) match { case None => None case Some((_, value)) => Value.Long.unapply(value) } def get(props: T): scala.Long = unapply(props).getOrElse(0) } class Double(val name: java.lang.String) { def apply(value: scala.Double): T = List((name, Value.Double(value))) def unapply(props: T): Option[scala.Double] = props.find(_._1 == name) match { case None => None case Some((_, value)) => Value.Double.unapply(value) } def 
get(props: T): scala.Double = unapply(props).getOrElse(0.0) } } diff --git a/src/Pure/ML/ml_file.ML b/src/Pure/ML/ml_file.ML --- a/src/Pure/ML/ml_file.ML +++ b/src/Pure/ML/ml_file.ML @@ -1,39 +1,39 @@ (* Title: Pure/ML/ml_file.ML Author: Makarius Commands to load ML files. *) signature ML_FILE = sig val command: string -> bool option -> (theory -> Token.file) -> Toplevel.transition -> Toplevel.transition val ML: bool option -> (theory -> Token.file) -> Toplevel.transition -> Toplevel.transition val SML: bool option -> (theory -> Token.file) -> Toplevel.transition -> Toplevel.transition end; structure ML_File: ML_FILE = struct fun command environment debug get_file = Toplevel.generic_theory (fn gthy => let val file = get_file (Context.theory_of gthy); val provide = Resources.provide_file file; val source = Token.file_source file; - val _ = Thy_Output.check_comments (Context.proof_of gthy) (Input.source_explode source); + val _ = Document_Output.check_comments (Context.proof_of gthy) (Input.source_explode source); val flags: ML_Compiler.flags = {environment = environment, redirect = true, verbose = true, debug = debug, writeln = writeln, warning = warning}; in gthy |> ML_Context.exec (fn () => ML_Context.eval_source flags source) |> Local_Theory.propagate_ml_env |> Context.mapping provide (Local_Theory.background_theory provide) end); val ML = command ""; val SML = command ML_Env.SML; end; diff --git a/src/Pure/ML/ml_statistics.scala b/src/Pure/ML/ml_statistics.scala --- a/src/Pure/ML/ml_statistics.scala +++ b/src/Pure/ML/ml_statistics.scala @@ -1,324 +1,319 @@ /* Title: Pure/ML/ml_statistics.scala Author: Makarius ML runtime statistics. */ package isabelle import scala.annotation.tailrec import scala.collection.mutable import scala.collection.immutable.{SortedSet, SortedMap} import scala.swing.{Frame, Component} import org.jfree.data.xy.{XYSeries, XYSeriesCollection} import org.jfree.chart.{JFreeChart, ChartPanel, ChartFactory} import org.jfree.chart.plot.PlotOrientation object ML_Statistics { /* properties */ val Now = new Properties.Double("now") def now(props: Properties.T): Double = Now.unapply(props).get /* memory status */ val Heap_Size = new Properties.Long("size_heap") val Heap_Free = new Properties.Long("size_heap_free_last_GC") val GC_Percent = new Properties.Int("GC_percent") sealed case class Memory_Status(heap_size: Long, heap_free: Long, gc_percent: Int) { def heap_used: Long = (heap_size - heap_free) max 0 def heap_used_fraction: Double = if (heap_size == 0) 0.0 else heap_used.toDouble / heap_size def gc_progress: Option[Double] = if (1 <= gc_percent && gc_percent <= 100) Some((gc_percent - 1) * 0.01) else None } def memory_status(props: Properties.T): Memory_Status = { val heap_size = Heap_Size.get(props) val heap_free = Heap_Free.get(props) val gc_percent = GC_Percent.get(props) Memory_Status(heap_size, heap_free, gc_percent) } /* monitor process */ def monitor(pid: Long, stats_dir: String = "", delay: Time = Time.seconds(0.5), consume: Properties.T => Unit = Console.println): Unit = { def progress_stdout(line: String): Unit = { - val props = - Library.space_explode(',', line).flatMap((entry: String) => - Library.space_explode('=', entry) match { - case List(a, b) => Some((a, b)) - case _ => None - }) + val props = Library.space_explode(',', line).flatMap(Properties.Eq.unapply) if (props.nonEmpty) consume(props) } val env_prefix = if (stats_dir.isEmpty) "" else "export POLYSTATSDIR=" + Bash.string(stats_dir) + "\n" Bash.process(env_prefix + "\"$POLYML_EXE\" -q --use 
src/Pure/ML/ml_statistics.ML --eval " + Bash.string("ML_Statistics.monitor " + ML_Syntax.print_long(pid) + " " + ML_Syntax.print_double(delay.seconds)), cwd = Path.ISABELLE_HOME.file) .result(progress_stdout = progress_stdout, strict = false).check } /* protocol handler */ class Handler extends Session.Protocol_Handler { private var session: Session = null private var monitoring: Future[Unit] = Future.value(()) override def init(session: Session): Unit = synchronized { this.session = session } override def exit(): Unit = synchronized { session = null monitoring.cancel() } private def consume(props: Properties.T): Unit = synchronized { if (session != null) { val props1 = (session.cache.props(props ::: Java_Statistics.jvm_statistics())) session.runtime_statistics.post(Session.Runtime_Statistics(props1)) } } private def ml_statistics(msg: Prover.Protocol_Output): Boolean = synchronized { msg.properties match { case Markup.ML_Statistics(pid, stats_dir) => monitoring = Future.thread("ML_statistics") { monitor(pid, stats_dir = stats_dir, consume = consume) } true case _ => false } } override val functions = List(Markup.ML_Statistics.name -> ml_statistics) } /* memory fields (mega bytes) */ def mem_print(x: Long): Option[String] = if (x == 0L) None else Some(x.toString + " M") def mem_scale(x: Long): Long = x / 1024 / 1024 def mem_field_scale(name: String, x: Double): Double = if (heap_fields._2.contains(name) || program_fields._2.contains(name)) mem_scale(x.toLong).toDouble else x val CODE_SIZE = "size_code" val STACK_SIZE = "size_stacks" val HEAP_SIZE = "size_heap" /* standard fields */ type Fields = (String, List[String]) val tasks_fields: Fields = ("Future tasks", List("tasks_ready", "tasks_pending", "tasks_running", "tasks_passive", "tasks_urgent", "tasks_total")) val workers_fields: Fields = ("Worker threads", List("workers_total", "workers_active", "workers_waiting")) val GC_fields: Fields = ("GCs", List("partial_GCs", "full_GCs", "share_passes")) val heap_fields: Fields = ("Heap", List(HEAP_SIZE, "size_allocation", "size_allocation_free", "size_heap_free_last_full_GC", "size_heap_free_last_GC")) val program_fields: Fields = ("Program", List("size_code", "size_stacks")) val threads_fields: Fields = ("Threads", List("threads_total", "threads_in_ML", "threads_wait_condvar", "threads_wait_IO", "threads_wait_mutex", "threads_wait_signal")) val time_fields: Fields = ("Time", List("time_elapsed", "time_elapsed_GC", "time_CPU", "time_GC")) val speed_fields: Fields = ("Speed", List("speed_CPU", "speed_GC")) private val time_speed = Map("time_CPU" -> "speed_CPU", "time_GC" -> "speed_GC") val java_heap_fields: Fields = ("Java heap", List("java_heap_size", "java_heap_used")) val java_thread_fields: Fields = ("Java threads", List("java_threads_total", "java_workers_total", "java_workers_active")) val main_fields: List[Fields] = List(heap_fields, tasks_fields, workers_fields) val other_fields: List[Fields] = List(threads_fields, GC_fields, program_fields, time_fields, speed_fields, java_heap_fields, java_thread_fields) val all_fields: List[Fields] = main_fields ::: other_fields /* content interpretation */ final case class Entry(time: Double, data: Map[String, Double]) { def get(field: String): Double = data.getOrElse(field, 0.0) } val empty: ML_Statistics = apply(Nil) def apply(ml_statistics0: List[Properties.T], heading: String = "", domain: String => Boolean = (key: String) => true): ML_Statistics = { require(ml_statistics0.forall(props => Now.unapply(props).isDefined), "missing \"now\" field") val 
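// Sketch of the derived speed fields computed below: for a cumulative
// counter such as "time_CPU", each rising edge (x0, y0) -> (x1, y1)
// contributes the slope (y1 - y0) / (x1 - x0) as "speed_CPU"; between
// rising edges the previously observed slope s0 is repeated.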
ml_statistics = ml_statistics0.sortBy(now) val time_start = if (ml_statistics.isEmpty) 0.0 else now(ml_statistics.head) val duration = if (ml_statistics.isEmpty) 0.0 else now(ml_statistics.last) - time_start val fields = SortedSet.empty[String] ++ (for { props <- ml_statistics.iterator (x, _) <- props.iterator if x != Now.name && domain(x) } yield x) val content = { var last_edge = Map.empty[String, (Double, Double, Double)] val result = new mutable.ListBuffer[ML_Statistics.Entry] for (props <- ml_statistics) { val time = now(props) - time_start // rising edges -- relative speed val speeds = (for { (key, value) <- props.iterator key1 <- time_speed.get(key) if domain(key1) } yield { val (x0, y0, s0) = last_edge.getOrElse(key, (0.0, 0.0, 0.0)) val x1 = time val y1 = java.lang.Double.parseDouble(value) val s1 = if (x1 == x0) 0.0 else (y1 - y0) / (x1 - x0) if (y1 > y0) { last_edge += (key -> (x1, y1, s1)) (key1, s1.toString) } else (key1, s0.toString) }).toList val data = SortedMap.empty[String, Double] ++ (for { (x, y) <- props.iterator ++ speeds.iterator if x != Now.name && domain(x) z = java.lang.Double.parseDouble(y) if z != 0.0 } yield { (x.intern, mem_field_scale(x, z)) }) result += ML_Statistics.Entry(time, data) } result.toList } new ML_Statistics(heading, fields, content, time_start, duration) } } final class ML_Statistics private( val heading: String, val fields: Set[String], val content: List[ML_Statistics.Entry], val time_start: Double, val duration: Double) { /* content */ def maximum(field: String): Double = content.foldLeft(0.0) { case (m, e) => m max e.get(field) } def average(field: String): Double = { @tailrec def sum(t0: Double, list: List[ML_Statistics.Entry], acc: Double): Double = list match { case Nil => acc case e :: es => val t = e.time sum(t, es, (t - t0) * e.get(field) + acc) } content match { case Nil => 0.0 case List(e) => e.get(field) case e :: es => sum(e.time, es, 0.0) / duration } } /* charts */ def update_data(data: XYSeriesCollection, selected_fields: List[String]): Unit = { data.removeAllSeries for (field <- selected_fields) { val series = new XYSeries(field) content.foreach(entry => series.add(entry.time, entry.get(field))) data.addSeries(series) } } def chart(title: String, selected_fields: List[String]): JFreeChart = { val data = new XYSeriesCollection update_data(data, selected_fields) ChartFactory.createXYLineChart(title, "time", "value", data, PlotOrientation.VERTICAL, true, true, true) } def chart(fields: ML_Statistics.Fields): JFreeChart = chart(fields._1, fields._2) def show_frames(fields: List[ML_Statistics.Fields] = ML_Statistics.main_fields): Unit = fields.map(chart).foreach(c => GUI_Thread.later { new Frame { iconImage = GUI.isabelle_image() title = heading contents = Component.wrap(new ChartPanel(c)) visible = true } }) } diff --git a/src/Pure/PIDE/command.ML b/src/Pure/PIDE/command.ML --- a/src/Pure/PIDE/command.ML +++ b/src/Pure/PIDE/command.ML @@ -1,507 +1,507 @@ (* Title: Pure/PIDE/command.ML Author: Makarius Prover command execution: read -- eval -- print. 
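Overview of the phases below: "read" parses a command span into a
Toplevel.transition, resolving auxiliary file blobs; "eval" wraps the
transition into a lazy eval_state (failed flag, command, resulting
toplevel state); "print" attaches named print functions that are forked
according to priority, visibility and overlays.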
*) signature COMMAND = sig type blob = {file_node: string, src_path: Path.T, content: (SHA1.digest * string list) option} val read_file: Path.T -> Position.T -> bool -> Path.T -> Token.file val read_thy: Toplevel.state -> theory val read: Keyword.keywords -> theory -> Path.T-> (unit -> theory) -> blob Exn.result list * int -> Token.T list -> Toplevel.transition val read_span: Keyword.keywords -> Toplevel.state -> Path.T -> (unit -> theory) -> Command_Span.span -> Toplevel.transition type eval val eval_command_id: eval -> Document_ID.command val eval_exec_id: eval -> Document_ID.exec val eval_eq: eval * eval -> bool val eval_running: eval -> bool val eval_finished: eval -> bool val eval_result_command: eval -> Toplevel.transition val eval_result_state: eval -> Toplevel.state val eval: Keyword.keywords -> Path.T -> (unit -> theory) -> blob Exn.result list * int -> Document_ID.command -> Token.T list -> eval -> eval type print type print_fn = Toplevel.transition -> Toplevel.state -> unit val print0: {pri: int, print_fn: print_fn} -> eval -> print val print: bool -> (string * string list) list -> Keyword.keywords -> string -> eval -> print list -> print list option val parallel_print: print -> bool type print_function = {keywords: Keyword.keywords, command_name: string, args: string list, exec_id: Document_ID.exec} -> {delay: Time.time option, pri: int, persistent: bool, strict: bool, print_fn: print_fn} option val print_function: string -> print_function -> unit val no_print_function: string -> unit type exec = eval * print list val init_exec: theory option -> exec val no_exec: exec val exec_ids: exec option -> Document_ID.exec list val exec: Document_ID.execution -> exec -> unit val exec_parallel_prints: Document_ID.execution -> Future.task list -> exec -> exec option end; structure Command: COMMAND = struct (** main phases of execution **) fun task_context group f = f |> Future.interruptible_task |> Future.task_context "Command.run_process" group; (* read *) type blob = {file_node: string, src_path: Path.T, content: (SHA1.digest * string list) option}; fun read_file_node file_node master_dir pos delimited src_path = let val _ = if Context_Position.pide_reports () then Position.report pos (Markup.language_path delimited) else (); fun read_file () = let val path = File.check_file (File.full_path master_dir src_path); val text = File.read path; val file_pos = Path.position path; in (text, file_pos) end; fun read_url () = let val text = Isabelle_System.download file_node; val file_pos = Position.file file_node; in (text, file_pos) end; val (text, file_pos) = (case try Url.explode file_node of NONE => read_file () | SOME (Url.File _) => read_file () | _ => read_url ()); val lines = split_lines text; val digest = SHA1.digest text; in {src_path = src_path, lines = lines, digest = digest, pos = Position.copy_id pos file_pos} end handle ERROR msg => error (msg ^ Position.here pos); val read_file = read_file_node ""; local fun blob_file src_path lines digest file_node = let val file_pos = Position.file file_node |> (case Position.get_id (Position.thread_data ()) of NONE => I | SOME exec_id => Position.put_id exec_id); in {src_path = src_path, lines = lines, digest = digest, pos = file_pos} end fun resolve_files master_dir (blobs, blobs_index) toks = (case Outer_Syntax.parse_spans toks of [Command_Span.Span (Command_Span.Command_Span _, _)] => (case try (nth toks) blobs_index of SOME tok => let val source = Token.input_of tok; val pos = Input.pos_of source; val delimited = Input.is_delimited source; 
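(*note: "source", "pos", "delimited" above refer to the file-name token of
  the command span; the blobs supplied by the PIDE front-end are attached
  to exactly that token via Token.put_files below*)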
fun make_file (Exn.Res {file_node, src_path, content = NONE}) = Exn.interruptible_capture (fn () => read_file_node file_node master_dir pos delimited src_path) () | make_file (Exn.Res {file_node, src_path, content = SOME (digest, lines)}) = (Position.report pos (Markup.language_path delimited); Exn.Res (blob_file src_path lines digest file_node)) | make_file (Exn.Exn e) = Exn.Exn e; val files = map make_file blobs; in toks |> map_index (fn (i, tok) => if i = blobs_index then Token.put_files files tok else tok) end | NONE => toks) | _ => toks); fun reports_of_token keywords tok = let val malformed_symbols = Input.source_explode (Token.input_of tok) |> map_filter (fn (sym, pos) => if Symbol.is_malformed sym then SOME ((pos, Markup.bad ()), "Malformed symbolic character") else NONE); val is_malformed = Token.is_error tok orelse not (null malformed_symbols); val reports = Token.reports keywords tok @ Token.completion_report tok @ malformed_symbols; in (is_malformed, reports) end; in fun read_thy st = Toplevel.theory_of st handle Toplevel.UNDEF => Pure_Syn.bootstrap_thy; fun read keywords thy master_dir init blobs_info span = let val command_reports = Outer_Syntax.command_reports thy; val token_reports = map (reports_of_token keywords) span; val _ = Position.reports_text (maps #2 token_reports @ maps command_reports span); val verbatim = span |> map_filter (fn tok => if Token.kind_of tok = Token.Verbatim then SOME (Token.pos_of tok) else NONE); val _ = if null verbatim then () else legacy_feature ("Old-style {* verbatim *} token -- use \cartouche\ instead" ^ Position.here_list verbatim); in if exists #1 token_reports then Toplevel.malformed (#1 (Token.core_range_of span)) "Malformed command syntax" else Outer_Syntax.parse_span thy init (resolve_files master_dir blobs_info span) end; end; fun read_span keywords st master_dir init = Command_Span.content #> read keywords (read_thy st) master_dir init ([], ~1); (* eval *) type eval_state = {failed: bool, command: Toplevel.transition, state: Toplevel.state}; fun init_eval_state opt_thy = {failed = false, command = Toplevel.empty, state = (case opt_thy of NONE => Toplevel.init_toplevel () | SOME thy => Toplevel.theory_toplevel thy)}; datatype eval = Eval of {command_id: Document_ID.command, exec_id: Document_ID.exec, eval_process: eval_state lazy}; fun eval_command_id (Eval {command_id, ...}) = command_id; fun eval_exec_id (Eval {exec_id, ...}) = exec_id; val eval_eq = op = o apply2 eval_exec_id; val eval_running = Execution.is_running_exec o eval_exec_id; fun eval_finished (Eval {eval_process, ...}) = Lazy.is_finished eval_process; fun eval_result (Eval {eval_process, ...}) = Exn.release (Lazy.finished_result eval_process); val eval_result_command = #command o eval_result; val eval_result_state = #state o eval_result; local fun reset_state keywords tr st0 = Toplevel.setmp_thread_position tr (fn () => let val name = Toplevel.name_of tr; val res = if Keyword.is_theory_body keywords name then Toplevel.reset_theory st0 else if Keyword.is_proof keywords name then Toplevel.reset_proof st0 else if Keyword.is_theory_end keywords name then (case Toplevel.reset_notepad st0 of NONE => Toplevel.reset_theory st0 | some => some) else NONE; in (case res of NONE => st0 | SOME st => (Output.error_message (Toplevel.type_error tr ^ " -- using reset state"); st)) end) (); fun run keywords int tr st = if Future.proofs_enabled 1 andalso Keyword.is_diag keywords (Toplevel.name_of tr) then let val (tr1, tr2) = Toplevel.fork_presentation tr; val _ = Execution.fork {name = 
"Toplevel.diag", pos = Toplevel.pos_of tr, pri = ~1} (fn () => Toplevel.command_exception int tr1 st); in Toplevel.command_errors int tr2 st end else Toplevel.command_errors int tr st; fun check_token_comments ctxt tok = - (Thy_Output.check_comments ctxt (Input.source_explode (Token.input_of tok)); []) + (Document_Output.check_comments ctxt (Input.source_explode (Token.input_of tok)); []) handle exn => if Exn.is_interrupt exn then Exn.reraise exn else Runtime.exn_messages exn; fun check_span_comments ctxt span tr = Toplevel.setmp_thread_position tr (fn () => maps (check_token_comments ctxt) span) (); fun report_indent tr st = (case try Toplevel.proof_of st of SOME prf => let val keywords = Thy_Header.get_keywords (Proof.theory_of prf) in if Keyword.command_kind keywords (Toplevel.name_of tr) = SOME Keyword.prf_script then (case try (Thm.nprems_of o #goal o Proof.goal) prf of NONE => () | SOME 0 => () | SOME n => let val report = Markup.markup_only (Markup.command_indent (n - 1)); in Toplevel.setmp_thread_position tr (fn () => Output.report [report]) () end) else () end | NONE => ()); fun status tr m = Toplevel.setmp_thread_position tr (fn () => Output.status [Markup.markup_only m]) (); fun eval_state keywords span tr ({state, ...}: eval_state) = let val _ = Thread_Attributes.expose_interrupt (); val st = reset_state keywords tr state; val _ = report_indent tr st; val _ = status tr Markup.running; val (errs1, result) = run keywords true tr st; val errs2 = (case result of NONE => [] | SOME st' => check_span_comments (Toplevel.presentation_context st') span tr); val errs = errs1 @ errs2; val _ = List.app (Future.error_message (Toplevel.pos_of tr)) errs; in (case result of NONE => let val _ = status tr Markup.failed; val _ = status tr Markup.finished; val _ = if null errs then (status tr Markup.canceled; Exn.interrupt ()) else (); in {failed = true, command = tr, state = st} end | SOME st' => let val _ = if Keyword.is_theory_end keywords (Toplevel.name_of tr) andalso can (Toplevel.end_theory Position.none) st' then status tr Markup.finalized else (); val _ = status tr Markup.finished; in {failed = false, command = tr, state = st'} end) end; in fun eval keywords master_dir init blobs_info command_id span eval0 = let val exec_id = Document_ID.make (); fun process () = let val eval_state0 = eval_result eval0; val thy = read_thy (#state eval_state0); val tr = Position.setmp_thread_data (Position.id_only (Document_ID.print exec_id)) (fn () => read keywords thy master_dir init blobs_info span |> Toplevel.exec_id exec_id) (); in eval_state keywords span tr eval_state0 end; in Eval {command_id = command_id, exec_id = exec_id, eval_process = Lazy.lazy_name "Command.eval" process} end; end; (* print *) datatype print = Print of {name: string, args: string list, delay: Time.time option, pri: int, persistent: bool, exec_id: Document_ID.exec, print_process: unit lazy}; fun print_exec_id (Print {exec_id, ...}) = exec_id; val print_eq = op = o apply2 print_exec_id; type print_fn = Toplevel.transition -> Toplevel.state -> unit; type print_function = {keywords: Keyword.keywords, command_name: string, args: string list, exec_id: Document_ID.exec} -> {delay: Time.time option, pri: int, persistent: bool, strict: bool, print_fn: print_fn} option; local val print_functions = Synchronized.var "Command.print_functions" ([]: (string * print_function) list); fun print_error tr opt_context e = (Toplevel.setmp_thread_position tr o Runtime.controlled_execution opt_context) e () handle exn => if Exn.is_interrupt exn then 
Exn.reraise exn else List.app (Future.error_message (Toplevel.pos_of tr)) (Runtime.exn_messages exn); fun print_finished (Print {print_process, ...}) = Lazy.is_finished print_process; fun print_persistent (Print {persistent, ...}) = persistent; val overlay_ord = prod_ord string_ord (list_ord string_ord); fun make_print (name, args) {delay, pri, persistent, strict, print_fn} eval = let val exec_id = Document_ID.make (); fun process () = let val {failed, command, state = st', ...} = eval_result eval; val tr = Toplevel.exec_id exec_id command; val opt_context = try Toplevel.generic_theory_of st'; in if failed andalso strict then () else print_error tr opt_context (fn () => print_fn tr st') end; in Print { name = name, args = args, delay = delay, pri = pri, persistent = persistent, exec_id = exec_id, print_process = Lazy.lazy_name "Command.print" process} end; fun bad_print name_args exn = make_print name_args {delay = NONE, pri = 0, persistent = false, strict = false, print_fn = fn _ => fn _ => Exn.reraise exn}; in fun print0 {pri, print_fn} = make_print ("", [serial_string ()]) {delay = NONE, pri = pri, persistent = true, strict = true, print_fn = print_fn}; fun print command_visible command_overlays keywords command_name eval old_prints = let val print_functions = Synchronized.value print_functions; fun new_print (name, args) get_pr = let val params = {keywords = keywords, command_name = command_name, args = args, exec_id = eval_exec_id eval}; in (case Exn.capture (Runtime.controlled_execution NONE get_pr) params of Exn.Res NONE => NONE | Exn.Res (SOME pr) => SOME (make_print (name, args) pr eval) | Exn.Exn exn => SOME (bad_print (name, args) exn eval)) end; fun get_print (name, args) = (case find_first (fn Print print => (#name print, #args print) = (name, args)) old_prints of NONE => (case AList.lookup (op =) print_functions name of NONE => SOME (bad_print (name, args) (ERROR ("Missing print function " ^ quote name)) eval) | SOME get_pr => new_print (name, args) get_pr) | some => some); val retained_prints = filter (fn print => print_finished print andalso print_persistent print) old_prints; val visible_prints = if command_visible then fold (fn (name, _) => cons (name, [])) print_functions command_overlays |> sort_distinct overlay_ord |> map_filter get_print else []; val new_prints = visible_prints @ retained_prints; in if eq_list print_eq (old_prints, new_prints) then NONE else SOME new_prints end; fun parallel_print (Print {pri, ...}) = pri <= 0 orelse (Future.enabled () andalso Options.default_bool "parallel_print"); fun print_function name f = Synchronized.change print_functions (fn funs => (if name = "" then error "Unnamed print function" else (); if not (AList.defined (op =) funs name) then () else warning ("Redefining command print function: " ^ quote name); AList.update (op =) (name, f) funs)); fun no_print_function name = Synchronized.change print_functions (filter_out (equal name o #1)); end; val _ = print_function "Execution.print" (fn {args, exec_id, ...} => if null args then SOME {delay = NONE, pri = Task_Queue.urgent_pri + 2, persistent = false, strict = false, print_fn = fn _ => fn _ => Execution.fork_prints exec_id} else NONE); val _ = print_function "print_state" (fn {keywords, command_name, ...} => if Options.default_bool "editor_output_state" andalso Keyword.is_printed keywords command_name then SOME {delay = NONE, pri = Task_Queue.urgent_pri + 1, persistent = false, strict = false, print_fn = fn _ => fn st => if Toplevel.is_proof st then Output.state 
(Toplevel.string_of_state st) else ()} else NONE); (* combined execution *) type exec = eval * print list; fun init_exec opt_thy : exec = (Eval {command_id = Document_ID.none, exec_id = Document_ID.none, eval_process = Lazy.value (init_eval_state opt_thy)}, []); val no_exec = init_exec NONE; fun exec_ids NONE = [] | exec_ids (SOME (eval, prints)) = eval_exec_id eval :: map print_exec_id prints; local fun run_process execution_id exec_id process = let val group = Future.worker_subgroup () in if Execution.running execution_id exec_id [group] then ignore (task_context group (fn () => Lazy.force_result {strict = true} process) ()) else () end; fun ignore_process process = Lazy.is_running process orelse Lazy.is_finished process; fun run_eval execution_id (Eval {exec_id, eval_process, ...}) = if Lazy.is_finished eval_process then () else run_process execution_id exec_id eval_process; fun fork_print execution_id deps (Print {name, delay, pri, exec_id, print_process, ...}) = let val group = Future.worker_subgroup (); fun fork () = ignore ((singleton o Future.forks) {name = name, group = SOME group, deps = deps, pri = pri, interrupts = true} (fn () => if ignore_process print_process then () else run_process execution_id exec_id print_process)); in (case delay of NONE => fork () | SOME d => ignore (Event_Timer.request {physical = true} (Time.now () + d) fork)) end; fun run_print execution_id (print as Print {exec_id, print_process, ...}) = if ignore_process print_process then () else if parallel_print print then fork_print execution_id [] print else run_process execution_id exec_id print_process; in fun exec execution_id (eval, prints) = (run_eval execution_id eval; List.app (run_print execution_id) prints); fun exec_parallel_prints execution_id deps (exec as (Eval {eval_process, ...}, prints)) = if Lazy.is_finished eval_process then (List.app (fork_print execution_id deps) prints; NONE) else SOME exec; end; end; diff --git a/src/Pure/PIDE/prover.scala b/src/Pure/PIDE/prover.scala --- a/src/Pure/PIDE/prover.scala +++ b/src/Pure/PIDE/prover.scala @@ -1,323 +1,323 @@ /* Title: Pure/PIDE/prover.scala Author: Makarius Options: :folding=explicit: Prover process wrapping. 
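Note on message output: Prover.Output.toString below renders non-empty
properties via Properties.Eq, e.g. a status message with properties
List(("id", "123")) prints as

  status {id=123} [[...]]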
*/ package isabelle import java.io.{InputStream, OutputStream, BufferedOutputStream, IOException} object Prover { /* messages */ sealed abstract class Message type Receiver = Message => Unit class Input(val name: String, val args: List[String]) extends Message { override def toString: String = XML.Elem(Markup(Markup.PROVER_COMMAND, List((Markup.NAME, name))), args.flatMap(s => List(XML.newline, XML.elem(Markup.PROVER_ARG, YXML.parse_body(s))))).toString } class Output(val message: XML.Elem) extends Message { def kind: String = message.markup.name def properties: Properties.T = message.markup.properties def body: XML.Body = message.body def is_init: Boolean = kind == Markup.INIT def is_exit: Boolean = kind == Markup.EXIT def is_stdout: Boolean = kind == Markup.STDOUT def is_stderr: Boolean = kind == Markup.STDERR def is_system: Boolean = kind == Markup.SYSTEM def is_status: Boolean = kind == Markup.STATUS def is_report: Boolean = kind == Markup.REPORT def is_syslog: Boolean = is_init || is_exit || is_system || is_stderr override def toString: String = { val res = if (is_status || is_report) message.body.map(_.toString).mkString else Pretty.string_of(message.body, metric = Symbol.Metric) if (properties.isEmpty) kind + " [[" + res + "]]" else kind + " " + - (for ((x, y) <- properties) yield x + "=" + y).mkString("{", ",", "}") + " [[" + res + "]]" + (properties.map(Properties.Eq.apply)).mkString("{", ",", "}") + " [[" + res + "]]" } } class Malformed(msg: String) extends Exn.User_Error("Malformed prover message: " + msg) def bad_header(print: String): Nothing = throw new Malformed("bad message header\n" + print) def bad_chunks(): Nothing = throw new Malformed("bad message chunks") def the_chunk(chunks: List[Bytes], print: => String): Bytes = chunks match { case List(chunk) => chunk case _ => throw new Malformed("single chunk expected: " + print) } class Protocol_Output(props: Properties.T, val chunks: List[Bytes]) extends Output(XML.Elem(Markup(Markup.PROTOCOL, props), Nil)) { def chunk: Bytes = the_chunk(chunks, toString) lazy val text: String = chunk.text } } class Prover( receiver: Prover.Receiver, cache: XML.Cache, channel: System_Channel, process: Bash.Process) extends Protocol { /** receiver output **/ private def system_output(text: String): Unit = { receiver(new Prover.Output(XML.Elem(Markup(Markup.SYSTEM, Nil), List(XML.Text(text))))) } private def protocol_output(props: Properties.T, chunks: List[Bytes]): Unit = { receiver(new Prover.Protocol_Output(props, chunks)) } private def output(kind: String, props: Properties.T, body: XML.Body): Unit = { val main = XML.Elem(Markup(kind, props), Protocol_Message.clean_reports(body)) val reports = Protocol_Message.reports(props, body) for (msg <- main :: reports) receiver(new Prover.Output(cache.elem(msg))) } private def exit_message(result: Process_Result): Unit = { output(Markup.EXIT, Markup.Process_Result(result), List(XML.Text(result.print_return_code))) } /** process manager **/ private val process_result: Future[Process_Result] = Future.thread("process_result") { val rc = process.join val timing = process.get_timing Process_Result(rc, timing = timing) } private def terminate_process(): Unit = { try { process.terminate() } catch { case exn @ ERROR(_) => system_output("Failed to terminate prover process: " + exn.getMessage) } } private val process_manager = Isabelle_Thread.fork(name = "process_manager") { val stdout = physical_output(false) val (startup_failed, startup_errors) = { var finished: Option[Boolean] = None val result = new 
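// Startup handshake (as implemented below): the prover signals a
// successful start by writing character \u0002 to stderr; anything
// received before that marker is collected as startup error text.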
StringBuilder(100) while (finished.isEmpty && (process.stderr.ready || !process_result.is_finished)) { while (finished.isEmpty && process.stderr.ready) { try { val c = process.stderr.read if (c == 2) finished = Some(true) else result += c.toChar } catch { case _: IOException => finished = Some(false) } } Time.seconds(0.05).sleep() } (finished.isEmpty || !finished.get, result.toString.trim) } if (startup_errors != "") system_output(startup_errors) if (startup_failed) { terminate_process() process_result.join stdout.join exit_message(Process_Result(127)) } else { val (command_stream, message_stream) = channel.rendezvous() command_input_init(command_stream) val stderr = physical_output(true) val message = message_output(message_stream) val result = process_result.join system_output("process terminated") command_input_close() for (thread <- List(stdout, stderr, message)) thread.join system_output("process_manager terminated") exit_message(result) } channel.shutdown() } /* management methods */ def join(): Unit = process_manager.join() def terminate(): Unit = { system_output("Terminating prover process") command_input_close() var count = 10 while (!process_result.is_finished && count > 0) { Time.seconds(0.1).sleep() count -= 1 } if (!process_result.is_finished) terminate_process() } /** process streams **/ /* command input */ private var command_input: Option[Consumer_Thread[List[Bytes]]] = None private def command_input_close(): Unit = command_input.foreach(_.shutdown()) private def command_input_init(raw_stream: OutputStream): Unit = { val name = "command_input" val stream = new BufferedOutputStream(raw_stream) command_input = Some( Consumer_Thread.fork(name)( consume = { case chunks => try { Bytes(chunks.map(_.length).mkString("", ",", "\n")).write_stream(stream) chunks.foreach(_.write_stream(stream)) stream.flush true } catch { case e: IOException => system_output(name + ": " + e.getMessage); false } }, finish = { case () => stream.close(); system_output(name + " terminated") } ) ) } /* physical output */ private def physical_output(err: Boolean): Thread = { val (name, reader, markup) = if (err) ("standard_error", process.stderr, Markup.STDERR) else ("standard_output", process.stdout, Markup.STDOUT) Isabelle_Thread.fork(name = name) { try { var result = new StringBuilder(100) var finished = false while (!finished) { //{{{ var c = -1 var done = false while (!done && (result.isEmpty || reader.ready)) { c = reader.read if (c >= 0) result.append(c.asInstanceOf[Char]) else done = true } if (result.nonEmpty) { output(markup, Nil, List(XML.Text(Symbol.decode(result.toString)))) result.clear() } else { reader.close() finished = true } //}}} } } catch { case e: IOException => system_output(name + ": " + e.getMessage) } system_output(name + " terminated") } } /* message output */ private def message_output(stream: InputStream): Thread = { def decode_chunk(chunk: Bytes): XML.Body = Symbol.decode_yxml_failsafe(chunk.text, cache = cache) val thread_name = "message_output" Isabelle_Thread.fork(name = thread_name) { try { var finished = false while (!finished) { Byte_Message.read_message(stream) match { case None => finished = true case Some(header :: chunks) => decode_chunk(header) match { case List(XML.Elem(Markup(kind, props), Nil)) => if (kind == Markup.PROTOCOL) protocol_output(props, chunks) else output(kind, props, decode_chunk(Prover.the_chunk(chunks, kind))) case _ => Prover.bad_header(header.toString) } case Some(_) => Prover.bad_chunks() } } } catch { case e: IOException => 
system_output("Cannot read message:\n" + e.getMessage) case e: Prover.Malformed => system_output(e.getMessage) } stream.close() system_output(thread_name + " terminated") } } /** protocol commands **/ var trace: Boolean = false def protocol_command_raw(name: String, args: List[Bytes]): Unit = command_input match { case Some(thread) if thread.is_active => if (trace) { val payload = args.foldLeft(0) { case (n, b) => n + b.length } Output.writeln( "protocol_command " + name + ", args = " + args.length + ", payload = " + payload) } thread.send(Bytes(name) :: args) case _ => error("Inactive prover input thread for command " + quote(name)) } def protocol_command_args(name: String, args: List[String]): Unit = { receiver(new Prover.Input(name, args)) protocol_command_raw(name, args.map(Bytes(_))) } def protocol_command(name: String, args: String*): Unit = protocol_command_args(name, args.toList) } diff --git a/src/Pure/PIDE/resources.ML b/src/Pure/PIDE/resources.ML --- a/src/Pure/PIDE/resources.ML +++ b/src/Pure/PIDE/resources.ML @@ -1,443 +1,443 @@ (* Title: Pure/PIDE/resources.ML Author: Makarius Resources for theories and auxiliary files. *) signature RESOURCES = sig val default_qualifier: string val init_session: {session_positions: (string * Properties.T) list, session_directories: (string * string) list, session_chapters: (string * string) list, bibtex_entries: (string * string list) list, command_timings: Properties.T list, scala_functions: (string * (bool * Position.T)) list, global_theories: (string * string) list, loaded_theories: string list} -> unit val init_session_yxml: string -> unit val init_session_file: Path.T -> unit val finish_session_base: unit -> unit val global_theory: string -> string option val loaded_theory: string -> bool val check_session: Proof.context -> string * Position.T -> string val session_chapter: string -> string val last_timing: Toplevel.transition -> Time.time val scala_functions: unit -> string list val check_scala_function: Proof.context -> string * Position.T -> string * bool val master_directory: theory -> Path.T val imports_of: theory -> (string * Position.T) list val begin_theory: Path.T -> Thy_Header.header -> theory list -> theory val thy_path: Path.T -> Path.T val theory_qualifier: string -> string val theory_bibtex_entries: string -> string list val find_theory_file: string -> Path.T option val import_name: string -> Path.T -> string -> {node_name: Path.T, master_dir: Path.T, theory_name: string} val check_thy: Path.T -> string -> {master: Path.T * SHA1.digest, text: string, theory_pos: Position.T, imports: (string * Position.T) list, keywords: Thy_Header.keywords} val parse_files: (Path.T -> Path.T list) -> (theory -> Token.file list) parser val parse_file: (theory -> Token.file) parser val provide: Path.T * SHA1.digest -> theory -> theory val provide_file: Token.file -> theory -> theory val provide_parse_files: (Path.T -> Path.T list) -> (theory -> Token.file list * theory) parser val provide_parse_file: (theory -> Token.file * theory) parser val loaded_files_current: theory -> bool val check_path: Proof.context -> Path.T option -> Input.source -> Path.T val check_file: Proof.context -> Path.T option -> Input.source -> Path.T val check_dir: Proof.context -> Path.T option -> Input.source -> Path.T val check_session_dir: Proof.context -> Path.T option -> Input.source -> Path.T end; structure Resources: RESOURCES = struct (* command timings *) type timings = ((string * Time.time) Inttab.table) Symtab.table; (*file -> offset -> name, time*) val 
empty_timings: timings = Symtab.empty; fun update_timings props = (case Markup.parse_command_timing_properties props of SOME ({file, offset, name}, time) => Symtab.map_default (file, Inttab.empty) (Inttab.map_default (offset, (name, time)) (fn (_, t) => (name, t + time))) | NONE => I); fun make_timings command_timings = fold update_timings command_timings empty_timings; fun approximative_id name pos = (case (Position.file_of pos, Position.offset_of pos) of (SOME file, SOME offset) => if name = "" then NONE else SOME {file = file, offset = offset, name = name} | _ => NONE); fun get_timings timings tr = (case approximative_id (Toplevel.name_of tr) (Toplevel.pos_of tr) of SOME {file, offset, name} => (case Symtab.lookup timings file of SOME offsets => (case Inttab.lookup offsets offset of SOME (name', time) => if name = name' then SOME time else NONE | NONE => NONE) | NONE => NONE) | NONE => NONE) |> the_default Time.zeroTime; (* session base *) val default_qualifier = "Draft"; type entry = {pos: Position.T, serial: serial}; fun make_entry props : entry = {pos = Position.of_properties props, serial = serial ()}; val empty_session_base = ({session_positions = []: (string * entry) list, session_directories = Symtab.empty: Path.T list Symtab.table, session_chapters = Symtab.empty: string Symtab.table, bibtex_entries = Symtab.empty: string list Symtab.table, timings = empty_timings, scala_functions = Symtab.empty: (bool * Position.T) Symtab.table}, {global_theories = Symtab.empty: string Symtab.table, loaded_theories = Symtab.empty: unit Symtab.table}); val global_session_base = Synchronized.var "Sessions.base" empty_session_base; fun init_session {session_positions, session_directories, session_chapters, bibtex_entries, command_timings, scala_functions, global_theories, loaded_theories} = Synchronized.change global_session_base (fn _ => ({session_positions = sort_by #1 (map (apsnd make_entry) session_positions), session_directories = fold_rev (fn (dir, name) => Symtab.cons_list (name, Path.explode dir)) session_directories Symtab.empty, session_chapters = Symtab.make session_chapters, bibtex_entries = Symtab.make bibtex_entries, timings = make_timings command_timings, scala_functions = Symtab.make scala_functions}, {global_theories = Symtab.make global_theories, loaded_theories = Symtab.make_set loaded_theories})); fun init_session_yxml yxml = let val (session_positions, (session_directories, (session_chapters, (bibtex_entries, (command_timings, (scala_functions, (global_theories, loaded_theories))))))) = YXML.parse_body yxml |> let open XML.Decode in (pair (list (pair string properties)) (pair (list (pair string string)) (pair (list (pair string string)) (pair (list (pair string (list string))) (pair (list properties) (pair (list (pair string (pair bool properties))) (pair (list (pair string string)) (list string)))))))) end; in init_session {session_positions = session_positions, session_directories = session_directories, session_chapters = session_chapters, bibtex_entries = bibtex_entries, command_timings = command_timings, scala_functions = (map o apsnd o apsnd) Position.of_properties scala_functions, global_theories = global_theories, loaded_theories = loaded_theories} end; fun init_session_file path = init_session_yxml (File.read path) before File.rm path; fun finish_session_base () = Synchronized.change global_session_base (apfst (K (#1 empty_session_base))); fun get_session_base f = f (Synchronized.value global_session_base); fun get_session_base1 f = get_session_base (f o #1); fun 
get_session_base2 f = get_session_base (f o #2); fun global_theory a = Symtab.lookup (get_session_base2 #global_theories) a; fun loaded_theory a = Symtab.defined (get_session_base2 #loaded_theories) a; fun check_session ctxt arg = Completion.check_item "session" (fn (name, {pos, serial}) => Markup.entity Markup.sessionN name |> Markup.properties (Position.entity_properties_of false serial pos)) (get_session_base1 #session_positions) ctxt arg; fun session_chapter name = the_default "Unsorted" (Symtab.lookup (get_session_base1 #session_chapters) name); fun last_timing tr = get_timings (get_session_base1 #timings) tr; (* Scala functions *) (*raw bootstrap environment*) fun scala_functions () = space_explode "," (getenv "ISABELLE_SCALA_FUNCTIONS"); (*regular resources*) fun scala_function a = (a, the_default (false, Position.none) (Symtab.lookup (get_session_base1 #scala_functions) a)); fun check_scala_function ctxt arg = let val funs = scala_functions () |> sort_strings |> map scala_function; val name = Completion.check_entity Markup.scala_functionN (map (apsnd #2) funs) ctxt arg; val multi = (case AList.lookup (op =) funs name of SOME (multi, _) => multi | NONE => false); in (name, multi) end; val _ = Theory.setup - (Thy_Output.antiquotation_verbatim_embedded \<^binding>\scala_function\ + (Document_Output.antiquotation_verbatim_embedded \<^binding>\scala_function\ (Scan.lift Parse.embedded_position) (#1 oo check_scala_function) #> ML_Antiquotation.inline_embedded \<^binding>\scala_function\ (Args.context -- Scan.lift Parse.embedded_position >> (uncurry check_scala_function #> #1 #> ML_Syntax.print_string)) #> ML_Antiquotation.value_embedded \<^binding>\scala\ (Args.context -- Scan.lift Args.embedded_position >> (fn (ctxt, arg) => let val (name, multi) = check_scala_function ctxt arg; val func = if multi then "Scala.function" else "Scala.function1"; in ML_Syntax.atomic (func ^ " " ^ ML_Syntax.print_string name) end))); (* manage source files *) type files = {master_dir: Path.T, (*master directory of theory source*) imports: (string * Position.T) list, (*source specification of imports*) provided: (Path.T * SHA1.digest) list}; (*source path, digest*) fun make_files (master_dir, imports, provided): files = {master_dir = master_dir, imports = imports, provided = provided}; structure Files = Theory_Data ( type T = files; val empty = make_files (Path.current, [], []); val extend = I; fun merge ({master_dir, imports, provided = provided1}, {provided = provided2, ...}) = let val provided' = Library.merge (op =) (provided1, provided2) in make_files (master_dir, imports, provided') end ); fun map_files f = Files.map (fn {master_dir, imports, provided} => make_files (f (master_dir, imports, provided))); val master_directory = #master_dir o Files.get; val imports_of = #imports o Files.get; fun begin_theory master_dir {name, imports, keywords} parents = Theory.begin_theory name parents |> map_files (fn _ => (Path.explode (Path.implode_symbolic master_dir), imports, [])) |> Thy_Header.add_keywords keywords; (* theory files *) val thy_path = Path.ext "thy"; fun theory_qualifier theory = (case global_theory theory of SOME qualifier => qualifier | NONE => Long_Name.qualifier theory); fun theory_name qualifier theory = if Long_Name.is_qualified theory orelse is_some (global_theory theory) then theory else Long_Name.qualify qualifier theory; fun theory_bibtex_entries theory = Symtab.lookup_list (get_session_base1 #bibtex_entries) (theory_qualifier theory); fun find_theory_file thy_name = let val thy_file = 
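(*find_theory_file: consult the directories registered for the theory's
  session qualifier and take the first one that actually contains the
  corresponding .thy file, cf. import_name below*)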
thy_path (Path.basic (Long_Name.base_name thy_name)); val session = theory_qualifier thy_name; val dirs = Symtab.lookup_list (get_session_base1 #session_directories) session; in dirs |> get_first (fn dir => let val path = dir + thy_file in if File.is_file path then SOME path else NONE end) end; fun make_theory_node node_name theory = {node_name = node_name, master_dir = Path.dir node_name, theory_name = theory}; fun loaded_theory_node theory = {node_name = Path.basic theory, master_dir = Path.current, theory_name = theory}; fun import_name qualifier dir s = let val theory = theory_name qualifier (Thy_Header.import_name s); fun theory_node () = make_theory_node (File.full_path dir (thy_path (Path.expand (Path.explode s)))) theory; in if not (Thy_Header.is_base_name s) then theory_node () else if loaded_theory theory then loaded_theory_node theory else (case find_theory_file theory of SOME node_name => make_theory_node node_name theory | NONE => if Long_Name.is_qualified s then loaded_theory_node theory else theory_node ()) end; fun check_file dir file = File.check_file (File.full_path dir file); fun check_thy dir thy_name = let val thy_base_name = Long_Name.base_name thy_name; val master_file = (case find_theory_file thy_name of SOME path => check_file Path.current path | NONE => check_file dir (thy_path (Path.basic thy_base_name))); val text = File.read master_file; val {name = (name, pos), imports, keywords} = Thy_Header.read (Path.position master_file) text; val _ = thy_base_name <> name andalso error ("Bad theory name " ^ quote name ^ " for file " ^ Path.print (Path.base master_file) ^ Position.here pos); in {master = (master_file, SHA1.digest text), text = text, theory_pos = pos, imports = imports, keywords = keywords} end; (* load files *) fun parse_files make_paths = Scan.ahead Parse.not_eof -- Parse.path_input >> (fn (tok, source) => fn thy => (case Token.get_files tok of [] => let val master_dir = master_directory thy; val name = Input.string_of source; val pos = Input.pos_of source; val delimited = Input.is_delimited source; val src_paths = make_paths (Path.explode name); in map (Command.read_file master_dir pos delimited) src_paths end | files => map Exn.release files)); val parse_file = parse_files single >> (fn f => f #> the_single); fun provide (src_path, id) = map_files (fn (master_dir, imports, provided) => if AList.defined (op =) provided src_path then error ("Duplicate use of source file: " ^ Path.print src_path) else (master_dir, imports, (src_path, id) :: provided)); fun provide_file (file: Token.file) = provide (#src_path file, #digest file); fun provide_parse_files make_paths = parse_files make_paths >> (fn files => fn thy => let val fs = files thy; val thy' = fold (fn {src_path, digest, ...} => provide (src_path, digest)) fs thy; in (fs, thy') end); val provide_parse_file = provide_parse_files single >> (fn f => f #>> the_single); fun load_file thy src_path = let val full_path = check_file (master_directory thy) src_path; val text = File.read full_path; val id = SHA1.digest text; in ((full_path, id), text) end; fun loaded_files_current thy = #provided (Files.get thy) |> forall (fn (src_path, id) => (case try (load_file thy) src_path of NONE => false | SOME ((_, id'), _) => id = id')); (* formal check *) fun formal_check check_file ctxt opt_dir source = let val name = Input.string_of source; val pos = Input.pos_of source; val delimited = Input.is_delimited source; val _ = Context_Position.report ctxt pos (Markup.language_path delimited); fun err msg = error (msg ^ 
Position.here pos); val dir = (case opt_dir of SOME dir => dir | NONE => master_directory (Proof_Context.theory_of ctxt)); val path = dir + Path.explode name handle ERROR msg => err msg; val _ = Path.expand path handle ERROR msg => err msg; val _ = Context_Position.report ctxt pos (Markup.path (Path.implode_symbolic path)); val _ : Path.T = check_file path handle ERROR msg => err msg; in path end; val check_path = formal_check I; val check_file = formal_check File.check_file; val check_dir = formal_check File.check_dir; fun check_session_dir ctxt opt_dir s = let val dir = Path.expand (check_dir ctxt opt_dir s); val ok = File.is_file (dir + Path.explode("ROOT")) orelse File.is_file (dir + Path.explode("ROOTS")); in if ok then dir else error ("Bad session root directory (missing ROOT or ROOTS): " ^ Path.print dir ^ Position.here (Input.pos_of s)) end; (* antiquotations *) local fun document_antiq (check: Proof.context -> Path.T option -> Input.source -> Path.T) = Args.context -- Scan.lift Parse.path_input >> (fn (ctxt, source) => let val _ = check ctxt NONE source; val latex = Latex.string (Latex.output_ascii_breakable "/" (Input.string_of source)); in Latex.enclose_block "\\isatt{" "}" [latex] end); fun ML_antiq check = Args.context -- Scan.lift Parse.path_input >> (fn (ctxt, source) => check ctxt (SOME Path.current) source |> ML_Syntax.print_path); in val _ = Theory.setup - (Thy_Output.antiquotation_verbatim_embedded \<^binding>\session\ + (Document_Output.antiquotation_verbatim_embedded \<^binding>\session\ (Scan.lift Parse.embedded_position) check_session #> - Thy_Output.antiquotation_raw_embedded \<^binding>\path\ (document_antiq check_path) (K I) #> - Thy_Output.antiquotation_raw_embedded \<^binding>\file\ (document_antiq check_file) (K I) #> - Thy_Output.antiquotation_raw_embedded \<^binding>\dir\ (document_antiq check_dir) (K I) #> + Document_Output.antiquotation_raw_embedded \<^binding>\path\ (document_antiq check_path) (K I) #> + Document_Output.antiquotation_raw_embedded \<^binding>\file\ (document_antiq check_file) (K I) #> + Document_Output.antiquotation_raw_embedded \<^binding>\dir\ (document_antiq check_dir) (K I) #> ML_Antiquotation.value_embedded \<^binding>\path\ (ML_antiq check_path) #> ML_Antiquotation.value_embedded \<^binding>\file\ (ML_antiq check_file) #> ML_Antiquotation.value_embedded \<^binding>\dir\ (ML_antiq check_dir) #> ML_Antiquotation.value_embedded \<^binding>\path_binding\ (Scan.lift (Parse.position Parse.path) >> (ML_Syntax.print_path_binding o Path.explode_binding)) #> ML_Antiquotation.value \<^binding>\master_dir\ (Args.theory >> (ML_Syntax.print_path o master_directory))); end; end; diff --git a/src/Pure/PIDE/yxml.scala b/src/Pure/PIDE/yxml.scala --- a/src/Pure/PIDE/yxml.scala +++ b/src/Pure/PIDE/yxml.scala @@ -1,153 +1,148 @@ /* Title: Pure/PIDE/yxml.scala Author: Makarius Efficient text representation of XML trees. Suitable for direct inlining into plain text. 
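Sketch of the encoding, with markers X = \u0005 and Y = \u0006 as defined
below: an element <elem att="value">body</elem> is written as

  X Y elem Y att=value X body X Y X   (markers separated for readability)

so attributes reuse the "a=b" entry syntax, and parse_attrib below can
rely on Properties.Eq.unapply.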
*/ package isabelle import scala.collection.mutable object YXML { /* chunk markers */ val X = '\u0005' val Y = '\u0006' val is_X: Char => Boolean = _ == X val is_Y: Char => Boolean = _ == Y val X_string: String = X.toString val Y_string: String = Y.toString val XY_string: String = X_string + Y_string val XYX_string: String = XY_string + X_string def detect(s: String): Boolean = s.exists(c => c == X || c == Y) def detect_elem(s: String): Boolean = s.startsWith(XY_string) /* string representation */ def traversal(string: String => Unit, body: XML.Body): Unit = { def tree(t: XML.Tree): Unit = t match { case XML.Elem(markup @ Markup(name, atts), ts) => if (markup.is_empty) ts.foreach(tree) else { string(XY_string) string(name) for ((a, x) <- atts) { string(Y_string); string(a); string("="); string(x) } string(X_string) ts.foreach(tree) string(XYX_string) } case XML.Text(text) => string(text) } body.foreach(tree) } def string_of_body(body: XML.Body): String = { val s = new StringBuilder traversal(str => s ++= str, body) s.toString } def string_of_tree(tree: XML.Tree): String = string_of_body(List(tree)) /* parsing */ private def err(msg: String) = error("Malformed YXML: " + msg) private def err_attribute() = err("bad attribute") private def err_element() = err("bad element") private def err_unbalanced(name: String) = if (name == "") err("unbalanced element") else err("unbalanced element " + quote(name)) private def parse_attrib(source: CharSequence): (String, String) = - { - val s = source.toString - val i = s.indexOf('=') - if (i <= 0) err_attribute() - (s.substring(0, i), s.substring(i + 1)) - } + Properties.Eq.unapply(source.toString) getOrElse err_attribute() def parse_body(source: CharSequence, cache: XML.Cache = XML.Cache.none): XML.Body = { /* stack operations */ def buffer(): mutable.ListBuffer[XML.Tree] = new mutable.ListBuffer[XML.Tree] var stack: List[(Markup, mutable.ListBuffer[XML.Tree])] = List((Markup.Empty, buffer())) def add(x: XML.Tree): Unit = (stack: @unchecked) match { case (_, body) :: _ => body += x } def push(name: String, atts: XML.Attributes): Unit = if (name == "") err_element() else stack = (cache.markup(Markup(name, atts)), buffer()) :: stack def pop(): Unit = (stack: @unchecked) match { case (Markup.Empty, _) :: _ => err_unbalanced("") case (markup, body) :: pending => stack = pending add(cache.tree0(XML.Elem(markup, body.toList))) } /* parse chunks */ for (chunk <- Library.separated_chunks(is_X, source) if chunk.length != 0) { if (chunk.length == 1 && chunk.charAt(0) == Y) pop() else { Library.separated_chunks(is_Y, chunk).toList match { case ch :: name :: atts if ch.length == 0 => push(name.toString, atts.map(parse_attrib)) case txts => for (txt <- txts) add(cache.tree0(XML.Text(cache.string(txt.toString)))) } } } (stack: @unchecked) match { case List((Markup.Empty, body)) => body.toList case (Markup(name, _), _) :: _ => err_unbalanced(name) } } def parse(source: CharSequence, cache: XML.Cache = XML.Cache.none): XML.Tree = parse_body(source, cache = cache) match { case List(result) => result case Nil => XML.no_text case _ => err("multiple XML trees") } def parse_elem(source: CharSequence, cache: XML.Cache = XML.Cache.none): XML.Tree = parse_body(source, cache = cache) match { case List(elem: XML.Elem) => elem case _ => err("single XML element expected") } /* failsafe parsing */ private def markup_broken(source: CharSequence) = XML.Elem(Markup.Broken, List(XML.Text(source.toString))) def parse_body_failsafe(source: CharSequence, cache: XML.Cache = XML.Cache.none): 
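// The failsafe variants here never throw on malformed YXML: the raw
// source text is wrapped into an XML element with Markup.Broken instead.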
XML.Body = { try { parse_body(source, cache = cache) } catch { case ERROR(_) => List(markup_broken(source)) } } def parse_failsafe(source: CharSequence, cache: XML.Cache = XML.Cache.none): XML.Tree = { try { parse(source, cache = cache) } catch { case ERROR(_) => markup_broken(source) } } } diff --git a/src/Pure/ROOT.ML b/src/Pure/ROOT.ML --- a/src/Pure/ROOT.ML +++ b/src/Pure/ROOT.ML @@ -1,358 +1,358 @@ (* Title: Pure/ROOT.ML Author: Makarius Main entry point for the Isabelle/Pure bootstrap process. Note: When this file is open in the Prover IDE, the ML files of Isabelle/Pure can be explored interactively. This is a separate copy of Pure within Pure: it does not affect the running logic session. *) chapter "Isabelle/Pure bootstrap"; ML_file "ML/ml_name_space.ML"; section "Bootstrap phase 0: Poly/ML setup"; ML_file "ML/ml_init.ML"; ML_file "ML/ml_system.ML"; ML_file "General/basics.ML"; ML_file "General/symbol_explode.ML"; ML_file "Concurrent/multithreading.ML"; ML_file "Concurrent/unsynchronized.ML"; ML_file "Concurrent/synchronized.ML"; ML_file "Concurrent/counter.ML"; ML_file "ML/ml_heap.ML"; ML_file "ML/ml_profiling.ML"; ML_file "ML/ml_print_depth0.ML"; ML_file "ML/ml_pretty.ML"; ML_file "ML/ml_compiler0.ML"; section "Bootstrap phase 1: towards ML within position context"; subsection "Library of general tools"; ML_file "library.ML"; ML_file "General/print_mode.ML"; ML_file "General/alist.ML"; ML_file "General/table.ML"; ML_file "General/random.ML"; ML_file "General/value.ML"; ML_file "General/properties.ML"; ML_file "General/output.ML"; ML_file "PIDE/markup.ML"; ML_file "General/utf8.ML"; ML_file "General/scan.ML"; ML_file "General/source.ML"; ML_file "General/symbol.ML"; ML_file "General/position.ML"; ML_file "General/symbol_pos.ML"; ML_file "General/input.ML"; ML_file "General/comment.ML"; ML_file "General/antiquote.ML"; ML_file "ML/ml_lex.ML"; ML_file "ML/ml_compiler1.ML"; section "Bootstrap phase 2: towards ML within Isar context"; subsection "Library of general tools"; ML_file "General/integer.ML"; ML_file "General/stack.ML"; ML_file "General/queue.ML"; ML_file "General/heap.ML"; ML_file "General/same.ML"; ML_file "General/ord_list.ML"; ML_file "General/balanced_tree.ML"; ML_file "General/linear_set.ML"; ML_file "General/buffer.ML"; ML_file "General/pretty.ML"; ML_file "General/rat.ML"; ML_file "PIDE/xml.ML"; ML_file "General/path.ML"; ML_file "General/url.ML"; ML_file "System/bash.ML"; ML_file "General/file.ML"; ML_file "General/long_name.ML"; ML_file "General/binding.ML"; ML_file "General/socket_io.ML"; ML_file "General/seq.ML"; ML_file "General/time.ML"; ML_file "General/timing.ML"; ML_file "General/sha1.ML"; ML_file "PIDE/yxml.ML"; ML_file "PIDE/byte_message.ML"; ML_file "PIDE/protocol_message.ML"; ML_file "PIDE/document_id.ML"; ML_file "General/change_table.ML"; ML_file "General/graph.ML"; ML_file "System/options.ML"; subsection "Fundamental structures"; ML_file "name.ML"; ML_file "term.ML"; ML_file "context.ML"; ML_file "config.ML"; ML_file "context_position.ML"; ML_file "soft_type_system.ML"; subsection "Concurrency within the ML runtime"; ML_file "ML/exn_properties.ML"; ML_file_no_debug "ML/exn_debugger.ML"; ML_file "Concurrent/thread_data_virtual.ML"; ML_file "Concurrent/isabelle_thread.ML"; ML_file "Concurrent/single_assignment.ML"; ML_file "Concurrent/par_exn.ML"; ML_file "Concurrent/task_queue.ML"; ML_file "Concurrent/future.ML"; ML_file "Concurrent/event_timer.ML"; ML_file "Concurrent/timeout.ML"; ML_file "Concurrent/lazy.ML"; ML_file "Concurrent/par_list.ML"; ML_file 
"Concurrent/mailbox.ML"; ML_file "Concurrent/cache.ML"; ML_file "PIDE/active.ML"; ML_file "Thy/export.ML"; subsection "Inner syntax"; ML_file "Syntax/type_annotation.ML"; ML_file "Syntax/term_position.ML"; ML_file "Syntax/lexicon.ML"; ML_file "Syntax/ast.ML"; ML_file "Syntax/syntax_ext.ML"; ML_file "Syntax/parser.ML"; ML_file "Syntax/syntax_trans.ML"; ML_file "Syntax/mixfix.ML"; ML_file "Syntax/printer.ML"; ML_file "Syntax/syntax.ML"; subsection "Core of tactical proof system"; ML_file "term_ord.ML"; ML_file "term_subst.ML"; ML_file "General/completion.ML"; ML_file "General/name_space.ML"; ML_file "sorts.ML"; ML_file "type.ML"; ML_file "logic.ML"; ML_file "Syntax/simple_syntax.ML"; ML_file "net.ML"; ML_file "item_net.ML"; ML_file "envir.ML"; ML_file "consts.ML"; ML_file "term_xml.ML"; ML_file "primitive_defs.ML"; ML_file "sign.ML"; ML_file "defs.ML"; ML_file "term_sharing.ML"; ML_file "pattern.ML"; ML_file "unify.ML"; ML_file "theory.ML"; ML_file "proofterm.ML"; ML_file "thm.ML"; ML_file "more_pattern.ML"; ML_file "more_unify.ML"; ML_file "more_thm.ML"; ML_file "facts.ML"; ML_file "thm_name.ML"; ML_file "global_theory.ML"; ML_file "pure_thy.ML"; ML_file "drule.ML"; ML_file "morphism.ML"; ML_file "variable.ML"; ML_file "conv.ML"; ML_file "goal_display.ML"; ML_file "tactical.ML"; ML_file "search.ML"; ML_file "tactic.ML"; ML_file "raw_simplifier.ML"; ML_file "conjunction.ML"; ML_file "assumption.ML"; subsection "Isar -- Intelligible Semi-Automated Reasoning"; (*ML support and global execution*) ML_file "ML/ml_syntax.ML"; ML_file "ML/ml_env.ML"; ML_file "ML/ml_options.ML"; ML_file "ML/ml_print_depth.ML"; ML_file_no_debug "Isar/runtime.ML"; ML_file "PIDE/execution.ML"; ML_file "ML/ml_compiler.ML"; ML_file "skip_proof.ML"; ML_file "goal.ML"; (*outer syntax*) ML_file "Isar/keyword.ML"; ML_file "Isar/token.ML"; ML_file "Isar/parse.ML"; ML_file "Thy/document_source.ML"; ML_file "Thy/thy_header.ML"; ML_file "Thy/document_marker.ML"; (*proof context*) ML_file "Isar/object_logic.ML"; ML_file "Isar/rule_cases.ML"; ML_file "Isar/auto_bind.ML"; ML_file "type_infer.ML"; ML_file "Syntax/local_syntax.ML"; ML_file "Isar/proof_context.ML"; ML_file "type_infer_context.ML"; ML_file "Syntax/syntax_phases.ML"; ML_file "Isar/args.ML"; (*theory specifications*) ML_file "Isar/local_defs.ML"; ML_file "Isar/local_theory.ML"; ML_file "Isar/entity.ML"; ML_file "PIDE/command_span.ML"; ML_file "Thy/thy_element.ML"; ML_file "Thy/markdown.ML"; ML_file "Thy/latex.ML"; (*ML with context and antiquotations*) ML_file "ML/ml_context.ML"; ML_file "ML/ml_antiquotation.ML"; ML_file "ML/ml_compiler2.ML"; ML_file "ML/ml_antiquotations1.ML"; section "Bootstrap phase 3: towards theory Pure and final ML toplevel setup"; (*basic proof engine*) ML_file "par_tactical.ML"; ML_file "context_tactic.ML"; ML_file "Isar/proof_display.ML"; ML_file "Isar/attrib.ML"; ML_file "Isar/context_rules.ML"; ML_file "Isar/method.ML"; ML_file "Isar/proof.ML"; ML_file "Isar/element.ML"; ML_file "Isar/obtain.ML"; ML_file "Isar/subgoal.ML"; ML_file "Isar/calculation.ML"; (*local theories and targets*) ML_file "Isar/locale.ML"; ML_file "Isar/generic_target.ML"; ML_file "Isar/bundle.ML"; ML_file "Isar/overloading.ML"; ML_file "axclass.ML"; ML_file "Isar/class.ML"; ML_file "Isar/named_target.ML"; ML_file "Isar/expression.ML"; ML_file "Isar/interpretation.ML"; ML_file "Isar/class_declaration.ML"; ML_file "Isar/target_context.ML"; ML_file "Isar/experiment.ML"; ML_file "simplifier.ML"; ML_file "Tools/plugin.ML"; (*executable theory content*) ML_file "Isar/code.ML"; 
(*specifications*) ML_file "Isar/spec_rules.ML"; ML_file "Isar/specification.ML"; ML_file "Isar/parse_spec.ML"; ML_file "Isar/typedecl.ML"; (*toplevel transactions*) ML_file "Isar/proof_node.ML"; ML_file "Isar/toplevel.ML"; (*proof term operations*) ML_file "Proof/proof_rewrite_rules.ML"; ML_file "Proof/proof_syntax.ML"; ML_file "Proof/proof_checker.ML"; ML_file "Proof/extraction.ML"; (*Isabelle system*) ML_file "PIDE/protocol_command.ML"; ML_file "System/scala.ML"; ML_file "System/process_result.ML"; ML_file "System/isabelle_system.ML"; (*theory documents*) ML_file "Thy/term_style.ML"; ML_file "Isar/outer_syntax.ML"; ML_file "ML/ml_antiquotations2.ML"; ML_file "ML/ml_pid.ML"; ML_file "Thy/document_antiquotation.ML"; -ML_file "Thy/thy_output.ML"; +ML_file "Thy/document_output.ML"; ML_file "Thy/document_antiquotations.ML"; ML_file "General/graph_display.ML"; ML_file "pure_syn.ML"; ML_file "PIDE/command.ML"; ML_file "PIDE/query_operation.ML"; ML_file "PIDE/resources.ML"; ML_file "Thy/thy_info.ML"; ML_file "thm_deps.ML"; ML_file "Thy/export_theory.ML"; ML_file "Thy/sessions.ML"; ML_file "PIDE/session.ML"; ML_file "PIDE/document.ML"; (*theory and proof operations*) ML_file "Isar/isar_cmd.ML"; subsection "Isabelle/Isar system"; ML_file "System/command_line.ML"; ML_file "System/message_channel.ML"; ML_file "System/isabelle_process.ML"; ML_file "System/scala_compiler.ML"; ML_file "System/isabelle_tool.ML"; ML_file "Thy/bibtex.ML"; ML_file "PIDE/protocol.ML"; ML_file "General/output_primitives_virtual.ML"; subsection "Miscellaneous tools and packages for Pure Isabelle"; ML_file "ML/ml_pp.ML"; ML_file "ML/ml_thms.ML"; ML_file "ML/ml_file.ML"; ML_file "Tools/build.ML"; ML_file "Tools/named_thms.ML"; ML_file "Tools/print_operation.ML"; ML_file "Tools/rail.ML"; ML_file "Tools/rule_insts.ML"; ML_file "Tools/thy_deps.ML"; ML_file "Tools/class_deps.ML"; ML_file "Tools/find_theorems.ML"; ML_file "Tools/find_consts.ML"; ML_file "Tools/simplifier_trace.ML"; ML_file_no_debug "Tools/debugger.ML"; ML_file "Tools/named_theorems.ML"; ML_file "Tools/doc.ML"; ML_file "Tools/jedit.ML"; ML_file "Tools/ghc.ML"; ML_file "Tools/generated_files.ML" diff --git a/src/Pure/ROOT.scala b/src/Pure/ROOT.scala --- a/src/Pure/ROOT.scala +++ b/src/Pure/ROOT.scala @@ -1,24 +1,23 @@ /* Title: Pure/ROOT.scala Author: Makarius Root of isabelle package. */ package object isabelle { val ERROR = Exn.ERROR val error = Exn.error _ def cat_error(msgs: String*): Nothing = Exn.cat_error(msgs:_*) def using[A <: AutoCloseable, B](a: A)(f: A => B): B = Library.using(a)(f) val space_explode = Library.space_explode _ val split_lines = Library.split_lines _ val cat_lines = Library.cat_lines _ val terminate_lines = Library.terminate_lines _ val quote = Library.quote _ val commas = Library.commas _ val commas_quote = Library.commas_quote _ val proper_string = Library.proper_string _ def proper_list[A](list: List[A]): Option[List[A]] = Library.proper_list(list) } - diff --git a/src/Pure/System/isabelle_platform.scala b/src/Pure/System/isabelle_platform.scala --- a/src/Pure/System/isabelle_platform.scala +++ b/src/Pure/System/isabelle_platform.scala @@ -1,67 +1,64 @@ /* Title: Pure/System/isabelle_platform.scala Author: Makarius General hardware and operating system type for Isabelle system tools. 
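A brief usage sketch for the Isabelle_Platform interface defined below; the settings values in the comments are illustrative assumptions, not part of this changeset:

  val platform = Isabelle_Platform.self            // local settings environment
  // e.g. ISABELLE_APPLE_PLATFORM64 = "arm64-darwin" on Apple Silicon
  if (platform.is_arm) println(platform.arch_64)   // prints "arm64"

  // the same information for a remote machine, via the "NAME=value"
  // line protocol parsed with Properties.Eq below:
  // Isabelle_Platform(ssh = Some(session))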
*/ package isabelle object Isabelle_Platform { val settings: List[String] = List( "ISABELLE_PLATFORM_FAMILY", "ISABELLE_PLATFORM64", "ISABELLE_WINDOWS_PLATFORM32", "ISABELLE_WINDOWS_PLATFORM64", "ISABELLE_APPLE_PLATFORM64") def apply(ssh: Option[SSH.Session] = None): Isabelle_Platform = { ssh match { case None => new Isabelle_Platform(settings.map(a => (a, Isabelle_System.getenv(a)))) case Some(ssh) => val script = File.read(Path.explode("~~/lib/scripts/isabelle-platform")) + "\n" + settings.map(a => "echo \"" + Bash.string(a) + "=$" + Bash.string(a) + "\"").mkString("\n") val result = ssh.execute("bash -c " + Bash.string(script)).check new Isabelle_Platform( result.out_lines.map(line => - space_explode('=', line) match { - case List(a, b) => (a, b) - case _ => error("Bad output: " + quote(result.out)) - })) + Properties.Eq.unapply(line) getOrElse error("Bad output: " + quote(result.out)))) } } lazy val self: Isabelle_Platform = apply() } class Isabelle_Platform private(val settings: List[(String, String)]) { private def get(name: String): String = settings.collectFirst({ case (a, b) if a == name => b }). getOrElse(error("Bad platform settings variable: " + quote(name))) val ISABELLE_PLATFORM_FAMILY: String = get("ISABELLE_PLATFORM_FAMILY") val ISABELLE_PLATFORM64: String = get("ISABELLE_PLATFORM64") val ISABELLE_WINDOWS_PLATFORM64: String = get("ISABELLE_WINDOWS_PLATFORM64") val ISABELLE_APPLE_PLATFORM64: String = get("ISABELLE_APPLE_PLATFORM64") def is_arm: Boolean = ISABELLE_PLATFORM64.startsWith("arm64-") || ISABELLE_APPLE_PLATFORM64.startsWith("arm64-") def is_linux: Boolean = ISABELLE_PLATFORM_FAMILY == "linux" def is_macos: Boolean = ISABELLE_PLATFORM_FAMILY == "macos" def is_windows: Boolean = ISABELLE_PLATFORM_FAMILY == "windows" def arch_64: String = if (is_arm) "arm64" else "x86_64" def arch_64_32: String = if (is_arm) "arm64_32" else "x86_64_32" def os_name: String = if (is_macos) "darwin" else ISABELLE_PLATFORM_FAMILY override def toString: String = ISABELLE_PLATFORM_FAMILY } diff --git a/src/Pure/System/isabelle_system.scala b/src/Pure/System/isabelle_system.scala --- a/src/Pure/System/isabelle_system.scala +++ b/src/Pure/System/isabelle_system.scala @@ -1,633 +1,633 @@ /* Title: Pure/System/isabelle_system.scala Author: Makarius Fundamental Isabelle system environment: quasi-static module with optional init operation. */ package isabelle import java.io.{File => JFile, IOException} import java.nio.file.{Path => JPath, Files, SimpleFileVisitor, FileVisitResult, StandardCopyOption, FileSystemException} import java.nio.file.attribute.BasicFileAttributes import scala.jdk.CollectionConverters._ object Isabelle_System { /** bootstrap information **/ def jdk_home(): String = { val java_home = System.getProperty("java.home", "") val home = new JFile(java_home) val parent = home.getParent if (home.getName == "jre" && parent != null && (new JFile(new JFile(parent, "bin"), "javac")).exists) parent else java_home } def bootstrap_directory( preference: String, envar: String, property: String, description: String): String = { val value = proper_string(preference) orElse // explicit argument proper_string(System.getenv(envar)) orElse // e.g. inherited from running isabelle tool proper_string(System.getProperty(property)) getOrElse // e.g. 
via JVM application boot process error("Unknown " + description + " directory") if ((new JFile(value)).isDirectory) value else error("Bad " + description + " directory " + quote(value)) } /** implicit settings environment **/ abstract class Service @volatile private var _settings: Option[Map[String, String]] = None @volatile private var _services: Option[List[Class[Service]]] = None def settings(): Map[String, String] = { if (_settings.isEmpty) init() // unsynchronized check _settings.get } def services(): List[Class[Service]] = { if (_services.isEmpty) init() // unsynchronized check _services.get } def make_services[C](c: Class[C]): List[C] = for { c1 <- services() if Library.is_subclass(c1, c) } yield c1.getDeclaredConstructor().newInstance().asInstanceOf[C] def init(isabelle_root: String = "", cygwin_root: String = ""): Unit = synchronized { if (_settings.isEmpty || _services.isEmpty) { val isabelle_root1 = bootstrap_directory(isabelle_root, "ISABELLE_ROOT", "isabelle.root", "Isabelle root") val cygwin_root1 = if (Platform.is_windows) bootstrap_directory(cygwin_root, "CYGWIN_ROOT", "cygwin.root", "Cygwin root") else "" if (Platform.is_windows) Cygwin.init(isabelle_root1, cygwin_root1) def set_cygwin_root(): Unit = { if (Platform.is_windows) _settings = Some(_settings.getOrElse(Map.empty) + ("CYGWIN_ROOT" -> cygwin_root1)) } set_cygwin_root() def default(env: Map[String, String], entry: (String, String)): Map[String, String] = if (env.isDefinedAt(entry._1) || entry._2 == "") env else env + entry val env = { val temp_windows = { val temp = if (Platform.is_windows) System.getenv("TEMP") else null if (temp != null && temp.contains('\\')) temp else "" } val user_home = System.getProperty("user.home", "") val isabelle_app = System.getProperty("isabelle.app", "") default( default( default(sys.env + ("ISABELLE_JDK_HOME" -> File.standard_path(jdk_home())), "TEMP_WINDOWS" -> temp_windows), "HOME" -> user_home), "ISABELLE_APP" -> "true") } val settings = { val dump = JFile.createTempFile("settings", null) dump.deleteOnExit try { val cmd1 = if (Platform.is_windows) List(cygwin_root1 + "\\bin\\bash", "-l", File.standard_path(isabelle_root1 + "\\bin\\isabelle")) else List(isabelle_root1 + "/bin/isabelle") val cmd = cmd1 ::: List("getenv", "-d", dump.toString) val (output, rc) = process_output(process(cmd, env = env, redirect = true)) if (rc != 0) error(output) val entries = - (for (entry <- space_explode('\u0000', File.read(dump)) if entry != "") yield { - val i = entry.indexOf('=') - if (i <= 0) entry -> "" - else entry.substring(0, i) -> entry.substring(i + 1) - }).toMap + space_explode('\u0000', File.read(dump)).flatMap( + { + case Properties.Eq(a, b) => Some(a -> b) + case s => if (s.isEmpty || s.startsWith("=")) None else Some(s -> "") + }).toMap entries + ("PATH" -> entries("PATH_JVM")) - "PATH_JVM" } finally { dump.delete } } _settings = Some(settings) set_cygwin_root() val variable = "ISABELLE_SCALA_SERVICES" val services = for (name <- space_explode(':', settings.getOrElse(variable, getenv_error(variable)))) yield { def err(msg: String): Nothing = error("Bad entry " + quote(name) + " in " + variable + "\n" + msg) try { Class.forName(name).asInstanceOf[Class[Service]] } catch { case _: ClassNotFoundException => err("Class not found") case exn: Throwable => err(Exn.message(exn)) } } _services = Some(services) } } /* getenv -- dynamic process environment */ private def getenv_error(name: String): Nothing = error("Undefined Isabelle environment variable: " + quote(name)) def getenv(name: String, 
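The settings hunk above tolerates dump entries without '=' (taken as empty values) and silently drops empty or malformed ones. The same decoding as a standalone sketch, with hypothetical input:

  // NUL-separated dump as produced by "isabelle getenv -d FILE"
  val dump = "HOME=/home/user\u0000ISABELLE_NAME=Isabelle\u0000PATH_JVM=/usr/bin"
  val entries =
    space_explode('\u0000', dump).flatMap({
      case Properties.Eq(a, b) => Some(a -> b)
      case s => if (s.isEmpty || s.startsWith("=")) None else Some(s -> "")
    }).toMap
  // PATH is replaced by its JVM rendering, as above:
  val env = entries + ("PATH" -> entries("PATH_JVM")) - "PATH_JVM"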
env: Map[String, String] = settings()): String = env.getOrElse(name, "") def getenv_strict(name: String, env: Map[String, String] = settings()): String = proper_string(getenv(name, env)) getOrElse error("Undefined Isabelle environment variable: " + quote(name)) def cygwin_root(): String = getenv_strict("CYGWIN_ROOT") /* getetc -- static distribution parameters */ def getetc(name: String, root: Path = Path.ISABELLE_HOME): Option[String] = { val path = root + Path.basic("etc") + Path.basic(name) if (path.is_file) { Library.trim_split_lines(File.read(path)) match { case Nil => None case List(s) => Some(s) case _ => error("Single line expected in " + path.absolute) } } else None } /* Isabelle distribution identification */ def isabelle_id(root: Path = Path.ISABELLE_HOME): String = getetc("ISABELLE_ID", root = root) orElse Mercurial.archive_id(root) getOrElse { if (Mercurial.is_repository(root)) Mercurial.repository(root).parent() else error("Failed to identify Isabelle distribution " + root) } object Isabelle_Id extends Scala.Fun_String("isabelle_id") { val here = Scala_Project.here def apply(arg: String): String = isabelle_id() } def isabelle_tags(root: Path = Path.ISABELLE_HOME): String = getetc("ISABELLE_TAGS", root = root) orElse Mercurial.archive_tags(root) getOrElse { if (Mercurial.is_repository(root)) { val hg = Mercurial.repository(root) hg.tags(rev = hg.parent()) } else "" } def isabelle_identifier(): Option[String] = proper_string(getenv("ISABELLE_IDENTIFIER")) def isabelle_heading(): String = isabelle_identifier() match { case None => "" case Some(version) => " (" + version + ")" } def isabelle_name(): String = getenv_strict("ISABELLE_NAME") def identification(): String = "Isabelle/" + isabelle_id() + isabelle_heading() /** file-system operations **/ /* scala functions */ private def apply_paths(args: List[String], fun: List[Path] => Unit): List[String] = { fun(args.map(Path.explode)); Nil } private def apply_paths1(args: List[String], fun: Path => Unit): List[String] = apply_paths(args, { case List(path) => fun(path) }) private def apply_paths2(args: List[String], fun: (Path, Path) => Unit): List[String] = apply_paths(args, { case List(path1, path2) => fun(path1, path2) }) private def apply_paths3(args: List[String], fun: (Path, Path, Path) => Unit): List[String] = apply_paths(args, { case List(path1, path2, path3) => fun(path1, path2, path3) }) /* permissions */ def chmod(arg: String, path: Path): Unit = bash("chmod " + arg + " " + File.bash_path(path)).check def chown(arg: String, path: Path): Unit = bash("chown " + arg + " " + File.bash_path(path)).check /* directories */ def make_directory(path: Path): Path = { if (!path.is_dir) { try { Files.createDirectories(path.file.toPath) } catch { case ERROR(_) => error("Failed to create directory: " + path.absolute) } } path } def new_directory(path: Path): Path = if (path.is_dir) error("Directory already exists: " + path.absolute) else make_directory(path) def copy_dir(dir1: Path, dir2: Path): Unit = { val res = bash("cp -a " + File.bash_path(dir1) + " " + File.bash_path(dir2)) if (!res.ok) { cat_error("Failed to copy directory " + dir1.absolute + " to " + dir2.absolute, res.err) } } object Make_Directory extends Scala.Fun_Strings("make_directory") { val here = Scala_Project.here def apply(args: List[String]): List[String] = apply_paths1(args, make_directory) } object Copy_Dir extends Scala.Fun_Strings("copy_dir") { val here = Scala_Project.here def apply(args: List[String]): List[String] = apply_paths2(args, copy_dir) } /* copy files 
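The Scala.Fun_Strings objects above all follow one pattern for exposing file-system operations to ML. A hypothetical further operation, wired up the same way (the name "touch_file" and its behaviour are illustrative only, not part of this changeset):

  def touch_file(path: Path): Unit =
    bash("touch " + File.bash_path(path)).check   // same style as chmod/chown above

  object Touch_File extends Scala.Fun_Strings("touch_file") {
    val here = Scala_Project.here
    def apply(args: List[String]): List[String] = apply_paths1(args, touch_file)
  }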
*/ def copy_file(src: JFile, dst: JFile): Unit = { val target = if (dst.isDirectory) new JFile(dst, src.getName) else dst if (!File.eq(src, target)) { try { Files.copy(src.toPath, target.toPath, StandardCopyOption.COPY_ATTRIBUTES, StandardCopyOption.REPLACE_EXISTING) } catch { case ERROR(msg) => cat_error("Failed to copy file " + File.path(src).absolute + " to " + File.path(dst).absolute, msg) } } } def copy_file(src: Path, dst: Path): Unit = copy_file(src.file, dst.file) def copy_file_base(base_dir: Path, src: Path, target_dir: Path): Unit = { val src1 = src.expand val src1_dir = src1.dir if (!src1.starts_basic) error("Illegal path specification " + src1 + " beyond base directory") copy_file(base_dir + src1, Isabelle_System.make_directory(target_dir + src1_dir)) } object Copy_File extends Scala.Fun_Strings("copy_file") { val here = Scala_Project.here def apply(args: List[String]): List[String] = apply_paths2(args, copy_file) } object Copy_File_Base extends Scala.Fun_Strings("copy_file_base") { val here = Scala_Project.here def apply(args: List[String]): List[String] = apply_paths3(args, copy_file_base) } /* move files */ def move_file(src: JFile, dst: JFile): Unit = { val target = if (dst.isDirectory) new JFile(dst, src.getName) else dst if (!File.eq(src, target)) Files.move(src.toPath, target.toPath, StandardCopyOption.REPLACE_EXISTING) } def move_file(src: Path, dst: Path): Unit = move_file(src.file, dst.file) /* symbolic link */ def symlink(src: Path, dst: Path, force: Boolean = false): Unit = { val src_file = src.file val dst_file = dst.file val target = if (dst_file.isDirectory) new JFile(dst_file, src_file.getName) else dst_file if (force) target.delete try { Files.createSymbolicLink(target.toPath, src_file.toPath) } catch { case _: UnsupportedOperationException if Platform.is_windows => Cygwin.link(File.standard_path(src), target) case _: FileSystemException if Platform.is_windows => Cygwin.link(File.standard_path(src), target) } } /* tmp files */ def isabelle_tmp_prefix(): JFile = { val path = Path.explode("$ISABELLE_TMP_PREFIX") path.file.mkdirs // low-level mkdirs to avoid recursion via Isabelle environment File.platform_file(path) } def tmp_file(name: String, ext: String = "", base_dir: JFile = isabelle_tmp_prefix()): JFile = { val suffix = if (ext == "") "" else "." 
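copy_file_base above requires the expanded source path to stay within the base directory and re-creates its directory prefix on the target side. Sketch with illustrative paths:

  // copies base/doc/manual.pdf to target/doc/manual.pdf, creating target/doc
  copy_file_base(Path.explode("base"), Path.explode("doc/manual.pdf"),
    Path.explode("target"))

  // fails with "Illegal path specification": the source escapes the base dir
  // copy_file_base(Path.explode("base"), Path.explode("../secret"), Path.explode("target"))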
+ ext val file = Files.createTempFile(base_dir.toPath, name, suffix).toFile file.deleteOnExit file } def with_tmp_file[A](name: String, ext: String = "")(body: Path => A): A = { val file = tmp_file(name, ext) try { body(File.path(file)) } finally { file.delete } } /* tmp dirs */ def rm_tree(root: JFile): Unit = { root.delete if (root.isDirectory) { Files.walkFileTree(root.toPath, new SimpleFileVisitor[JPath] { override def visitFile(file: JPath, attrs: BasicFileAttributes): FileVisitResult = { try { Files.deleteIfExists(file) } catch { case _: IOException => } FileVisitResult.CONTINUE } override def postVisitDirectory(dir: JPath, e: IOException): FileVisitResult = { if (e == null) { try { Files.deleteIfExists(dir) } catch { case _: IOException => } FileVisitResult.CONTINUE } else throw e } } ) } } def rm_tree(root: Path): Unit = rm_tree(root.file) object Rm_Tree extends Scala.Fun_Strings("rm_tree") { val here = Scala_Project.here def apply(args: List[String]): List[String] = apply_paths1(args, rm_tree) } def tmp_dir(name: String, base_dir: JFile = isabelle_tmp_prefix()): JFile = { val dir = Files.createTempDirectory(base_dir.toPath, name).toFile dir.deleteOnExit dir } def with_tmp_dir[A](name: String)(body: Path => A): A = { val dir = tmp_dir(name) try { body(File.path(dir)) } finally { rm_tree(dir) } } /* quasi-atomic update of directory */ def update_directory(dir: Path, f: Path => Unit): Unit = { val new_dir = dir.ext("new") val old_dir = dir.ext("old") rm_tree(new_dir) rm_tree(old_dir) f(new_dir) if (dir.is_dir) move_file(dir, old_dir) move_file(new_dir, dir) rm_tree(old_dir) } /** external processes **/ /* raw process */ def process(command_line: List[String], cwd: JFile = null, env: Map[String, String] = settings(), redirect: Boolean = false): Process = { val proc = new ProcessBuilder // fragile on Windows: // see https://docs.microsoft.com/en-us/cpp/cpp/main-function-command-line-args?view=msvc-160 proc.command(command_line.asJava) if (cwd != null) proc.directory(cwd) if (env != null) { proc.environment.clear() for ((x, y) <- env) proc.environment.put(x, y) } proc.redirectErrorStream(redirect) proc.start } def process_output(proc: Process): (String, Int) = { proc.getOutputStream.close() val output = File.read_stream(proc.getInputStream) val rc = try { proc.waitFor } finally { proc.getInputStream.close() proc.getErrorStream.close() proc.destroy() Exn.Interrupt.dispose() } (output, rc) } def process_signal(group_pid: String, signal: String = "0"): Boolean = { val bash = if (Platform.is_windows) List(cygwin_root() + "\\bin\\bash.exe") else List("/usr/bin/env", "bash") val (_, rc) = process_output(process(bash ::: List("-c", "kill -" + signal + " -" + group_pid))) rc == 0 } /* GNU bash */ def bash(script: String, cwd: JFile = null, env: Map[String, String] = settings(), redirect: Boolean = false, progress_stdout: String => Unit = (_: String) => (), progress_stderr: String => Unit = (_: String) => (), watchdog: Option[Bash.Watchdog] = None, strict: Boolean = true, cleanup: () => Unit = () => ()): Process_Result = { Bash.process(script, cwd = cwd, env = env, redirect = redirect, cleanup = cleanup). 
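update_directory above stages fresh content in DIR.new, moves the previous state to DIR.old, and removes it afterwards, so readers of DIR see either the old or the new version, never a half-written one. Usage sketch; the generated content is illustrative:

  Isabelle_System.update_directory(Path.explode("output/web"),
    new_dir => {
      Isabelle_System.make_directory(new_dir)
      File.write(new_dir + Path.basic("index.html"), "<html/>")
    })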
result(progress_stdout = progress_stdout, progress_stderr = progress_stderr, watchdog = watchdog, strict = strict) } private lazy val gnutar_check: Boolean = try { bash("tar --version").check.out.containsSlice("GNU tar") || error("") } catch { case ERROR(_) => false } def gnutar( args: String, dir: Path = Path.current, original_owner: Boolean = false, strip: Int = 0, redirect: Boolean = false): Process_Result = { val options = (if (dir.is_current) "" else "-C " + File.bash_path(dir) + " ") + (if (original_owner) "" else "--owner=root --group=staff ") + (if (strip <= 0) "" else "--strip-components=" + strip + " ") if (gnutar_check) bash("tar " + options + args, redirect = redirect) else error("Expected to find GNU tar executable") } def require_command(cmd: String, test: String = "--version"): Unit = { if (!bash(Bash.string(cmd) + " " + test).ok) error("Missing system command: " + quote(cmd)) } def hostname(): String = bash("hostname -s").check.out def open(arg: String): Unit = bash("exec \"$ISABELLE_OPEN\" " + Bash.string(arg) + " >/dev/null 2>/dev/null &") def pdf_viewer(arg: Path): Unit = bash("exec \"$PDF_VIEWER\" " + File.bash_path(arg) + " >/dev/null 2>/dev/null &") def open_external_file(name: String): Boolean = { val ext = Library.take_suffix((c: Char) => c != '.', name.toList)._2.mkString val external = ext.nonEmpty && Library.space_explode(':', getenv("ISABELLE_EXTERNAL_FILES")).contains(ext) if (external) { if (ext == "pdf" && Path.is_wellformed(name)) pdf_viewer(Path.explode(name)) else open(name) } external } def export_isabelle_identifier(isabelle_identifier: String): String = if (isabelle_identifier == "") "" else "export ISABELLE_IDENTIFIER=" + Bash.string(isabelle_identifier) + "\n" /** Isabelle resources **/ /* repository clone with Admin */ def admin(): Boolean = Path.explode("~~/Admin").is_dir /* components */ def components(): List[Path] = Path.split(getenv_strict("ISABELLE_COMPONENTS")) /* default logic */ def default_logic(args: String*): String = { args.find(_ != "") match { case Some(logic) => logic case None => getenv_strict("ISABELLE_LOGIC") } } /* download file */ def download(url_name: String, progress: Progress = new Progress): HTTP.Content = { val url = Url(url_name) progress.echo("Getting " + quote(url_name)) try { HTTP.Client.get(url) } catch { case ERROR(msg) => cat_error("Failed to download " + quote(url_name), msg) } } def download_file(url_name: String, file: Path, progress: Progress = new Progress): Unit = Bytes.write(file, download(url_name, progress = progress).bytes) object Download extends Scala.Fun("download", thread = true) { val here = Scala_Project.here override def invoke(args: List[Bytes]): List[Bytes] = args match { case List(url) => List(download(url.text).bytes) } } /* repositories */ val isabelle_repository: Mercurial.Server = Mercurial.Server("https://isabelle.sketis.net/repos/isabelle") val afp_repository: Mercurial.Server = Mercurial.Server("https://isabelle.sketis.net/repos/afp-devel") def official_releases(): List[String] = Library.trim_split_lines( isabelle_repository.read_file(Path.explode("Admin/Release/official"))) } diff --git a/src/Pure/System/isabelle_tool.ML b/src/Pure/System/isabelle_tool.ML --- a/src/Pure/System/isabelle_tool.ML +++ b/src/Pure/System/isabelle_tool.ML @@ -1,44 +1,44 @@ (* Title: Pure/System/isabelle_tool.ML Author: Makarius Support for Isabelle system tools. 
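A sketch of the external-process utilities defined above; the archive and directory names are illustrative:

  Isabelle_System.require_command("tar")             // error if not installed
  Isabelle_System.gnutar("-xzf component.tar.gz",    // unpack, dropping the
    dir = Path.explode("contrib"), strip = 1).check  // top-level directory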
*) signature ISABELLE_TOOL = sig val isabelle_tools: unit -> (string * Position.T) list val check: Proof.context -> string * Position.T -> string end; structure Isabelle_Tool: ISABELLE_TOOL = struct (* list tools *) fun symbolic_file (a, b) = if a = Markup.fileN then (a, Path.explode b |> Path.implode_symbolic) else (a, b); fun isabelle_tools () = \<^scala>\isabelle_tools\ "" |> YXML.parse_body |> let open XML.Decode in list (pair string properties) end |> map (apsnd (map symbolic_file #> Position.of_properties)); (* check *) fun check ctxt arg = Completion.check_item Markup.toolN (fn (name, pos) => Markup.entity Markup.toolN name |> Markup.properties (Position.def_properties_of pos)) (isabelle_tools ()) ctxt arg; val _ = Theory.setup - (Thy_Output.antiquotation_verbatim_embedded \<^binding>\tool\ + (Document_Output.antiquotation_verbatim_embedded \<^binding>\tool\ (Scan.lift Parse.embedded_position) check); end; diff --git a/src/Pure/System/isabelle_tool.scala b/src/Pure/System/isabelle_tool.scala --- a/src/Pure/System/isabelle_tool.scala +++ b/src/Pure/System/isabelle_tool.scala @@ -1,235 +1,235 @@ /* Title: Pure/System/isabelle_tool.scala Author: Makarius Author: Lars Hupel Isabelle system tools: external executables or internal Scala functions. */ package isabelle import java.net.URLClassLoader import scala.reflect.runtime.universe import scala.tools.reflect.{ToolBox, ToolBoxError} object Isabelle_Tool { /* Scala source tools */ abstract class Body extends Function[List[String], Unit] private def compile(path: Path): Body = { def err(msg: String): Nothing = cat_error(msg, "The error(s) above occurred in Isabelle/Scala tool " + path) val source = File.read(path) val class_loader = new URLClassLoader(Array(), getClass.getClassLoader) val tool_box = universe.runtimeMirror(class_loader).mkToolBox() try { val tree = tool_box.parse(source) val module = try { tree.asInstanceOf[universe.ModuleDef] } catch { case _: java.lang.ClassCastException => err("Source does not describe a module (Scala object)") } tool_box.compile(universe.Ident(tool_box.define(module)))() match { case body: Body => body case _ => err("Ill-typed source: Isabelle_Tool.Body expected") } } catch { case e: ToolBoxError => if (tool_box.frontEnd.hasErrors) { val infos = tool_box.frontEnd.infos.toList val msgs = infos.map(info => "Error in line " + info.pos.line + ":\n" + info.msg) err(msgs.mkString("\n")) } else err(e.toString) } } /* external tools */ private def dirs(): List[Path] = Path.split(Isabelle_System.getenv_strict("ISABELLE_TOOLS")) private def is_external(dir: Path, file_name: String): Boolean = { val file = (dir + Path.explode(file_name)).file try { file.isFile && file.canRead && (file_name.endsWith(".scala") || file.canExecute) && !file_name.endsWith("~") && !file_name.endsWith(".orig") } catch { case _: SecurityException => false } } private def find_external(name: String): Option[List[String] => Unit] = dirs().collectFirst({ case dir if is_external(dir, name + ".scala") => compile(dir + Path.explode(name + ".scala")) case dir if is_external(dir, name) => (args: List[String]) => { val tool = dir + Path.explode(name) val result = Isabelle_System.bash(File.bash_path(tool) + " " + Bash.strings(args)) sys.exit(result.print_stdout.rc) } }) /* internal tools */ private lazy val internal_tools: List[Isabelle_Tool] = Isabelle_System.make_services(classOf[Isabelle_Scala_Tools]).flatMap(_.tools) private def find_internal(name: String): Option[List[String] => Unit] = internal_tools.collectFirst({ case tool if tool.name == 
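compile above expects an external .scala tool source to consist of a single Scala object of type Isabelle_Tool.Body. A hypothetical source accepted by that scheme (file name and behaviour are illustrative):

  // e.g. hello.scala in a directory listed in ISABELLE_TOOLS
  object Hello extends isabelle.Isabelle_Tool.Body {
    def apply(args: List[String]): Unit =
      args.foreach(a => Console.println("hello " + a))
  }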
name => args => Command_Line.tool { tool.body(args) } }) /* list tools */ abstract class Entry { def name: String def position: Properties.T def description: String def print: String = description match { case "" => name case descr => name + " - " + descr } } sealed case class External(name: String, path: Path) extends Entry { def position: Properties.T = Position.File(path.absolute.implode) def description: String = { val Pattern = """.*\bDESCRIPTION: *(.*)""".r split_lines(File.read(path)).collectFirst({ case Pattern(s) => s }) getOrElse "" } } def external_tools(): List[External] = { for { dir <- dirs() if dir.is_dir file_name <- File.read_dir(dir) if is_external(dir, file_name) } yield { val path = dir + Path.explode(file_name) val name = Library.perhaps_unsuffix(".scala", file_name) External(name, path) } } def isabelle_tools(): List[Entry] = (external_tools() ::: internal_tools).sortBy(_.name) object Isabelle_Tools extends Scala.Fun_String("isabelle_tools") { val here = Scala_Project.here def apply(arg: String): String = if (arg.nonEmpty) error("Bad argument: " + quote(arg)) else { val result = isabelle_tools().map(entry => (entry.name, entry.position)) val body = { import XML.Encode._; list(pair(string, properties))(result) } YXML.string_of_body(body) } } /* command line entry point */ def main(args: Array[String]): Unit = { Command_Line.tool { args.toList match { case Nil | List("-?") => val tool_descriptions = isabelle_tools().map(_.print) Getopts(""" Usage: isabelle TOOL [ARGS ...] Start Isabelle TOOL with ARGS; pass "-?" for tool-specific help. Available tools:""" + tool_descriptions.mkString("\n ", "\n ", "\n")).usage() case tool_name :: tool_args => find_external(tool_name) orElse find_internal(tool_name) match { case Some(tool) => tool(tool_args) case None => error("Unknown Isabelle tool: " + quote(tool_name)) } } } } } sealed case class Isabelle_Tool( name: String, description: String, here: Scala_Project.Here, body: List[String] => Unit) extends Isabelle_Tool.Entry { def position: Position.T = here.position } class Isabelle_Scala_Tools(val tools: Isabelle_Tool*) extends Isabelle_System.Service class Tools extends Isabelle_Scala_Tools( Build.isabelle_tool, Build_Docker.isabelle_tool, Build_Job.isabelle_tool, Doc.isabelle_tool, + Document_Build.isabelle_tool, Dump.isabelle_tool, Export.isabelle_tool, ML_Process.isabelle_tool, Mercurial.isabelle_tool, Mkroot.isabelle_tool, Logo.isabelle_tool, Options.isabelle_tool, Phabricator.isabelle_tool1, Phabricator.isabelle_tool2, Phabricator.isabelle_tool3, Phabricator.isabelle_tool4, - Presentation.isabelle_tool, Profiling_Report.isabelle_tool, Server.isabelle_tool, Sessions.isabelle_tool, Scala_Project.isabelle_tool, Update.isabelle_tool, Update_Cartouches.isabelle_tool, Update_Comments.isabelle_tool, Update_Header.isabelle_tool, Update_Then.isabelle_tool, Update_Theorems.isabelle_tool, isabelle.mirabelle.Mirabelle.isabelle_tool, isabelle.vscode.TextMate_Grammar.isabelle_tool, isabelle.vscode.Language_Server.isabelle_tool) class Admin_Tools extends Isabelle_Scala_Tools( Build_CSDP.isabelle_tool, Build_Cygwin.isabelle_tool, Build_Doc.isabelle_tool, Build_E.isabelle_tool, Build_Fonts.isabelle_tool, Build_JCEF.isabelle_tool, Build_JDK.isabelle_tool, Build_JEdit.isabelle_tool, Build_PolyML.isabelle_tool1, Build_PolyML.isabelle_tool2, Build_SPASS.isabelle_tool, Build_SQLite.isabelle_tool, Build_Status.isabelle_tool, Build_Vampire.isabelle_tool, Build_VeriT.isabelle_tool, Build_Zipperposition.isabelle_tool, Check_Sources.isabelle_tool, 
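Internal tools are plain Isabelle_Tool values, collected from Isabelle_Scala_Tools service instances as in the Tools registry above. A hypothetical registration (all names illustrative):

  val hello =
    Isabelle_Tool("hello", "print a greeting", Scala_Project.here,
      args => Console.println("hello " + args.mkString(" ")))

  class My_Tools extends Isabelle_Scala_Tools(hello)
  // picked up via ISABELLE_SCALA_SERVICES, cf. Isabelle_System.init above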
Components.isabelle_tool, isabelle.vscode.Build_VSCode.isabelle_tool) diff --git a/src/Pure/System/options.scala b/src/Pure/System/options.scala --- a/src/Pure/System/options.scala +++ b/src/Pure/System/options.scala @@ -1,454 +1,452 @@ /* Title: Pure/System/options.scala Author: Makarius System options with external string representation. */ package isabelle object Options { type Spec = (String, Option[String]) val empty: Options = new Options() /* representation */ sealed abstract class Type { def print: String = Word.lowercase(toString) } case object Bool extends Type case object Int extends Type case object Real extends Type case object String extends Type case object Unknown extends Type case class Opt( public: Boolean, pos: Position.T, name: String, typ: Type, value: String, default_value: String, description: String, section: String) { private def print(default: Boolean): String = { val x = if (default) default_value else value "option " + name + " : " + typ.print + " = " + (if (typ == Options.String) quote(x) else x) + (if (description == "") "" else "\n -- " + quote(description)) } def print: String = print(false) def print_default: String = print(true) def title(strip: String = ""): String = { val words = Word.explode('_', name) val words1 = words match { case word :: rest if word == strip => rest case _ => words } Word.implode(words1.map(Word.perhaps_capitalize)) } def unknown: Boolean = typ == Unknown } /* parsing */ private val SECTION = "section" private val PUBLIC = "public" private val OPTION = "option" private val OPTIONS = Path.explode("etc/options") private val PREFS = Path.explode("$ISABELLE_HOME_USER/etc/preferences") val options_syntax: Outer_Syntax = Outer_Syntax.empty + ":" + "=" + "--" + Symbol.comment + Symbol.comment_decoded + (SECTION, Keyword.DOCUMENT_HEADING) + (PUBLIC, Keyword.BEFORE_COMMAND) + (OPTION, Keyword.THY_DECL) val prefs_syntax: Outer_Syntax = Outer_Syntax.empty + "=" trait Parser extends Parse.Parser { val option_name: Parser[String] = atom("option name", _.is_name) val option_type: Parser[String] = atom("option type", _.is_name) val option_value: Parser[String] = opt(token("-", tok => tok.is_sym_ident && tok.content == "-")) ~ atom("nat", _.is_nat) ^^ { case s ~ n => if (s.isDefined) "-" + n else n } | atom("option value", tok => tok.is_name || tok.is_float) } private object Parser extends Parser { def comment_marker: Parser[String] = $$$("--") | $$$(Symbol.comment) | $$$(Symbol.comment_decoded) val option_entry: Parser[Options => Options] = { command(SECTION) ~! text ^^ { case _ ~ a => (options: Options) => options.set_section(a) } | opt($$$(PUBLIC)) ~ command(OPTION) ~! (position(option_name) ~ $$$(":") ~ option_type ~ $$$("=") ~ option_value ~ (comment_marker ~! text ^^ { case _ ~ x => x } | success(""))) ^^ { case a ~ _ ~ ((b, pos) ~ _ ~ c ~ _ ~ d ~ e) => (options: Options) => options.declare(a.isDefined, pos, b, c, d, e) } } val prefs_entry: Parser[Options => Options] = { option_name ~ ($$$("=") ~! 
option_value) ^^ { case a ~ (_ ~ b) => (options: Options) => options.add_permissive(a, b) } } def parse_file(options: Options, file_name: String, content: String, syntax: Outer_Syntax = options_syntax, parser: Parser[Options => Options] = option_entry): Options = { val toks = Token.explode(syntax.keywords, content) val ops = parse_all(rep(parser), Token.reader(toks, Token.Pos.file(file_name))) match { case Success(result, _) => result case bad => error(bad.toString) } try { ops.foldLeft(options.set_section("")) { case (opts, op) => op(opts) } } catch { case ERROR(msg) => error(msg + Position.here(Position.File(file_name))) } } def parse_prefs(options: Options, content: String): Options = parse_file(options, PREFS.file_name, content, syntax = prefs_syntax, parser = prefs_entry) } def read_prefs(file: Path = PREFS): String = if (file.is_file) File.read(file) else "" def init(prefs: String = read_prefs(PREFS), opts: List[String] = Nil): Options = { var options = empty for { dir <- Isabelle_System.components() file = dir + OPTIONS if file.is_file } { options = Parser.parse_file(options, file.implode, File.read(file)) } opts.foldLeft(Options.Parser.parse_prefs(options, prefs))(_ + _) } /* encode */ val encode: XML.Encode.T[Options] = (options => options.encode) /* Isabelle tool wrapper */ val isabelle_tool = Isabelle_Tool("options", "print Isabelle system options", Scala_Project.here, args => { var build_options = false var get_option = "" var list_options = false var export_file = "" val getopts = Getopts(""" Usage: isabelle options [OPTIONS] [MORE_OPTIONS ...] Options are: -b include $ISABELLE_BUILD_OPTIONS -g OPTION get value of OPTION -l list options -x FILE export options to FILE in YXML format Report Isabelle system options, augmented by MORE_OPTIONS given as arguments NAME=VAL or NAME. """, "b" -> (_ => build_options = true), "g:" -> (arg => get_option = arg), "l" -> (_ => list_options = true), "x:" -> (arg => export_file = arg)) val more_options = getopts(args) if (get_option == "" && !list_options && export_file == "") getopts.usage() val options = { val options0 = Options.init() val options1 = if (build_options) Word.explode(Isabelle_System.getenv("ISABELLE_BUILD_OPTIONS")).foldLeft(options0)(_ + _) else options0 more_options.foldLeft(options1)(_ + _) } if (get_option != "") Output.writeln(options.check_name(get_option).value, stdout = true) if (export_file != "") File.write(Path.explode(export_file), YXML.string_of_body(options.encode)) if (get_option == "" && export_file == "") Output.writeln(options.print, stdout = true) }) } final class Options private( val options: Map[String, Options.Opt] = Map.empty, val section: String = "") { override def toString: String = options.iterator.mkString("Options(", ",", ")") private def print_opt(opt: Options.Opt): String = if (opt.public) "public " + opt.print else opt.print def print: String = cat_lines(options.toList.sortBy(_._1).map(p => print_opt(p._2))) def description(name: String): String = check_name(name).description /* check */ def check_name(name: String): Options.Opt = options.get(name) match { case Some(opt) if !opt.unknown => opt case _ => error("Unknown option " + quote(name)) } private def check_type(name: String, typ: Options.Type): Options.Opt = { val opt = check_name(name) if (opt.typ == typ) opt else error("Ill-typed option " + quote(name) + " : " + opt.typ.print + " vs. 
" + typ.print) } /* basic operations */ private def put[A](name: String, typ: Options.Type, value: String): Options = { val opt = check_type(name, typ) new Options(options + (name -> opt.copy(value = value)), section) } private def get[A](name: String, typ: Options.Type, parse: String => Option[A]): A = { val opt = check_type(name, typ) parse(opt.value) match { case Some(x) => x case None => error("Malformed value for option " + quote(name) + " : " + typ.print + " =\n" + quote(opt.value)) } } /* internal lookup and update */ class Bool_Access { def apply(name: String): Boolean = get(name, Options.Bool, Value.Boolean.unapply) def update(name: String, x: Boolean): Options = put(name, Options.Bool, Value.Boolean(x)) } val bool = new Bool_Access class Int_Access { def apply(name: String): Int = get(name, Options.Int, Value.Int.unapply) def update(name: String, x: Int): Options = put(name, Options.Int, Value.Int(x)) } val int = new Int_Access class Real_Access { def apply(name: String): Double = get(name, Options.Real, Value.Double.unapply) def update(name: String, x: Double): Options = put(name, Options.Real, Value.Double(x)) } val real = new Real_Access class String_Access { def apply(name: String): String = get(name, Options.String, s => Some(s)) def update(name: String, x: String): Options = put(name, Options.String, x) } val string = new String_Access def proper_string(name: String): Option[String] = Library.proper_string(string(name)) def seconds(name: String): Time = Time.seconds(real(name)) /* external updates */ private def check_value(name: String): Options = { val opt = check_name(name) opt.typ match { case Options.Bool => bool(name); this case Options.Int => int(name); this case Options.Real => real(name); this case Options.String => string(name); this case Options.Unknown => this } } def declare( public: Boolean, pos: Position.T, name: String, typ_name: String, value: String, description: String): Options = { options.get(name) match { case Some(other) => error("Duplicate declaration of option " + quote(name) + Position.here(pos) + Position.here(other.pos)) case None => val typ = typ_name match { case "bool" => Options.Bool case "int" => Options.Int case "real" => Options.Real case "string" => Options.String case _ => error("Unknown type for option " + quote(name) + " : " + quote(typ_name) + Position.here(pos)) } val opt = Options.Opt(public, pos, name, typ, value, value, description, section) (new Options(options + (name -> opt), section)).check_value(name) } } def add_permissive(name: String, value: String): Options = { if (options.isDefinedAt(name)) this + (name, value) else { val opt = Options.Opt(false, Position.none, name, Options.Unknown, value, value, "", "") new Options(options + (name -> opt), section) } } def + (name: String, value: String): Options = { val opt = check_name(name) (new Options(options + (name -> opt.copy(value = value)), section)).check_value(name) } def + (name: String, opt_value: Option[String]): Options = { val opt = check_name(name) opt_value match { case Some(value) => this + (name, value) case None if opt.typ == Options.Bool => this + (name, "true") case None => error("Missing value for option " + quote(name) + " : " + opt.typ.print) } } def + (str: String): Options = - { - str.indexOf('=') match { - case -1 => this + (str, None) - case i => this + (str.substring(0, i), str.substring(i + 1)) + str match { + case Properties.Eq(a, b) => this + (a, b) + case _ => this + (str, None) } - } def ++ (specs: List[Options.Spec]): Options = specs.foldLeft(this) 
{ case (x, (y, z)) => x + (y, z) } /* sections */ def set_section(new_section: String): Options = new Options(options, new_section) def sections: List[(String, List[Options.Opt])] = options.groupBy(_._2.section).toList.map({ case (a, opts) => (a, opts.toList.map(_._2)) }) /* encode */ def encode: XML.Body = { val opts = for ((_, opt) <- options.toList; if !opt.unknown) yield (opt.pos, (opt.name, (opt.typ.print, opt.value))) import XML.Encode.{string => string_, _} list(pair(properties, pair(string_, pair(string_, string_))))(opts) } /* save preferences */ def save_prefs(file: Path = Options.PREFS): Unit = { val defaults: Options = Options.init(prefs = "") val changed = (for { (name, opt2) <- options.iterator opt1 = defaults.options.get(name) if opt1.isEmpty || opt1.get.value != opt2.value } yield (name, opt2.value, if (opt1.isEmpty) " (* unknown *)" else "")).toList val prefs = changed.sortBy(_._1) .map({ case (x, y, z) => x + " = " + Outer_Syntax.quote_string(y) + z + "\n" }).mkString Isabelle_System.make_directory(file.dir) File.write_backup(file, "(* generated by Isabelle " + Date.now() + " *)\n\n" + prefs) } } class Options_Variable(init_options: Options) { private var options = init_options def value: Options = synchronized { options } private def upd(f: Options => Options): Unit = synchronized { options = f(options) } def += (name: String, x: String): Unit = upd(opts => opts + (name, x)) class Bool_Access { def apply(name: String): Boolean = value.bool(name) def update(name: String, x: Boolean): Unit = upd(opts => opts.bool.update(name, x)) } val bool = new Bool_Access class Int_Access { def apply(name: String): Int = value.int(name) def update(name: String, x: Int): Unit = upd(opts => opts.int.update(name, x)) } val int = new Int_Access class Real_Access { def apply(name: String): Double = value.real(name) def update(name: String, x: Double): Unit = upd(opts => opts.real.update(name, x)) } val real = new Real_Access class String_Access { def apply(name: String): String = value.string(name) def update(name: String, x: String): Unit = upd(opts => opts.string.update(name, x)) } val string = new String_Access def proper_string(name: String): Option[String] = Library.proper_string(string(name)) def seconds(name: String): Time = value.seconds(name) } diff --git a/src/Pure/System/scala_compiler.ML b/src/Pure/System/scala_compiler.ML --- a/src/Pure/System/scala_compiler.ML +++ b/src/Pure/System/scala_compiler.ML @@ -1,99 +1,99 @@ (* Title: Pure/System/scala_compiler.ML Author: Makarius Scala compiler operations. 
*) signature SCALA_COMPILER = sig val toplevel: bool -> string -> unit val static_check: string * Position.T -> unit end; structure Scala_Compiler: SCALA_COMPILER = struct (* check declaration *) fun toplevel interpret source = let val errors = (interpret, source) |> let open XML.Encode in pair bool string end |> YXML.string_of_body |> \<^scala>\scala_toplevel\ |> YXML.parse_body |> let open XML.Decode in list string end in if null errors then () else error (cat_lines errors) end; fun static_check (source, pos) = toplevel false ("package test\nclass __Dummy__ { __dummy__ => " ^ source ^ " }") handle ERROR msg => error (msg ^ Position.here pos); (* antiquotations *) local fun make_list bg en = space_implode "," #> enclose bg en; fun print_args [] = "" | print_args xs = make_list "(" ")" xs; fun print_types [] = "" | print_types Ts = make_list "[" "]" Ts; fun print_class (c, Ts) = c ^ print_types Ts; val types = Scan.optional (Parse.$$$ "[" |-- Parse.list1 Parse.name --| Parse.$$$ "]") []; val class = Scan.option (Parse.$$$ "(" |-- Parse.!!! (Parse.$$$ "in" |-- Parse.name -- types --| Parse.$$$ ")")); val arguments = (Parse.nat >> (fn n => replicate n "_") || Parse.list (Parse.underscore || Parse.name >> (fn T => "_ : " ^ T))) >> print_args; val args = Scan.optional (Parse.$$$ "(" |-- arguments --| Parse.$$$ ")") " _"; fun scala_name name = let val latex = Latex.string (Latex.output_ascii_breakable "." name) in Latex.enclose_block "\\isatt{" "}" [latex] end; in val _ = Theory.setup - (Thy_Output.antiquotation_verbatim_embedded \<^binding>\scala\ + (Document_Output.antiquotation_verbatim_embedded \<^binding>\scala\ (Scan.lift Args.embedded_position) (fn _ => fn (s, pos) => (static_check (s, pos); s)) #> - Thy_Output.antiquotation_raw_embedded \<^binding>\scala_type\ + Document_Output.antiquotation_raw_embedded \<^binding>\scala_type\ (Scan.lift (Args.embedded_position -- (types >> print_types))) (fn _ => fn ((t, pos), type_args) => (static_check ("type _Test_" ^ type_args ^ " = " ^ t ^ type_args, pos); scala_name (t ^ type_args))) #> - Thy_Output.antiquotation_raw_embedded \<^binding>\scala_object\ + Document_Output.antiquotation_raw_embedded \<^binding>\scala_object\ (Scan.lift Args.embedded_position) (fn _ => fn (x, pos) => (static_check ("val _test_ = " ^ x, pos); scala_name x)) #> - Thy_Output.antiquotation_raw_embedded \<^binding>\scala_method\ + Document_Output.antiquotation_raw_embedded \<^binding>\scala_method\ (Scan.lift (class -- Args.embedded_position -- types -- args)) (fn _ => fn (((class_context, (method, pos)), method_types), method_args) => let val class_types = (case class_context of SOME (_, Ts) => Ts | NONE => []); val def = "def _test_" ^ print_types (merge (op =) (method_types, class_types)); val def_context = (case class_context of NONE => def ^ " = " | SOME c => def ^ "(_this_ : " ^ print_class c ^ ") = _this_."); val source = def_context ^ method ^ method_args; val _ = static_check (source, pos); val text = (case class_context of NONE => method | SOME c => print_class c ^ "." ^ method); in scala_name text end)); end; end; diff --git a/src/Pure/Thy/bibtex.ML b/src/Pure/Thy/bibtex.ML --- a/src/Pure/Thy/bibtex.ML +++ b/src/Pure/Thy/bibtex.ML @@ -1,66 +1,66 @@ (* Title: Pure/Thy/bibtex.ML Author: Makarius BibTeX support. 
*) signature BIBTEX = sig val check_database: Position.T -> string -> (string * Position.T) list * (string * Position.T) list val check_database_output: Position.T -> string -> unit val cite_macro: string Config.T end; structure Bibtex: BIBTEX = struct (* check database *) type message = string * Position.T; fun check_database pos0 database = \<^scala>\bibtex_check_database\ database |> YXML.parse_body |> let open XML.Decode in pair (list (pair string properties)) (list (pair string properties)) end |> (apply2 o map o apsnd) (fn pos => Position.of_properties (pos @ Position.get_props pos0)); fun check_database_output pos0 database = let val (errors, warnings) = check_database pos0 database in errors |> List.app (fn (msg, pos) => Output.error_message ("Bibtex error" ^ Position.here pos ^ ":\n " ^ msg)); warnings |> List.app (fn (msg, pos) => warning ("Bibtex warning" ^ Position.here pos ^ ":\n " ^ msg)) end; (* document antiquotations *) val cite_macro = Attrib.setup_config_string \<^binding>\cite_macro\ (K "cite"); val _ = Theory.setup (Document_Antiquotation.setup_option \<^binding>\cite_macro\ (Config.put cite_macro) #> - Thy_Output.antiquotation_raw \<^binding>\cite\ + Document_Output.antiquotation_raw \<^binding>\cite\ (Scan.lift (Scan.option (Parse.verbatim || Parse.cartouche) -- Parse.and_list1 Args.name_position)) (fn ctxt => fn (opt, citations) => let val _ = Context_Position.reports ctxt (map (fn (name, pos) => (pos, Markup.citation name)) citations); val thy_name = Context.theory_long_name (Proof_Context.theory_of ctxt); val bibtex_entries = Resources.theory_bibtex_entries thy_name; val _ = if null bibtex_entries andalso thy_name <> Context.PureN then () else citations |> List.app (fn (name, pos) => if member (op =) bibtex_entries name then () else error ("Unknown Bibtex entry " ^ quote name ^ Position.here pos)); val opt_arg = (case opt of NONE => "" | SOME s => "[" ^ s ^ "]"); val arg = "{" ^ space_implode "," (map #1 citations) ^ "}"; in Latex.string ("\\" ^ Config.get ctxt cite_macro ^ opt_arg ^ arg) end)); end; diff --git a/src/Pure/Thy/bibtex.scala b/src/Pure/Thy/bibtex.scala --- a/src/Pure/Thy/bibtex.scala +++ b/src/Pure/Thy/bibtex.scala @@ -1,704 +1,706 @@ /* Title: Pure/Thy/bibtex.scala Author: Makarius BibTeX support. 
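check_database above decodes a YXML-encoded pair of error and warning message lists produced by the Scala function "bibtex_check_database"; encoder and decoder must agree combinator-by-combinator. A minimal sketch of the Scala encoding side (the messages are illustrative):

  import XML.Encode._
  val errors = List(("I found no entries", Position.none))
  val warnings = List(("empty field", Position.none))
  val yxml =
    YXML.string_of_body(
      pair(list(pair(string, properties)), list(pair(string, properties)))(
        (errors, warnings)))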
*/ package isabelle import java.io.{File => JFile} import scala.collection.mutable import scala.util.parsing.combinator.RegexParsers import scala.util.parsing.input.Reader object Bibtex { /** file format **/ def is_bibtex(name: String): Boolean = name.endsWith(".bib") class File_Format extends isabelle.File_Format { val format_name: String = "bibtex" val file_ext: String = "bib" override def theory_suffix: String = "bibtex_file" override def theory_content(name: String): String = """theory "bib" imports Pure begin bibtex_file """ + Outer_Syntax.quote_string(name) + """ end""" override def html_document(snapshot: Document.Snapshot): Option[Presentation.HTML_Document] = { val name = snapshot.node_name if (detect(name.node)) { val title = "Bibliography " + quote(snapshot.node_name.path.file_name) val content = Isabelle_System.with_tmp_file("bib", "bib") { bib => File.write(bib, snapshot.node.source) Bibtex.html_output(List(bib), style = "unsort", title = title) } Some(Presentation.HTML_Document(title, content)) } else None } } /** bibtex errors **/ def bibtex_errors(dir: Path, root_name: String): List[String] = { val log_path = dir + Path.explode(root_name).ext("blg") if (log_path.is_file) { val Error1 = """^(I couldn't open database file .+)$""".r - val Error2 = """^(.+)---line (\d+) of file (.+)""".r + val Error2 = """^(I found no .+)$""".r + val Error3 = """^(.+)---line (\d+) of file (.+)""".r Line.logical_lines(File.read(log_path)).flatMap(line => line match { case Error1(msg) => Some("Bibtex error: " + msg) - case Error2(msg, Value.Int(l), file) => + case Error2(msg) => Some("Bibtex error: " + msg) + case Error3(msg, Value.Int(l), file) => val path = File.standard_path(file) if (Path.is_wellformed(path)) { val pos = Position.Line_File(l, (dir + Path.explode(path)).canonical.implode) Some("Bibtex error" + Position.here(pos) + ":\n " + msg) } else None case _ => None }) } else Nil } /** check database **/ def check_database(database: String): (List[(String, Position.T)], List[(String, Position.T)]) = { val chunks = parse(Line.normalize(database)) var chunk_pos = Map.empty[String, Position.T] val tokens = new mutable.ListBuffer[(Token, Position.T)] var line = 1 var offset = 1 def make_pos(length: Int): Position.T = Position.Offset(offset) ::: Position.End_Offset(offset + length) ::: Position.Line(line) def advance_pos(tok: Token): Unit = { for (s <- Symbol.iterator(tok.source)) { if (Symbol.is_newline(s)) line += 1 offset += 1 } } def get_line_pos(l: Int): Position.T = if (0 < l && l <= tokens.length) tokens(l - 1)._2 else Position.none for (chunk <- chunks) { val name = chunk.name if (name != "" && !chunk_pos.isDefinedAt(name)) { chunk_pos += (name -> make_pos(chunk.heading_length)) } for (tok <- chunk.tokens) { tokens += (tok.copy(source = tok.source.replace("\n", " ")) -> make_pos(tok.source.length)) advance_pos(tok) } } Isabelle_System.with_tmp_dir("bibtex")(tmp_dir => { File.write(tmp_dir + Path.explode("root.bib"), tokens.iterator.map(p => p._1.source).mkString("", "\n", "\n")) File.write(tmp_dir + Path.explode("root.aux"), "\\bibstyle{plain}\n\\bibdata{root}\n\\citation{*}") Isabelle_System.bash("\"$ISABELLE_BIBTEX\" root", cwd = tmp_dir.file) val Error = """^(.*)---line (\d+) of file root.bib$""".r val Warning = """^Warning--(.+)$""".r val Warning_Line = """--line (\d+) of file root.bib$""".r val Warning_in_Chunk = """^Warning--(.+) in (.+)$""".r val log_file = tmp_dir + Path.explode("root.blg") val lines = if (log_file.is_file) Line.logical_lines(File.read(log_file)) else Nil val 
        val (errors, warnings) =
          if (lines.isEmpty) (Nil, Nil)
          else {
            lines.zip(lines.tail ::: List("")).flatMap(
              {
                case (Error(msg, Value.Int(l)), _) =>
                  Some((true, (msg, get_line_pos(l))))
                case (Warning_in_Chunk(msg, name), _) if chunk_pos.isDefinedAt(name) =>
                  Some((false, (Word.capitalize(msg + " in entry " + quote(name)), chunk_pos(name))))
                case (Warning(msg), Warning_Line(Value.Int(l))) =>
                  Some((false, (Word.capitalize(msg), get_line_pos(l))))
                case (Warning(msg), _) =>
                  Some((false, (Word.capitalize(msg), Position.none)))
                case _ => None
              }).partition(_._1)
          }
        (errors.map(_._2), warnings.map(_._2))
      })
  }

  object Check_Database extends Scala.Fun_String("bibtex_check_database")
  {
    val here = Scala_Project.here
    def apply(database: String): String =
    {
      import XML.Encode._
      YXML.string_of_body(
        pair(list(pair(string, properties)), list(pair(string, properties)))(
          check_database(database)))
    }
  }



  /** document model **/

  /* entries */

  def entries(text: String): List[Text.Info[String]] =
  {
    val result = new mutable.ListBuffer[Text.Info[String]]
    var offset = 0
    for (chunk <- Bibtex.parse(text)) {
      val end_offset = offset + chunk.source.length
      if (chunk.name != "" && !chunk.is_command)
        result += Text.Info(Text.Range(offset, end_offset), chunk.name)
      offset = end_offset
    }
    result.toList
  }

  def entries_iterator[A, B <: Document.Model](models: Map[A, B])
    : Iterator[Text.Info[(String, B)]] =
  {
    for {
      (_, model) <- models.iterator
      info <- model.bibtex_entries.iterator
    } yield info.map((_, model))
  }


  /* completion */

  def completion[A, B <: Document.Model](
    history: Completion.History,
    rendering: Rendering,
    caret: Text.Offset,
    models: Map[A, B]): Option[Completion.Result] =
  {
    for {
      Text.Info(r, name) <- rendering.citations(rendering.before_caret_range(caret)).headOption
      name1 <- Completion.clean_name(name)

      original <- rendering.get_text(r)
      original1 <- Completion.clean_name(Library.perhaps_unquote(original))

      entries =
        (for {
          Text.Info(_, (entry, _)) <- entries_iterator(models)
          if entry.toLowerCase.containsSlice(name1.toLowerCase) && entry != original1
        } yield entry).toList
      if entries.nonEmpty

      items =
        entries.sorted.map({
          case entry =>
            val full_name = Long_Name.qualify(Markup.CITATION, entry)
            val description = List(entry, "(BibTeX entry)")
            val replacement = quote(entry)
            Completion.Item(r, original, full_name, description, replacement, 0, false)
        }).sorted(history.ordering).take(rendering.options.int("completion_limit"))
    } yield Completion.Result(r, original, false, items)
  }



  /** content **/

  private val months = List(
    "jan", "feb", "mar", "apr", "may", "jun", "jul", "aug", "sep", "oct", "nov", "dec")
  def is_month(s: String): Boolean = months.contains(s.toLowerCase)

  private val commands = List("preamble", "string")
  def is_command(s: String): Boolean = commands.contains(s.toLowerCase)

  sealed case class Entry(
    kind: String,
    required: List[String],
    optional_crossref: List[String],
    optional_other: List[String])
  {
    val optional_standard: List[String] = List("url", "doi", "ee")

    def is_required(s: String): Boolean = required.contains(s.toLowerCase)
    def is_optional(s: String): Boolean =
      optional_crossref.contains(s.toLowerCase) ||
      optional_other.contains(s.toLowerCase) ||
      optional_standard.contains(s.toLowerCase)

    def fields: List[String] =
      required ::: optional_crossref ::: optional_other ::: optional_standard

    def template: String =
      "@" + kind + "{,\n" + fields.map(x => "  " + x + " = {},\n").mkString + "}\n"
  }
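
  /* For illustration (assumed behavior): get_entry("article").map(_.template)
     produces a skeleton like

       @Article{,
         author = {},
         title = {},
         journal = {},
         ...
       }

     i.e. required fields first, then the crossref, other, and standard
     optional fields. */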
"pages", "month", "note")), Entry("InProceedings", List("author", "title"), List("booktitle", "year"), List("editor", "volume", "number", "series", "pages", "month", "address", "organization", "publisher", "note")), Entry("InCollection", List("author", "title", "booktitle"), List("publisher", "year"), List("editor", "volume", "number", "series", "type", "chapter", "pages", "edition", "month", "address", "note")), Entry("InBook", List("author", "editor", "title", "chapter"), List("publisher", "year"), List("volume", "number", "series", "type", "address", "edition", "month", "pages", "note")), Entry("Proceedings", List("title", "year"), List(), List("booktitle", "editor", "volume", "number", "series", "address", "month", "organization", "publisher", "note")), Entry("Book", List("author", "editor", "title"), List("publisher", "year"), List("volume", "number", "series", "address", "edition", "month", "note")), Entry("Booklet", List("title"), List(), List("author", "howpublished", "address", "month", "year", "note")), Entry("PhdThesis", List("author", "title", "school", "year"), List(), List("type", "address", "month", "note")), Entry("MastersThesis", List("author", "title", "school", "year"), List(), List("type", "address", "month", "note")), Entry("TechReport", List("author", "title", "institution", "year"), List(), List("type", "number", "address", "month", "note")), Entry("Manual", List("title"), List(), List("author", "organization", "address", "edition", "month", "year", "note")), Entry("Unpublished", List("author", "title", "note"), List(), List("month", "year")), Entry("Misc", List(), List(), List("author", "title", "howpublished", "month", "year", "note"))) def get_entry(kind: String): Option[Entry] = known_entries.find(entry => entry.kind.toLowerCase == kind.toLowerCase) def is_entry(kind: String): Boolean = get_entry(kind).isDefined /** tokens and chunks **/ object Token { object Kind extends Enumeration { val COMMAND = Value("command") val ENTRY = Value("entry") val KEYWORD = Value("keyword") val NAT = Value("natural number") val STRING = Value("string") val NAME = Value("name") val IDENT = Value("identifier") val SPACE = Value("white space") val COMMENT = Value("ignored text") val ERROR = Value("bad input") } } sealed case class Token(kind: Token.Kind.Value, source: String) { def is_kind: Boolean = kind == Token.Kind.COMMAND || kind == Token.Kind.ENTRY || kind == Token.Kind.IDENT def is_name: Boolean = kind == Token.Kind.NAME || kind == Token.Kind.IDENT def is_ignored: Boolean = kind == Token.Kind.SPACE || kind == Token.Kind.COMMENT def is_malformed: Boolean = kind == Token.Kind.ERROR def is_open: Boolean = kind == Token.Kind.KEYWORD && (source == "{" || source == "(") } case class Chunk(kind: String, tokens: List[Token]) { val source = tokens.map(_.source).mkString private val content: Option[List[Token]] = tokens match { case Token(Token.Kind.KEYWORD, "@") :: body if body.nonEmpty => (body.init.filterNot(_.is_ignored), body.last) match { case (tok :: Token(Token.Kind.KEYWORD, "{") :: toks, Token(Token.Kind.KEYWORD, "}")) if tok.is_kind => Some(toks) case (tok :: Token(Token.Kind.KEYWORD, "(") :: toks, Token(Token.Kind.KEYWORD, ")")) if tok.is_kind => Some(toks) case _ => None } case _ => None } def name: String = content match { case Some(tok :: _) if tok.is_name => tok.source case _ => "" } def heading_length: Int = if (name == "") 1 else { tokens.takeWhile(tok => !tok.is_open).foldLeft(0) { case (n, tok) => n + tok.source.length } } def is_ignored: Boolean = kind == "" && 
    def is_ignored: Boolean = kind == "" && tokens.forall(_.is_ignored)

    def is_malformed: Boolean = kind == "" || tokens.exists(_.is_malformed)

    def is_command: Boolean = Bibtex.is_command(kind) && name != "" && content.isDefined
    def is_entry: Boolean = Bibtex.is_entry(kind) && name != "" && content.isDefined
  }



  /** parsing **/

  // context of partial line-oriented scans
  abstract class Line_Context
  case object Ignored extends Line_Context
  case object At extends Line_Context
  case class Item_Start(kind: String) extends Line_Context
  case class Item_Open(kind: String, end: String) extends Line_Context
  case class Item(kind: String, end: String, delim: Delimited) extends Line_Context

  case class Delimited(quoted: Boolean, depth: Int)
  val Closed = Delimited(false, 0)

  private def token(kind: Token.Kind.Value)(source: String): Token = Token(kind, source)
  private def keyword(source: String): Token = Token(Token.Kind.KEYWORD, source)


  // See also https://ctan.org/tex-archive/biblio/bibtex/base/bibtex.web
  // module @<Scan for and process a \.{.bib} command or database entry@>.

  object Parsers extends RegexParsers
  {
    /* white space and comments */

    override val whiteSpace = "".r

    private val space = """[ \t\n\r]+""".r ^^ token(Token.Kind.SPACE)
    private val spaces = rep(space)


    /* ignored text */

    private val ignored: Parser[Chunk] =
      rep1("""(?i)([^@]+|@[ \t]*comment)""".r) ^^ {
        case ss => Chunk("", List(Token(Token.Kind.COMMENT, ss.mkString))) }

    private def ignored_line: Parser[(Chunk, Line_Context)] =
      ignored ^^ { case a => (a, Ignored) }


    /* delimited string: outermost "..." or {...} and body with balanced {...} */

    // see also bibtex.web: scan_a_field_token_and_eat_white, scan_balanced_braces
    private def delimited_depth(delim: Delimited): Parser[(String, Delimited)] =
      new Parser[(String, Delimited)]
      {
        require(if (delim.quoted) delim.depth > 0 else delim.depth >= 0, "bad delimiter depth")

        def apply(in: Input) =
        {
          val start = in.offset
          val end = in.source.length

          var i = start
          var q = delim.quoted
          var d = delim.depth
          var finished = false
          while (!finished && i < end) {
            val c = in.source.charAt(i)

            if (c == '"' && d == 0) { i += 1; d = 1; q = true }
            else if (c == '"' && d == 1 && q) {
              i += 1; d = 0; q = false; finished = true
            }
            else if (c == '{') { i += 1; d += 1 }
            else if (c == '}') {
              if (d == 1 && !q || d > 1) { i += 1; d -= 1; if (d == 0) finished = true }
              else { i = start; finished = true }
            }
            else if (d > 0) i += 1
            else finished = true
          }
          if (i == start) Failure("bad input", in)
          else {
            val s = in.source.subSequence(start, i).toString
            Success((s, Delimited(q, d)), in.drop(i - start))
          }
        }
      }.named("delimited_depth")
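
    /* Sketch of the delimiter discipline above (assumption, following
       bibtex.web): scanning {two {nested} groups} passes depths
       1-2-1-0 and finishes at depth 0, while "quoted {braces}" opens a
       quoted string whose inner braces must still balance before the
       closing quote terminates it. */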
    private def delimited: Parser[Token] =
      delimited_depth(Closed) ^? { case (s, delim) if delim == Closed => Token(Token.Kind.STRING, s) }

    private def recover_delimited: Parser[Token] =
      """["{][^@]*""".r ^^ token(Token.Kind.ERROR)

    def delimited_line(ctxt: Item): Parser[(Chunk, Line_Context)] =
      delimited_depth(ctxt.delim) ^^ { case (s, delim1) =>
        (Chunk(ctxt.kind, List(Token(Token.Kind.STRING, s))), ctxt.copy(delim = delim1)) } |
      recover_delimited ^^ { case a => (Chunk(ctxt.kind, List(a)), Ignored) }


    /* other tokens */

    private val at = "@" ^^ keyword

    private val nat = "[0-9]+".r ^^ token(Token.Kind.NAT)

    private val name = """[\x21-\x7f&&[^"#%'(),={}]]+""".r ^^ token(Token.Kind.NAME)

    private val identifier =
      """[\x21-\x7f&&[^"#%'(),={}0-9]][\x21-\x7f&&[^"#%'(),={}]]*""".r

    private val ident = identifier ^^ token(Token.Kind.IDENT)

    val other_token = "[=#,]".r ^^ keyword | (nat | (ident | space))


    /* body */

    private val body =
      delimited | (recover_delimited | other_token)

    private def body_line(ctxt: Item) =
      if (ctxt.delim.depth > 0)
        delimited_line(ctxt)
      else
        delimited_line(ctxt) |
        other_token ^^ { case a => (Chunk(ctxt.kind, List(a)), ctxt) } |
        ctxt.end ^^ { case a => (Chunk(ctxt.kind, List(keyword(a))), Ignored) }


    /* items: command or entry */

    private val item_kind =
      identifier ^^ { case a =>
        val kind =
          if (is_command(a)) Token.Kind.COMMAND
          else if (is_entry(a)) Token.Kind.ENTRY
          else Token.Kind.IDENT
        Token(kind, a)
      }

    private val item_begin =
      "{" ^^ { case a => ("}", keyword(a)) } |
      "(" ^^ { case a => (")", keyword(a)) }

    private def item_name(kind: String) =
      kind.toLowerCase match {
        case "preamble" => failure("")
        case "string" => identifier ^^ token(Token.Kind.NAME)
        case _ => name
      }

    private val item_start =
      at ~ spaces ~ item_kind ~ spaces ^^
        { case a ~ b ~ c ~ d => (c.source, List(a) ::: b ::: List(c) ::: d) }

    private val item: Parser[Chunk] =
      (item_start ~ item_begin ~ spaces) into
        { case (kind, a) ~ ((end, b)) ~ c =>
            opt(item_name(kind)) ~ rep(body) ~ opt(end ^^ keyword) ^^ {
              case d ~ e ~ f => Chunk(kind, a ::: List(b) ::: c ::: d.toList ::: e ::: f.toList) } }

    private val recover_item: Parser[Chunk] =
      at ~ "[^@]*".r ^^ { case a ~ b => Chunk("", List(a, Token(Token.Kind.ERROR, b))) }


    /* chunks */

    val chunk: Parser[Chunk] = ignored | (item | recover_item)

    def chunk_line(ctxt: Line_Context): Parser[(Chunk, Line_Context)] =
    {
      ctxt match {
        case Ignored =>
          ignored_line |
          at ^^ { case a => (Chunk("", List(a)), At) }

        case At =>
          space ^^ { case a => (Chunk("", List(a)), ctxt) } |
          item_kind ^^ { case a => (Chunk(a.source, List(a)), Item_Start(a.source)) } |
          recover_item ^^ { case a => (a, Ignored) } |
          ignored_line

        case Item_Start(kind) =>
          space ^^ { case a => (Chunk(kind, List(a)), ctxt) } |
          item_begin ^^ { case (end, a) => (Chunk(kind, List(a)), Item_Open(kind, end)) } |
          recover_item ^^ { case a => (a, Ignored) } |
          ignored_line

        case Item_Open(kind, end) =>
          space ^^ { case a => (Chunk(kind, List(a)), ctxt) } |
          item_name(kind) ^^ { case a => (Chunk(kind, List(a)), Item(kind, end, Closed)) } |
          body_line(Item(kind, end, Closed)) |
          ignored_line

        case item_ctxt: Item =>
          body_line(item_ctxt) |
          ignored_line

        case _ => failure("")
      }
    }
  }


  /* parse */

  def parse(input: CharSequence): List[Chunk] =
    Parsers.parseAll(Parsers.rep(Parsers.chunk), Scan.char_reader(input)) match {
      case Parsers.Success(result, _) => result
      case _ => error("Unexpected failure to parse input:\n" + input.toString)
    }
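
  /* Usage sketch (assumed): Bibtex.parse("@misc{x, note = {y}}") yields a
     single entry chunk with name "x", whereas parse_line below re-parses one
     edited line at a time, threading the resulting Line_Context to the next
     line in support of incremental token markup in the editor. */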
  def parse_line(input: CharSequence, context: Line_Context): (List[Chunk], Line_Context) =
  {
    var in: Reader[Char] = Scan.char_reader(input)
    val chunks = new mutable.ListBuffer[Chunk]
    var ctxt = context
    while (!in.atEnd) {
      Parsers.parse(Parsers.chunk_line(ctxt), in) match {
        case Parsers.Success((x, c), rest) => chunks += x; ctxt = c; in = rest
        case Parsers.NoSuccess(_, rest) =>
          error("Unexpected failure to parse input:\n" + rest.source.toString)
      }
    }
    (chunks.toList, ctxt)
  }



  /** HTML output **/

  private val output_styles =
    List(
      "" -> "html-n",
      "plain" -> "html-n",
      "alpha" -> "html-a",
      "named" -> "html-n",
      "paragraph" -> "html-n",
      "unsort" -> "html-u",
      "unsortlist" -> "html-u")

  def html_output(bib: List[Path],
    title: String = "Bibliography",
    body: Boolean = false,
    citations: List[String] = List("*"),
    style: String = "",
    chronological: Boolean = false): String =
  {
    Isabelle_System.with_tmp_dir("bibtex")(tmp_dir =>
      {
        /* database files */

        val bib_files = bib.map(_.drop_ext)
        val bib_names =
        {
          val names0 = bib_files.map(_.file_name)
          if (Library.duplicates(names0).isEmpty) names0
          else names0.zipWithIndex.map({ case (name, i) => (i + 1).toString + "-" + name })
        }

        for ((a, b) <- bib_files zip bib_names) {
          Isabelle_System.copy_file(a.ext("bib"), tmp_dir + Path.basic(b).ext("bib"))
        }


        /* style file */

        val bst =
          output_styles.toMap.get(style) match {
            case Some(base) => base + (if (chronological) "c" else "") + ".bst"
            case None =>
              error("Bad style for bibtex HTML output: " + quote(style) +
                "\n(expected: " + commas_quote(output_styles.map(_._1)) + ")")
          }
        Isabelle_System.copy_file(Path.explode("$BIB2XHTML_HOME/bst") + Path.explode(bst), tmp_dir)


        /* result */

        val in_file = Path.explode("bib.aux")
        val out_file = Path.explode("bib.html")

        File.write(tmp_dir + in_file,
          bib_names.mkString("\\bibdata{", ",", "}\n") +
          citations.map(cite => "\\citation{" + cite + "}\n").mkString)

        Isabelle_System.bash(
          "\"$BIB2XHTML_HOME/main/bib2xhtml.pl\" -B \"$ISABELLE_BIBTEX\"" +
          " -u -s " + Bash.string(proper_string(style) getOrElse "empty") +
          (if (chronological) " -c" else "") +
          (if (title != "") " -h " + Bash.string(title) + " " else "") +
          " " + File.bash_path(in_file) + " " + File.bash_path(out_file),
          cwd = tmp_dir.file).check

        val html = File.read(tmp_dir + out_file)

        if (body) {
          cat_lines(
            split_lines(html).
              dropWhile(line => !line.startsWith("