Isabelle NEWS -- history of user-relevant changes
=================================================

(Note: Isabelle/jEdit shows a tree-view of the NEWS file in Sidekick.)


New in this Isabelle version
----------------------------

*** General ***

* Timeouts for Isabelle/ML tools are subject to system option "timeout_scale" --- this was already used for the overall session build process before, and allows to adapt to slow machines. The underlying Timeout.apply in Isabelle/ML treats an original timeout specification 0 as no timeout; before it meant immediate timeout. Rare INCOMPATIBILITY in boundary cases.

* Remote provers from SystemOnTPTP (notably for Sledgehammer) are now managed via Isabelle/Scala instead of Perl; the dependency on libwww-perl has been eliminated (notably on Linux). Rare INCOMPATIBILITY: HTTP proxy configuration now works via JVM properties https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/net/doc-files/net-properties.html

* More symbol definitions for the Z Notation (Isabelle fonts and LaTeX). See also the group "Z Notation" in the Symbols dockable of Isabelle/jEdit.


*** Isabelle/jEdit Prover IDE ***

* More robust 'proof' outline for method "induct": supports nested cases.

* Support for built-in font substitution of the jEdit text area.

* The main plugin for Isabelle/jEdit can be deactivated and reactivated as documented --- this was broken at least since Isabelle2018.


*** Document preparation ***

* More predefined symbols: \ \ (LaTeX package "stmaryrd"), \ \ (LaTeX package "pifont").

* High-quality blackboard-bold symbols from font "txmia" (LaTeX package "pxfonts"): \\\\\\\\\\\\\\\\\\\\\\\\\\.

* Document antiquotations for ML text have been refined: "def" and "ref" variants support index entries, e.g. @{ML} (no entry) vs. @{ML_def} (bold entry) vs. @{ML_ref} (regular entry); @{ML_type} supports explicit type arguments for constructors (only relevant for the index), e.g.
@{ML_type \'a list\} vs. @{ML_type 'a \list\}; @{ML_op} has been renamed to @{ML_infix}. Minor INCOMPATIBILITY concerning name and syntax.

* Option "document_logo" determines whether an instance of the Isabelle logo should be created in the document output directory. The given string specifies the name of the logo variant, while "_" (underscore) refers to the unnamed variant. The output file name is always "isabelle_logo.pdf".

* Option "document_preprocessor" specifies the name of an executable that is run within the document output directory, after preparing the document sources and before the actual build process. This allows to apply ad-hoc patches, without requiring a separate "build" script.

* Option "document_build" determines the document build engine, as defined in Isabelle/Scala (as a system service). The following engines are provided by the Isabelle distribution:

  - "lualatex" (default): use ISABELLE_LUALATEX for a standard LaTeX build with optional ISABELLE_BIBTEX and ISABELLE_MAKEINDEX

  - "pdflatex": as above, but use ISABELLE_PDFLATEX (legacy mode for special LaTeX styles)

  - "build": delegate to the executable "./build pdf"

  The presence of a "build" command within the document output directory explicitly requires document_build=build. Minor INCOMPATIBILITY, need to adjust session ROOT options.

* The command-line tool "isabelle latex" has been discontinued, INCOMPATIBILITY for old document build scripts.

  - Former "isabelle latex -o sty" has become obsolete: Isabelle .sty files are automatically generated within the document output directory.

  - Former "isabelle latex -o pdf" should be replaced by "$ISABELLE_LUALATEX root" or "$ISABELLE_PDFLATEX root" (without quotes), according to the intended LaTeX engine.

  - Former "isabelle latex -o bbl" should be replaced by "$ISABELLE_BIBTEX root" (without quotes).

  - Former "isabelle latex -o idx" should be replaced by "$ISABELLE_MAKEINDEX root" (without quotes).
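The "build" engine is only needed for sessions that retain a custom "./build pdf" script. A minimal ROOT sketch (session, theory and file names are hypothetical):

  session Example in "Example" = HOL +
    options [document = pdf, document_build = build]
    theories Ex
    document_files "root.tex" "build"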
* Option "document_bibliography" explicitly enables the use of bibtex; the default is to check for the presence of root.bib, but it could have a different name.

* Improved LaTeX typesetting of \...\ using \guilsinglleft ... \guilsinglright. INCOMPATIBILITY, need to use \usepackage[T1]{fontenc} (which is now also the default in "isabelle mkroot").

* Simplified typesetting of \...\ using \guillemotleft ... \guillemotright from \usepackage[T1]{fontenc} --- \usepackage{babel} is no longer required.


*** Pure ***

* "global_interpretation" is applicable in instantiation and overloading targets, and in any nested target built on top of a target supporting "global_interpretation".


*** HOL ***

* Theory "HOL-Library.Multiset": the dedicated predicate "multiset" is gone, use an explicit expression instead. Minor INCOMPATIBILITY.

* Theory "HOL-Library.Multiset": consolidated abbreviations Mempty, Melem, not_Melem to empty_mset, member_mset, not_member_mset respectively. Minor INCOMPATIBILITY.

* Theory "HOL-Library.Multiset": consolidated operation and fact names:

  inf_subset_mset ~> inter_mset
  sup_subset_mset ~> union_mset
  multiset_inter_def ~> inter_mset_def
  sup_subset_mset_def ~> union_mset_def
  multiset_inter_count ~> count_inter_mset
  sup_subset_mset_count ~> count_union_mset

* Theory "HOL-Library.Multiset": syntax precedence for membership operations has been adjusted to match the corresponding precedences on sets. Rare INCOMPATIBILITY.

* Theory "HOL-Library.Cardinality": the code generator setup based on the type classes finite_UNIV and card_UNIV has been moved to "HOL-Library.Code_Cardinality", to avoid incompatibilities with other code setups for sets in AFP/Containers. Applications relying on this code setup should import "HOL-Library.Code_Cardinality". Minor INCOMPATIBILITY.
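The Code_Cardinality setup is restored by a plain import; a minimal sketch (theory name hypothetical):

  theory Ex
    imports Main "HOL-Library.Code_Cardinality"
  begin
  end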
* Sessions "HOL-Analysis" and "HOL-Probability": indexed products of discrete distributions, negative binomial distribution, Hoeffding's inequality, Chernoff bounds, Cauchy–Schwarz inequality for nn_integral, and some more small lemmas. Some theorems that were stated awkwardly before were corrected. Minor INCOMPATIBILITY.

* Session "HOL-Analysis": the complex Arg function has been identified with the function "arg" of Complex_Main, renaming arg -> Arg also in the names of arg_bounded. Minor INCOMPATIBILITY.

* Theorems "antisym" and "eq_iff" in class "order" have been renamed to "order.antisym" and "order.eq_iff", to coexist locally with "antisym" and "eq_iff" from locale "ordering". INCOMPATIBILITY: significant potential for change can be avoided if interpretations of type class "order" are replaced or augmented by interpretations of locale "ordering".

* Theorem "swap_def" is now always qualified as "Fun.swap_def". Minor INCOMPATIBILITY; note that for most applications less elementary lemmas exist.

* Theory "HOL-Library.Permutation" has been renamed to the more specific "HOL-Library.List_Permutation". Note that most notions from that theory are already present in theory "HOL-Combinatorics.Permutations". INCOMPATIBILITY.

* Dedicated session "HOL-Combinatorics". INCOMPATIBILITY: theories "Permutations", "List_Permutation" (formerly "Permutation"), "Stirling", "Multiset_Permutations", "Perm" have been moved there from session HOL-Library.

* Theory "HOL-Combinatorics.Transposition" provides the elementary swap operation "transpose".

* Lemma "permutes_induct" has been given stronger hypotheses and named premises. INCOMPATIBILITY.

* Combinator "Fun.swap" has been resolved into a mere input abbreviation in separate theory "Transposition" in HOL-Combinatorics. INCOMPATIBILITY.

* Infix syntax for bit operations AND, OR, XOR is now organized in bundle bit_operations_syntax. INCOMPATIBILITY.
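Since the infix syntax is no longer available globally, a theory that uses it can include the bundle locally; a minimal sketch (the sample term is only for illustration):

  context
    includes bit_operations_syntax
  begin

  term "x AND (y OR z) XOR w"

  end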
* Bit operations set_bit, unset_bit and flip_bit are now class operations. INCOMPATIBILITY.

* Theory Bit_Operations is now part of HOL-Main. Minor INCOMPATIBILITY.

* Abbreviation "max_word" has been moved to session Word_Lib in the AFP, as have constants "shiftl1", "shiftr1", "sshiftr1", "bshiftr1", "setBit", "clearBit". See further the changelog in theory Guide there. INCOMPATIBILITY.

* New simp rules: less_exp, min.absorb1, min.absorb2, min.absorb3, min.absorb4, max.absorb1, max.absorb2, max.absorb3, max.absorb4. Minor INCOMPATIBILITY.

* Sledgehammer:
  - Removed legacy "lam_lifting" (synonym for "lifting") from option "lam_trans". Minor INCOMPATIBILITY.
  - Renamed "hide_lams" to "opaque_lifting" in option "lam_trans". Minor INCOMPATIBILITY.
  - Added "opaque_combs" to option "lam_trans": lambda expressions are rewritten using combinators, but the combinators are kept opaque, i.e. without definitions.

* Metis:
  - Renamed option "hide_lams" to "opaque_lifting". Minor INCOMPATIBILITY.


*** ML ***

* ML antiquotations \<^try>\expr\ and \<^can>\expr\ operate directly on the given ML expression, in contrast to functions "try" and "can" that modify the application of a function.

* ML antiquotations for conditional ML text:

  \<^if_linux>\...\
  \<^if_macos>\...\
  \<^if_windows>\...\
  \<^if_unix>\...\

* External bash processes are always managed by Isabelle/Scala, in contrast to Isabelle2021 where this was only done for macOS on Apple Silicon. The main Isabelle/ML interface is Isabelle_System.bash_process with result type Process_Result.T (resembling class Process_Result in Scala); the derived operations Isabelle_System.bash and Isabelle_System.bash_output provide similar functionality as before. Rare INCOMPATIBILITY due to subtle semantic differences:

  - Processes invoked from Isabelle/ML actually run in the context of the Java VM of Isabelle/Scala. The settings environment and current working directory are usually the same on both sides, but there can be subtle corner cases (e.g. unexpected uses of "cd" or "putenv" in ML).

  - Output via stdout and stderr is line-oriented: Unix vs. Windows line-endings are normalized towards Unix; the presence or absence of a final newline is irrelevant. The original lines are available as Process_Result.out_lines/err_lines; the concatenated versions Process_Result.out/err *omit* a trailing newline (using Library.trim_line, which was occasionally seen in applications before, but is no longer necessary).

  - Output needs to be plain text encoded in UTF-8: Isabelle/Scala recodes it temporarily as UTF-16. This works for well-formed Unicode text, but not for arbitrary byte strings. In such cases, the bash script should write temporary files, managed by Isabelle/ML operations like Isabelle_System.with_tmp_file to create a file name and File.read to retrieve its content.

  - Just like any other Scala function invoked from ML, Isabelle_System.bash_process requires a proper PIDE session context. This could be a regular batch session (e.g. "isabelle build"), a PIDE editor session (e.g. "isabelle jedit"), or headless PIDE (e.g. "isabelle dump" or "isabelle server"). Note that old "isabelle console" or raw "isabelle process" don't have that.

  The new Process_Result.timing works as in Isabelle/Scala, based on direct measurements of the bash_process wrapper in C: elapsed time is always available, CPU time is only available on Linux and macOS, GC time is unavailable.

* Likewise, the following Isabelle/ML system operations are run in the context of Isabelle/Scala:

  - Isabelle_System.make_directory
  - Isabelle_System.copy_dir
  - Isabelle_System.copy_file
  - Isabelle_System.copy_base_file
  - Isabelle_System.rm_tree
  - Isabelle_System.download


*** System ***

* Each Isabelle component may specify a Scala/Java jar module declaratively via etc/build.props (file names are relative to the component directory). E.g.
see $ISABELLE_HOME/etc/build.props with further explanations in the "system" manual.

* Command-line tool "isabelle scala_build" allows to invoke the build process of all Scala/Java modules explicitly. Normally this is done implicitly on demand, e.g. for "isabelle scala" or "isabelle jedit".

* Command-line tool "isabelle scala_project" has been improved in various ways:
  - sources from all components with etc/build.props are included,
  - sources for the jEdit text editor and the Isabelle/jEdit plugins (jedit_base and jedit_main) are included by default,
  - more sources may be given on the command-line,
  - options -f and -D make the tool more convenient.

* Isabelle/jEdit is now composed more conventionally from the original jEdit text editor in $JEDIT_HOME (with minor patches), plus two Isabelle plugins that are produced in $JEDIT_SETTINGS/jars on demand. The main isabelle.jedit module is now part of Isabelle/Scala (as one big $ISABELLE_SCALA_JAR).

* ML profiling has been updated and reactivated, after some degradation in Isabelle2021:
  - "isabelle build -o threads=1 -o profiling=..." works properly within the PIDE session context;
  - "isabelle profiling_report" now uses the session build database (like "isabelle log");
  - output uses non-intrusive tracing messages, instead of warnings.

* System option "system_log" specifies an optional log file for internal messages produced by Output.system_message in Isabelle/ML; the value "true" refers to the console progress of the build job. This works for "isabelle build" or any derivative of it.

* System options of type string may be set to "true" using the short notation of type bool. E.g. "isabelle build -o system_log".

* System option "document=true" is an alias for "document=pdf" and thus can be used in the short form. E.g. "isabelle build -o document".

* Command-line tool "isabelle version" supports repository archives (without the full .hg directory). More options.
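The short option forms combine with other options on the command-line as usual; a minimal sketch (session name hypothetical):

  isabelle build -o system_log -o document MySession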
* Obsolete settings variable ISABELLE_PLATFORM32 has been discontinued. Note that only Windows supports old 32 bit executables, via settings variable ISABELLE_WINDOWS_PLATFORM32. Everything else should be ISABELLE_PLATFORM64 (generic POSIX) or ISABELLE_WINDOWS_PLATFORM64 (native Windows) or ISABELLE_APPLE_PLATFORM64 (Apple Silicon).



New in Isabelle2021 (February 2021)
-----------------------------------

*** General ***

* On macOS, the IsabelleXYZ.app directory layout now follows the other platforms, without indirection via Contents/Resources/. INCOMPATIBILITY, use e.g. IsabelleXYZ.app/bin/isabelle instead of former IsabelleXYZ.app/Isabelle/bin/isabelle or IsabelleXYZ.app/Isabelle/Contents/Resources/IsabelleXYZ/bin/isabelle.

* HTML presentation uses rich markup produced by Isabelle/PIDE, resulting in more colors and links.

* HTML presentation includes auxiliary files (e.g. ML) for each theory.

* Proof method "subst" is confined to the original subgoal range: its included distinct_subgoals_tac no longer affects unrelated subgoals. Rare INCOMPATIBILITY.

* The Theory_Data extend operation is obsolete and needs to be the identity function; merge should be conservative and not reset to the empty value. Subtle INCOMPATIBILITY and change of semantics (due to Theory.join_theory from Isabelle2020). Special extend/merge behaviour at the beginning of a new theory can be achieved via Theory.at_begin.


*** Isabelle/jEdit Prover IDE ***

* Improved GUI look-and-feel: the portable and scalable "FlatLaf Light" is used by default on all platforms (appearance similar to IntelliJ IDEA).

* Improved markup for theory header imports: hyperlinks for theory files work without formal checking of content.

* The prover process can download auxiliary files (e.g. 'ML_file') for theories with a remote URL. This requires the external "curl" program.

* Action "isabelle.goto-entity" (shortcut CS+d) jumps to the definition of the formal entity at the caret position.
* The visual feedback on caret entity focus is normally restricted to definitions within the visible text area. The keyboard modifier "CS" overrides this: then all defining and referencing positions are shown. See also option "jedit_focus_modifier".

* The jEdit status line includes widgets for both JVM and ML heap usage. Ongoing ML garbage collection is shown as "ML cleanup".

* The Monitor dockable provides buttons to request a full garbage collection and sharing of live data on the ML heap. It also includes information about the Java Runtime system.

* PIDE support for session ROOTS: markup for directories.

* Update to jedit-5.6.0, the latest release. This version works properly on macOS by default, without the special MacOSX plugin.

* Action "full-screen-mode" (shortcut F11 or S+F11) has been modified for better approximate window size on macOS and Linux/X11.

* Improved GUI support for macOS 11.1 Big Sur: native fullscreen mode, but non-native look-and-feel (FlatLaf).

* Hyperlinks to various file formats (.pdf, .png, etc.) open an external viewer, instead of re-using the jEdit text editor.

* IDE support for Naproche-SAD: Proof Checking of Natural Mathematical Documents. See also $NAPROCHE_HOME/examples for files with .ftl or .ftl.tex extension. The corresponding Naproche-SAD server process can be disabled by setting the system option naproche_server=false and restarting the Isabelle application.


*** Document preparation ***

* Keyword 'document_theories' within ROOT specifies theories from other sessions that should be included in the generated document source directory. This does not affect the generated session.tex: \input{...} needs to be used separately.

* The standard LaTeX engine is now lualatex, according to settings variable ISABELLE_PDFLATEX. This is mostly upwards compatible with old pdflatex, but text encoding needs to conform strictly to utf8. Rare INCOMPATIBILITY.
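The 'document_theories' keyword is used within a ROOT entry; a minimal sketch (session and theory names hypothetical):

  session Example = HOL +
    options [document = pdf]
    sessions "HOL-Library"
    theories Ex
    document_theories "HOL-Library.Multiset"
    document_files "root.tex"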
* Discontinued the obsolete DVI format and ISABELLE_LATEX settings variable: document output is always PDF.

* Antiquotation @{tool} refers to Isabelle command-line tools, with completion and formal reference to the source (external script or internal Scala function).

* Antiquotation @{bash_function} refers to GNU bash functions that are checked within the Isabelle settings environment.

* Antiquotations @{scala}, @{scala_object}, @{scala_type}, @{scala_method} refer to checked Isabelle/Scala entities.


*** Pure ***

* Session Pure-Examples contains notable examples for Isabelle/Pure (former entries of HOL-Isar_Examples).

* Named contexts (locale and class specifications, locale and class context blocks) allow bundle mixins for the surface context. This allows syntax notations to be organized within bundles conveniently. See theory "HOL-ex.Specifications_with_bundle_mixins" for examples and the isar-ref manual for syntax descriptions.

* Definitions in locales produce a rule which can be added as a congruence rule to protect foundational terms during simplification.

* Consolidated terminology and function signatures for nested targets:

  - Local_Theory.begin_nested replaces Local_Theory.open_target
  - Local_Theory.end_nested replaces Local_Theory.close_target
  - The combination of Local_Theory.begin_nested and Local_Theory.end_nested(_result) replaces Local_Theory.subtarget(_result)

  INCOMPATIBILITY.

* Local_Theory.init replaces Generic_Target.init. Minor INCOMPATIBILITY.


*** HOL ***

* Session HOL-Examples contains notable examples for Isabelle/HOL (former entries of HOL-Isar_Examples, HOL-ex etc.).

* An updated version of the veriT solver is now included as an Isabelle component. It can be used in the "smt" proof method via "smt (verit)" or via "declare [[smt_solver = verit]]" in the context; see also session HOL-Word-SMT_Examples.

* Zipperposition 2.0 is now included as an Isabelle component for experimentation, e.g. in "sledgehammer [prover = zipperposition]".
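Usage of veriT in a proof is a one-liner; a minimal sketch (the lemma is only for illustration):

  lemma "(x::int) + 0 = x"
    by (smt (verit))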
* Sledgehammer:
  - support veriT in proof preplay
  - take advantage of more cores in proof preplay

* Updated the Metis prover underlying the "metis" proof method to version 2.4 (release 20180810). The new version fixes one soundness defect and two incompleteness defects. Very slight INCOMPATIBILITY.

* Nitpick/Kodkod may be invoked directly within the running Isabelle/Scala session (instead of an external Java process): this improves reactivity and saves resources. This experimental feature is guarded by system option "kodkod_scala" (default: true in PIDE interaction, false in batch builds).

* Simproc "defined_all" and rewrite rule "subst_all" perform more aggressive substitution with variables from assumptions. INCOMPATIBILITY, consider repairing proofs locally like this:

  supply subst_all [simp del] [[simproc del: defined_all]]

* Simproc "datatype_no_proper_subterm" rewrites equalities "lhs = rhs" on datatypes to "False" if either side is a proper subexpression of the other (for any datatype with a reasonable size function).

* Syntax for state monad combinators fcomp and scomp is organized in bundle state_combinator_syntax. Minor INCOMPATIBILITY.

* Syntax for reflected term syntax is organized in bundle term_syntax, discontinuing the previous locale term_syntax. Minor INCOMPATIBILITY.

* New constant "power_int" for exponentiation with integer exponent, written as "x powi n".

* Added the "at most 1" quantifier, Uniq.

* For the natural numbers, "Sup {} = 0".

* New constant semiring_char gives the characteristic of any type of class semiring_1, with the convenient notation CHAR('a). For example, CHAR(nat) = CHAR(int) = CHAR(real) = 0, CHAR(17) = 17.

* HOL-Computational_Algebra.Polynomial: Definition and basic properties of algebraic integers.

* Library theory "Bit_Operations" with generic bit operations.

* Library theory "Signed_Division" provides operations for signed division, instantiated for type int.
* Theory "Multiset": removed misleading notation \# for sum_mset; replaced with \\<^sub>#. Analogous notation for prod_mset also exists now.

* New theory "HOL-Library.Word" takes over material from former session "HOL-Word". INCOMPATIBILITY: need to adjust imports.

* Theory "HOL-Library.Word": Type word is restricted to bit strings consisting of at least one bit. INCOMPATIBILITY.

* Theory "HOL-Library.Word": Bit operations NOT, AND, OR, XOR are based on generic algebraic bit operations from theory "HOL-Library.Bit_Operations". INCOMPATIBILITY.

* Theory "HOL-Library.Word": Most operations on type word are set up for transfer and lifting. INCOMPATIBILITY.

* Theory "HOL-Library.Word": Generic type conversions. INCOMPATIBILITY, sometimes additional rewrite rules must be added to applications to get a confluent system again.

* Theory "HOL-Library.Word": Uniform polymorphic "mask" operation for both types int and word. INCOMPATIBILITY.

* Theory "HOL-Library.Word": Syntax for signed compare operators has been consolidated with the syntax of regular compare operators. Minor INCOMPATIBILITY.

* Former session "HOL-Word": Various operations dealing with bit values represented as reversed lists of bools are separated into theory Reversed_Bit_Lists in session Word_Lib in the AFP. INCOMPATIBILITY.

* Former session "HOL-Word": Theory "Word_Bitwise" has been moved to AFP entry Word_Lib as theory "Bitwise". INCOMPATIBILITY.

* Former session "HOL-Word": Compound operation "bin_split" simplifies by default into its components "drop_bit" and "take_bit". INCOMPATIBILITY.

* Former session "HOL-Word": Operations lsb, msb and set_bit are separated into theories Least_significant_bit, Most_significant_bit and Generic_set_bit respectively, in session Word_Lib in the AFP. INCOMPATIBILITY.

* Former session "HOL-Word": The ancient int numeral representation has been factored out into separate theory "Ancient_Numeral" in session Word_Lib in the AFP. INCOMPATIBILITY.
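Several of the INCOMPATIBILITY notes above amount to import adjustments; a minimal sketch (theory name hypothetical):

  theory Ex
    imports "HOL-Library.Word"
  begin
  end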
* Former session "HOL-Word": Operations "bin_last", "bin_rest", "bin_nth", "bintrunc", "sbintrunc", "norm_sint", "bin_cat" and "max_word" are now mere input abbreviations. Minor INCOMPATIBILITY.

* Former session "HOL-Word": Misc ancient material has been factored out into separate theories and moved to session Word_Lib in the AFP. See theory "Guide" there for further information. INCOMPATIBILITY.

* Session HOL-TPTP: The "tptp_isabelle" and "tptp_sledgehammer" commands are in working order again, as opposed to outputting "GaveUp" on nearly all problems.

* Session "HOL-Hoare": concrete syntax only for Hoare triples, not abstract language constructors.

* Session "HOL-Hoare": now provides a total correctness logic as well.


*** FOL ***

* Added the "at most 1" quantifier, Uniq, as in HOL.

* Simproc "defined_all" and rewrite rule "subst_all" have been changed as in HOL.


*** ML ***

* Antiquotations @{scala_function}, @{scala}, @{scala_thread} refer to registered Isabelle/Scala functions (of type String => String): invocation works via the PIDE protocol.

* Path.append is available as the overloaded "+" operator, similar to the corresponding Isabelle/Scala operation.

* ML statistics via an external Poly/ML process: this allows monitoring the runtime system while the ML program sleeps.


*** System ***

* The Isabelle server allows user-defined commands via isabelle_scala_service.

* Update/rebuild external provers on currently supported OS platforms, notably CVC4 1.8, E prover 2.5, SPASS 3.8ds, CSDP 6.1.1.

* The command-line tool "isabelle log" prints prover messages from the build database of the given session, following the order of theory sources, instead of erratic parallel evaluation. Consequently, the session log file is restricted to system messages of the overall build process, and thus becomes more informative.

* Discontinued the obsolete "isabelle display" tool, and the DVI_VIEWER settings variable.

* The command-line tool "isabelle logo" only outputs PDF; obsolete EPS (for DVI documents) has been discontinued. Former option -n has been turned into -o with explicit file name. Minor INCOMPATIBILITY.

* The command-line tool "isabelle components" supports new options -u and -x to manage $ISABELLE_HOME_USER/etc/components without manual editing of Isabelle configuration files.

* The shell function "isabelle_directory" (within etc/settings of components) augments the list of special directories for persistent symbolic path names. This improves portability of heap images and session databases. It used to be hard-wired for Isabelle + AFP, but other projects may now participate on equal terms.

* The command-line tool "isabelle process" now prints output to stdout/stderr separately and incrementally, instead of just one bulk to stdout after termination. Potential INCOMPATIBILITY for external tools.

* The command-line tool "isabelle console" now supports interrupts properly (on Linux and macOS).

* Batch-builds via "isabelle build" use a PIDE session with a special protocol: this allows to invoke Isabelle/Scala operations from Isabelle/ML. Big build jobs (e.g. AFP) require extra heap space for the java process, e.g. like this in $ISABELLE_HOME_USER/etc/settings:

  ISABELLE_TOOL_JAVA_OPTIONS="$ISABELLE_TOOL_JAVA_OPTIONS -Xmx8g"

  This includes full PIDE markup, if option "build_pide_reports" is enabled.

* The command-line tool "isabelle build" provides option -P DIR to produce PDF/HTML presentation in the specified directory; -P: refers to the standard directory according to ISABELLE_BROWSER_INFO / ISABELLE_BROWSER_INFO_SYSTEM settings. Generated PDF documents are taken from the build database --- from this or earlier builds with option document=pdf.

* The command-line tool "isabelle document" generates theory documents on the spot, using the underlying session build database (exported LaTeX sources or existing PDF files). INCOMPATIBILITY, the former "isabelle document" tool was rather different and has been discontinued.

* The command-line tool "isabelle sessions" explores the structure of Isabelle sessions and prints result names in topological order (on stdout).

* The Isabelle/Scala "Progress" interface changed slightly and "No_Progress" has been discontinued. INCOMPATIBILITY, use "new Progress" instead.

* General support for Isabelle/Scala system services, configured via the shell function "isabelle_scala_service" in etc/settings (e.g. of an Isabelle component); see implementations of class Isabelle_System.Service in Isabelle/Scala. This supersedes former "isabelle_scala_tools" and "isabelle_file_format": minor INCOMPATIBILITY.

* The syntax of theory load commands (for auxiliary files) is now specified in Isabelle/Scala, as an instance of class isabelle.Command_Span.Load_Command registered via isabelle_scala_service in etc/settings. This allows more flexible schemes than just a list of file extensions. Minor INCOMPATIBILITY, e.g. see theory HOL-SPARK.SPARK_Setup to emulate the old behaviour.

* JVM system property "isabelle.laf" has been discontinued; the default Swing look-and-feel is "FlatLaf Light".

* Isabelle/Phabricator supports Ubuntu 20.04 LTS.

* Isabelle/Phabricator setup has been updated to follow ongoing development: libphutil has been discontinued. Minor INCOMPATIBILITY: existing server installations should remove libphutil from /usr/local/bin/isabelle-phabricator-upgrade and each installation root directory (e.g. /var/www/phabricator-vcs/libphutil).

* Experimental support for the arm64-linux platform. The reference platform is Raspberry Pi 4 with 8 GB RAM running Pi OS (64 bit).

* Support for Apple Silicon, using mostly x86_64-darwin runtime translation via Rosetta 2 (e.g. Poly/ML and external provers), but also some native arm64-darwin executables (e.g. Java).
New in Isabelle2020 (April 2020) -------------------------------- *** General *** * Session ROOT files need to specify explicit 'directories' for import of theory files. Directories cannot be shared by different sessions. (Recall that import of theories from other sessions works via session-qualified theory names, together with suitable 'sessions' declarations in the ROOT.) * Internal derivations record dependencies on oracles and other theorems accurately, including the implicit type-class reasoning wrt. proven class relations and type arities. In particular, the formal tagging with "Pure.skip_proofs" of results stemming from "instance ... sorry" is now propagated properly to theorems depending on such type instances. * Command 'sorry' (oracle "Pure.skip_proofs") is more precise about the actual proposition that is assumed in the goal and proof context. This requires at least Proofterm.proofs = 1 to show up in theorem dependencies. * Command 'thm_oracles' prints all oracles used in given theorems, covering the full graph of transitive dependencies. * Command 'thm_deps' prints immediate theorem dependencies of the given facts. The former graph visualization has been discontinued, because it was hardly usable. * Refined treatment of proof terms, including type-class proofs for minor object-logics (FOL, FOLP, Sequents). * The inference kernel is now confined to one main module: structure Thm, without the former circular dependency on structure Axclass. * Mixfix annotations may use "' " (single quote followed by space) to separate delimiters (as documented in the isar-ref manual), without requiring an auxiliary empty block. A literal single quote needs to be escaped properly. Minor INCOMPATIBILITY. *** Isar *** * The proof method combinator (subproofs m) applies the method expression m consecutively to each subgoal, constructing individual subproofs internally. 
This impacts the internal construction of proof terms: it makes a cascade of let-expressions within the derivation tree and may thus improve scalability. * Attribute "trace_locales" activates tracing of locale instances during roundup. It replaces the diagnostic command 'print_dependencies', which has been discontinued. *** Isabelle/jEdit Prover IDE *** * Prover IDE startup is now much faster, because theory dependencies are no longer explored in advance. The overall session structure with its declarations of 'directories' is sufficient to locate theory files. Thus the "session focus" of option "isabelle jedit -S" has become obsolete (likewise for "isabelle vscode_server -S"). Existing option "-R" is both sufficient and more convenient to start editing a particular session. * Actions isabelle.tooltip (CS+b) and isabelle.message (CS+m) display tooltip message popups, corresponding to mouse hovering with/without the CONTROL/COMMAND key pressed. * The following actions allow to navigate errors within the current document snapshot: isabelle.first-error (CS+a) isabelle.last-error (CS+z) isabelle.next-error (CS+n) isabelle.prev-error (CS+p) * Support more brackets: \ \ (intended for implicit argument syntax). * Action isabelle.jconsole (menu item Plugins / Isabelle / Java/VM Monitor) applies the jconsole tool on the running Isabelle/jEdit process. This allows to monitor resource usage etc. * More adequate default font sizes for Linux on HD / UHD displays: automatic font scaling is usually absent on Linux, in contrast to Windows and macOS. * The default value for the jEdit property "view.antiAlias" (menu item Utilities / Global Options / Text Area / Anti Aliased smooth text) is now "subpixel HRGB", instead of former "standard". Especially on Linux this often leads to faster text rendering, but can also cause problems with odd color shades. 
An alternative is to switch back to "standard" here, and set the following Java system property: isabelle jedit -Dsun.java2d.opengl=true This can be made persistent via JEDIT_JAVA_OPTIONS in $ISABELLE_HOME_USER/etc/settings. For the "Isabelle2020" desktop application there is a corresponding options file in the same directory. *** Isabelle/VSCode Prover IDE *** * Update of State and Preview panels to use new WebviewPanel API of VSCode. *** HOL *** * Improvements of the 'lift_bnf' command: - Add support for quotient types. - Generate transfer rules for the lifted map/set/rel/pred constants (theorems "._transfer_raw"). * Term_XML.Encode/Decode.term uses compact representation of Const "typargs" from the given declaration environment. This also makes more sense for translations to lambda-calculi with explicit polymorphism. INCOMPATIBILITY, use Term_XML.Encode/Decode.term_raw in special applications. * ASCII membership syntax concerning big operators for infimum and supremum has been discontinued. INCOMPATIBILITY. * Removed multiplicativity assumption from class "normalization_semidom". Introduced various new intermediate classes with the multiplicativity assumption; many theorem statements (especially involving GCD/LCM) had to be adapted. This allows for a more natural instantiation of the algebraic typeclasses for e.g. Gaussian integers. INCOMPATIBILITY. * Clear distinction between types for bits (False / True) and Z2 (0 / 1): theory HOL-Library.Bit has been renamed accordingly. INCOMPATIBILITY. * Dynamic facts "algebra_split_simps" and "field_split_simps" correspond to algebra_simps and field_simps but contain more aggressive rules potentially splitting goals; algebra_split_simps roughly replaces sign_simps and field_split_simps can be used instead of divide_simps. INCOMPATIBILITY. 
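As an illustration of the new dynamic facts, a goal involving division might be discharged along the following lines (a hypothetical lemma, not from the distribution):

```isabelle
lemma "a \<noteq> (0::'a::field) \<Longrightarrow> a * b / a = b"
  by (auto simp add: field_split_simps)
```

The more aggressive splitting rules make side conditions on divisors explicit, which is why these collections are not part of the default simpset.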
* Theory HOL.Complete_Lattices: renamed Inf_Sup -> Inf_eq_Sup and Sup_Inf -> Sup_eq_Inf. * Theory HOL-Library.Monad_Syntax: infix operation "bind" (\) now associates to the left, as is customary. * Theory HOL-Library.Ramsey: full finite Ramsey's theorem with multiple colours and arbitrary exponents. * Session HOL-Proofs: builds faster thanks to better treatment of proof terms in Isabelle/Pure. * Session HOL-Word: bitwise NOT-operator has proper prefix syntax. Minor INCOMPATIBILITY. * Session HOL-Analysis: proof method "metric" implements a decision procedure for simple linear statements in metric spaces. * Session HOL-Complex_Analysis has been split off from HOL-Analysis. *** ML *** * Theory construction may be forked internally; the operation Theory.join_theory recovers a single result theory. See also the example in theory "HOL-ex.Join_Theory". * Antiquotation @{oracle_name} inlines a formally checked oracle name. * Minimal support for a soft-type system within the Isabelle logical framework (module Soft_Type_System). * Former Variable.auto_fixes has been replaced by the slightly more general Proof_Context.augment: it is subject to an optional soft-type system of the underlying object-logic. Minor INCOMPATIBILITY. * More scalable Export.export using XML.tree to avoid premature string allocations, with convenient shortcut XML.blob. Minor INCOMPATIBILITY. * Prover IDE support for the underlying Poly/ML compiler (not the basis library). Open $ML_SOURCES/ROOT.ML in Isabelle/jEdit to browse the implementation with full markup. *** System *** * Standard rendering for more Isabelle symbols: \ \ \ \ * The command-line tool "isabelle scala_project" creates a Gradle project configuration for Isabelle/Scala/jEdit, to support Scala IDEs such as IntelliJ IDEA. * The command-line tool "isabelle phabricator_setup" facilitates self-hosting of the Phabricator software-development platform, with support for Git, Mercurial, Subversion repositories.
This helps to avoid monoculture and to escape the gravity of centralized version control by GitHub and/or Bitbucket. For further documentation, see chapter "Phabricator server administration" in the "system" manual. A notable example installation is https://isabelle-dev.sketis.net/. * The command-line tool "isabelle hg_setup" simplifies the setup of Mercurial repositories, with hosting via Phabricator or SSH file server access. * The command-line tool "isabelle imports" has been discontinued: strict checking of session directories enforces session-qualified theory names in applications -- users are responsible for specifying session ROOT entries properly. * The command-line tool "isabelle dump" and its underlying Isabelle/Scala module isabelle.Dump have become more scalable, by splitting sessions and supporting a base logic image. Minor INCOMPATIBILITY in options and parameters. * The command-line tool "isabelle build_docker" has been slightly improved: it is now properly documented in the "system" manual. * Isabelle/Scala support for the Linux platform (Ubuntu): packages, users, system services. * Isabelle/Scala support for proof terms (with full type/term information) in module isabelle.Term. * Isabelle/Scala: more scalable output of YXML files, e.g. relevant for "isabelle dump". * Theory export via Isabelle/Scala has been reworked. The former "fact" name space is now split into individual "thm" items: names are potentially indexed, such as "foo" for singleton facts, or "bar(1)", "bar(2)", "bar(3)" for multi-facts. Theorem dependencies are now exported as well: this spans an overall dependency graph of internal inferences; it might help to reconstruct the formal structure of theory libraries. See also the module isabelle.Export_Theory in Isabelle/Scala. * Theory export of structured specifications, based on internal declarations of Spec_Rules by packages like 'definition', 'inductive', 'primrec', 'function'.
* Old settings variables ISABELLE_PLATFORM and ISABELLE_WINDOWS_PLATFORM have been discontinued -- deprecated since Isabelle2018. * More complete x86_64 platform support on macOS, notably Catalina where old x86 has been discontinued. * Update to GHC stack 2.1.3 with stackage lts-13.19/ghc-8.6.4. * Update to OCaml Opam 2.0.6 (using ocaml 4.05.0 as before).

New in Isabelle2019 (June 2019)
-------------------------------

*** General ***

* The font collection "Isabelle DejaVu" is systematically derived from the existing "DejaVu" fonts, with variants "Sans Mono", "Sans", "Serif" and styles "Normal", "Bold", "Italic/Oblique", "Bold-Italic/Oblique". The DejaVu base fonts are restricted to well-defined Unicode ranges and augmented by special Isabelle symbols, taken from the former "IsabelleText" font (which is no longer provided separately). The line metrics and overall rendering quality are closer to original DejaVu. INCOMPATIBILITY with display configuration expecting the old "IsabelleText" font: use e.g. "Isabelle DejaVu Sans Mono" instead. * The Isabelle fonts render "\" properly as superscript "-1". * Old-style inner comments (* ... *) within the term language are no longer supported (legacy feature in Isabelle2018). * Old-style {* verbatim *} tokens are explicitly marked as legacy feature and will be removed soon. Use \cartouche\ syntax instead, e.g. via "isabelle update_cartouches -t" (available since Isabelle2015). * Infix operators that begin or end with a "*" are now parenthesized without additional spaces, e.g. "(*)" instead of "( * )". Minor INCOMPATIBILITY. * Mixfix annotations may use cartouches instead of old-style double quotes, e.g. (infixl \+\ 60). The command-line tool "isabelle update -u mixfix_cartouches" allows to update existing theory sources automatically. * ML setup commands (e.g. 'setup', 'method_setup', 'parse_translation') need to provide a closed expression -- without trailing semicolon. Minor INCOMPATIBILITY.
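The cartouche form of mixfix annotations mentioned above reads as follows in theory source, here with a hypothetical constant and the ASCII representation \<open>...\<close> of cartouche delimiters:

```isabelle
definition orelse :: "bool \<Rightarrow> bool \<Rightarrow> bool"  (infixl \<open>orelse\<close> 55)
  where "a orelse b \<longleftrightarrow> a \<or> b"
```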
* Commands 'generate_file', 'export_generated_files', and 'compile_generated_files' support a stateless (PIDE-conformant) model for generated sources and compiled binaries of other languages. The compilation process is managed in Isabelle/ML, and results exported to the session database for further use (e.g. with "isabelle export" or "isabelle build -e"). *** Isabelle/jEdit Prover IDE *** * Fonts for the text area, gutter, GUI elements etc. use the "Isabelle DejaVu" collection by default, which provides uniform rendering quality with the usual Isabelle symbols. Line spacing no longer needs to be adjusted: properties for the old IsabelleText font had "Global Options / Text Area / Extra vertical line spacing (in pixels): -2", it now defaults to 1, but 0 works as well. * The jEdit File Browser is more prominent in the default GUI layout of Isabelle/jEdit: various virtual file-systems provide access to Isabelle resources, notably via "favorites:" (or "Edit Favorites"). * Further markup and rendering for "plain text" (e.g. informal prose) and "raw text" (e.g. verbatim sources). This improves the visual appearance of formal comments inside the term language, or in general for repeated alternation of formal and informal text. * Action "isabelle-export-browser" points the File Browser to the theory exports of the current buffer, based on the "isabelle-export:" virtual file-system. The directory view needs to be reloaded manually to follow ongoing document processing. * Action "isabelle-session-browser" points the File Browser to session information, based on the "isabelle-session:" virtual file-system. Its entries are structured according to chapter / session names, the open operation is redirected to the session ROOT file. * Support for user-defined file-formats via class isabelle.File_Format in Isabelle/Scala (e.g. see isabelle.Bibtex.File_Format), configured via the shell function "isabelle_file_format" in etc/settings (e.g. of an Isabelle component). 
* System option "jedit_text_overview" allows to disable the text overview column. * Command-line options "-s" and "-u" of "isabelle jedit" override the default for system option "system_heaps" that determines the heap storage directory for "isabelle build". Option "-n" is now clearly separated from option "-s". * The Isabelle/jEdit desktop application uses the same options as "isabelle jedit" for its internal "isabelle build" process: the implicit option "-o system_heaps" (or "-s") has been discontinued. This reduces the potential for surprise wrt. command-line tools. * The official download of the Isabelle/jEdit application already contains heap images for Isabelle/HOL within its main directory: thus the first encounter becomes faster and more robust (e.g. when run from a read-only directory). * Isabelle DejaVu fonts are available with hinting by default, which is relevant for low-resolution displays. This may be disabled via system option "isabelle_fonts_hinted = false" in $ISABELLE_HOME_USER/etc/preferences -- it occasionally yields better results. * OpenJDK 11 has quite different font rendering, with better glyph shapes and improved sub-pixel anti-aliasing. In some situations results might be *worse* than Oracle Java 8, though -- a proper HiDPI / UHD display is recommended. * OpenJDK 11 supports GTK version 2.2 and 3 (according to system property jdk.gtk.version). The factory default is version 3, but ISABELLE_JAVA_SYSTEM_OPTIONS includes "-Djdk.gtk.version=2.2" to make this more conservative (as in Java 8). Depending on the GTK theme configuration, "-Djdk.gtk.version=3" might work better or worse. *** Document preparation *** * Document markers are formal comments of the form \<^marker>\marker_body\ that are stripped from document output: the effect is to modify the semantic presentation context or to emit markup to the PIDE document. Some predefined markers are taken from the Dublin Core Metadata Initiative, e.g. 
\<^marker>\contributor arg\ or \<^marker>\license arg\ and produce PIDE markup that can be retrieved from the document database. * Old-style command tags %name are re-interpreted as markers with proof-scope \<^marker>\tag (proof) name\ and produce LaTeX environments as before. Potential INCOMPATIBILITY: multiple markers are composed in canonical order, resulting in a reversed list of tags in the presentation context. * Marker \<^marker>\tag name\ does not apply to the proof of a top-level goal statement by default (e.g. 'theorem', 'lemma'). This is a subtle change of semantics wrt. old-style %name. * In Isabelle/jEdit, the string "\tag" may be completed to a "\<^marker>\tag \" template. * Document antiquotation option "cartouche" indicates if the output should be delimited as cartouche; this takes precedence over the analogous option "quotes". * Many document antiquotations are internally categorized as "embedded" and expect one cartouche argument, which is typically used with the \<^control>\cartouche\ notation (e.g. \<^term>\\x y. x\). The cartouche delimiters are stripped in output of the source (antiquotation option "source"), but it is possible to enforce delimiters via option "source_cartouche", e.g. @{term [source_cartouche] \\x y. x\}. *** Isar *** * Implicit cases goal1, goal2, goal3, etc. have been discontinued (legacy feature since Isabelle2016). * More robust treatment of structural errors: begin/end blocks take precedence over goal/proof. This is particularly relevant for the headless PIDE session and server. * Command keywords of kind thy_decl / thy_goal may be more specifically fit into the traditional document model of "definition-statement-proof" via thy_defn / thy_stmt / thy_goal_defn / thy_goal_stmt. 
*** HOL *** * Command 'export_code' produces output as logical files within the theory context, as well as formal session exports that can be materialized via command-line tools "isabelle export" or "isabelle build -e" (with 'export_files' in the session ROOT). Isabelle/jEdit also provides a virtual file-system "isabelle-export:" that can be explored in the regular file-browser. A 'file_prefix' argument allows to specify an explicit name prefix for the target file (SML, OCaml, Scala) or directory (Haskell); the default is "export" with a consecutive number within each theory. * Command 'export_code': the 'file' argument is now legacy and will be removed soon: writing to the physical file-system is not well-defined in a reactive/parallel application like Isabelle. The empty 'file' argument has been discontinued already: it is superseded by the file-browser in Isabelle/jEdit on "isabelle-export:". Minor INCOMPATIBILITY. * Command 'code_reflect' no longer supports the 'file' argument: it has been superseded by 'file_prefix' for stateless file management as in 'export_code'. Minor INCOMPATIBILITY. * Code generation for OCaml: proper strings are used for literals. Minor INCOMPATIBILITY. * Code generation for OCaml: Zarith supersedes Nums as library for proper integer arithmetic. The library is located via standard invocations of "ocamlfind" (via ISABELLE_OCAMLFIND settings variable). The environment provided by "isabelle ocaml_setup" already contains this tool and the required packages. Minor INCOMPATIBILITY. * Code generation for Haskell: code includes for Haskell must contain proper module frame, nothing is added magically any longer. INCOMPATIBILITY. * Code generation: slightly more conventional syntax for 'code_stmts' antiquotation. Minor INCOMPATIBILITY. * Theory List: the precedence of the list_update operator has changed: "f a [n := x]" now needs to be written "(f a)[n := x]". 
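The changed precedence of the list_update operator can be seen in a small hypothetical example: indexing into the result of a function application now requires explicit parentheses.

```isabelle
lemma "[0, 0, 0::nat][1 := 5] = [0, 5, 0]"
  by simp

term "(f a)[n := x]"  \<comment> \<open>formerly writable as f a [n := x]\<close>
```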
* The functions \, \, \, \ (not the corresponding binding operators) now have the same precedence as any other prefix function symbol. Minor INCOMPATIBILITY. * Simplified syntax setup for big operators under image. In rare situations, type conversions are not inserted implicitly any longer and need to be given explicitly. Auxiliary abbreviations INFIMUM, SUPREMUM, UNION, INTER should now rarely occur in output and are just retained as a migration aid. Abbreviations MINIMUM and MAXIMUM are gone. INCOMPATIBILITY. * The simplifier uses image_cong_simp as a congruence rule. The historic and not really well-formed congruence rules INF_cong*, SUP_cong* are not used by default any longer. INCOMPATIBILITY; consider using declare image_cong_simp [cong del] in extreme situations. * INF_image and SUP_image are no default simp rules any longer. INCOMPATIBILITY, prefer image_comp as simp rule if needed. * Strong congruence rules (with =simp=> in the premises) for constant f are now uniformly called f_cong_simp, in accordance with congruence rules produced for mappers by the datatype package. INCOMPATIBILITY. * Retired lemma card_Union_image; use the simpler card_UN_disjoint instead. INCOMPATIBILITY. * Facts sum_mset.commute and prod_mset.commute have been renamed to sum_mset.swap and prod_mset.swap, similarly to sum.swap and prod.swap. INCOMPATIBILITY. * ML structure Inductive: slightly more conventional naming schema. Minor INCOMPATIBILITY. * ML: Various _global variants of specification tools have been removed. Minor INCOMPATIBILITY, prefer combinators Named_Target.theory_map[_result] to lift specifications to the global theory level. * Theory HOL-Library.Simps_Case_Conv: 'case_of_simps' now supports overlapping and non-exhaustive patterns and handles arbitrarily nested patterns. It uses the same algorithm as HOL-Library.Code_Lazy, which assumes sequential left-to-right pattern matching. The generated equation no longer tuples the arguments on the right-hand side.
INCOMPATIBILITY. * Theory HOL-Library.Multiset: the \# operator now has the same precedence as any other prefix function symbol. * Theory HOL-Library.Cardinal_Notations has been discontinued in favor of the bundle cardinal_syntax (available in theory Main). Minor INCOMPATIBILITY. * Session HOL-Library and HOL-Number_Theory: Exponentiation by squaring, used for computing powers in class "monoid_mult" and modular exponentiation. * Session HOL-Computational_Algebra: Formal Laurent series and overhaul of Formal power series. * Session HOL-Number_Theory: More material on residue rings: Carmichael's function, primitive roots, more properties for "ord". * Session HOL-Analysis: Better organization and much more material at the level of abstract topological spaces. * Session HOL-Algebra: Free abelian groups, etc., ported from HOL Light; algebraic closure of a field by de Vilhena and Baillon. * Session HOL-Homology has been added. It is a port of HOL Light's homology library, with new proofs of "invariance of domain" and related results. * Session HOL-SPARK: .prv files are no longer written to the file-system, but exported to the session database. Results may be retrieved via "isabelle build -e HOL-SPARK-Examples" on the command-line. * Sledgehammer: - The URL for SystemOnTPTP, which is used by remote provers, has been updated. - The machine-learning-based filter MaSh has been optimized to take less time (in most cases). * SMT: reconstruction is now possible using the SMT solver veriT. * Session HOL-Word: - New theory More_Word as comprehensive entrance point. - Merged type class bitss into type class bits. INCOMPATIBILITY. *** ML *** * Command 'generate_file' allows to produce sources for other languages, with antiquotations in the Isabelle context (only the control-cartouche form). The default "cartouche" antiquotation evaluates an ML expression of type string and inlines the result as a string literal of the target language.
For example, this works for Haskell as follows:

  generate_file "Pure.hs" = \
    module Isabelle.Pure where

    allConst, impConst, eqConst :: String
    allConst = \\<^const_name>\Pure.all\\
    impConst = \\<^const_name>\Pure.imp\\
    eqConst = \\<^const_name>\Pure.eq\\
  \

See also commands 'export_generated_files' and 'compile_generated_files' to use the results. * ML evaluation (notably via command 'ML' or 'ML_file') is subject to option ML_environment to select a named environment, such as "Isabelle" for Isabelle/ML, or "SML" for official Standard ML. * ML antiquotation @{master_dir} refers to the master directory of the underlying theory, i.e. the directory of the theory file. * ML antiquotation @{verbatim} inlines its argument as string literal, preserving newlines literally. The short form \<^verbatim>\abc\ is particularly useful. * Local_Theory.reset is no longer available in user space. Regular definitional packages should use balanced blocks of Local_Theory.open_target versus Local_Theory.close_target instead, or the Local_Theory.subtarget(_result) combinator. Rare INCOMPATIBILITY. * Original PolyML.pointerEq is retained as a convenience for tools that don't use Isabelle/ML (where this is called "pointer_eq"). *** System *** * Update to OpenJDK 11: the current long-term support version of Java. * Update to Poly/ML 5.8 allows to use the native x86_64 platform without the full overhead of 64-bit values everywhere. This special x86_64_32 mode provides up to 16GB ML heap, while program code and stacks are allocated elsewhere. Thus approx. 5 times more memory is available for applications compared to old x86 mode (which is no longer used by Isabelle). The switch to the x86_64 CPU architecture also avoids compatibility problems with Linux and macOS, where 32-bit applications are gradually phased out. * System option "checkpoint" has been discontinued: obsolete thanks to improved memory management in Poly/ML.
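The ML_environment option mentioned above is a configuration option and may be changed within a theory; a hypothetical sketch (the SML fragment deliberately uses only the official Standard ML basis):

```isabelle
declare [[ML_environment = "SML"]]
ML \<open>val n = List.length [1, 2, 3]\<close>

declare [[ML_environment = "Isabelle"]]
ML \<open>writeln (string_of_int 42)\<close>
```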
* System option "system_heaps" determines where to store the session image of "isabelle build" (and other tools using that internally). Former option "-s" is superseded by option "-o system_heaps". INCOMPATIBILITY in command-line syntax. * Session directory $ISABELLE_HOME/src/Tools/Haskell provides some source modules for Isabelle tools implemented in Haskell, notably for Isabelle/PIDE. * The command-line tool "isabelle build -e" retrieves theory exports from the session build database, using 'export_files' in session ROOT entries. * The command-line tool "isabelle update" uses Isabelle/PIDE in batch-mode to update theory sources based on semantic markup produced in Isabelle/ML. Actual updates depend on system options that may be enabled via "-u OPT" (for "update_OPT"), see also $ISABELLE_HOME/etc/options section "Theory update". Theory sessions are specified as in "isabelle dump". * The command-line tool "isabelle update -u control_cartouches" changes antiquotations into control-symbol format (where possible): @{NAME} becomes \<^NAME> and @{NAME ARG} becomes \<^NAME>\ARG\. * Support for Isabelle command-line tools defined in Isabelle/Scala. Instances of class Isabelle_Scala_Tools may be configured via the shell function "isabelle_scala_tools" in etc/settings (e.g. of an Isabelle component). * Isabelle Server command "use_theories" supports "nodes_status_delay" for continuous output of node status information. The time interval is specified in seconds; a negative value means it is disabled (default). * Isabelle Server command "use_theories" terminates more robustly in the presence of structurally broken sources: full consolidation of theories is no longer required. * OCaml tools and libraries are now accessed via ISABELLE_OCAMLFIND, which needs to point to a suitable version of "ocamlfind" (e.g. via OPAM, see below). INCOMPATIBILITY: settings variables ISABELLE_OCAML and ISABELLE_OCAMLC are no longer supported.
* Support for managed installations of Glasgow Haskell Compiler and OCaml via the following command-line tools:

  isabelle ghc_setup
  isabelle ghc_stack
  isabelle ocaml_setup
  isabelle ocaml_opam

The global installation state is determined by the following settings (and corresponding directory contents):

  ISABELLE_STACK_ROOT
  ISABELLE_STACK_RESOLVER
  ISABELLE_GHC_VERSION
  ISABELLE_OPAM_ROOT
  ISABELLE_OCAML_VERSION

After setup, the following Isabelle settings are automatically redirected (overriding existing user settings):

  ISABELLE_GHC
  ISABELLE_OCAMLFIND

The old meaning of these settings as locally installed executables may be recovered by purging the directories ISABELLE_STACK_ROOT / ISABELLE_OPAM_ROOT, or by resetting these variables in $ISABELLE_HOME_USER/etc/settings.

New in Isabelle2018 (August 2018)
---------------------------------

*** General ***

* Session-qualified theory names are mandatory: it is no longer possible to refer to unqualified theories from the parent session. INCOMPATIBILITY for old developments that have not been updated to Isabelle2017 yet (using the "isabelle imports" tool). * Only the most fundamental theory names are global, usually the entry points to major logic sessions: Pure, Main, Complex_Main, HOLCF, IFOL, FOL, ZF, ZFC etc. INCOMPATIBILITY, need to use qualified names for formerly global "HOL-Probability.Probability" and "HOL-SPARK.SPARK". * Global facts need to be closed: no free variables and no hypotheses. Rare INCOMPATIBILITY. * Facts stemming from locale interpretation are subject to lazy evaluation for improved performance. Rare INCOMPATIBILITY: errors stemming from interpretation morphisms might be deferred and thus difficult to locate; enable system option "strict_facts" temporarily to avoid this. * Marginal comments need to be written exclusively in the new-style form "\ \text\", old ASCII variants like "-- {* ... *}" are no longer supported.
INCOMPATIBILITY, use the command-line tool "isabelle update_comments" to update existing theory files. * Old-style inner comments (* ... *) within the term language are legacy and will be discontinued soon: use formal comments "\ \...\" or "\<^cancel>\...\" instead. * The "op" prefix syntax for infix operators has been replaced by enclosing the operator in parentheses, e.g. "(+)" instead of "op +". If the operator begins or ends with a "*", there needs to be a space between the "*" and the corresponding parenthesis. INCOMPATIBILITY, use the command-line tool "isabelle update_op" to convert theory and ML files to the new syntax. Because it is based on regular expression matching, the result may need a bit of manual postprocessing. Invoking "isabelle update_op" converts all files in the current directory (recursively). In case you want to exclude conversion of ML files (because the tool frequently also converts ML's "op" syntax), use option "-m". * Theory header 'abbrevs' specifications need to be separated by 'and'. INCOMPATIBILITY. * Command 'external_file' declares the formal dependency on the given file name, such that the Isabelle build process knows about it, but without specific Prover IDE management. * Session ROOT entries no longer allow specification of 'files'. Rare INCOMPATIBILITY, use command 'external_file' within a proper theory context. * Session root directories may be specified multiple times: each accessible ROOT file is processed only once. This facilitates specification of $ISABELLE_HOME_USER/ROOTS or command-line options like -d or -D for "isabelle build" and "isabelle jedit". Example:

  isabelle build -D '~~/src/ZF'

* The command 'display_drafts' has been discontinued. INCOMPATIBILITY, use action "isabelle.draft" (or "print") in Isabelle/jEdit instead. * In HTML output, the Isabelle symbol "\" is rendered as explicit Unicode hyphen U+2010, to avoid unclear meaning of the old "soft hyphen" U+00AD. Rare INCOMPATIBILITY, e.g. copy-paste of historic Isabelle HTML output.
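The new section notation for infix operators looks as follows; the examples are hypothetical and merely illustrate the surface syntax:

```isabelle
term "(+)"         \<comment> \<open>formerly: op +\<close>
term "( * )"       \<comment> \<open>a "*" needs a space next to the parenthesis\<close>
term "fold (+) [1, 2, 3::nat] 0"
```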
*** Isabelle/jEdit Prover IDE ***

* The command-line tool "isabelle jedit" provides more flexible options for session management:

  - option -R builds an auxiliary logic image with all theories from other sessions that are not already present in its parent
  - option -S is like -R, with a focus on the selected session and its descendants (this reduces startup time for big projects like AFP)
  - option -A specifies an alternative ancestor session for options -R and -S
  - option -i includes additional sessions into the name-space of theories

Examples:

  isabelle jedit -R HOL-Number_Theory
  isabelle jedit -R HOL-Number_Theory -A HOL
  isabelle jedit -d '$AFP' -S Formal_SSA -A HOL
  isabelle jedit -d '$AFP' -S Formal_SSA -A HOL-Analysis
  isabelle jedit -d '$AFP' -S Formal_SSA -A HOL-Analysis -i CryptHOL

* PIDE markup for session ROOT files: allows to complete session names, follow links to theories and document files etc. * Completion supports theory header imports, using theory base name. E.g. "Prob" may be completed to "HOL-Probability.Probability". * Named control symbols (without special Unicode rendering) are shown as bold-italic keyword. This is particularly useful for the short form of antiquotations with control symbol: \<^name>\argument\. The action "isabelle.antiquoted_cartouche" turns an antiquotation with 0 or 1 arguments into this format. * Completion provides templates for named symbols with arguments, e.g. "\ \ARGUMENT\" or "\<^emph>\ARGUMENT\". * Slightly more parallel checking, notably for high priority print functions (e.g. State output). * The view title is set dynamically, according to the Isabelle distribution and the logic session name. The user can override this via set-view-title (stored persistently in $JEDIT_SETTINGS/perspective.xml). * System options "spell_checker_include" and "spell_checker_exclude" supersede former "spell_checker_elements" to determine regions of text that are subject to spell-checking. Minor INCOMPATIBILITY.
* Action "isabelle.preview" is able to present more file formats, notably bibtex database files and ML files. * Action "isabelle.draft" is similar to "isabelle.preview", but shows a plain-text document draft. Both are available via the menu "Plugins / Isabelle". * When loading text files, the Isabelle symbols encoding UTF-8-Isabelle is only used if there is no conflict with existing Unicode sequences in the file. Otherwise, the fallback encoding is plain UTF-8 and Isabelle symbols remain in literal \ form. This avoids accidental loss of Unicode content when saving the file. * Bibtex database files (.bib) are semantically checked. * Update to jedit-5.5.0, the latest release. *** Isabelle/VSCode Prover IDE *** * HTML preview of theories and other file-formats similar to Isabelle/jEdit. * Command-line tool "isabelle vscode_server" accepts the same options -A, -R, -S, -i for session selection as "isabelle jedit". This is relevant for isabelle.args configuration settings in VSCode. The former option -A (explore all known session files) has been discontinued: it is enabled by default, unless option -S is used to focus on a particular spot in the session structure. INCOMPATIBILITY. *** Document preparation *** * Formal comments work uniformly in outer syntax, inner syntax (term language), Isabelle/ML and some other embedded languages of Isabelle. See also "Document comments" in the isar-ref manual. The following forms are supported: - marginal text comment: \ \\\ - canceled source: \<^cancel>\\\ - raw LaTeX: \<^latex>\\\ * Outside of the inner theory body, the default presentation context is theory Pure. Thus elementary antiquotations may be used in markup commands (e.g. 'chapter', 'section', 'text') and formal comments. * System option "document_tags" specifies alternative command tags. This is occasionally useful to control the global visibility of commands via session options (e.g. in ROOT). * Document markup commands ('section', 'text' etc.) 
are implicitly tagged as "document" and visible by default. This avoids the application of option "document_tags" to these commands. * Isabelle names are mangled into LaTeX macro names to allow the full identifier syntax with underscore, prime, digits. This is relevant for antiquotations in control symbol notation, e.g. \<^const_name> becomes \isactrlconstUNDERSCOREname. * Document preparation with skip_proofs option now preserves the content more accurately: only terminal proof steps ('by' etc.) are skipped. * Document antiquotation @{theory name} requires the long session-qualified theory name: this is what users reading the text normally need to import. * Document antiquotation @{session name} checks and prints the given session name verbatim. * Document antiquotation @{cite} now checks the given Bibtex entries against the Bibtex database files -- only in batch-mode session builds. * Command-line tool "isabelle document" has been re-implemented in Isabelle/Scala, with simplified arguments and explicit errors from the latex and bibtex process. Minor INCOMPATIBILITY. * Session ROOT entry: empty 'document_files' means there is no document for this session. There is no need to specify options [document = false] anymore. *** Isar *** * Command 'interpret' no longer exposes resulting theorems as literal facts, notably for the \prop\ notation or the "fact" proof method. This improves modularity of proofs and scalability of locale interpretation. Rare INCOMPATIBILITY, need to refer to explicitly named facts instead (e.g. use 'find_theorems' or 'try' to figure this out). * The old 'def' command has been discontinued (legacy since Isabelle2016-1). INCOMPATIBILITY, use 'define' instead -- usually with object-logic equality or equivalence. *** Pure *** * The inner syntax category "sort" now includes notation "_" for the dummy sort: it is effectively ignored in type-inference.
* Rewrites clauses (keyword 'rewrites') were moved into the locale
expression syntax, where they are part of locale instances. In
interpretation commands, rewrites clauses now need to occur before
'for' and 'defines'. Rare INCOMPATIBILITY; definitions immediately
subject to rewriting may need to be pulled up into the surrounding
theory.

* For 'rewrites' clauses, if activating a locale instance fails, fall
back to reading the clause first. This helps to avoid qualification of
locale instances where the qualifier's sole purpose is avoiding
duplicate constant declarations.

* Proof method "simp" now supports a new modifier "flip:" followed by
a list of theorems. Each of these theorems is removed from the simpset
(without warning if it is not there) and the symmetric version of the
theorem (i.e. lhs and rhs exchanged) is added to the simpset. For
"auto" and friends the modifier is "simp flip:".


*** HOL ***

* Sledgehammer: the bundled version of "vampire" (for non-commercial
users) helps to avoid fragility of the "remote_vampire" service.

* Clarified relationship of characters, strings and code generation:

  - Type "char" is now a proper datatype of 8-bit values.

  - Conversions "nat_of_char" and "char_of_nat" are gone; use more
    general conversions "of_char" and "char_of" with suitable type
    constraints instead.

  - The zero character is just written "CHR 0x00", not "0" any
    longer.

  - Type "String.literal" (for code generation) is now isomorphic to
    lists of 7-bit (ASCII) values; concrete values can be written as
    "STR ''...''" for sequences of printable characters and
    "STR 0x..." for a single ASCII code point given as hexadecimal
    numeral.

  - Type "String.literal" supports concatenation "... + ..." for all
    standard target languages.

  - Theory HOL-Library.Code_Char is gone; study the explanations
    concerning "String.literal" in the tutorial on code generation to
    get an idea how target-language string literals can be converted
    to HOL string values and vice versa.
  - Session Imperative-HOL: operation "raise" directly takes a value
    of type "String.literal" as argument, not type "string".

INCOMPATIBILITY.

* Code generation: the code generator takes an explicit option
"case_insensitive" to accommodate case-insensitive file systems.

* Abstract bit operations push_bit, take_bit, drop_bit as part of
Main.

* New, more general axiomatization of complete_distrib_lattice. The
former axioms:

  "sup x (Inf X) = Inf (sup x ` X)" and
  "inf x (Sup X) = Sup (inf x ` X)"

are replaced by:

  "Inf (Sup ` A) <= Sup (Inf ` {f ` A | f . (! Y \<in> A . f Y \<in> Y)})"

The instantiations of sets and functions as complete_distrib_lattice
are moved to Hilbert_Choice.thy, because their proofs need the Hilbert
choice operator. The dual of this property is also proved in theory
HOL.Hilbert_Choice.

* New syntax for the minimum/maximum of a function over a finite set:
"MIN x\<in>A. B" and even "MIN x. B" (the latter only useful for finite
types), similarly for MAX.

* Clarified theorem names:

  Min.antimono ~> Min.subset_imp
  Max.antimono ~> Max.subset_imp

Minor INCOMPATIBILITY.

* SMT module:

  - The 'smt_oracle' option is now necessary when using the 'smt'
    method with a solver other than Z3. INCOMPATIBILITY.

  - The encoding to first-order logic is now more complete in the
    presence of higher-order quantifiers. An
    'smt_explicit_application' option has been added to control this.
    INCOMPATIBILITY.

* Facts sum.commute(_restrict) and prod.commute(_restrict) renamed to
sum.swap(_restrict) and prod.swap(_restrict), to avoid name clashes on
interpretation of abstract locales. INCOMPATIBILITY.

* Predicate coprime is now a real definition, not a mere abbreviation.
INCOMPATIBILITY.

* Predicate pairwise_coprime abolished, use "pairwise coprime"
instead. INCOMPATIBILITY.

* The relator rel_filter on filters has been strengthened to its
canonical categorical definition with better properties.
INCOMPATIBILITY.
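The new MIN/MAX binder notation can be sketched as follows (a
hypothetical fragment for illustration; A and f are placeholder
names):

```
term "MIN x\<in>A. f x"    (* minimum of f over the finite set A *)
term "MAX x. f x"        (* maximum over all x of a finite type *)
```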
* Generalized linear algebra involving linear, span, dependent, dim
from type class real_vector to locales module and vector_space.
Renamed:

  span_inc ~> span_superset
  span_superset ~> span_base
  span_eq ~> span_eq_iff

INCOMPATIBILITY.

* Class linordered_semiring_1 covers zero_less_one also, ruling out
pathological instances. Minor INCOMPATIBILITY.

* Theory HOL.List: functions "sorted_wrt" and "sorted" now compare
every element in a list to all following elements, not just the next
one.

* Theory HOL.List syntax:

  - filter syntax "[x <- xs. P]" is no longer output syntax, but only
    input syntax

  - list comprehension syntax now supports tuple patterns in
    "pat <- xs"

* Theory Map: "empty" must now be qualified as "Map.empty".

* Removed nat-int transfer machinery. Rare INCOMPATIBILITY.

* Fact mod_mult_self4 (on nat) renamed to Suc_mod_mult_self3, to avoid
clash with fact mod_mult_self4 (on more generic semirings).
INCOMPATIBILITY.

* Eliminated some theorem aliases:

  even_times_iff ~> even_mult_iff
  mod_2_not_eq_zero_eq_one_nat ~> not_mod_2_eq_0_eq_1
  even_of_nat ~> even_int_iff

INCOMPATIBILITY.

* Eliminated some duplicate theorem variations:

  - dvd_eq_mod_eq_0_numeral can be replaced by dvd_eq_mod_eq_0
  - mod_Suc_eq_Suc_mod can be replaced by mod_Suc
  - mod_Suc_eq_Suc_mod [symmetric] can be replaced by mod_simps
  - mod_eq_0_iff can be replaced by mod_eq_0_iff_dvd and dvd_def
  - the witness of mod_eqD can be given directly as "_ div _"

INCOMPATIBILITY.

* Classical setup: the assumption "m mod d = 0" (for m d :: nat) is no
longer aggressively destroyed to "\<exists>q. m = d * q". INCOMPATIBILITY,
adding "elim!: dvd" to classical proof methods in most situations
restores broken proofs.

* Theory HOL-Library.Conditional_Parametricity provides command
'parametric_constant' for proving parametricity of non-recursive
definitions.
For constants that are not fully parametric the command will infer
conditions on relations (e.g., bi_unique, bi_total, or type class
conditions such as "respects 0") sufficient for parametricity. See
theory HOL-ex.Conditional_Parametricity_Examples for some examples.

* Theory HOL-Library.Code_Lazy provides a new preprocessor for the
code generator to generate code for algebraic types with lazy
evaluation semantics, even in call-by-value target languages. See the
theories HOL-ex.Code_Lazy_Demo and HOL-Codegenerator_Test.Code_Lazy_Test
for some examples.

* Theory HOL-Library.Landau_Symbols has been moved here from the AFP.

* Theory HOL-Library.Old_Datatype no longer provides the legacy
command 'old_datatype'. INCOMPATIBILITY.

* Theory HOL-Computational_Algebra.Polynomial_Factorial does not
provide instances of rat, real, complex as factorial rings etc. Import
HOL-Computational_Algebra.Field_as_Ring explicitly in case of need.
INCOMPATIBILITY.

* Session HOL-Algebra: renamed (^) to [^] to avoid conflict with new
infix/prefix notation.

* Session HOL-Algebra: revamped with much new material. The set of
isomorphisms between two groups is now denoted iso rather than
iso_set. INCOMPATIBILITY.

* Session HOL-Analysis: the Arg function now respects the same
interval as Ln, namely (-pi,pi]; the old Arg function has been renamed
Arg2pi. INCOMPATIBILITY.

* Session HOL-Analysis: the functions zorder, zer_poly, porder and
pol_poly have been redefined. All related lemmas have been reworked.
INCOMPATIBILITY.

* Session HOL-Analysis: infinite products, Moebius functions, the
Riemann mapping theorem, the Vitali covering theorem,
change-of-variables results for integration and measures.

* Session HOL-Real_Asymp: proof method "real_asymp" proves asymptotics
of real-valued functions (limits, "Big-O", etc.) automatically. See
also ~~/src/HOL/Real_Asymp/Manual for some documentation.
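A hypothetical use of method "real_asymp" might look like this (the
concrete limit statement is invented for illustration and not taken
from the distribution):

```
lemma "((\<lambda>x::real. (2 * x + 3) / (x + 1)) \<longlongrightarrow> 2) at_top"
  by real_asymp
```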
* Session HOL-Types_To_Sets: more tool support (unoverload_type
combines internalize_sorts and unoverload) and larger experimental
application (type-based linear algebra transferred to linear algebra
on subspaces).


*** ML ***

* Operation Export.export emits theory exports (arbitrary blobs),
which are stored persistently in the session build database.

* Command 'ML_export' exports ML toplevel bindings to the global
bootstrap environment of the ML process. This allows ML evaluation
without a formal theory context, e.g. in command-line tools like
"isabelle process".


*** System ***

* Mac OS X 10.10 Yosemite is now the baseline version; Mavericks is no
longer supported.

* Linux and Windows/Cygwin are for x86_64 only; old 32bit platform
support has been discontinued.

* The Java runtime is for x86_64 only. Corresponding Isabelle settings
have been renamed to ISABELLE_TOOL_JAVA_OPTIONS and
JEDIT_JAVA_OPTIONS, instead of former 32/64 variants. INCOMPATIBILITY.

* Old settings ISABELLE_PLATFORM and ISABELLE_WINDOWS_PLATFORM should
be phased out, due to unclear preference of 32bit vs. 64bit
architecture. Explicit GNU bash expressions are now preferred, for
example (with quotes):

  #Posix executables (Unix or Cygwin), with preference for 64bit
  "${ISABELLE_PLATFORM64:-$ISABELLE_PLATFORM32}"

  #native Windows or Unix executables, with preference for 64bit
  "${ISABELLE_WINDOWS_PLATFORM64:-${ISABELLE_WINDOWS_PLATFORM32:-${ISABELLE_PLATFORM64:-$ISABELLE_PLATFORM32}}}"

  #native Windows (32bit) or Unix executables (preference for 64bit)
  "${ISABELLE_WINDOWS_PLATFORM32:-${ISABELLE_PLATFORM64:-$ISABELLE_PLATFORM32}}"

* Command-line tool "isabelle build" supports new options:

  - option -B NAME: include session NAME and all descendants
  - option -S: only observe changes of sources, not heap images
  - option -f: force a fresh build

* Command-line tool "isabelle build" options -c -x -B refer to
descendants wrt. the session parent or import graph.
Subtle INCOMPATIBILITY: options -c -x used to refer to the session
parent graph only.

* Command-line tool "isabelle build" takes "condition" options with
the corresponding environment values into account, when determining
the up-to-date status of a session.

* The command-line tool "dump" dumps information from the cumulative
PIDE session database: many sessions may be loaded into a given logic
image, and results from all loaded theories are written to the output
directory.

* Command-line tool "isabelle imports -I" also reports actual session
imports. This helps to minimize the session dependency graph.

* The command-line tool "export" and 'export_files' in session ROOT
entries retrieve theory exports from the session build database.

* The command-line tools "isabelle server" and "isabelle client"
provide access to the Isabelle Server: it supports responsive session
management and concurrent use of theories, based on Isabelle/PIDE
infrastructure. See also the "system" manual.

* The command-line tool "isabelle update_comments" normalizes formal
comments in outer syntax as follows: \<comment> \<open>text\<close> (with a single space
to approximate the appearance in document output). This is more
specific than former "isabelle update_cartouches -c": the latter tool
option has been discontinued.

* The command-line tool "isabelle mkroot" now always produces a
document outline: its options have been adapted accordingly.
INCOMPATIBILITY.

* The command-line tool "isabelle mkroot -I" initializes a Mercurial
repository for the generated session files.

* Settings ISABELLE_HEAPS + ISABELLE_BROWSER_INFO (or
ISABELLE_HEAPS_SYSTEM + ISABELLE_BROWSER_INFO_SYSTEM in "system build
mode") determine the directory locations of the main build artefacts
-- instead of hard-wired directories in ISABELLE_HOME_USER (or
ISABELLE_HOME).
* Settings ISABELLE_PATH and ISABELLE_OUTPUT have been discontinued:
heap images and session databases are always stored in
$ISABELLE_HEAPS/$ML_IDENTIFIER (command-line default) or
$ISABELLE_HEAPS_SYSTEM/$ML_IDENTIFIER (main Isabelle application or
"isabelle jedit -s" or "isabelle build -s").

* ISABELLE_LATEX and ISABELLE_PDFLATEX now include platform-specific
options for improved error reporting. Potential INCOMPATIBILITY with
unusual LaTeX installations: these settings may have to be adapted.

* Update to Poly/ML 5.7.1 with slightly improved performance and PIDE
markup for identifier bindings. It now uses the GNU Multiple Precision
Arithmetic Library (libgmp) on all platforms, notably Mac OS X with
32/64 bit.



New in Isabelle2017 (October 2017)
----------------------------------

*** General ***

* Experimental support for Visual Studio Code (VSCode) as alternative
Isabelle/PIDE front-end, see also
https://marketplace.visualstudio.com/items?itemName=makarius.Isabelle2017

VSCode is a new type of application that continues the concepts of
"programmer's editor" and "integrated development environment" towards
fully semantic editing and debugging -- in a relatively light-weight
manner. Thus it fits nicely on top of the Isabelle/PIDE
infrastructure. Technically, VSCode is based on the Electron
application framework (Node.js + Chromium browser + V8), which is
implemented in JavaScript and TypeScript, while Isabelle/VSCode mainly
consists of Isabelle/Scala modules around a Language Server
implementation.

* Theory names are qualified by the session name that they belong to.
This affects imports, but not the theory name space prefix (which is
just the theory base name, as before). In order to import theories
from other sessions, the ROOT file format provides a new 'sessions'
keyword. In contrast, a theory that is imported in the old-fashioned
manner via an explicit file-system path belongs to the current
session, and might cause theory name conflicts later on.
Theories that are imported from other sessions are excluded from the
current session document. The command-line tool "isabelle imports"
helps to update theory imports.

* The main theory entry points for some non-HOL sessions have changed,
to avoid confusion with the global name "Main" of the session HOL.
This leads to the following renamings:

  CTT/Main.thy ~> CTT/CTT.thy
  ZF/Main.thy ~> ZF/ZF.thy
  ZF/Main_ZF.thy ~> ZF/ZF.thy
  ZF/Main_ZFC.thy ~> ZF/ZFC.thy
  ZF/ZF.thy ~> ZF/ZF_Base.thy

INCOMPATIBILITY.

* Commands 'alias' and 'type_alias' introduce aliases for constants
and type constructors, respectively. This allows adhoc changes to
name-space accesses within global or local theory contexts, e.g.
within a 'bundle'.

* Document antiquotations @{prf} and @{full_prf} output proof terms
(again) in the same way as commands 'prf' and 'full_prf'.

* Computations generated by the code generator can be embedded
directly into ML, alongside with @{code} antiquotations, using the
following antiquotations:

  @{computation ... terms: ... datatypes: ...} :
    ((term -> term) -> 'ml option -> 'a) -> Proof.context -> term -> 'a
  @{computation_conv ... terms: ... datatypes: ...} :
    (Proof.context -> 'ml -> conv) -> Proof.context -> conv
  @{computation_check terms: ... datatypes: ...} :
    Proof.context -> conv

See src/HOL/ex/Computations.thy,
src/HOL/Decision_Procs/Commutative_Ring.thy and
src/HOL/Decision_Procs/Reflective_Field.thy for examples and the
tutorial on code generation.


*** Prover IDE -- Isabelle/Scala/jEdit ***

* Session-qualified theory imports allow the Prover IDE to process
arbitrary theory hierarchies independently of the underlying logic
session image (e.g. option "isabelle jedit -l"), but the directory
structure needs to be known in advance (e.g. option "isabelle jedit
-d" or a line in the file $ISABELLE_HOME_USER/ROOTS).

* The PIDE document model maintains file content independently of the
status of jEdit editor buffers.
Reloading jEdit buffers no longer causes changes of formal document
content. Theory dependencies are always resolved internally, without
the need for corresponding editor buffers. The system option
"jedit_auto_load" has been discontinued: it is effectively always
enabled.

* The Theories dockable provides a "Purge" button, in order to
restrict the document model to theories that are required for open
editor buffers.

* The Theories dockable indicates the overall status of checking of
each entry. When all forked tasks of a theory are finished, the border
is painted with thick lines; remaining errors in this situation are
represented by a different border color.

* Automatic indentation is more careful to avoid redundant spaces in
intermediate situations. Keywords are indented after input (via typed
characters or completion); see also option "jedit_indent_input".

* Action "isabelle.preview" opens an HTML preview of the current
theory document in the default web browser.

* Command-line invocation "isabelle jedit -R -l LOGIC" opens the ROOT
entry of the specified logic session in the editor, while its parent
is used for formal checking.

* The main Isabelle/jEdit plugin may be restarted manually (using the
jEdit Plugin Manager), as long as the "Isabelle Base" plugin remains
enabled at all times.

* Update to current jedit-5.4.0.


*** Pure ***

* Deleting the last code equations for a particular function using
[code del] results in a function with no equations (runtime abort)
rather than an unimplemented function (generation-time abort). Use
explicit [[code drop:]] to enforce the latter. Minor INCOMPATIBILITY.

* Proper concept of code declarations in code.ML:

  - Regular code declarations act only on the global theory level,
    being ignored with warnings if syntactically malformed.

  - Explicitly global code declarations yield errors if syntactically
    malformed.

  - Default code declarations are silently ignored if syntactically
    malformed.

Minor INCOMPATIBILITY.
* Clarified and standardized internal data bookkeeping of code
declarations: the history of serials allows to track potentially
non-monotonous declarations appropriately. Minor INCOMPATIBILITY.


*** HOL ***

* The Nunchaku model finder is now part of "Main".

* SMT module:

  - A new option, 'smt_nat_as_int', has been added to translate 'nat'
    to 'int' and benefit from the SMT solver's theory reasoning. It is
    disabled by default.

  - The legacy module "src/HOL/Library/Old_SMT.thy" has been removed.

  - Several small issues have been rectified in the 'smt' command.

* (Co)datatype package: The 'size_gen_o_map' lemma is no longer
generated for datatypes with type class annotations. As a result, the
tactic that derives it no longer fails on nested datatypes. Slight
INCOMPATIBILITY.

* Command and antiquotation "value" with modified default strategy:
terms without free variables are always evaluated using plain
evaluation only, with no fallback on normalization by evaluation.
Minor INCOMPATIBILITY.

* Theories "GCD" and "Binomial" are already included in "Main"
(instead of "Complex_Main").

* Constant "surj" is a full input/output abbreviation (again). Minor
INCOMPATIBILITY.

* Dropped aliases RangeP, DomainP for Rangep, Domainp respectively.
INCOMPATIBILITY.

* Renamed ii to imaginary_unit in order to free up ii as a variable
name. The syntax \<i> remains available. INCOMPATIBILITY.

* Dropped abbreviations transP, antisymP, single_valuedP; use
constants transp, antisymp, single_valuedp instead. INCOMPATIBILITY.

* Constant "subseq" in Topological_Spaces has been removed -- it is
subsumed by "strict_mono". Some basic lemmas specific to "subseq" have
been renamed accordingly, e.g. "subseq_o" -> "strict_mono_o" etc.

* Theory List: "sublist" renamed to "nths" in analogy with "nth", and
"sublisteq" renamed to "subseq". Minor INCOMPATIBILITY.

* Theory List: new generic function "sorted_wrt".

* Named theorems mod_simps covers various congruence rules concerning
mod, replacing former zmod_simps.
INCOMPATIBILITY.

* Swapped orientation of congruence rules mod_add_left_eq,
mod_add_right_eq, mod_add_eq, mod_mult_left_eq, mod_mult_right_eq,
mod_mult_eq, mod_minus_eq, mod_diff_left_eq, mod_diff_right_eq,
mod_diff_eq. INCOMPATIBILITY.

* Generalized some facts:

  measure_induct_rule
  measure_induct
  zminus_zmod ~> mod_minus_eq
  zdiff_zmod_left ~> mod_diff_left_eq
  zdiff_zmod_right ~> mod_diff_right_eq
  zmod_eq_dvd_iff ~> mod_eq_dvd_iff

INCOMPATIBILITY.

* Algebraic type class hierarchy of euclidean (semi)rings in HOL:
euclidean_(semi)ring, euclidean_(semi)ring_cancel,
unique_euclidean_(semi)ring; instantiation requires provision of a
euclidean size.

* Theory "HOL-Number_Theory.Euclidean_Algorithm" has been reworked:

  - Euclidean induction is available as rule eucl_induct.

  - Constants Euclidean_Algorithm.gcd, Euclidean_Algorithm.lcm,
    Euclidean_Algorithm.Gcd and Euclidean_Algorithm.Lcm allow easy
    instantiation of euclidean (semi)rings as GCD (semi)rings.

  - Coefficients obtained by the extended euclidean algorithm are
    available as "bezout_coefficients".

INCOMPATIBILITY.

* Theory "Number_Theory.Totient" introduces basic notions about
Euler's totient function, previously hidden as a solitary example in
theory Residues. Definition changed so that "totient 1 = 1" in
agreement with the literature. Minor INCOMPATIBILITY.

* New styles in theory "HOL-Library.LaTeXsugar":

  - "dummy_pats" for printing equations with "_" on the lhs;
  - "eta_expand" for printing eta-expanded terms.

* Theory "HOL-Library.Permutations": theorem bij_swap_ompose_bij has
been renamed to bij_swap_compose_bij. INCOMPATIBILITY.

* New theory "HOL-Library.Going_To_Filter" providing the "f going_to
F" filter for describing points x such that f(x) is in the filter F.

* Theory "HOL-Library.Formal_Power_Series": constants X/E/L/F have
been renamed to fps_X/fps_exp/fps_ln/fps_hypergeo to avoid polluting
the name space. INCOMPATIBILITY.

* Theory "HOL-Library.FinFun" has been moved to AFP (again).
INCOMPATIBILITY.
* Theory "HOL-Library.FuncSet": some old and rarely used ASCII
replacement syntax has been removed. INCOMPATIBILITY, standard syntax
with symbols should be used instead. The subsequent commands help to
reproduce the old forms, e.g. to simplify porting old theories:

  syntax (ASCII)
    "_PiE" :: "pttrn \<Rightarrow> 'a set \<Rightarrow> 'b set \<Rightarrow> ('a \<Rightarrow> 'b) set"  ("(3PIE _:_./ _)" 10)
    "_Pi"  :: "pttrn \<Rightarrow> 'a set \<Rightarrow> 'b set \<Rightarrow> ('a \<Rightarrow> 'b) set"  ("(3PI _:_./ _)" 10)
    "_lam" :: "pttrn \<Rightarrow> 'a set \<Rightarrow> ('a \<Rightarrow> 'b) \<Rightarrow> ('a \<Rightarrow> 'b)"  ("(3%_:_./ _)" [0,0,3] 3)

* Theory "HOL-Library.Multiset": the simprocs on subset operators of
multisets have been renamed:

  msetless_cancel_numerals ~> msetsubset_cancel
  msetle_cancel_numerals ~> msetsubset_eq_cancel

INCOMPATIBILITY.

* Theory "HOL-Library.Pattern_Aliases" provides input and output
syntax for pattern aliases as known from Haskell, Scala and ML.

* Theory "HOL-Library.Uprod" formalizes the type of unordered pairs.

* Session HOL-Analysis: more material involving arcs, paths, covering
spaces, innessential maps, retracts, infinite products, simplicial
complexes. Baire Category theorem. Major results include the Jordan
Curve Theorem and the Great Picard Theorem.

* Session HOL-Algebra has been extended by additional lattice theory:
the Knaster-Tarski fixed point theorem and Galois Connections.

* Sessions HOL-Computational_Algebra and HOL-Number_Theory: new
notions of squarefreeness, n-th powers, and prime powers.

* Session "HOL-Computational_Algebra" covers many previously scattered
theories, notably Euclidean_Algorithm, Factorial_Ring,
Formal_Power_Series, Fraction_Field, Fundamental_Theorem_Algebra,
Normalized_Fraction, Polynomial_FPS, Polynomial, Primes. Minor
INCOMPATIBILITY.


*** System ***

* Isabelle/Scala: the SQL module supports access to relational
databases, either as plain file (SQLite) or full-scale server
(PostgreSQL via local port or remote ssh connection).

* Results of "isabelle build" are recorded as SQLite database (i.e.
"Application File Format" in the sense of
https://www.sqlite.org/appfileformat.html). This allows systematic
access via operations from module Sessions.Store in Isabelle/Scala.

* System option "parallel_proofs" is 1 by default (instead of the more
aggressive 2). This requires less heap space and avoids burning
parallel CPU cycles, while full subproof parallelization is enabled
for repeated builds (according to parallel_subproofs_threshold).

* System option "record_proofs" allows to change the global
Proofterm.proofs variable for a session. Regular values are 0, 1, 2; a
negative value means the current state in the ML heap image remains
unchanged.

* Isabelle settings variable ISABELLE_SCALA_BUILD_OPTIONS has been
renamed to ISABELLE_SCALAC_OPTIONS. Rare INCOMPATIBILITY.

* Isabelle settings variables ISABELLE_WINDOWS_PLATFORM,
ISABELLE_WINDOWS_PLATFORM32, ISABELLE_WINDOWS_PLATFORM64 indicate the
native Windows platform (independently of the Cygwin installation).
This is analogous to ISABELLE_PLATFORM, ISABELLE_PLATFORM32,
ISABELLE_PLATFORM64.

* Command-line tool "isabelle build_docker" builds a Docker image from
the Isabelle application bundle for Linux. See also
https://hub.docker.com/r/makarius/isabelle

* Command-line tool "isabelle vscode_server" provides a Language
Server Protocol implementation, e.g. for the Visual Studio Code
editor. It serves as example for alternative PIDE front-ends.

* Command-line tool "isabelle imports" helps to maintain theory
imports wrt. session structure. Examples for the main Isabelle
distribution:

  isabelle imports -I -a
  isabelle imports -U -a
  isabelle imports -U -i -a
  isabelle imports -M -a -d '~~/src/Benchmarks'



New in Isabelle2016-1 (December 2016)
-------------------------------------

*** General ***

* Splitter in proof methods "simp", "auto" and friends:

  - The syntax "split add" has been discontinued, use plain "split",
    INCOMPATIBILITY.
  - For situations with many conditional or case expressions, there
    is an alternative splitting strategy that can be much faster. It
    is selected by writing "split!" instead of "split". It applies
    safe introduction and elimination rules after each split rule. As
    a result the subgoal may be split into several subgoals.

* Command 'bundle' provides a local theory target to define a bundle
from the body of specification commands (such as 'declare',
'declaration', 'notation', 'lemmas', 'lemma'). For example:

  bundle foo
  begin
    declare a [simp]
    declare b [intro]
  end

* Command 'unbundle' is like 'include', but works within a local
theory context. Unlike "context includes ... begin", the effect of
'unbundle' on the target context persists, until different
declarations are given.

* Simplified outer syntax: the uniform category "name" includes long
identifiers. Former "xname" / "nameref" / "name reference" has been
discontinued.

* Embedded content (e.g. the inner syntax of types, terms, props) may
be delimited uniformly via cartouches. This works better than
old-fashioned quotes when sub-languages are nested.

* Mixfix annotations support general block properties, with syntax
"(\<open>x=a y=b z \<dots>\<close>". Notable property names are "indent", "consistent",
"unbreakable", "markup". The existing notation "(DIGITS" is equivalent
to "(\<open>indent=DIGITS\<close>". The former notation "(00" for unbreakable
blocks is superseded by "(\<open>unbreakable\<close>" --- rare INCOMPATIBILITY.

* Proof method "blast" is more robust wrt. corner cases of Pure
statements without object-logic judgment.

* Commands 'prf' and 'full_prf' are somewhat more informative (again):
proof terms are reconstructed and cleaned from administrative thm
nodes.

* Code generator: config option "code_timing" triggers measurements of
different phases of code generation. See src/HOL/ex/Code_Timing.thy
for examples.
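Continuing the hypothetical "foo" bundle from the example above, a
sketch of how 'unbundle' might be used in a local theory context:

```
context
begin

unbundle foo
  (* a [simp] and b [intro] now persist in this target,
     until different declarations are given *)

end
```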
* Code generator: implicits in Scala (stemming from type class
instances) are generated into the companion object of the
corresponding type class, to resolve some situations where ambiguities
may occur.

* Solve direct: option "solve_direct_strict_warnings" gives explicit
warnings for lemma statements with trivial proofs.


*** Prover IDE -- Isabelle/Scala/jEdit ***

* More aggressive flushing of machine-generated input, according to
system option editor_generated_input_delay (in addition to the
existing editor_input_delay for regular user edits). This may affect
overall PIDE reactivity and CPU usage.

* Syntactic indentation according to Isabelle outer syntax. Action
"indent-lines" (shortcut C+i) indents the current line according to
command keywords and some command substructure. Action
"isabelle.newline" (shortcut ENTER) indents the old and the new line
according to command keywords only; see also option
"jedit_indent_newline".

* Semantic indentation for unstructured proof scripts ('apply' etc.)
via number of subgoals. This requires information of ongoing document
processing and may thus lag behind, when the user is editing too
quickly; see also option "jedit_script_indent" and
"jedit_script_indent_limit".

* Refined folding mode "isabelle" based on Isar syntax: 'next' and
'qed' are treated as delimiters for fold structure; 'begin' and 'end'
structure of theory specifications is treated as well.

* Command 'proof' provides information about proof outline with cases,
e.g. for proof methods "cases", "induct", "goal_cases".

* Completion templates for commands involving "begin ... end" blocks,
e.g. 'context', 'notepad'.

* Sidekick parser "isabelle-context" shows nesting of context blocks
according to 'begin' and 'end' structure.

* Highlighting of entity def/ref positions wrt. cursor.

* Action "isabelle.select-entity" (shortcut CS+ENTER) selects all
occurrences of the formal entity at the caret position. This
facilitates systematic renaming.
* PIDE document markup works across multiple Isar commands, e.g. the
results established at the end of a proof are properly identified in
the theorem statement.

* Cartouche abbreviations work both for " and ` to accommodate typical
situations where old ASCII notation may be updated.

* Dockable window "Symbols" also provides access to 'abbrevs' from the
outer syntax of the current theory buffer. This provides clickable
syntax templates, including entries with empty abbrevs name (which are
inaccessible via keyboard completion).

* IDE support for the Isabelle/Pure bootstrap process, with the
following independent stages:

  src/Pure/ROOT0.ML
  src/Pure/ROOT.ML
  src/Pure/Pure.thy
  src/Pure/ML_Bootstrap.thy

The ML ROOT files act like quasi-theories in the context of theory
ML_Bootstrap: this allows continuous checking of all loaded ML files.
The theory files are presented with a modified header, to import Pure
from the running Isabelle instance. Results from changed versions of
each stage are *not* propagated to the next stage, and isolated from
the actual Isabelle/Pure that runs the IDE itself. The sequential
dependencies of the above files are only observed for batch build.

* Isabelle/ML and Standard ML files are presented in Sidekick with the
tree structure of section headings: this special comment format is
described in the "implementation" manual, chapter 0, e.g.
(*** section ***).

* Additional abbreviations for syntactic completion may be specified
within the theory header as 'abbrevs'. The theory syntax for
'keywords' has been simplified accordingly: optional abbrevs need to
go into the new 'abbrevs' section.

* Global abbreviations via $ISABELLE_HOME/etc/abbrevs and
$ISABELLE_HOME_USER/etc/abbrevs are no longer supported. Minor
INCOMPATIBILITY, use 'abbrevs' within the theory header instead.

* Action "isabelle.keymap-merge" asks the user to resolve pending
Isabelle keymap changes that are in conflict with the current jEdit
keymap; non-conflicting changes are always applied implicitly.
This action is automatically invoked on Isabelle/jEdit startup and
thus increases chances that users see new keyboard shortcuts when
re-using old keymaps.

* ML and document antiquotations for file-system paths are more
uniform and diverse:

  @{path NAME}  -- no file-system check
  @{file NAME}  -- check for plain file
  @{dir NAME}   -- check for directory

Minor INCOMPATIBILITY, former uses of @{file} and @{file_unchecked}
may have to be changed.


*** Document preparation ***

* New symbol \<circle>, e.g. for temporal operator.

* New document and ML antiquotation @{locale} for locales, similar to
the existing antiquotation @{class}.

* Mixfix annotations support delimiters like \<^control>\<open>cartouche\<close> --
this allows special forms of document output.

* Raw LaTeX output now works via \<^latex>\<open>...\<close> instead of raw control
symbol \<^raw:...>. INCOMPATIBILITY, notably for LaTeXsugar.thy and
its derivatives.

* \<^raw:...> symbols are no longer supported.

* Old 'header' command is no longer supported (legacy since
Isabelle2015).


*** Isar ***

* Many specification elements support structured statements with 'if'
/ 'for' eigen-context, e.g. 'axiomatization', 'abbreviation',
'definition', 'inductive', 'function'.

* Toplevel theorem statements support eigen-context notation with 'if'
/ 'for' (in postfix), which corresponds to 'assumes' / 'fixes' in the
traditional long statement form (in prefix). Local premises are called
"that" or "assms", respectively. Empty premises are *not* bound in the
context: INCOMPATIBILITY.

* Command 'define' introduces a local (non-polymorphic) definition,
with optional abstraction over local parameters. The syntax resembles
'definition' and 'obtain'. It fits better into the Isar language than
old 'def', which is now a legacy feature.

* Command 'obtain' supports structured statements with 'if' / 'for'
context.

* Command '\<proof>' is an alias for 'sorry', with different
typesetting. E.g. to produce proof holes in examples and
documentation.
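The postfix 'if' / 'for' notation for toplevel statements can be
sketched as follows (a hypothetical lemma, invented for illustration;
it is equivalent to the long form with 'fixes' and 'assumes'):

```
lemma le_trans_nat: "x \<le> z" if "x \<le> y" and "y \<le> z" for x y z :: nat
  using that by linarith   (* the local premises are available as "that" *)
```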
* The defining position of a literal fact \<open>prop\<close> is
maintained more carefully, and made accessible as hyperlink in the
Prover IDE.

* Commands 'finally' and 'ultimately' used to expose the result as
literal fact: this accidental behaviour has been discontinued. Rare
INCOMPATIBILITY, use more explicit means to refer to facts in Isar.

* Command 'axiomatization' has become more restrictive to correspond
better to internal axioms as singleton facts with mandatory name. Minor
INCOMPATIBILITY.

* Proof methods may refer to the main facts via the dynamic fact
"method_facts". This is particularly useful for Eisbach method
definitions.

* Proof method "use" allows to modify the main facts of a given method
expression, e.g.

  (use facts in simp)
  (use facts in \<open>simp add: ...\<close>)

* The old proof method "default" has been removed (legacy since
Isabelle2016). INCOMPATIBILITY, use "standard" instead.


*** Pure ***

* Pure provides basic versions of proof methods "simp" and "simp_all"
that only know about meta-equality (==). Potential INCOMPATIBILITY in
theory imports that merge Pure with e.g. Main of Isabelle/HOL: the order
is relevant to avoid confusion of Pure.simp vs. HOL.simp.

* The command 'unfolding' and proof method "unfold" include a second
stage where given equations are passed through the attribute "abs_def"
before rewriting. This ensures that definitions are fully expanded,
regardless of the actual parameters that are provided. Rare
INCOMPATIBILITY in some corner cases: use proof method (simp only:)
instead, or declare [[unfold_abs_def = false]] in the proof context.

* Type-inference improves sorts of newly introduced type variables for
the object-logic, using its base sort (i.e. HOL.type for Isabelle/HOL).
Thus terms like "f x" or "\<lambda>x. P x" without any further syntactic
context produce x::'a::type in HOL instead of x::'a::{} in Pure. Rare
INCOMPATIBILITY, need to provide explicit type constraints for Pure
types where this is really intended.
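To illustrate the abs_def stage of 'unfolding', here is a minimal sketch
(the constant "ident" and the lemma are invented for this example):

```isabelle
definition ident :: "'a \<Rightarrow> 'a"
  where "ident x = x"

lemma "ident = (\<lambda>x. x)"
  unfolding ident_def ..
  (* ident_def is expanded via abs_def to ident = (\<lambda>x. x) before
     rewriting, so the goal becomes trivial by reflexivity *)
```

Without the abs_def stage, the equation "ident x = x" would not rewrite
the unapplied occurrence of "ident" on the left-hand side.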
*** HOL ***

* New proof method "argo" using the built-in Argo solver based on SMT
technology. The method can be used to prove goals of quantifier-free
propositional logic, goals based on a combination of quantifier-free
propositional logic with equality, and goals based on a combination of
quantifier-free propositional logic with linear real arithmetic
including min/max/abs. See HOL/ex/Argo_Examples.thy for examples.

* The new "nunchaku" command integrates the Nunchaku model finder. The
tool is experimental. See ~~/src/HOL/Nunchaku/Nunchaku.thy for details.

* Metis: The problem encoding has changed very slightly. This might
break existing proofs. INCOMPATIBILITY.

* Sledgehammer:
  - The MaSh relevance filter is now faster than before.
  - Produce syntactically correct Vampire 4.0 problem files.

* (Co)datatype package:
  - New commands for defining corecursive functions and reasoning about
    them in "~~/src/HOL/Library/BNF_Corec.thy": 'corec', 'corecursive',
    'friend_of_corec', and 'corecursion_upto'; and 'corec_unique' proof
    method. See 'isabelle doc corec'.
  - The predicator :: ('a \<Rightarrow> bool) \<Rightarrow> 'a F \<Rightarrow> bool
    is now a first-class citizen in bounded natural functors.
  - 'primrec' now allows nested calls through the predicator in addition
    to the map function.
  - 'bnf' automatically discharges reflexive proof obligations.
  - 'bnf' outputs a slightly modified proof obligation expressing rel in
    terms of map and set (not giving a specification for rel makes this
    one reflexive).
  - 'bnf' outputs a new proof obligation expressing pred in terms of set
    (not giving a specification for pred makes this one reflexive).
    INCOMPATIBILITY: manual 'bnf' declarations may need adjustment.
  - Renamed lemmas:
      rel_prod_apply ~> rel_prod_inject
      pred_prod_apply ~> pred_prod_inject
    INCOMPATIBILITY.
  - The "size" plugin has been made compatible again with locales.
  - The theorems about "rel" and "set" may have a slightly different
    (but equivalent) form. INCOMPATIBILITY.
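A minimal sketch of goals in the fragments that "argo" handles
(propositional logic, and propositional logic with equality; the lemma
statements here are invented, see HOL/ex/Argo_Examples.thy for the
authoritative examples):

```isabelle
lemma "(A \<longrightarrow> B) \<longrightarrow> \<not> B \<longrightarrow> \<not> A"
  by argo

lemma "a = b \<Longrightarrow> b = c \<Longrightarrow> f a = f c \<or> P"
  by argo  (* congruence closure over the uninterpreted function f *)
```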
* The 'coinductive' command produces a proper coinduction rule for
mutual coinductive predicates. This new rule replaces the old rule,
which exposed details of the internal fixpoint construction and was hard
to use. INCOMPATIBILITY.

* New abbreviations for negated existence (but not bounded existence):

  \<nexists>x. P x \<equiv> \<not> (\<exists>x. P x)
  \<nexists>!x. P x \<equiv> \<not> (\<exists>!x. P x)

* The print mode "HOL" for ASCII syntax of binders "!", "?", "?!", "@"
has been removed for output. It is retained for input only, until it is
eliminated altogether.

* The unique existence quantifier no longer provides 'binder' syntax,
but uses syntax translations (as for bounded unique existence). Thus
iterated quantification \<exists>!x y. P x y with its slightly confusing
sequential meaning \<exists>!x. \<exists>!y. P x y is no longer possible.
Instead, pattern abstraction admits simultaneous unique existence
\<exists>!(x, y). P x y (analogous to existing notation
\<exists>!(x, y)\<in>A. P x y). Potential INCOMPATIBILITY in rare
situations.

* Conventional syntax "%(). t" for unit abstractions. Slight syntactic
INCOMPATIBILITY.

* Renamed constants and corresponding theorems:

    setsum ~> sum
    setprod ~> prod
    listsum ~> sum_list
    listprod ~> prod_list

INCOMPATIBILITY.
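The new negated-existence abbreviation in use (a trivial invented
example):

```isabelle
lemma "\<nexists>x. x < (0::nat)"
  by simp  (* the goal unfolds to \<not> (\<exists>x. x < 0) *)
```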
* Slightly more standardized theorem names:

    sgn_times ~> sgn_mult
    sgn_mult' ~> Real_Vector_Spaces.sgn_mult
    divide_zero_left ~> div_0
    zero_mod_left ~> mod_0
    divide_zero ~> div_by_0
    divide_1 ~> div_by_1
    nonzero_mult_divide_cancel_left ~> nonzero_mult_div_cancel_left
    div_mult_self1_is_id ~> nonzero_mult_div_cancel_left
    nonzero_mult_divide_cancel_right ~> nonzero_mult_div_cancel_right
    div_mult_self2_is_id ~> nonzero_mult_div_cancel_right
    is_unit_divide_mult_cancel_left ~> is_unit_div_mult_cancel_left
    is_unit_divide_mult_cancel_right ~> is_unit_div_mult_cancel_right
    mod_div_equality ~> div_mult_mod_eq
    mod_div_equality2 ~> mult_div_mod_eq
    mod_div_equality3 ~> mod_div_mult_eq
    mod_div_equality4 ~> mod_mult_div_eq
    minus_div_eq_mod ~> minus_div_mult_eq_mod
    minus_div_eq_mod2 ~> minus_mult_div_eq_mod
    minus_mod_eq_div ~> minus_mod_eq_div_mult
    minus_mod_eq_div2 ~> minus_mod_eq_mult_div
    div_mod_equality' ~> minus_mod_eq_div_mult [symmetric]
    mod_div_equality' ~> minus_div_mult_eq_mod [symmetric]
    zmod_zdiv_equality ~> mult_div_mod_eq [symmetric]
    zmod_zdiv_equality' ~> minus_div_mult_eq_mod [symmetric]
    Divides.mult_div_cancel ~> minus_mod_eq_mult_div [symmetric]
    mult_div_cancel ~> minus_mod_eq_mult_div [symmetric]
    zmult_div_cancel ~> minus_mod_eq_mult_div [symmetric]
    div_1 ~> div_by_Suc_0
    mod_1 ~> mod_by_Suc_0

INCOMPATIBILITY.

* New type class "idom_abs_sgn" specifies algebraic properties of sign
and absolute value functions. Type class "sgn_if" has disappeared.
Slight INCOMPATIBILITY.

* Dedicated syntax LENGTH('a) for length of types.

* Characters (type char) are modelled as finite algebraic type
corresponding to {0..255}.

  - Logical representation:
    * 0 is instantiated to the ASCII zero character.
    * All other characters are represented as "Char n" with n being a
      raw numeral expression less than 256.
    * Expressions of the form "Char n" with n greater than 255 are
      non-canonical.
  - Printing and parsing:
    * Printable characters are printed and parsed as "CHR ''\''" (as
      before).
    * The ASCII zero character is printed and parsed as "0".
    * All other canonical characters are printed as "CHR 0xXX" with XX
      being the hexadecimal character code. "CHR n" is parsable for
      every numeral expression n.
    * Non-canonical characters have no special syntax and are printed as
      their logical representation.
  - Explicit conversions from and to the natural numbers are provided as
    char_of_nat, nat_of_char (as before).
  - The auxiliary nibble type has been discontinued.

INCOMPATIBILITY.

* Type class "div" with operation "mod" renamed to type class "modulo"
with operation "modulo", analogously to type class "divide". This
eliminates the need to qualify any of those names in the presence of
infix "mod" syntax. INCOMPATIBILITY.

* Statements and proofs of Knaster-Tarski fixpoint combinators lfp/gfp
have been clarified. The fixpoint properties are lfp_fixpoint, its
symmetric lfp_unfold (as before), and the duals for gfp. Auxiliary items
for the proof (lfp_lemma2 etc.) are no longer exported, but can be
easily recovered by composition with eq_refl. Minor INCOMPATIBILITY.

* Constant "surj" is a mere input abbreviation, to avoid hiding an
equation in term output. Minor INCOMPATIBILITY.

* Command 'code_reflect' accepts empty constructor lists for datatypes,
which renders those abstract effectively.

* Command 'export_code' checks given constants for abstraction
violations: a small guarantee that given constants specify a safe
interface for the generated code.

* Code generation for Scala: ambiguous implicits in class diagrams are
spelt out explicitly.

* Static evaluators (Code_Evaluation.static_* in Isabelle/ML) rely on
explicitly provided auxiliary definitions for required type class
dictionaries rather than half-working magic. INCOMPATIBILITY, see the
tutorial on code generation for details.

* Theory Set_Interval: substantial new theorems on indexed sums and
products.

* Locale bijection establishes convenient default simp rules such as
"inv f (f a) = a" for total bijections.
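A minimal 'export_code' sketch (the constant and module names are
invented for this example):

```isabelle
definition double :: "nat \<Rightarrow> nat"
  where "double n = n + n"

export_code double in SML module_name Double
(* the abstraction-violation check described above runs on "double" *)
```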
* Abstract locales semigroup, abel_semigroup, semilattice,
semilattice_neutr, ordering, ordering_top, semilattice_order,
semilattice_neutr_order, comm_monoid_set, semilattice_set,
semilattice_neutr_set, semilattice_order_set,
semilattice_order_neutr_set, monoid_list, comm_monoid_list,
comm_monoid_list_set, comm_monoid_mset, comm_monoid_fun use boldified
syntax uniformly that does not clash with corresponding global syntax.
INCOMPATIBILITY.

* Former locale lifting_syntax is now a bundle, which is easier to
include in a local context or theorem statement, e.g. "context includes
lifting_syntax begin ... end". Minor INCOMPATIBILITY.

* Some old / obsolete theorems have been renamed / removed, potential
INCOMPATIBILITY:

    nat_less_cases -- removed, use linorder_cases instead
    inv_image_comp -- removed, use image_inv_f_f instead
    image_surj_f_inv_f ~> image_f_inv_f

* Some theorems about groups and orders have been generalised from
groups to semi-groups that are also monoids:

    le_add_same_cancel1
    le_add_same_cancel2
    less_add_same_cancel1
    less_add_same_cancel2
    add_le_same_cancel1
    add_le_same_cancel2
    add_less_same_cancel1
    add_less_same_cancel2

* Some simplification theorems about rings have been removed, being
superseded by more general versions:

    less_add_cancel_left_greater_zero ~> less_add_same_cancel1
    less_add_cancel_right_greater_zero ~> less_add_same_cancel2
    less_eq_add_cancel_left_greater_eq_zero ~> le_add_same_cancel1
    less_eq_add_cancel_right_greater_eq_zero ~> le_add_same_cancel2
    less_eq_add_cancel_left_less_eq_zero ~> add_le_same_cancel1
    less_eq_add_cancel_right_less_eq_zero ~> add_le_same_cancel2
    less_add_cancel_left_less_zero ~> add_less_same_cancel1
    less_add_cancel_right_less_zero ~> add_less_same_cancel2

INCOMPATIBILITY.

* Renamed split_if -> if_split and split_if_asm -> if_split_asm, to
resemble the f.split naming convention. INCOMPATIBILITY.

* Added class topological_monoid.
* The following theorems have been renamed:

    setsum_left_distrib ~> sum_distrib_right
    setsum_right_distrib ~> sum_distrib_left

INCOMPATIBILITY.

* Compound constants INFIMUM and SUPREMUM are mere abbreviations now.
INCOMPATIBILITY.

* "Gcd (f ` A)" and "Lcm (f ` A)" are printed with optional
comprehension-like syntax analogously to "Inf (f ` A)" and
"Sup (f ` A)".

* Class semiring_Lcd merged into semiring_Gcd. INCOMPATIBILITY.

* The type class ordered_comm_monoid_add is now called
ordered_cancel_comm_monoid_add. A new type class ordered_comm_monoid_add
is introduced as the combination of ordered_ab_semigroup_add +
comm_monoid_add. INCOMPATIBILITY.

* Introduced the type classes canonically_ordered_comm_monoid_add and
dioid.

* Introduced the type class ordered_ab_semigroup_monoid_add_imp_le. When
instantiating linordered_semiring_strict and ordered_ab_group_add, an
explicit instantiation of ordered_ab_semigroup_monoid_add_imp_le might
be required. INCOMPATIBILITY.

* Dropped various legacy fact bindings, whose replacements are often of
a more general type also:

    lcm_left_commute_nat ~> lcm.left_commute
    lcm_left_commute_int ~> lcm.left_commute
    gcd_left_commute_nat ~> gcd.left_commute
    gcd_left_commute_int ~> gcd.left_commute
    gcd_greatest_iff_nat ~> gcd_greatest_iff
    gcd_greatest_iff_int ~> gcd_greatest_iff
    coprime_dvd_mult_nat ~> coprime_dvd_mult
    coprime_dvd_mult_int ~> coprime_dvd_mult
    zpower_numeral_even ~> power_numeral_even
    gcd_mult_cancel_nat ~> gcd_mult_cancel
    gcd_mult_cancel_int ~> gcd_mult_cancel
    div_gcd_coprime_nat ~> div_gcd_coprime
    div_gcd_coprime_int ~> div_gcd_coprime
    zpower_numeral_odd ~> power_numeral_odd
    zero_less_int_conv ~> of_nat_0_less_iff
    gcd_greatest_nat ~> gcd_greatest
    gcd_greatest_int ~> gcd_greatest
    coprime_mult_nat ~> coprime_mult
    coprime_mult_int ~> coprime_mult
    lcm_commute_nat ~> lcm.commute
    lcm_commute_int ~> lcm.commute
    int_less_0_conv ~> of_nat_less_0_iff
    gcd_commute_nat ~> gcd.commute
    gcd_commute_int ~> gcd.commute
    Gcd_insert_nat ~> Gcd_insert
    Gcd_insert_int ~> Gcd_insert
    of_int_int_eq ~> of_int_of_nat_eq
    lcm_least_nat ~> lcm_least
    lcm_least_int ~> lcm_least
    lcm_assoc_nat ~> lcm.assoc
    lcm_assoc_int ~> lcm.assoc
    int_le_0_conv ~> of_nat_le_0_iff
    int_eq_0_conv ~> of_nat_eq_0_iff
    Gcd_empty_nat ~> Gcd_empty
    Gcd_empty_int ~> Gcd_empty
    gcd_assoc_nat ~> gcd.assoc
    gcd_assoc_int ~> gcd.assoc
    zero_zle_int ~> of_nat_0_le_iff
    lcm_dvd2_nat ~> dvd_lcm2
    lcm_dvd2_int ~> dvd_lcm2
    lcm_dvd1_nat ~> dvd_lcm1
    lcm_dvd1_int ~> dvd_lcm1
    gcd_zero_nat ~> gcd_eq_0_iff
    gcd_zero_int ~> gcd_eq_0_iff
    gcd_dvd2_nat ~> gcd_dvd2
    gcd_dvd2_int ~> gcd_dvd2
    gcd_dvd1_nat ~> gcd_dvd1
    gcd_dvd1_int ~> gcd_dvd1
    int_numeral ~> of_nat_numeral
    lcm_ac_nat ~> ac_simps
    lcm_ac_int ~> ac_simps
    gcd_ac_nat ~> ac_simps
    gcd_ac_int ~> ac_simps
    abs_int_eq ~> abs_of_nat
    zless_int ~> of_nat_less_iff
    zdiff_int ~> of_nat_diff
    zadd_int ~> of_nat_add
    int_mult ~> of_nat_mult
    int_Suc ~> of_nat_Suc
    inj_int ~> inj_of_nat
    int_1 ~> of_nat_1
    int_0 ~> of_nat_0
    Lcm_empty_nat ~> Lcm_empty
    Lcm_empty_int ~> Lcm_empty
    Lcm_insert_nat ~> Lcm_insert
    Lcm_insert_int ~> Lcm_insert
    comp_fun_idem_gcd_nat ~> comp_fun_idem_gcd
    comp_fun_idem_gcd_int ~> comp_fun_idem_gcd
    comp_fun_idem_lcm_nat ~> comp_fun_idem_lcm
    comp_fun_idem_lcm_int ~> comp_fun_idem_lcm
    Lcm_eq_0 ~> Lcm_eq_0_I
    Lcm0_iff ~> Lcm_0_iff
    Lcm_dvd_int ~> Lcm_least
    divides_mult_nat ~> divides_mult
    divides_mult_int ~> divides_mult
    lcm_0_nat ~> lcm_0_right
    lcm_0_int ~> lcm_0_right
    lcm_0_left_nat ~> lcm_0_left
    lcm_0_left_int ~> lcm_0_left
    dvd_gcd_D1_nat ~> dvd_gcdD1
    dvd_gcd_D1_int ~> dvd_gcdD1
    dvd_gcd_D2_nat ~> dvd_gcdD2
    dvd_gcd_D2_int ~> dvd_gcdD2
    coprime_dvd_mult_iff_nat ~> coprime_dvd_mult_iff
    coprime_dvd_mult_iff_int ~> coprime_dvd_mult_iff
    realpow_minus_mult ~> power_minus_mult
    realpow_Suc_le_self ~> power_Suc_le_self
    dvd_Gcd, dvd_Gcd_nat, dvd_Gcd_int removed in favour of Gcd_greatest

INCOMPATIBILITY.

* Renamed HOL/Quotient_Examples/FSet.thy to
HOL/Quotient_Examples/Quotient_FSet.thy. INCOMPATIBILITY.
* Session HOL-Library: theory FinFun bundles "finfun_syntax" and
"no_finfun_syntax" allow to control optional syntax in local contexts;
this supersedes former theory FinFun_Syntax. INCOMPATIBILITY, e.g. use
"unbundle finfun_syntax" to imitate import of
"~~/src/HOL/Library/FinFun_Syntax".

* Session HOL-Library: theory Multiset_Permutations (executably) defines
the set of permutations of a given set or multiset, i.e. the set of all
lists that contain every element of the carrier (multi-)set exactly
once.

* Session HOL-Library: multiset membership is now expressed using
set_mset rather than count.

  - Expressions "count M a > 0" and similar simplify to membership by
    default.
  - Converting between "count M a = 0" and non-membership happens using
    equations count_eq_zero_iff and not_in_iff.
  - Rules count_inI and in_countE obtain facts of the form
    "count M a = n" from membership.
  - Rules count_in_diffI and in_diff_countE obtain facts of the form
    "count M a = n + count N a" from membership on difference sets.

INCOMPATIBILITY.

* Session HOL-Library: theory LaTeXsugar uses new-style "dummy_pats" for
displaying equations in functional programming style --- variables
present on the left-hand but not on the right-hand side are replaced by
underscores.

* Session HOL-Library: theory Combinator_PER provides combinator to
build partial equivalence relations from a predicate and an equivalence
relation.

* Session HOL-Library: theory Perm provides basic facts about almost
everywhere fix bijections.

* Session HOL-Library: theory Normalized_Fraction allows viewing an
element of a field of fractions as a normalized fraction (i.e. a pair of
numerator and denominator such that the two are coprime and the
denominator is normalized wrt. unit factors).

* Session HOL-NSA has been renamed to HOL-Nonstandard_Analysis.

* Session HOL-Multivariate_Analysis has been renamed to HOL-Analysis.

* Session HOL-Analysis: measure theory has been moved here from
HOL-Probability.
When importing HOL-Analysis some theorems need additional name space
prefixes due to name clashes. INCOMPATIBILITY.

* Session HOL-Analysis: more complex analysis including Cauchy's
inequality, Liouville theorem, open mapping theorem, maximum modulus
principle, Residue theorem, Schwarz Lemma.

* Session HOL-Analysis: Theory of polyhedra: faces, extreme points,
polytopes, and the Krein-Milman Minkowski theorem.

* Session HOL-Analysis: Numerous results ported from the HOL Light
libraries: homeomorphisms, continuous function extensions, invariance of
domain.

* Session HOL-Probability: the type of emeasure and nn_integral was
changed from ereal to ennreal, INCOMPATIBILITY.

  emeasure :: 'a measure \<Rightarrow> 'a set \<Rightarrow> ennreal
  nn_integral :: 'a measure \<Rightarrow> ('a \<Rightarrow> ennreal) \<Rightarrow> ennreal

* Session HOL-Probability: Code generation and QuickCheck for
Probability Mass Functions.

* Session HOL-Probability: theory Random_Permutations contains some
theory about choosing a permutation of a set uniformly at random and
folding over a list in random order.

* Session HOL-Probability: theory SPMF formalises discrete
subprobability distributions.
* Session HOL-Library: the names of multiset theorems have been
normalised to distinguish which ordering the theorems are about:

    mset_less_eqI ~> mset_subset_eqI
    mset_less_insertD ~> mset_subset_insertD
    mset_less_eq_count ~> mset_subset_eq_count
    mset_less_diff_self ~> mset_subset_diff_self
    mset_le_exists_conv ~> mset_subset_eq_exists_conv
    mset_le_mono_add_right_cancel ~> mset_subset_eq_mono_add_right_cancel
    mset_le_mono_add_left_cancel ~> mset_subset_eq_mono_add_left_cancel
    mset_le_mono_add ~> mset_subset_eq_mono_add
    mset_le_add_left ~> mset_subset_eq_add_left
    mset_le_add_right ~> mset_subset_eq_add_right
    mset_le_single ~> mset_subset_eq_single
    mset_le_multiset_union_diff_commute ~> mset_subset_eq_multiset_union_diff_commute
    diff_le_self ~> diff_subset_eq_self
    mset_leD ~> mset_subset_eqD
    mset_lessD ~> mset_subsetD
    mset_le_insertD ~> mset_subset_eq_insertD
    mset_less_of_empty ~> mset_subset_of_empty
    mset_less_size ~> mset_subset_size
    wf_less_mset_rel ~> wf_subset_mset_rel
    count_le_replicate_mset_le ~> count_le_replicate_mset_subset_eq
    mset_remdups_le ~> mset_remdups_subset_eq
    ms_lesseq_impl ~> subset_eq_mset_impl

Some functions have been renamed:

    ms_lesseq_impl -> subset_eq_mset_impl

* HOL-Library: multisets are now ordered with the multiset ordering:

    #\<subseteq># ~> \<le>
    #\<subset># ~> <
    le_multiset ~> less_eq_multiset
    less_multiset ~> le_multiset

INCOMPATIBILITY.

* Session HOL-Library: the prefix multiset_order has been discontinued:
the theorems can be directly accessed. As a consequence, the lemmas
"order_multiset" and "linorder_multiset" have been discontinued, and the
interpretations "multiset_linorder" and "multiset_wellorder" have been
replaced by instantiations. INCOMPATIBILITY.
* Session HOL-Library: some theorems about the multiset ordering have
been renamed:

    le_multiset_def ~> less_eq_multiset_def
    less_multiset_def ~> le_multiset_def
    less_eq_imp_le_multiset ~> subset_eq_imp_le_multiset
    mult_less_not_refl ~> mset_le_not_refl
    mult_less_trans ~> mset_le_trans
    mult_less_not_sym ~> mset_le_not_sym
    mult_less_asym ~> mset_le_asym
    mult_less_irrefl ~> mset_le_irrefl
    union_less_mono2{,1,2} ~> union_le_mono2{,1,2}
    le_multiset\<^sub>H\<^sub>O ~> less_eq_multiset\<^sub>H\<^sub>O
    le_multiset_total ~> less_eq_multiset_total
    less_multiset_right_total ~> subset_eq_imp_le_multiset
    le_multiset_empty_left ~> less_eq_multiset_empty_left
    le_multiset_empty_right ~> less_eq_multiset_empty_right
    less_multiset_empty_right ~> le_multiset_empty_left
    less_multiset_empty_left ~> le_multiset_empty_right
    union_less_diff_plus ~> union_le_diff_plus
    ex_gt_count_imp_less_multiset ~> ex_gt_count_imp_le_multiset
    less_multiset_plus_left_nonempty ~> le_multiset_plus_left_nonempty
    le_multiset_plus_right_nonempty ~> le_multiset_plus_right_nonempty

INCOMPATIBILITY.

* Session HOL-Library: the lemma mset_map has now the attribute [simp].
INCOMPATIBILITY.

* Session HOL-Library: some theorems about multisets have been removed.
INCOMPATIBILITY, use the following replacements:

    le_multiset_plus_plus_left_iff ~> add_less_cancel_right
    less_multiset_plus_plus_left_iff ~> add_less_cancel_right
    le_multiset_plus_plus_right_iff ~> add_less_cancel_left
    less_multiset_plus_plus_right_iff ~> add_less_cancel_left
    add_eq_self_empty_iff ~> add_cancel_left_right
    mset_subset_add_bothsides ~> subset_mset.add_less_cancel_right
    mset_less_add_bothsides ~> subset_mset.add_less_cancel_right
    mset_le_add_bothsides ~> subset_mset.add_less_cancel_right
    empty_inter ~> subset_mset.inf_bot_left
    inter_empty ~> subset_mset.inf_bot_right
    empty_sup ~> subset_mset.sup_bot_left
    sup_empty ~> subset_mset.sup_bot_right
    bdd_below_multiset ~> subset_mset.bdd_above_bot
    subset_eq_empty ~> subset_mset.le_zero_eq
    le_empty ~> subset_mset.le_zero_eq
    mset_subset_empty_nonempty ~> subset_mset.zero_less_iff_neq_zero
    mset_less_empty_nonempty ~> subset_mset.zero_less_iff_neq_zero

* Session HOL-Library: some typeclass constraints about multisets have
been reduced from ordered or linordered to preorder. Multisets have the
additional typeclasses order_bot, no_top,
ordered_ab_semigroup_add_imp_le, ordered_cancel_comm_monoid_add,
linordered_cancel_ab_semigroup_add, and
ordered_ab_semigroup_monoid_add_imp_le. INCOMPATIBILITY.

* Session HOL-Library: there are some new simplification rules about
multisets, the multiset ordering, and the subset ordering on multisets.
INCOMPATIBILITY.

* Session HOL-Library: the subset ordering on multisets has now the
interpretations ordered_ab_semigroup_monoid_add_imp_le and
bounded_lattice_bot. INCOMPATIBILITY.

* Session HOL-Library, theory Multiset: single has been removed in favor
of add_mset that roughly corresponds to Set.insert. Some theorems have
been removed or changed:

    single_not_empty ~> add_mset_not_empty or empty_not_add_mset
    fold_mset_insert ~> fold_mset_add_mset
    image_mset_insert ~> image_mset_add_mset
    union_single_eq_diff
    multi_self_add_other_not_self
    diff_single_eq_union

INCOMPATIBILITY.
* Session HOL-Library, theory Multiset: some theorems have been changed
to use add_mset instead of single:

    mset_add
    multi_self_add_other_not_self
    diff_single_eq_union
    union_single_eq_diff
    union_single_eq_member
    add_eq_conv_diff
    insert_noteq_member
    add_eq_conv_ex
    multi_member_split
    multiset_add_sub_el_shuffle
    mset_subset_eq_insertD
    mset_subset_insertD
    insert_subset_eq_iff
    insert_union_subset_iff
    multi_psub_of_add_self
    inter_add_left1
    inter_add_left2
    inter_add_right1
    inter_add_right2
    sup_union_left1
    sup_union_left2
    sup_union_right1
    sup_union_right2
    size_eq_Suc_imp_eq_union
    multi_nonempty_split
    mset_insort
    mset_update
    mult1I
    less_add
    mset_zip_take_Cons_drop_twice
    rel_mset_Zero
    msed_map_invL
    msed_map_invR
    msed_rel_invL
    msed_rel_invR
    le_multiset_right_total
    multiset_induct
    multiset_induct2_size
    multiset_induct2

INCOMPATIBILITY.

* Session HOL-Library, theory Multiset: the definitions of some
constants have changed to use add_mset instead of adding a single
element:

    image_mset
    mset
    replicate_mset
    mult1
    pred_mset
    rel_mset'
    mset_insort

INCOMPATIBILITY.

* Session HOL-Library, theory Multiset: due to the above changes, the
attributes of some multiset theorems have been changed:

    insert_DiffM [] ~> [simp]
    insert_DiffM2 [simp] ~> []
    diff_add_mset_swap [simp]
    fold_mset_add_mset [simp]
    diff_diff_add [simp] (for multisets only)
    diff_cancel [simp] ~> []
    count_single [simp] ~> []
    set_mset_single [simp] ~> []
    size_multiset_single [simp] ~> []
    size_single [simp] ~> []
    image_mset_single [simp] ~> []
    mset_subset_eq_mono_add_right_cancel [simp] ~> []
    mset_subset_eq_mono_add_left_cancel [simp] ~> []
    fold_mset_single [simp] ~> []
    subset_eq_empty [simp] ~> []
    empty_sup [simp] ~> []
    sup_empty [simp] ~> []
    inter_empty [simp] ~> []
    empty_inter [simp] ~> []

INCOMPATIBILITY.

* Session HOL-Library, theory Multiset: the order of the variables in
the second cases of multiset_induct, multiset_induct2_size,
multiset_induct2 has been changed (e.g. Add A a ~> Add a A).
INCOMPATIBILITY.
* Session HOL-Library, theory Multiset: there is now a simplification
procedure on multisets. It mimics the behavior of the procedure on
natural numbers. INCOMPATIBILITY.

* Session HOL-Library, theory Multiset: renamed sums and products of
multisets:

    msetsum ~> sum_mset
    msetprod ~> prod_mset

* Session HOL-Library, theory Multiset: the notation for intersection
and union of multisets has been changed:

    #\<inter> ~> \<inter>#
    #\<union> ~> \<union>#

INCOMPATIBILITY.

* Session HOL-Library, theory Multiset: the lemma
one_step_implies_mult_aux on multisets has been removed, use
one_step_implies_mult instead. INCOMPATIBILITY.

* Session HOL-Library: theory Complete_Partial_Order2 provides reasoning
support for monotonicity and continuity in chain-complete partial orders
and about admissibility conditions for fixpoint inductions.

* Session HOL-Library: theory Library/Polynomial contains also
derivation of polynomials (formerly in Library/Poly_Deriv) but not
gcd/lcm on polynomials over fields. This has been moved to a separate
theory Library/Polynomial_GCD_euclidean.thy, to pave the way for a
possible future different type class instantiation for polynomials over
factorial rings. INCOMPATIBILITY.

* Session HOL-Library: theory Sublist provides function "prefixes" with
the following renaming:

    prefixeq -> prefix
    prefix -> strict_prefix
    suffixeq -> suffix
    suffix -> strict_suffix

Added theory of longest common prefixes.

* Session HOL-Number_Theory: algebraic foundation for primes:
generalisation of predicate "prime" and introduction of predicates
"prime_elem", "irreducible", a "prime_factorization" function, and the
"factorial_ring" typeclass with instance proofs for nat, int, poly. Some
theorems now have different names, most notably "prime_def" is now
"prime_nat_iff". INCOMPATIBILITY.

* Session Old_Number_Theory has been removed, after porting remaining
theories.

* Session HOL-Types_To_Sets provides an experimental extension of
Higher-Order Logic to allow translation of types to sets.
*** ML ***

* Integer.gcd and Integer.lcm use efficient operations from the Poly/ML
library (notably for big integers). Subtle change of semantics:
Integer.gcd and Integer.lcm both normalize the sign, results are never
negative. This coincides with the definitions in HOL/GCD.thy.
INCOMPATIBILITY.

* Structure Rat for rational numbers is now an integral part of
Isabelle/ML, with special notation @int/nat or @int for numerals (an
abbreviation for antiquotation @{Pure.rat argument}) and ML pretty
printing. Standard operations on type Rat.rat are provided via ad-hoc
overloading of + - * / < <= > >= ~ abs. INCOMPATIBILITY, need to use +
instead of +/ etc. Moreover, exception Rat.DIVZERO has been superseded
by General.Div.

* ML antiquotation @{path} is superseded by @{file}, which ensures that
the argument is a plain file. Minor INCOMPATIBILITY.

* Antiquotation @{make_string} is available during Pure bootstrap --
with approximative output quality.

* Low-level ML system structures (like PolyML and RunCall) are no longer
exposed to Isabelle/ML user-space. Potential INCOMPATIBILITY.

* The ML function "ML" provides easy access to run-time compilation.
This is particularly useful for conditional compilation, without
requiring separate files.

* Option ML_exception_debugger controls detailed exception trace via the
Poly/ML debugger. Relevant ML modules need to be compiled beforehand
with ML_file_debug, or with ML_file and option ML_debugger enabled. Note
that debugger information requires considerable time and space: main
Isabelle/HOL with full debugger support may need ML_system_64.

* Local_Theory.restore has been renamed to Local_Theory.reset to
emphasize its disruptive impact on the cumulative context, notably the
scope of 'private' or 'qualified' names. Note that Local_Theory.reset is
only appropriate when targets are managed, e.g. starting from a global
theory and returning to it.
Regular definitional packages should use balanced blocks of
Local_Theory.open_target versus Local_Theory.close_target instead. Rare
INCOMPATIBILITY.

* Structure TimeLimit (originally from the SML/NJ library) has been
replaced by structure Timeout, with slightly different signature.
INCOMPATIBILITY.

* Discontinued cd and pwd operations, which are not well-defined in a
multi-threaded environment. Note that files are usually located relative
to the master directory of a theory (see also File.full_path). Potential
INCOMPATIBILITY.

* Binding.empty_atts supersedes Thm.empty_binding and
Attrib.empty_binding. Minor INCOMPATIBILITY.


*** System ***

* SML/NJ and old versions of Poly/ML are no longer supported.

* Poly/ML heaps now follow the hierarchy of sessions, and thus require
much less disk space.

* The Isabelle ML process is now managed directly by Isabelle/Scala, and
shell scripts merely provide optional command-line access. In
particular:

  . Scala module ML_Process to connect to the raw ML process, with
    interaction via stdin/stdout/stderr or in batch mode;
  . command-line tool "isabelle console" as interactive wrapper;
  . command-line tool "isabelle process" as batch mode wrapper.

* The executable "isabelle_process" has been discontinued. Tools and
prover front-ends should use ML_Process or Isabelle_Process in
Isabelle/Scala. INCOMPATIBILITY.

* New command-line tool "isabelle process" supports ML evaluation of
literal expressions (option -e) or files (option -f) in the context of a
given heap image. Errors lead to premature exit of the ML process with
return code 1.

* The command-line tool "isabelle build" supports option -N for cyclic
shuffling of NUMA CPU nodes. This may help performance tuning on Linux
servers with separate CPU/memory modules.

* System option "threads" (for the size of the Isabelle/ML thread farm)
is also passed to the underlying ML runtime system as --gcthreads,
unless there is already a default provided via ML_OPTIONS settings.
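For example, the batch-mode wrapper can evaluate a literal ML expression
via option -e as described above (this sketch uses an invented
expression and assumes a working Isabelle installation on the PATH):

```
isabelle process -e "writeln (string_of_int (2 + 2))"
```

Errors in the given expression terminate the ML process with return
code 1, which makes the tool suitable for scripted use.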
* System option "checkpoint" helps to fine-tune the global heap space
management of isabelle build. This is relevant for big sessions that may
exhaust the small 32-bit address space of the ML process (which is used
by default).

* System option "profiling" specifies the mode for global ML profiling
in "isabelle build". Possible values are "time", "allocations". The
command-line tool "isabelle profiling_report" helps to digest the
resulting log files.

* System option "ML_process_policy" specifies an optional command prefix
for the underlying ML process, e.g. to control CPU affinity on
multiprocessor systems. The "isabelle jedit" tool allows to override the
implicit default via option -p.

* Command-line tool "isabelle console" provides option -r to help
bootstrapping Isabelle/Pure interactively.

* Command-line tool "isabelle yxml" has been discontinued.
INCOMPATIBILITY, use operations from the modules "XML" and "YXML" in
Isabelle/ML or Isabelle/Scala.

* Many Isabelle tools that require a Java runtime system refer to the
settings ISABELLE_TOOL_JAVA_OPTIONS32 / ISABELLE_TOOL_JAVA_OPTIONS64,
depending on the underlying platform. The settings for "isabelle build"
ISABELLE_BUILD_JAVA_OPTIONS32 / ISABELLE_BUILD_JAVA_OPTIONS64 have been
discontinued. Potential INCOMPATIBILITY.

* The Isabelle system environment always ensures that the main
executables are found within the shell search $PATH: "isabelle" and
"isabelle_scala_script".

* Isabelle tools may consist of .scala files: the Scala compiler is
invoked on the spot. The source needs to define some object that extends
Isabelle_Tool.Body.

* File.bash_string, File.bash_path etc. represent Isabelle/ML and
Isabelle/Scala strings authentically within GNU bash. This is useful to
produce robust shell scripts under program control, without worrying
about spaces or special characters. Note that user output works via
Path.print (ML) or Path.toString (Scala).
INCOMPATIBILITY, the old (and less versatile) operations File.shell_quote, File.shell_path etc. have been discontinued.

* The isabelle_java executable allows to run a Java process within the name space of Java and Scala components that are bundled with Isabelle, but without the Isabelle settings environment.

* Isabelle/Scala: the SSH module supports ssh and sftp connections, for remote command-execution and file-system access. This resembles operations from module File and Isabelle_System to some extent. Note that Path specifications need to be resolved remotely via ssh.remote_path instead of File.standard_path: the implicit process environment is different, and Isabelle settings are not available remotely.

* Isabelle/Scala: the Mercurial module supports repositories via the regular hg command-line interface. The repository clone and working directory may reside on a local or remote file-system (via ssh connection).


New in Isabelle2016 (February 2016)
-----------------------------------

*** General ***

* Eisbach is now based on Pure instead of HOL. Object-logics may import either the theory ~~/src/HOL/Eisbach/Eisbach (for HOL etc.) or ~~/src/HOL/Eisbach/Eisbach_Old_Appl_Syntax (for FOL, ZF etc.). Note that the HOL-Eisbach session located in ~~/src/HOL/Eisbach/ contains further examples that do require HOL.

* Better resource usage on all platforms (Linux, Windows, Mac OS X) for both Isabelle/ML and Isabelle/Scala. Slightly reduced heap space usage.

* Former "xsymbols" syntax with Isabelle symbols is used by default, without any special print mode. Important ASCII replacement syntax remains available under print mode "ASCII", but less important syntax has been removed (see below).

* Support for more arrow symbols, with rendering in LaTeX and Isabelle fonts: \ \ \ \ \ \.

* Special notation \ for the first implicit 'structure' in the context has been discontinued.
Rare INCOMPATIBILITY, use explicit structure name instead, notably in indexed notation with block-subscript (e.g. \\<^bsub>A\<^esub>).

* The glyph for \ in the IsabelleText font now corresponds better to its counterpart \ as quantifier-like symbol. A small diamond is available as \; the old symbol \ loses this rendering and any special meaning.

* Syntax for formal comments "-- text" now also supports the symbolic form "\ text". Command-line tool "isabelle update_cartouches -c" helps to update old sources.

* Toplevel theorem statements have been simplified as follows:

  theorems             ~>  lemmas
  schematic_lemma      ~>  schematic_goal
  schematic_theorem    ~>  schematic_goal
  schematic_corollary  ~>  schematic_goal

Command-line tool "isabelle update_theorems" updates theory sources accordingly.

* Toplevel theorem statement 'proposition' is another alias for 'theorem'.

* The old 'defs' command has been removed (legacy since Isabelle2014). INCOMPATIBILITY, use regular 'definition' instead. Overloaded and/or deferred definitions require a surrounding 'overloading' block.


*** Prover IDE -- Isabelle/Scala/jEdit ***

* IDE support for the source-level debugger of Poly/ML, to work with Isabelle/ML and official Standard ML. Option "ML_debugger" and commands 'ML_file_debug', 'ML_file_no_debug', 'SML_file_debug', 'SML_file_no_debug' control compilation of sources with or without debugging information. The Debugger panel allows to set breakpoints (via context menu), step through stopped threads, evaluate local ML expressions etc. At least one Debugger view needs to be active to have any effect on the running ML program.

* The State panel manages explicit proof state output, with dynamic auto-update according to cursor movement. Alternatively, the jEdit action "isabelle.update-state" (shortcut S+ENTER) triggers manual update.

* The Output panel no longer shows proof state output by default, to avoid GUI overcrowding. INCOMPATIBILITY, use the State panel instead or enable option "editor_output_state".
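The renamed statement commands above can be used as in this sketch; the goal is a hypothetical example:

```isabelle
(* formerly 'schematic_lemma'; the unknown ?x is instantiated by the proof *)
schematic_goal "?x = (0::nat)"
  by (rule refl)
```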
* The text overview column (status of errors, warnings etc.) is updated asynchronously, leading to much better editor reactivity. Moreover, the full document node content is taken into account. The width of the column is scaled according to the main text area font, for improved visibility.

* The main text area no longer changes its color hue in outdated situations. The text overview column takes over the role to indicate unfinished edits in the PIDE pipeline. This avoids flashing text display due to ad-hoc updates by auxiliary GUI components, such as the State panel.

* Slightly improved scheduling for urgent print tasks (e.g. command state output, interactive queries) wrt. long-running background tasks.

* Completion of symbols via prefix of \ or \<^name> or \name is always possible, independently of the language context. It is never implicit: a popup will show up unconditionally.

* Additional abbreviations for syntactic completion may be specified in $ISABELLE_HOME/etc/abbrevs and $ISABELLE_HOME_USER/etc/abbrevs, with support for simple templates using ASCII 007 (bell) as placeholder.

* Symbols \, \, \, \, \, \, \, \ no longer provide abbreviations for completion like "+o", "*o", ".o" etc. -- due to conflicts with other ASCII syntax. INCOMPATIBILITY, use plain backslash-completion or define suitable abbreviations in $ISABELLE_HOME_USER/etc/abbrevs.

* Action "isabelle-emph" (with keyboard shortcut C+e LEFT) controls emphasized text style; the effect is visible in document output, not in the editor.

* Action "isabelle-reset" now uses keyboard shortcut C+e BACK_SPACE, instead of former C+e LEFT.

* The command-line tool "isabelle jedit" and the isabelle.Main application wrapper treat the default $USER_HOME/Scratch.thy more uniformly, and allow the dummy file argument ":" to open an empty buffer instead.

* New command-line tool "isabelle jedit_client" allows to connect to an already running Isabelle/jEdit process.
This achieves the effect of single-instance applications seen on common GUI desktops.

* The default look-and-feel for Linux is the traditional "Metal", which works better with GUI scaling for very high-resolution displays (e.g. 4K). Moreover, it is generally more robust than "Nimbus".

* Update to jedit-5.3.0, with improved GUI scaling and support of high-resolution displays (e.g. 4K).

* The main Isabelle executable is managed as single-instance Desktop application uniformly on all platforms: Linux, Windows, Mac OS X.


*** Document preparation ***

* Commands 'paragraph' and 'subparagraph' provide additional section headings. Thus there are 6 levels of standard headings, as in HTML.

* Command 'text_raw' has been clarified: input text is processed as in 'text' (with antiquotations and control symbols). The key difference is the lack of the surrounding isabelle markup environment in output.

* Text is structured in paragraphs and nested lists, using notation that is similar to Markdown. The control symbols for list items are as follows:

  \<^item>   itemize
  \<^enum>   enumerate
  \<^descr>  description

* There is a new short form for antiquotations with a single argument that is a cartouche: \<^name>\...\ is equivalent to @{name \...\} and \...\ without control symbol is equivalent to @{cartouche \...\}. \<^name> without following cartouche is equivalent to @{name}. The standard Isabelle fonts provide glyphs to render important control symbols, e.g. "\<^verbatim>", "\<^emph>", "\<^bold>".

* Antiquotations @{noindent}, @{smallskip}, @{medskip}, @{bigskip} with corresponding control symbols \<^noindent>, \<^smallskip>, \<^medskip>, \<^bigskip> specify spacing formally, using standard LaTeX macros of the same names.

* Antiquotation @{cartouche} in Isabelle/Pure is the same as @{text}. Consequently, \...\ without any decoration prints literal quasi-formal text.
Command-line tool "isabelle update_cartouches -t" helps to update old sources, by approximative patching of the content of string and cartouche tokens seen in theory sources.

* The @{text} antiquotation now ignores the antiquotation option "source". The given text content is output unconditionally, without any surrounding quotes etc. Subtle INCOMPATIBILITY, put quotes into the argument where they are really intended, e.g. @{text \"foo"\}. Initial or terminal spaces are ignored.

* Antiquotations @{emph} and @{bold} output LaTeX source recursively, adding appropriate text style markup. These may be used in the short form \<^emph>\...\ and \<^bold>\...\.

* Document antiquotation @{footnote} outputs LaTeX source recursively, marked as \footnote{}. This may be used in the short form \<^footnote>\...\.

* Antiquotation @{verbatim [display]} supports option "indent".

* Antiquotation @{theory_text} prints uninterpreted theory source text (Isar outer syntax with command keywords etc.). This may be used in the short form \<^theory_text>\...\. @{theory_text [display]} supports option "indent".

* Antiquotation @{doc ENTRY} provides a reference to the given documentation, with a hyperlink in the Prover IDE.

* Antiquotations @{command}, @{method}, @{attribute} print checked entities of the Isar language.

* HTML presentation uses the standard IsabelleText font and Unicode rendering of Isabelle symbols like Isabelle/Scala/jEdit. The former print mode "HTML" loses its special meaning.


*** Isar ***

* Local goals ('have', 'show', 'hence', 'thus') allow structured rule statements like fixes/assumes/shows in theorem specifications, but the notation is postfix with keywords 'if' (or 'when') and 'for'. For example:

  have result: "C x y"
    if "A x" and "B y"
    for x :: 'a and y :: 'a

The local assumptions are bound to the name "that". The result is exported from the context of the statement as usual.
The above roughly corresponds to a raw proof block like this:

  {
    fix x :: 'a and y :: 'a
    assume that: "A x" "B y"
    have "C x y"
  }
  note result = this

The keyword 'when' may be used instead of 'if', to indicate 'presume' instead of 'assume' above.

* Assumptions ('assume', 'presume') allow structured rule statements using 'if' and 'for', similar to 'have' etc. above. For example:

  assume result: "C x y"
    if "A x" and "B y"
    for x :: 'a and y :: 'a

This assumes "\x y::'a. A x \ B y \ C x y" and produces a general result as usual: "A ?x \ B ?y \ C ?x ?y".

Vacuous quantification in assumptions is omitted, i.e. a for-context only affects propositions according to actual use of variables. For example:

  assume "A x" and "B y" for x and y

is equivalent to:

  assume "\x. A x" and "\y. B y"

* The meaning of 'show' with Pure rule statements has changed: premises are treated in the sense of 'assume', instead of 'presume'. This means, a goal like "\x. A x \ B x \ C x" can be solved completely as follows:

  show "\x. A x \ B x \ C x"

or:

  show "C x" if "A x" "B x" for x

Rare INCOMPATIBILITY, the old behaviour may be recovered as follows:

  show "C x" when "A x" "B x" for x

* New command 'consider' states rules for generalized elimination and case splitting. This is like a toplevel statement "theorem obtains" used within a proof body; or like a multi-branch 'obtain' without activation of the local context elements yet.

* Proof method "cases" allows to specify the rule as first entry of chained facts. This is particularly useful with 'consider':

  consider (a) A | (b) B | (c) C
  then have something
  proof cases
    case a
    then show ?thesis
  next
    case b
    then show ?thesis
  next
    case c
    then show ?thesis
  qed

* Command 'case' allows fact name and attribute specification like this:

  case a: (c xs)
  case a [attributes]: (c xs)

Facts that are introduced by invoking the case context are uniformly qualified by "a"; the same name is used for the cumulative fact.
The old form "case (c xs) [attributes]" is no longer supported. Rare INCOMPATIBILITY, need to adapt uses of case facts in exotic situations, and always put attributes in front.

* The standard proof method of commands 'proof' and '..' is now called "standard" to make semantically clear what it is; the old name "default" is still available as legacy for some time. Documentation now explains '..' more accurately as "by standard" instead of "by rule".

* Nesting of Isar goal structure has been clarified: the context after the initial backwards refinement is retained for the whole proof, within all its context sections (as indicated via 'next'). This is e.g. relevant for 'using', 'including', 'supply':

  have "A \ A" if a: A for A
    supply [simp] = a
  proof
    show A by simp
  next
    show A by simp
  qed

* Command 'obtain' binds term abbreviations (via 'is' patterns) in the proof body as well, abstracted over relevant parameters.

* Improved type-inference for theorem statement 'obtains': separate parameter scope for each clause.

* Term abbreviations via 'is' patterns also work for schematic statements: the result is abstracted over unknowns.

* Command 'subgoal' allows to impose some structure on backward refinements, to avoid proof scripts degenerating into long 'apply' sequences. Further explanations and examples are given in the isar-ref manual.

* Command 'supply' supports fact definitions during goal refinement ('apply' scripts).

* Proof method "goal_cases" turns the current subgoals into cases within the context; the conclusion is bound to variable ?case in each case. For example:

  lemma "\x. A x \ B x \ C x"
    and "\y z. U y \ V z \ W y z"
  proof goal_cases
    case (1 x)
    then show ?case using \A x\ \B x\ sorry
  next
    case (2 y z)
    then show ?case using \U y\ \V z\ sorry
  qed

  lemma "\x. A x \ B x \ C x"
    and "\y z. U y \ V z \ W y z"
  proof goal_cases
    case prems: 1
    then show ?case using prems sorry
  next
    case prems: 2
    then show ?case using prems sorry
  qed

* The undocumented feature of implicit cases goal1, goal2, goal3, etc. is marked as legacy, and will be removed eventually. The proof method "goals" achieves a similar effect within regular Isar; often it can be done more adequately by other means (e.g. 'consider').

* The vacuous fact "TERM x" may be established "by fact" or as `TERM x` as well, not just "by this" or "." as before.

* Method "sleep" succeeds after a real-time delay (in seconds). This is occasionally useful for demonstration and testing purposes.


*** Pure ***

* Qualifiers in locale expressions default to mandatory ('!') regardless of the command. Previously, for 'locale' and 'sublocale' the default was optional ('?'). The old syntax '!' has been discontinued. INCOMPATIBILITY, remove '!' and add '?' as required.

* Keyword 'rewrites' identifies rewrite morphisms in interpretation commands. Previously, the keyword was 'where'. INCOMPATIBILITY.

* More gentle suppression of syntax along locale morphisms while printing terms. Previously 'abbreviation' and 'notation' declarations would be suppressed for morphisms except term identity. Now 'abbreviation' is also kept for morphisms that only change the involved parameters, and only 'notation' is suppressed. This can be of great help when working with complex locale hierarchies, because proof states are displayed much more succinctly. It also means that only notation needs to be redeclared if desired, as illustrated by this example:

  locale struct = fixes composition :: "'a => 'a => 'a"  (infixl "\" 65)
  begin

  definition derived (infixl "\" 65) where ...
  end

  locale morphism =
    left: struct composition + right: struct composition'
    for composition (infix "\" 65)
    and composition' (infix "\''" 65)
  begin

  notation right.derived ("\''")

  end

* Command 'global_interpretation' issues interpretations into global theories, with optional rewrite definitions following keyword 'defines'.

* Command 'sublocale' accepts optional rewrite definitions after keyword 'defines'.

* Command 'permanent_interpretation' has been discontinued. Use 'global_interpretation' or 'sublocale' instead. INCOMPATIBILITY.

* Command 'print_definitions' prints dependencies of definitional specifications. This functionality used to be part of 'print_theory'.

* Configuration option rule_insts_schematic has been discontinued (intermediate legacy feature in Isabelle2015). INCOMPATIBILITY.

* Abbreviations in type classes now carry proper sort constraint. Rare INCOMPATIBILITY in situations where the previous misbehaviour has been exploited.

* Refinement of user-space type system in type classes: pseudo-local operations behave more similar to abbreviations. Potential INCOMPATIBILITY in exotic situations.


*** HOL ***

* The 'typedef' command has been upgraded from a partially checked "axiomatization", to a full definitional specification that takes the global collection of overloaded constant / type definitions into account. Type definitions with open dependencies on overloaded definitions need to be specified as "typedef (overloaded)". This provides extra robustness in theory construction. Rare INCOMPATIBILITY.

* Qualification of various formal entities in the libraries is done more uniformly via "context begin qualified definition ... end" instead of old-style "hide_const (open) ...". Consequently, both the defined constant and its defining fact become qualified, e.g. Option.is_none and Option.is_none_def. Occasional INCOMPATIBILITY in applications.

* Some old and rarely used ASCII replacement syntax has been removed.
INCOMPATIBILITY, standard syntax with symbols should be used instead. The subsequent commands help to reproduce the old forms, e.g. to simplify porting old theories:

  notation iff  (infixr "<->" 25)
  notation Times  (infixr "<*>" 80)
  type_notation Map.map  (infixr "~=>" 0)
  notation Map.map_comp  (infixl "o'_m" 55)
  type_notation FinFun.finfun  ("(_ =>f /_)" [22, 21] 21)
  notation FuncSet.funcset  (infixr "->" 60)
  notation FuncSet.extensional_funcset  (infixr "->\<^sub>E" 60)
  notation Omega_Words_Fun.conc  (infixr "conc" 65)
  notation Preorder.equiv  ("op ~~")
    and Preorder.equiv  ("(_/ ~~ _)" [51, 51] 50)
  notation (in topological_space) tendsto  (infixr "--->" 55)
  notation (in topological_space) LIMSEQ  ("((_)/ ----> (_))" [60, 60] 60)
  notation LIM  ("((_)/ -- (_)/ --> (_))" [60, 0, 60] 60)
  notation NSA.approx  (infixl "@=" 50)
  notation NSLIMSEQ  ("((_)/ ----NS> (_))" [60, 60] 60)
  notation NSLIM  ("((_)/ -- (_)/ --NS> (_))" [60, 0, 60] 60)

* The alternative notation "\" for type and sort constraints has been removed: in LaTeX document output it looks the same as "::". INCOMPATIBILITY, use plain "::" instead.

* Commands 'inductive' and 'inductive_set' work better when names for intro rules are omitted: the "cases" and "induct" rules no longer declare empty case_names, but no case_names at all. This allows to use numbered cases in proofs, without requiring method "goal_cases".

* Inductive definitions ('inductive', 'coinductive', etc.) expose low-level facts of the internal construction only if the option "inductive_internals" is enabled. This refers to the internal predicate definition and its monotonicity result. Rare INCOMPATIBILITY.

* Recursive function definitions ('fun', 'function', 'partial_function') expose low-level facts of the internal construction only if the option "function_internals" is enabled. Its internal inductive definition is also subject to "inductive_internals". Rare INCOMPATIBILITY.

* BNF datatypes ('datatype', 'codatatype', etc.)
expose low-level facts of the internal construction only if the option "bnf_internals" is enabled. This supersedes the former option "bnf_note_all". Rare INCOMPATIBILITY.

* Combinator to represent case distinction on products is named "case_prod", uniformly, discontinuing any input aliases. Very popular theorem aliases have been retained.

Consolidated facts:
  PairE ~> prod.exhaust
  Pair_eq ~> prod.inject
  pair_collapse ~> prod.collapse
  Pair_fst_snd_eq ~> prod_eq_iff
  split_twice ~> prod.case_distrib
  split_weak_cong ~> prod.case_cong_weak
  split_split ~> prod.split
  split_split_asm ~> prod.split_asm
  splitI ~> case_prodI
  splitD ~> case_prodD
  splitI2 ~> case_prodI2
  splitI2' ~> case_prodI2'
  splitE ~> case_prodE
  splitE' ~> case_prodE'
  split_pair ~> case_prod_Pair
  split_eta ~> case_prod_eta
  split_comp ~> case_prod_comp
  mem_splitI ~> mem_case_prodI
  mem_splitI2 ~> mem_case_prodI2
  mem_splitE ~> mem_case_prodE
  The_split ~> The_case_prod
  cond_split_eta ~> cond_case_prod_eta
  Collect_split_in_rel_leE ~> Collect_case_prod_in_rel_leE
  Collect_split_in_rel_leI ~> Collect_case_prod_in_rel_leI
  in_rel_Collect_split_eq ~> in_rel_Collect_case_prod_eq
  Collect_split_Grp_eqD ~> Collect_case_prod_Grp_eqD
  Collect_split_Grp_inD ~> Collect_case_prod_Grp_in
  Domain_Collect_split ~> Domain_Collect_case_prod
  Image_Collect_split ~> Image_Collect_case_prod
  Range_Collect_split ~> Range_Collect_case_prod
  Eps_split ~> Eps_case_prod
  Eps_split_eq ~> Eps_case_prod_eq
  split_rsp ~> case_prod_rsp
  curry_split ~> curry_case_prod
  split_curry ~> case_prod_curry

Changes in structure HOLogic:
  split_const ~> case_prod_const
  mk_split ~> mk_case_prod
  mk_psplits ~> mk_ptupleabs
  strip_psplits ~> strip_ptupleabs

INCOMPATIBILITY.

* The coercions to type 'real' have been reorganised. The function 'real' is no longer overloaded, but has type 'nat => real' and abbreviates of_nat for that type. Also 'real_of_int :: int => real' abbreviates of_int for that type.
Other overloaded instances of 'real' have been replaced by 'real_of_ereal' and 'real_of_float'.

Consolidated facts (among others):
  real_of_nat_le_iff ~> of_nat_le_iff
  real_of_nat_numeral ~> of_nat_numeral
  real_of_int_zero ~> of_int_0
  real_of_nat_zero ~> of_nat_0
  real_of_one ~> of_int_1
  real_of_int_add ~> of_int_add
  real_of_nat_add ~> of_nat_add
  real_of_int_diff ~> of_int_diff
  real_of_nat_diff ~> of_nat_diff
  floor_subtract ~> floor_diff_of_int
  real_of_int_inject ~> of_int_eq_iff
  real_of_int_gt_zero_cancel_iff ~> of_int_0_less_iff
  real_of_int_ge_zero_cancel_iff ~> of_int_0_le_iff
  real_of_nat_ge_zero ~> of_nat_0_le_iff
  real_of_int_ceiling_ge ~> le_of_int_ceiling
  ceiling_less_eq ~> ceiling_less_iff
  ceiling_le_eq ~> ceiling_le_iff
  less_floor_eq ~> less_floor_iff
  floor_less_eq ~> floor_less_iff
  floor_divide_eq_div ~> floor_divide_of_int_eq
  real_of_int_zero_cancel ~> of_nat_eq_0_iff
  ceiling_real_of_int ~> ceiling_of_int

INCOMPATIBILITY.

* Theory Map: lemma map_of_is_SomeD was a clone of map_of_SomeD and has been removed. INCOMPATIBILITY.

* Quickcheck setup for finite sets.

* Discontinued simp_legacy_precond. Potential INCOMPATIBILITY.

* Sledgehammer:
  - The MaSh relevance filter has been sped up.
  - Proof reconstruction has been improved, to minimize the incidence of cases where Sledgehammer gives a proof that does not work.
  - Auto Sledgehammer now minimizes and preplays the results.
  - Handle Vampire 4.0 proof output without raising exception.
  - Eliminated "MASH" environment variable. Use the "MaSh" option in Isabelle/jEdit instead. INCOMPATIBILITY.
  - Eliminated obsolete "blocking" option and related subcommands.

* Nitpick:
  - Fixed soundness bug in translation of "finite" predicate.
  - Fixed soundness bug in "destroy_constrs" optimization.
  - Fixed soundness bug in translation of "rat" type.
  - Removed "check_potential" and "check_genuine" options.
  - Eliminated obsolete "blocking" option.
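The reorganised 'real' coercions above can be exercised as in the following sketch (assuming a HOL context; these lemmas are illustrations, not distribution facts):

```isabelle
(* 'real' now abbreviates of_nat on nat, 'real_of_int' abbreviates of_int *)
lemma "real (n::nat) = of_nat n"
  by (rule refl)

lemma "real_of_int (i::int) = of_int i"
  by (rule refl)
```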
* (Co)datatype package:
  - New commands "lift_bnf" and "copy_bnf" for lifting (copying) a BNF structure on the raw type to an abstract type defined using typedef.
  - Always generate "case_transfer" theorem.
  - For mutual types, generate slightly stronger "rel_induct", "rel_coinduct", and "coinduct" theorems. INCOMPATIBILITY.
  - Allow discriminators and selectors with the same name as the type being defined.
  - Avoid various internal name clashes (e.g., 'datatype f = f').

* Transfer: new methods for interactive debugging of 'transfer' and 'transfer_prover': 'transfer_start', 'transfer_step', 'transfer_end', 'transfer_prover_start' and 'transfer_prover_end'.

* New diagnostic command 'print_record' for displaying record definitions.

* Division on integers is bootstrapped directly from division on naturals and uses a generic numeral algorithm for computations. Slight INCOMPATIBILITY, simproc numeral_divmod replaces and generalizes former simprocs binary_int_div and binary_int_mod.

* Tightened specification of class semiring_no_zero_divisors. Minor INCOMPATIBILITY.

* Class algebraic_semidom introduces common algebraic notions of integral (semi)domains, particularly units. Although logically subsumed by fields, it is not a superclass of these, in order not to burden fields with notions that are trivial there.

* Class normalization_semidom specifies canonical representants for equivalence classes of associated elements in an integral (semi)domain. This formalizes associated elements as well.

* Abstract specification of gcd/lcm operations in classes semiring_gcd, semiring_Gcd, semiring_Lcd. Minor INCOMPATIBILITY: facts gcd_nat.commute and gcd_int.commute are subsumed by gcd.commute, as well as gcd_nat.assoc and gcd_int.assoc by gcd.assoc.

* Former constants Fields.divide (_ / _) and Divides.div (_ div _) are logically unified to Rings.divide in syntactic type class Rings.divide, with infix syntax (_ div _).
Infix syntax (_ / _) for field division is added later as abbreviation in class Fields.inverse. INCOMPATIBILITY, instantiations must refer to Rings.divide rather than the former separate constants, hence infix syntax (_ / _) is usually not available during instantiation.

* New cancellation simprocs for boolean algebras to cancel complementary terms for sup and inf. For example, "sup x (sup y (- x))" simplifies to "top". INCOMPATIBILITY.

* Class uniform_space introduces uniform spaces between topological spaces and metric spaces. Minor INCOMPATIBILITY: open__def needs to be introduced in the form of a uniformity. Some constants are more general now, it may be necessary to add type class constraints.

  open_real_def \ open_dist
  open_complex_def \ open_dist

* Library/Monad_Syntax: notation uses symbols \ and \. INCOMPATIBILITY.

* Library/Multiset:
  - Renamed multiset inclusion operators:
      <   ~>  <#
      >   ~>  >#
      <=  ~>  <=#
      >=  ~>  >=#
      \  ~>  \#
      \  ~>  \#
    INCOMPATIBILITY.
  - Added multiset inclusion operator syntax: \# \# \# \#
  - "'a multiset" is no longer an instance of the "order", "ordered_ab_semigroup_add_imp_le", "ordered_cancel_comm_monoid_diff", "semilattice_inf", and "semilattice_sup" type classes. The theorems previously provided by these type classes (directly or indirectly) are now available through the "subset_mset" interpretation (e.g. add_mono ~> subset_mset.add_mono). INCOMPATIBILITY.
  - Renamed conversions:
      multiset_of ~> mset
      multiset_of_set ~> mset_set
      set_of ~> set_mset
    INCOMPATIBILITY
  - Renamed lemmas:
      mset_le_def ~> subseteq_mset_def
      mset_less_def ~> subset_mset_def
      less_eq_multiset.rep_eq ~> subseteq_mset_def
    INCOMPATIBILITY
  - Removed lemmas generated by lift_definition: less_eq_multiset.abs_eq, less_eq_multiset.rsp, less_eq_multiset.transfer, less_eq_multiset_def. INCOMPATIBILITY

* Library/Omega_Words_Fun: Infinite words modeled as functions nat \ 'a.
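The renamed Multiset conversions above can be exercised as in the following sketch (assuming theory HOL-Library.Multiset is imported; the concrete lists are illustrations):

```isabelle
lemma "mset [1::nat, 2, 2] = {#1, 2, 2#}"   (* formerly multiset_of *)
  by simp

lemma "set_mset (mset [1::nat, 1, 2]) = {1, 2}"   (* formerly set_of *)
  by auto
```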
* Library/Bourbaki_Witt_Fixpoint: Added formalisation of the Bourbaki-Witt fixpoint theorem for increasing functions in chain-complete partial orders.

* Library/Old_Recdef: discontinued obsolete 'defer_recdef' command. Minor INCOMPATIBILITY, use 'function' instead.

* Library/Periodic_Fun: a locale that provides convenient lemmas for periodic functions.

* Library/Formal_Power_Series: proper definition of division (with remainder) for formal power series; instances for Euclidean rings and GCD.

* HOL-Imperative_HOL: obsolete theory Legacy_Mrec has been removed.

* HOL-Statespace: command 'statespace' uses mandatory qualifier for import of parent, as for general 'locale' expressions. INCOMPATIBILITY, remove '!' and add '?' as required.

* HOL-Decision_Procs: The "approximation" method works with "powr" (exponentiation on real numbers) again.

* HOL-Multivariate_Analysis: theory Cauchy_Integral_Thm with contour integrals (= complex path integrals), Cauchy's integral theorem, winding numbers and Cauchy's integral formula, the Liouville theorem, and the Fundamental Theorem of Algebra. Ported from HOL Light.

* HOL-Multivariate_Analysis: topological concepts such as connected components, homotopic paths and the inside or outside of a set.

* HOL-Multivariate_Analysis: radius of convergence of power series and various summability tests; harmonic numbers and the Euler-Mascheroni constant; the Generalised Binomial Theorem; the complex and real Gamma/log-Gamma/Digamma/Polygamma functions and their most important properties.

* HOL-Probability: The central limit theorem based on Levy's uniqueness and continuity theorems, weak convergence, and characteristic functions.

* HOL-Data_Structures: new and growing session of standard data structures.
*** ML ***

* The following combinators for low-level profiling of the ML runtime system are available:

  profile_time          (*CPU time*)
  profile_time_thread   (*CPU time on this thread*)
  profile_allocations   (*overall heap allocations*)

* Antiquotation @{undefined} or \<^undefined> inlines (raise Match).

* Antiquotation @{method NAME} inlines the (checked) name of the given Isar proof method.

* Pretty printing of Poly/ML compiler output in Isabelle has been improved: proper treatment of break offsets and blocks with consistent breaks.

* The auxiliary module Pure/display.ML has been eliminated. Its elementary thm print operations are now in Pure/more_thm.ML and thus called Thm.pretty_thm, Thm.string_of_thm etc. INCOMPATIBILITY.

* Simproc programming interfaces have been simplified: Simplifier.make_simproc and Simplifier.define_simproc supersede various forms of Simplifier.mk_simproc, Simplifier.simproc_global etc. Note that term patterns for the left-hand sides are specified with implicitly fixed variables, like top-level theorem statements. INCOMPATIBILITY.

* Instantiation rules have been re-organized as follows:

  Thm.instantiate  (*low-level instantiation with named arguments*)
  Thm.instantiate'  (*version with positional arguments*)
  Drule.infer_instantiate  (*instantiation with type inference*)
  Drule.infer_instantiate'  (*version with positional arguments*)

The LHS only requires variable specifications, instead of full terms. Old cterm_instantiate is superseded by infer_instantiate. INCOMPATIBILITY, need to re-adjust some ML names and types accordingly.

* Old tactic shorthands atac, rtac, etac, dtac, ftac have been discontinued. INCOMPATIBILITY, use regular assume_tac, resolve_tac etc. instead (with proper context).

* Thm.instantiate (and derivatives) no longer require the LHS of the instantiation to be certified: plain variables are given directly.
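The re-organized instantiation interface above can be used as in this Isabelle/ML sketch; the theorem and the argument are arbitrary illustrations:

```isabelle
ML ‹
  (* instantiate the schematic variable of "refl" ("?t = ?t") to 0::nat,
     with type inference via the positional interface *)
  val th =
    Drule.infer_instantiate' @{context} [SOME @{cterm "0::nat"}] @{thm refl};
›
```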
* Subgoal.SUBPROOF and Subgoal.FOCUS combinators use anonymous quasi-bound variables (like the Simplifier), instead of accidentally named local fixes. This has the potential to improve stability of proof tools, but can also cause INCOMPATIBILITY for tools that don't observe the proof context discipline.

* Isar proof methods are based on a slightly more general type context_tactic, which allows to change the proof context dynamically (e.g. to update cases) and indicate explicit Seq.Error results. Former METHOD_CASES is superseded by CONTEXT_METHOD; further combinators are provided in src/Pure/Isar/method.ML for convenience. INCOMPATIBILITY.


*** System ***

* Command-line tool "isabelle console" enables print mode "ASCII".

* Command-line tool "isabelle update_then" expands old Isar command conflations:

  hence  ~>  then have
  thus   ~>  then show

This syntax is more orthogonal and improves readability and maintainability of proofs.

* Global session timeout is multiplied by timeout_scale factor. This allows to adjust large-scale tests (e.g. AFP) to overall hardware performance.

* Property values in etc/symbols may contain spaces, if written with the replacement character "␣" (Unicode point 0x2423). For example:

  \ code: 0x0022c6 group: operator font: Deja␣Vu␣Sans␣Mono

* Java runtime environment for x86_64-windows allows to use larger heap space.

* Java runtime options are determined separately for 32bit vs. 64bit platforms as follows.

  - Isabelle desktop application: platform-specific files that are associated with the main app bundle
  - isabelle jedit: settings JEDIT_JAVA_SYSTEM_OPTIONS, JEDIT_JAVA_OPTIONS32 vs. JEDIT_JAVA_OPTIONS64
  - isabelle build: settings ISABELLE_BUILD_JAVA_OPTIONS32 vs. ISABELLE_BUILD_JAVA_OPTIONS64

* Bash shell function "jvmpath" has been renamed to "platform_path": it is relevant both for Poly/ML and JVM processes.

* Poly/ML default platform architecture may be changed from 32bit to 64bit via system option ML_system_64.
A system restart (and rebuild) is required after change.

* Poly/ML 5.6 runs natively on x86-windows and x86_64-windows, which
both allow larger heap space than former x86-cygwin.

* Heap images are 10-15% smaller due to less wasteful persistent theory
content (using ML type theory_id instead of theory).



New in Isabelle2015 (May 2015)
------------------------------

*** General ***

* Local theory specification commands may have a 'private' or
'qualified' modifier to restrict name space accesses to the local scope,
as provided by some "context begin ... end" block. For example:

  context
  begin

  private definition ...
  private lemma ...

  qualified definition ...
  qualified lemma ...

  lemma ...
  theorem ...

  end

* Command 'experiment' opens an anonymous locale context with private
naming policy.

* Command 'notepad' requires proper nesting of begin/end and its proof
structure in the body: 'oops' is no longer supported here. Minor
INCOMPATIBILITY, use 'sorry' instead.

* Command 'named_theorems' declares a dynamic fact within the context,
together with an attribute to maintain the content incrementally. This
supersedes functor Named_Thms in Isabelle/ML, but with a subtle change
of semantics due to external visual order vs. internal reverse order.

* 'find_theorems': search patterns which are abstractions are
schematically expanded before search. Search results match the naive
expectation more closely, particularly wrt. abbreviations.
INCOMPATIBILITY.

* Commands 'method_setup' and 'attribute_setup' now work within a local
theory context.

* Outer syntax commands are managed authentically within the theory
context, without implicit global state. Potential for accidental
INCOMPATIBILITY, make sure that required theories are really imported.

* Historical command-line terminator ";" is no longer accepted (and
already used differently in Isar). Minor INCOMPATIBILITY, use "isabelle
update_semicolons" to remove obsolete semicolons from old theory
sources.
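The 'named_theorems' mechanism above can be sketched as follows (the collection name my_intros and the lemmas are hypothetical):

```isabelle
named_theorems my_intros "hypothetical collection of introduction rules"

lemma my_conjI [my_intros]: "A ==> B ==> A & B"
  by simp

lemma "A ==> B ==> A & B"
  by (blast intro: my_intros)
```

The dynamic fact my_intros may be used like any ordinary fact name, while its content is maintained incrementally via the attribute of the same name.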
* Structural composition of proof methods (meth1; meth2) in Isar corresponds to (tac1 THEN_ALL_NEW tac2) in ML. * The Eisbach proof method language allows to define new proof methods by combining existing ones with their usual syntax. The "match" proof method provides basic fact/term matching in addition to premise/conclusion matching through Subgoal.focus, and binds fact names from matches as well as term patterns within matches. The Isabelle documentation provides an entry "eisbach" for the Eisbach User Manual. Sources and various examples are in ~~/src/HOL/Eisbach/. *** Prover IDE -- Isabelle/Scala/jEdit *** * Improved folding mode "isabelle" based on Isar syntax. Alternatively, the "sidekick" mode may be used for document structure. * Extended bracket matching based on Isar language structure. System option jedit_structure_limit determines maximum number of lines to scan in the buffer. * Support for BibTeX files: context menu, context-sensitive token marker, SideKick parser. * Document antiquotation @{cite} provides formal markup, which is interpreted semi-formally based on .bib files that happen to be open in the editor (hyperlinks, completion etc.). * Less waste of vertical space via negative line spacing (see Global Options / Text Area). * Improved graphview panel with optional output of PNG or PDF, for display of 'thy_deps', 'class_deps' etc. * The commands 'thy_deps' and 'class_deps' allow optional bounds to restrict the visualized hierarchy. * Improved scheduling for asynchronous print commands (e.g. provers managed by the Sledgehammer panel) wrt. ongoing document processing. *** Document preparation *** * Document markup commands 'chapter', 'section', 'subsection', 'subsubsection', 'text', 'txt', 'text_raw' work uniformly in any context, even before the initial 'theory' command. Obsolete proof commands 'sect', 'subsect', 'subsubsect', 'txt_raw' have been discontinued, use 'section', 'subsection', 'subsubsection', 'text_raw' instead. 
The old 'header' command is still retained for some time, but should be replaced by 'chapter', 'section' etc. (using "isabelle update_header"). Minor INCOMPATIBILITY. * Official support for "tt" style variants, via \isatt{...} or \begin{isabellett}...\end{isabellett}. The somewhat fragile \verb or verbatim environment of LaTeX is no longer used. This allows @{ML} etc. as argument to other macros (such as footnotes). * Document antiquotation @{verbatim} prints ASCII text literally in "tt" style. * Discontinued obsolete option "document_graph": session_graph.pdf is produced unconditionally for HTML browser_info and PDF-LaTeX document. * Diagnostic commands and document markup commands within a proof do not affect the command tag for output. Thus commands like 'thm' are subject to proof document structure, and no longer "stick out" accidentally. Commands 'text' and 'txt' merely differ in the LaTeX style, not their tags. Potential INCOMPATIBILITY in exotic situations. * System option "pretty_margin" is superseded by "thy_output_margin", which is also accessible via document antiquotation option "margin". Only the margin for document output may be changed, but not the global pretty printing: that is 76 for plain console output, and adapted dynamically in GUI front-ends. Implementations of document antiquotations need to observe the margin explicitly according to Thy_Output.string_of_margin. Minor INCOMPATIBILITY. * Specification of 'document_files' in the session ROOT file is mandatory for document preparation. The legacy mode with implicit copying of the document/ directory is no longer supported. Minor INCOMPATIBILITY. *** Pure *** * Proof methods with explicit instantiation ("rule_tac", "subgoal_tac" etc.) allow an optional context of local variables ('for' declaration): these variables become schematic in the instantiated theorem; this behaviour is analogous to 'for' in attributes "where" and "of". 
Configuration option rule_insts_schematic (default false) controls use of schematic variables outside the context. Minor INCOMPATIBILITY, declare rule_insts_schematic = true temporarily and update to use local variable declarations or dummy patterns instead. * Explicit instantiation via attributes "where", "of", and proof methods "rule_tac" with derivatives like "subgoal_tac" etc. admit dummy patterns ("_") that stand for anonymous local variables. * Generated schematic variables in standard format of exported facts are incremented to avoid material in the proof context. Rare INCOMPATIBILITY, explicit instantiation sometimes needs to refer to different index. * Lexical separation of signed and unsigned numerals: categories "num" and "float" are unsigned. INCOMPATIBILITY: subtle change in precedence of numeral signs, particularly in expressions involving infix syntax like "(- 1) ^ n". * Old inner token category "xnum" has been discontinued. Potential INCOMPATIBILITY for exotic syntax: may use mixfix grammar with "num" token category instead. *** HOL *** * New (co)datatype package: - The 'datatype_new' command has been renamed 'datatype'. The old command of that name is now called 'old_datatype' and is provided by "~~/src/HOL/Library/Old_Datatype.thy". See 'isabelle doc datatypes' for information on porting. INCOMPATIBILITY. - Renamed theorems: disc_corec ~> corec_disc disc_corec_iff ~> corec_disc_iff disc_exclude ~> distinct_disc disc_exhaust ~> exhaust_disc disc_map_iff ~> map_disc_iff sel_corec ~> corec_sel sel_exhaust ~> exhaust_sel sel_map ~> map_sel sel_set ~> set_sel sel_split ~> split_sel sel_split_asm ~> split_sel_asm strong_coinduct ~> coinduct_strong weak_case_cong ~> case_cong_weak INCOMPATIBILITY. - The "no_code" option to "free_constructors", "datatype_new", and "codatatype" has been renamed "plugins del: code". INCOMPATIBILITY. - The rules "set_empty" have been removed. They are easy consequences of other set rules "by auto". INCOMPATIBILITY. 
- The rule "set_cases" is now registered with the "[cases set]" attribute. This can influence the behavior of the "cases" proof method when more than one case rule is applicable (e.g., an assumption is of the form "w : set ws" and the method "cases w" is invoked). The solution is to specify the case rule explicitly (e.g. "cases w rule: widget.exhaust"). INCOMPATIBILITY. - Renamed theories: BNF_Comp ~> BNF_Composition BNF_FP_Base ~> BNF_Fixpoint_Base BNF_GFP ~> BNF_Greatest_Fixpoint BNF_LFP ~> BNF_Least_Fixpoint BNF_Constructions_on_Wellorders ~> BNF_Wellorder_Constructions Cardinals/Constructions_on_Wellorders ~> Cardinals/Wellorder_Constructions INCOMPATIBILITY. - Lifting and Transfer setup for basic HOL types sum and prod (also option) is now performed by the BNF package. Theories Lifting_Sum, Lifting_Product and Lifting_Option from Main became obsolete and were removed. Changed definitions of the relators rel_prod and rel_sum (using inductive). INCOMPATIBILITY: use rel_prod.simps and rel_sum.simps instead of rel_prod_def and rel_sum_def. Minor INCOMPATIBILITY: (rarely used by name) transfer theorem names changed (e.g. map_prod_transfer ~> prod.map_transfer). - Parametricity theorems for map functions, relators, set functions, constructors, case combinators, discriminators, selectors and (co)recursors are automatically proved and registered as transfer rules. * Old datatype package: - The old 'datatype' command has been renamed 'old_datatype', and 'rep_datatype' has been renamed 'old_rep_datatype'. They are provided by "~~/src/HOL/Library/Old_Datatype.thy". See 'isabelle doc datatypes' for information on porting. INCOMPATIBILITY. - Renamed theorems: weak_case_cong ~> case_cong_weak INCOMPATIBILITY. - Renamed theory: ~~/src/HOL/Datatype.thy ~> ~~/src/HOL/Library/Old_Datatype.thy INCOMPATIBILITY. * Nitpick: - Fixed soundness bug related to the strict and non-strict subset operations. 
* Sledgehammer: - CVC4 is now included with Isabelle instead of CVC3 and run by default. - Z3 is now always enabled by default, now that it is fully open source. The "z3_non_commercial" option is discontinued. - Minimization is now always enabled by default. Removed sub-command: min - Proof reconstruction, both one-liners and Isar, has been dramatically improved. - Improved support for CVC4 and veriT. * Old and new SMT modules: - The old 'smt' method has been renamed 'old_smt' and moved to 'src/HOL/Library/Old_SMT.thy'. It is provided for compatibility, until applications have been ported to use the new 'smt' method. For the method to work, an older version of Z3 (e.g. Z3 3.2 or 4.0) must be installed, and the environment variable "OLD_Z3_SOLVER" must point to it. INCOMPATIBILITY. - The 'smt2' method has been renamed 'smt'. INCOMPATIBILITY. - New option 'smt_reconstruction_step_timeout' to limit the reconstruction time of Z3 proof steps in the new 'smt' method. - New option 'smt_statistics' to display statistics of the new 'smt' method, especially runtime statistics of Z3 proof reconstruction. * Lifting: command 'lift_definition' allows to execute lifted constants that have as a return type a datatype containing a subtype. This overcomes long-time limitations in the area of code generation and lifting, and avoids tedious workarounds. * Command and antiquotation "value" provide different evaluation slots (again), where the previous strategy (NBE after ML) serves as default. Minor INCOMPATIBILITY. * Add NO_MATCH-simproc, allows to check for syntactic non-equality. * field_simps: Use NO_MATCH-simproc for distribution rules, to avoid non-termination in case of distributing a division. With this change field_simps is in some cases slightly less powerful, if it fails try to add algebra_simps, or use divide_simps. Minor INCOMPATIBILITY. * Separate class no_zero_divisors has been given up in favour of fully algebraic semiring_no_zero_divisors. INCOMPATIBILITY. 
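The NO_MATCH simproc mentioned above can guard simp rules syntactically; a minimal sketch (the rule itself is hypothetical and chosen for illustration only):

```isabelle
(*applies only when n is not syntactically the literal 0,
  preventing the rewrite loop that an unconditional rule could cause*)
lemma guarded_add_0: "NO_MATCH (0 :: nat) n ==> n + 0 = n"
  by simp
```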
* Class linordered_semidom really requires no zero divisors.
INCOMPATIBILITY.

* Classes division_ring, field and linordered_field always demand
"inverse 0 = 0". Given up separate classes division_ring_inverse_zero,
field_inverse_zero and linordered_field_inverse_zero. INCOMPATIBILITY.

* Classes cancel_ab_semigroup_add / cancel_monoid_add specify explicit
additive inverse operation. INCOMPATIBILITY.

* Complex powers and square roots. The functions "ln" and "powr" are now
overloaded for types real and complex, and 0 powr y = 0 by definition.
INCOMPATIBILITY: type constraints may be necessary.

* The functions "sin" and "cos" are now defined for any type of sort
"{real_normed_algebra_1,banach}", so in particular on "real" and
"complex" uniformly. Minor INCOMPATIBILITY: type constraints may be
needed.

* New library of properties of the complex transcendental functions sin,
cos, tan, exp, Ln, Arctan, Arcsin, Arccos. Ported from HOL Light.

* The factorial function, "fact", now has type "nat => 'a" (of a sort
that admits numeric types including nat, int, real and complex).
INCOMPATIBILITY: an expression such as "fact 3 = 6" may require a type
constraint, and the combination "real (fact k)" is likely to be
unsatisfactory. If a type conversion is still necessary, then use
"of_nat (fact k)" or "real_of_nat (fact k)".

* Removed functions "natfloor" and "natceiling", use "nat o floor" and
"nat o ceiling" instead. A few of the lemmas have been retained and
adapted: in their names "natfloor"/"natceiling" has been replaced by
"nat_floor"/"nat_ceiling".

* Qualified some duplicated fact names required for bootstrapping the
type class hierarchy:
  ab_add_uminus_conv_diff ~> diff_conv_add_uminus
  field_inverse_zero ~> inverse_zero
  field_divide_inverse ~> divide_inverse
  field_inverse ~> left_inverse
Minor INCOMPATIBILITY.

* Eliminated fact duplicates:
  mult_less_imp_less_right ~> mult_right_less_imp_less
  mult_less_imp_less_left ~> mult_left_less_imp_less
Minor INCOMPATIBILITY.
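The required type constraint for the now-polymorphic "fact" can be sketched as follows (assuming code evaluation is set up for "fact", as in plain HOL):

```isabelle
(*without ":: nat" the numerals would be of an unspecified sort*)
lemma "fact 3 = (6 :: nat)"
  by eval
```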
* Fact consolidation: even_less_0_iff is subsumed by
double_add_less_zero_iff_single_add_less_zero (simp by default anyway).

* Generalized and consolidated some theorems concerning divisibility:
  dvd_reduce ~> dvd_add_triv_right_iff
  dvd_plus_eq_right ~> dvd_add_right_iff
  dvd_plus_eq_left ~> dvd_add_left_iff
Minor INCOMPATIBILITY.

* "even" and "odd" are mere abbreviations for "2 dvd _" and "~ 2 dvd _"
and part of theory Main.
  even_def ~> even_iff_mod_2_eq_zero
INCOMPATIBILITY.

* Lemma name consolidation: divide_Numeral1 ~> divide_numeral_1. Minor
INCOMPATIBILITY.

* Bootstrap of listsum as special case of abstract product over lists.
Fact rename:
    listsum_def ~> listsum.eq_foldr
INCOMPATIBILITY.

* Product over lists via constant "listprod".

* Theory List: renamed drop_Suc_conv_tl and nth_drop' to
Cons_nth_drop_Suc.

* New infrastructure for compiling, running, evaluating and testing
generated code in target languages in HOL/Library/Code_Test. See
HOL/Codegenerator_Test/Code_Test* for examples.

* Library/Multiset:
  - Introduced "replicate_mset" operation.
  - Introduced alternative characterizations of the multiset ordering in
    "Library/Multiset_Order".
  - Renamed multiset ordering:
      <# ~> #<#
      <=# ~> #<=#
      \<subset># ~> #\<subset>#
      \<subseteq># ~> #\<subseteq>#
    INCOMPATIBILITY.
  - Introduced abbreviations for ill-named multiset operations:
      <#, \<subset># abbreviate < (strict subset)
      <=#, \<le>#, \<subseteq># abbreviate <= (subset or equal)
    INCOMPATIBILITY.
  - Renamed
      in_multiset_of ~> in_multiset_in_set
      Multiset.fold ~> fold_mset
      Multiset.filter ~> filter_mset
    INCOMPATIBILITY.
  - Removed mcard, is equal to size.
  - Added attributes:
      image_mset.id [simp]
      image_mset_id [simp]
      elem_multiset_of_set [simp, intro]
      comp_fun_commute_plus_mset [simp]
      comp_fun_commute.fold_mset_insert [OF comp_fun_commute_plus_mset, simp]
      in_mset_fold_plus_iff [iff]
      set_of_Union_mset [simp]
      in_Union_mset_iff [iff]
    INCOMPATIBILITY.

* Library/Sum_of_Squares: simplified and improved "sos" method.
Always use local CSDP executable, which is much faster than the NEOS server. The "sos_cert" functionality is invoked as "sos" with additional argument. Minor INCOMPATIBILITY. * HOL-Decision_Procs: New counterexample generator quickcheck [approximation] for inequalities of transcendental functions. Uses hardware floating point arithmetic to randomly discover potential counterexamples. Counterexamples are certified with the "approximation" method. See HOL/Decision_Procs/ex/Approximation_Quickcheck_Ex.thy for examples. * HOL-Probability: Reworked measurability prover - applies destructor rules repeatedly - removed application splitting (replaced by destructor rule) - added congruence rules to rewrite measure spaces under the sets projection * New proof method "rewrite" (in theory ~~/src/HOL/Library/Rewrite) for single-step rewriting with subterm selection based on patterns. *** ML *** * Subtle change of name space policy: undeclared entries are now considered inaccessible, instead of accessible via the fully-qualified internal name. This mainly affects Name_Space.intern (and derivatives), which may produce an unexpected Long_Name.hidden prefix. Note that contemporary applications use the strict Name_Space.check (and derivatives) instead, which is not affected by the change. Potential INCOMPATIBILITY in rare applications of Name_Space.intern. * Subtle change of error semantics of Toplevel.proof_of: regular user ERROR instead of internal Toplevel.UNDEF. * Basic combinators map, fold, fold_map, split_list, apply are available as parameterized antiquotations, e.g. @{map 4} for lists of quadruples. * Renamed "pairself" to "apply2", in accordance to @{apply 2}. INCOMPATIBILITY. * Former combinators NAMED_CRITICAL and CRITICAL for central critical sections have been discontinued, in favour of the more elementary Multithreading.synchronized and its high-level derivative Synchronized.var (which is usually sufficient in applications). 
Subtle INCOMPATIBILITY: synchronized access needs to be atomic and cannot be nested. * Synchronized.value (ML) is actually synchronized (as in Scala): subtle change of semantics with minimal potential for INCOMPATIBILITY. * The main operations to certify logical entities are Thm.ctyp_of and Thm.cterm_of with a local context; old-style global theory variants are available as Thm.global_ctyp_of and Thm.global_cterm_of. INCOMPATIBILITY. * Elementary operations in module Thm are no longer pervasive. INCOMPATIBILITY, need to use qualified Thm.prop_of, Thm.cterm_of, Thm.term_of etc. * Proper context for various elementary tactics: assume_tac, resolve_tac, eresolve_tac, dresolve_tac, forward_tac, match_tac, compose_tac, Splitter.split_tac etc. INCOMPATIBILITY. * Tactical PARALLEL_ALLGOALS is the most common way to refer to PARALLEL_GOALS. * Goal.prove_multi is superseded by the fully general Goal.prove_common, which also allows to specify a fork priority. * Antiquotation @{command_spec "COMMAND"} is superseded by @{command_keyword COMMAND} (usually without quotes and with PIDE markup). Minor INCOMPATIBILITY. * Cartouches within ML sources are turned into values of type Input.source (with formal position information). *** System *** * The Isabelle tool "update_cartouches" changes theory files to use cartouches instead of old-style {* verbatim *} or `alt_string` tokens. * The Isabelle tool "build" provides new options -X, -k, -x. * Discontinued old-fashioned "codegen" tool. Code generation can always be externally triggered using an appropriate ROOT file plus a corresponding theory. Parametrization is possible using environment variables, or ML snippets in the most extreme cases. Minor INCOMPATIBILITY. * JVM system property "isabelle.threads" determines size of Scala thread pool, like Isabelle system option "threads" for ML. 
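The parameterized combinator antiquotations and "apply2" mentioned above can be sketched in Isabelle/ML as follows (values chosen arbitrarily):

```isabelle
ML ‹
  (*@{map 2} maps a curried function simultaneously over two lists*)
  val sums = @{map 2} (fn a => fn b => a + b) [1, 2, 3] [10, 20, 30];

  (*apply2 applies one function to both components of a pair,
    replacing the former "pairself"*)
  val squares = apply2 (fn n => n * n) (3, 4);
›
```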
* JVM system property "isabelle.laf" determines the default Swing look-and-feel, via internal class name or symbolic name as in the jEdit menu Global Options / Appearance. * Support for Proof General and Isar TTY loop has been discontinued. Minor INCOMPATIBILITY, use standard PIDE infrastructure instead. New in Isabelle2014 (August 2014) --------------------------------- *** General *** * Support for official Standard ML within the Isabelle context. Command 'SML_file' reads and evaluates the given Standard ML file. Toplevel bindings are stored within the theory context; the initial environment is restricted to the Standard ML implementation of Poly/ML, without the add-ons of Isabelle/ML. Commands 'SML_import' and 'SML_export' allow to exchange toplevel bindings between the two separate environments. See also ~~/src/Tools/SML/Examples.thy for some examples. * Standard tactics and proof methods such as "clarsimp", "auto" and "safe" now preserve equality hypotheses "x = expr" where x is a free variable. Locale assumptions and chained facts containing "x" continue to be useful. The new method "hypsubst_thin" and the configuration option "hypsubst_thin" (within the attribute name space) restore the previous behavior. INCOMPATIBILITY, especially where induction is done after these methods or when the names of free and bound variables clash. As first approximation, old proofs may be repaired by "using [[hypsubst_thin = true]]" in the critical spot. * More static checking of proof methods, which allows the system to form a closure over the concrete syntax. Method arguments should be processed in the original proof context as far as possible, before operating on the goal state. In any case, the standard discipline for subgoal-addressing needs to be observed: no subgoals or a subgoal number that is out of range produces an empty result sequence, not an exception. Potential INCOMPATIBILITY for non-conformant tactical proof tools. 
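A minimal sketch of exchanging bindings between the separate Standard ML environment and Isabelle/ML via the commands above (the binding twice is hypothetical; assumes 'SML_export' evaluates its text in the SML environment and makes the resulting toplevel bindings available to Isabelle/ML):

```isabelle
SML_export ‹val twice = fn n : int => n + n›   (*evaluated as Standard ML*)

ML ‹val four = twice 2›   (*exported binding used in Isabelle/ML*)
```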
* Lexical syntax (inner and outer) supports text cartouches with
arbitrary nesting, and without escapes of quotes etc. The Prover IDE
supports input via ` (backquote).

* The outer syntax categories "text" (for formal comments and document
markup commands) and "altstring" (for literal fact references) allow
cartouches as well, in addition to the traditional mix of quotations.

* Syntax of document antiquotation @{rail} now uses \<newline> instead
of "\\", to avoid the optical illusion of escaped backslash within
string token. General renovation of its syntax using text cartouches.
Minor INCOMPATIBILITY.

* Discontinued legacy_isub_isup, which was a temporary workaround for
Isabelle/ML in Isabelle2013-1. The prover process no longer accepts old
identifier syntax with \<^isub> or \<^isup>. Potential INCOMPATIBILITY.

* Document antiquotation @{url} produces markup for the given URL, which
results in an active hyperlink within the text.

* Document antiquotation @{file_unchecked} is like @{file}, but does not
check existence within the file-system.

* Updated and extended manuals: codegen, datatypes, implementation,
isar-ref, jedit, system.


*** Prover IDE -- Isabelle/Scala/jEdit ***

* Improved Document panel: simplified interaction where every single
mouse click (re)opens document via desktop environment or as jEdit
buffer.

* Support for Navigator plugin (with toolbar buttons), with connection
to PIDE hyperlinks.

* Auxiliary files ('ML_file' etc.) are managed by the Prover IDE. Open
text buffers take precedence over copies within the file-system.

* Improved support for Isabelle/ML, with jEdit mode "isabelle-ml" for
auxiliary ML files.

* Improved syntactic and semantic completion mechanism, with simple
templates, completion language context, name-space completion, file-name
completion, spell-checker completion.
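For example, cartouche delimiters may be nested directly in document text, without any escaping of quotes:

```isabelle
text ‹Cartouches may be ‹arbitrarily nested›, without escaping quotes.›
```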
* Refined GUI popup for completion: more robust key/mouse event handling
and propagation to enclosing text area -- avoid losing keystrokes with
slow / remote graphics displays.

* Completion popup supports both ENTER and TAB (default) to select an
item, depending on Isabelle options.

* Refined insertion of completion items wrt. jEdit text: multiple
selections, rectangular selections, rectangular selection as "tall
caret".

* Integrated spell-checker for document text, comments etc. with
completion popup and context-menu.

* More general "Query" panel supersedes "Find" panel, with GUI access to
commands 'find_theorems' and 'find_consts', as well as print operations
for the context. Minor incompatibility in keyboard shortcuts etc.:
replace action isabelle-find by isabelle-query.

* Search field for all output panels ("Output", "Query", "Info" etc.) to
highlight text via regular expression.

* Option "jedit_print_mode" (see also "Plugin Options / Isabelle /
General") allows to specify additional print modes for the prover
process, without requiring old-fashioned command-line invocation of
"isabelle jedit -m MODE".

* More support for remote files (e.g. http) using standard Java
networking operations instead of jEdit virtual file-systems.

* Empty editor buffers that are no longer required (e.g. via theory
imports) are automatically removed from the document model.

* Improved monitor panel.

* Improved Console/Scala plugin: more uniform scala.Console output, more
robust treatment of threads and interrupts.

* Improved management of dockable windows: clarified keyboard focus and
window placement wrt. main editor view; optional menu item to "Detach" a
copy where this makes sense.

* New Simplifier Trace panel provides an interactive view of the
simplification process, enabled by the "simp_trace_new" attribute within
the context.
*** Pure *** * Low-level type-class commands 'classes', 'classrel', 'arities' have been discontinued to avoid the danger of non-trivial axiomatization that is not immediately visible. INCOMPATIBILITY, use regular 'instance' command with proof. The required OFCLASS(...) theorem might be postulated via 'axiomatization' beforehand, or the proof finished trivially if the underlying class definition is made vacuous (without any assumptions). See also Isabelle/ML operations Axclass.class_axiomatization, Axclass.classrel_axiomatization, Axclass.arity_axiomatization. * Basic constants of Pure use more conventional names and are always qualified. Rare INCOMPATIBILITY, but with potentially serious consequences, notably for tools in Isabelle/ML. The following renaming needs to be applied: == ~> Pure.eq ==> ~> Pure.imp all ~> Pure.all TYPE ~> Pure.type dummy_pattern ~> Pure.dummy_pattern Systematic porting works by using the following theory setup on a *previous* Isabelle version to introduce the new name accesses for the old constants: setup {* fn thy => thy |> Sign.root_path |> Sign.const_alias (Binding.qualify true "Pure" @{binding eq}) "==" |> Sign.const_alias (Binding.qualify true "Pure" @{binding imp}) "==>" |> Sign.const_alias (Binding.qualify true "Pure" @{binding all}) "all" |> Sign.restore_naming thy *} Thus ML antiquotations like @{const_name Pure.eq} may be used already. Later the application is moved to the current Isabelle version, and the auxiliary aliases are deleted. * Attributes "where" and "of" allow an optional context of local variables ('for' declaration): these variables become schematic in the instantiated theorem. * Obsolete attribute "standard" has been discontinued (legacy since Isabelle2012). Potential INCOMPATIBILITY, use explicit 'for' context where instantiations with schematic variables are intended (for declaration commands like 'lemmas' or attributes like "of"). 
The following temporary definition may help to port old applications:

  attribute_setup standard =
    "Scan.succeed (Thm.rule_attribute (K Drule.export_without_context))"

* More thorough check of proof context for goal statements and
attributed fact expressions (concerning background theory, declared
hyps). Potential INCOMPATIBILITY, tools need to observe standard context
discipline. See also Assumption.add_assumes and the more primitive
Thm.assume_hyps.

* Inner syntax token language allows regular quoted strings "..." (only
makes sense in practice, if outer syntax is delimited differently, e.g.
via cartouches).

* Command 'print_term_bindings' supersedes 'print_binds' for clarity,
but the latter is retained some time as Proof General legacy.

* Code generator preprocessor: explicit control of simp tracing on a
per-constant basis. See attribute "code_preproc".


*** HOL ***

* Code generator: enforce case of identifiers only for strict target
language requirements. INCOMPATIBILITY.

* Code generator: explicit proof contexts in many ML interfaces.
INCOMPATIBILITY.

* Code generator: minimize exported identifiers by default. Minor
INCOMPATIBILITY.

* Code generation for SML and OCaml: dropped arcane "no_signatures"
option. Minor INCOMPATIBILITY.

* "declare [[code abort: ...]]" replaces "code_abort ...".
INCOMPATIBILITY.

* "declare [[code drop: ...]]" drops all code equations associated with
the given constants.

* Code generation is provided for make, fields, extend and truncate
operations on records.

* Command and antiquotation "value" are now hardcoded against nbe and
ML. Minor INCOMPATIBILITY.

* Renamed command 'enriched_type' to 'functor'. INCOMPATIBILITY.

* The symbol "\<newline>" may be used within char or string literals to
represent (Char Nibble0 NibbleA), i.e. ASCII newline.

* Qualified String.implode and String.explode. INCOMPATIBILITY.

* Simplifier: Enhanced solver of preconditions of rewrite rules can now
deal with conjunctions.
For help with converting proofs, the old behaviour of the simplifier can
be restored like this: declare/using [[simp_legacy_precond]]. This
configuration option will disappear again in the future.
INCOMPATIBILITY.

* Simproc "finite_Collect" is no longer enabled by default, due to
spurious crashes and other surprises. Potential INCOMPATIBILITY.

* Moved new (co)datatype package and its dependencies from session
"HOL-BNF" to "HOL". The commands 'bnf', 'wrap_free_constructors',
'datatype_new', 'codatatype', 'primcorec', 'primcorecursive' are now
part of theory "Main".

  Theory renamings:
    FunDef.thy ~> Fun_Def.thy (and Fun_Def_Base.thy)
    Library/Wfrec.thy ~> Wfrec.thy
    Library/Zorn.thy ~> Zorn.thy
    Cardinals/Order_Relation.thy ~> Order_Relation.thy
    Library/Order_Union.thy ~> Cardinals/Order_Union.thy
    Cardinals/Cardinal_Arithmetic_Base.thy ~> BNF_Cardinal_Arithmetic.thy
    Cardinals/Cardinal_Order_Relation_Base.thy ~> BNF_Cardinal_Order_Relation.thy
    Cardinals/Constructions_on_Wellorders_Base.thy ~> BNF_Constructions_on_Wellorders.thy
    Cardinals/Wellorder_Embedding_Base.thy ~> BNF_Wellorder_Embedding.thy
    Cardinals/Wellorder_Relation_Base.thy ~> BNF_Wellorder_Relation.thy
    BNF/Ctr_Sugar.thy ~> Ctr_Sugar.thy
    BNF/Basic_BNFs.thy ~> Basic_BNFs.thy
    BNF/BNF_Comp.thy ~> BNF_Comp.thy
    BNF/BNF_Def.thy ~> BNF_Def.thy
    BNF/BNF_FP_Base.thy ~> BNF_FP_Base.thy
    BNF/BNF_GFP.thy ~> BNF_GFP.thy
    BNF/BNF_LFP.thy ~> BNF_LFP.thy
    BNF/BNF_Util.thy ~> BNF_Util.thy
    BNF/Coinduction.thy ~> Coinduction.thy
    BNF/More_BNFs.thy ~> Library/More_BNFs.thy
    BNF/Countable_Type.thy ~> Library/Countable_Set_Type.thy
    BNF/Examples/* ~> BNF_Examples/*

  New theories:
    Wellorder_Extension.thy (split from Zorn.thy)
    Library/Cardinal_Notations.thy
    Library/BNF_Axiomatization.thy
    BNF_Examples/Misc_Primcorec.thy
    BNF_Examples/Stream_Processor.thy

  Discontinued theories:
    BNF/BNF.thy
    BNF/Equiv_Relations_More.thy

INCOMPATIBILITY.

* New (co)datatype package:
  - Command 'primcorec' is fully implemented.
- Command 'datatype_new' generates size functions ("size_xxx" and "size") as required by 'fun'. - BNFs are integrated with the Lifting tool and new-style (co)datatypes with Transfer. - Renamed commands: datatype_new_compat ~> datatype_compat primrec_new ~> primrec wrap_free_constructors ~> free_constructors INCOMPATIBILITY. - The generated constants "xxx_case" and "xxx_rec" have been renamed "case_xxx" and "rec_xxx" (e.g., "prod_case" ~> "case_prod"). INCOMPATIBILITY. - The constant "xxx_(un)fold" and related theorems are no longer generated. Use "xxx_(co)rec" or define "xxx_(un)fold" manually using "prim(co)rec". INCOMPATIBILITY. - No discriminators are generated for nullary constructors by default, eliminating the need for the odd "=:" syntax. INCOMPATIBILITY. - No discriminators or selectors are generated by default by "datatype_new", unless custom names are specified or the new "discs_sels" option is passed. INCOMPATIBILITY. * Old datatype package: - The generated theorems "xxx.cases" and "xxx.recs" have been renamed "xxx.case" and "xxx.rec" (e.g., "sum.cases" -> "sum.case"). INCOMPATIBILITY. - The generated constants "xxx_case", "xxx_rec", and "xxx_size" have been renamed "case_xxx", "rec_xxx", and "size_xxx" (e.g., "prod_case" ~> "case_prod"). INCOMPATIBILITY. * The types "'a list" and "'a option", their set and map functions, their relators, and their selectors are now produced using the new BNF-based datatype package. Renamed constants: Option.set ~> set_option Option.map ~> map_option option_rel ~> rel_option Renamed theorems: set_def ~> set_rec[abs_def] map_def ~> map_rec[abs_def] Option.map_def ~> map_option_case[abs_def] (with "case_option" instead of "rec_option") option.recs ~> option.rec list_all2_def ~> list_all2_iff set.simps ~> set_simps (or the slightly different "list.set") map.simps ~> list.map hd.simps ~> list.sel(1) tl.simps ~> list.sel(2-3) the.simps ~> option.sel INCOMPATIBILITY. 
* The following map functions and relators have been renamed:
    sum_map ~> map_sum
    map_pair ~> map_prod
    prod_rel ~> rel_prod
    sum_rel ~> rel_sum
    fun_rel ~> rel_fun
    set_rel ~> rel_set
    filter_rel ~> rel_filter
    fset_rel ~> rel_fset (in "src/HOL/Library/FSet.thy")
    cset_rel ~> rel_cset (in "src/HOL/Library/Countable_Set_Type.thy")
    vset ~> rel_vset (in "src/HOL/Library/Quotient_Set.thy")
INCOMPATIBILITY.

* Lifting and Transfer:
  - a type variable as a raw type is supported
  - stronger reflexivity prover
  - rep_eq is always generated by lift_definition
  - setup for Lifting/Transfer is now automated for BNFs
    + holds for BNFs that do not contain a dead variable
    + relator_eq, relator_mono, relator_distr, relator_domain,
      relator_eq_onp, quot_map, transfer rules for bi_unique,
      bi_total, right_unique, right_total, left_unique, left_total
      are proved automatically
    + definition of a predicator is generated automatically
    + simplification rules for a predicator definition are proved
      automatically for datatypes
  - consolidation of the setup of Lifting/Transfer
    + the property that a relator preserves reflexivity is not needed
      any more. Minor INCOMPATIBILITY.
    + left_total and left_unique rules are now transfer rules
      (reflexivity_rule attribute not needed anymore)
      INCOMPATIBILITY.
    + Domainp does not have to be a separate assumption in
      relator_domain theorems (=> more natural statement)
      INCOMPATIBILITY.
  - registration of code equations is more robust.
    Potential INCOMPATIBILITY.
  - respectfulness proof obligation is preprocessed to a more
    readable form. Potential INCOMPATIBILITY.
  - eq_onp is always unfolded in respectfulness proof obligation.
    Potential INCOMPATIBILITY.
  - unregister lifting setup for Code_Numeral.integer and
    Code_Numeral.natural. Potential INCOMPATIBILITY.
  - Lifting.invariant ~> eq_onp
    INCOMPATIBILITY.

* New internal SAT solver "cdclite" that produces models and proof
traces. This solver replaces the internal SAT solvers "enumerate" and
"dpll".
Applications that explicitly used one of these two SAT solvers should
use "cdclite" instead. In addition, "cdclite" is now the default SAT
solver for the "sat" and "satx" proof methods and corresponding
tactics; the old default can be restored using "declare [[sat_solver
= zchaff_with_proofs]]". Minor INCOMPATIBILITY.

* SMT module: A new version of the SMT module, temporarily called
"SMT2", uses SMT-LIB 2 and supports recent versions of Z3 (e.g.,
4.3). The new proof method is called "smt2". CVC3 and CVC4 are also
supported as oracles. Yices is no longer supported, because no
version of the solver can handle both SMT-LIB 2 and quantifiers.

* Activation of Z3 now works via "z3_non_commercial" system option
(without requiring restart), instead of former settings variable
"Z3_NON_COMMERCIAL". The option can be edited in Isabelle/jEdit menu
Plugin Options / Isabelle / General.

* Sledgehammer:
  - Z3 can now produce Isar proofs.
  - MaSh overhaul:
    . New SML-based learning algorithms eliminate the dependency on
      Python and increase performance and reliability.
    . MaSh and MeSh are now used by default together with the
      traditional MePo (Meng-Paulson) relevance filter. To disable
      MaSh, set the "MaSh" system option in Isabelle/jEdit Plugin
      Options / Isabelle / General to "none".
  - New option: smt_proofs
  - Renamed options:
      isar_compress ~> compress
      isar_try0 ~> try0
    INCOMPATIBILITY.

* Removed solvers remote_cvc3 and remote_z3. Use cvc3 and z3 instead.

* Nitpick:
  - Fixed soundness bug whereby mutually recursive datatypes could
    take infinite values.
  - Fixed soundness bug with low-level number functions such as
    "Abs_Integ" and "Rep_Integ".
  - Removed "std" option.
  - Renamed "show_datatypes" to "show_types" and "hide_datatypes" to
    "hide_types".

* Metis: Removed legacy proof method 'metisFT'. Use 'metis
(full_types)' instead. INCOMPATIBILITY.

* Try0: Added 'algebra' and 'meson' to the set of proof methods.
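The SAT-solver and SMT entries above can be exercised as follows.
This is only an illustrative sketch (the lemma is invented, and
"smt2" additionally requires a working Z3 setup as described in the
"z3_non_commercial" entry):

```isabelle
(* restore the pre-cdclite default for the "sat"/"satx" methods,
   exactly as quoted above *)
declare [[sat_solver = zchaff_with_proofs]]

(* the new SMT-LIB-2-based proof method is invoked as "smt2" *)
lemma "(x::int) + y = y + x"
  by smt2
```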
* Adjustment of INF and SUP operations:
  - Elongated constants INFI and SUPR to INFIMUM and SUPREMUM.
  - Consolidated theorem names containing INFI and SUPR: have INF and
    SUP instead uniformly.
  - More aggressive normalization of expressions involving INF and
    Inf or SUP and Sup.
  - INF_image and SUP_image do not unfold composition.
  - Dropped facts INF_comp, SUP_comp.
  - Default congruence rules strong_INF_cong and strong_SUP_cong,
    with simplifier implication in premises. Generalize and replace
    former INT_cong, SUP_cong.
INCOMPATIBILITY.

* SUP and INF generalized to conditionally_complete_lattice.

* Swapped orientation of facts image_comp and vimage_comp:
    image_compose ~> image_comp [symmetric]
    image_comp ~> image_comp [symmetric]
    vimage_compose ~> vimage_comp [symmetric]
    vimage_comp ~> vimage_comp [symmetric]
INCOMPATIBILITY.

* Theory reorganization: split of Big_Operators.thy into
Groups_Big.thy and Lattices_Big.thy.

* Consolidated some facts about big group operators:
    setsum_0' ~> setsum.neutral
    setsum_0 ~> setsum.neutral_const
    setsum_addf ~> setsum.distrib
    setsum_cartesian_product ~> setsum.cartesian_product
    setsum_cases ~> setsum.If_cases
    setsum_commute ~> setsum.commute
    setsum_cong ~> setsum.cong
    setsum_delta ~> setsum.delta
    setsum_delta' ~> setsum.delta'
    setsum_diff1' ~> setsum.remove
    setsum_empty ~> setsum.empty
    setsum_infinite ~> setsum.infinite
    setsum_insert ~> setsum.insert
    setsum_inter_restrict'' ~> setsum.inter_filter
    setsum_mono_zero_cong_left ~> setsum.mono_neutral_cong_left
    setsum_mono_zero_cong_right ~> setsum.mono_neutral_cong_right
    setsum_mono_zero_left ~> setsum.mono_neutral_left
    setsum_mono_zero_right ~> setsum.mono_neutral_right
    setsum_reindex ~> setsum.reindex
    setsum_reindex_cong ~> setsum.reindex_cong
    setsum_reindex_nonzero ~> setsum.reindex_nontrivial
    setsum_restrict_set ~> setsum.inter_restrict
    setsum_Plus ~> setsum.Plus
    setsum_setsum_restrict ~> setsum.commute_restrict
    setsum_Sigma ~> setsum.Sigma
    setsum_subset_diff ~> setsum.subset_diff
    setsum_Un_disjoint ~> setsum.union_disjoint
    setsum_UN_disjoint ~> setsum.UNION_disjoint
    setsum_Un_Int ~> setsum.union_inter
    setsum_Union_disjoint ~> setsum.Union_disjoint
    setsum_UNION_zero ~> setsum.Union_comp
    setsum_Un_zero ~> setsum.union_inter_neutral
    strong_setprod_cong ~> setprod.strong_cong
    strong_setsum_cong ~> setsum.strong_cong
    setprod_1' ~> setprod.neutral
    setprod_1 ~> setprod.neutral_const
    setprod_cartesian_product ~> setprod.cartesian_product
    setprod_cong ~> setprod.cong
    setprod_delta ~> setprod.delta
    setprod_delta' ~> setprod.delta'
    setprod_empty ~> setprod.empty
    setprod_infinite ~> setprod.infinite
    setprod_insert ~> setprod.insert
    setprod_mono_one_cong_left ~> setprod.mono_neutral_cong_left
    setprod_mono_one_cong_right ~> setprod.mono_neutral_cong_right
    setprod_mono_one_left ~> setprod.mono_neutral_left
    setprod_mono_one_right ~> setprod.mono_neutral_right
    setprod_reindex ~> setprod.reindex
    setprod_reindex_cong ~> setprod.reindex_cong
    setprod_reindex_nonzero ~> setprod.reindex_nontrivial
    setprod_Sigma ~> setprod.Sigma
    setprod_subset_diff ~> setprod.subset_diff
    setprod_timesf ~> setprod.distrib
    setprod_Un2 ~> setprod.union_diff2
    setprod_Un_disjoint ~> setprod.union_disjoint
    setprod_UN_disjoint ~> setprod.UNION_disjoint
    setprod_Un_Int ~> setprod.union_inter
    setprod_Union_disjoint ~> setprod.Union_disjoint
    setprod_Un_one ~> setprod.union_inter_neutral

  Dropped setsum_cong2 (simple variant of setsum.cong).
  Dropped setsum_inter_restrict' (simple variant of
  setsum.inter_restrict).
  Dropped setsum_reindex_id, setprod_reindex_id (simple variants of
  setsum.reindex [symmetric], setprod.reindex [symmetric]).

INCOMPATIBILITY.

* Abolished slightly odd global lattice interpretation for min/max.
Fact consolidations:
    min_max.inf_assoc ~> min.assoc
    min_max.inf_commute ~> min.commute
    min_max.inf_left_commute ~> min.left_commute
    min_max.inf_idem ~> min.idem
    min_max.inf_left_idem ~> min.left_idem
    min_max.inf_right_idem ~> min.right_idem
    min_max.sup_assoc ~> max.assoc
    min_max.sup_commute ~> max.commute
    min_max.sup_left_commute ~> max.left_commute
    min_max.sup_idem ~> max.idem
    min_max.sup_left_idem ~> max.left_idem
    min_max.sup_inf_distrib1 ~> max_min_distrib2
    min_max.sup_inf_distrib2 ~> max_min_distrib1
    min_max.inf_sup_distrib1 ~> min_max_distrib2
    min_max.inf_sup_distrib2 ~> min_max_distrib1
    min_max.distrib ~> min_max_distribs
    min_max.inf_absorb1 ~> min.absorb1
    min_max.inf_absorb2 ~> min.absorb2
    min_max.sup_absorb1 ~> max.absorb1
    min_max.sup_absorb2 ~> max.absorb2
    min_max.le_iff_inf ~> min.absorb_iff1
    min_max.le_iff_sup ~> max.absorb_iff2
    min_max.inf_le1 ~> min.cobounded1
    min_max.inf_le2 ~> min.cobounded2
    le_maxI1, min_max.sup_ge1 ~> max.cobounded1
    le_maxI2, min_max.sup_ge2 ~> max.cobounded2
    min_max.le_infI1 ~> min.coboundedI1
    min_max.le_infI2 ~> min.coboundedI2
    min_max.le_supI1 ~> max.coboundedI1
    min_max.le_supI2 ~> max.coboundedI2
    min_max.less_infI1 ~> min.strict_coboundedI1
    min_max.less_infI2 ~> min.strict_coboundedI2
    min_max.less_supI1 ~> max.strict_coboundedI1
    min_max.less_supI2 ~> max.strict_coboundedI2
    min_max.inf_mono ~> min.mono
    min_max.sup_mono ~> max.mono
    min_max.le_infI, min_max.inf_greatest ~> min.boundedI
    min_max.le_supI, min_max.sup_least ~> max.boundedI
    min_max.le_inf_iff ~> min.bounded_iff
    min_max.le_sup_iff ~> max.bounded_iff

For min_max.inf_sup_aci, prefer (one of) min.commute, min.assoc,
min.left_commute, min.left_idem, max.commute, max.assoc,
max.left_commute, max.left_idem directly.

For min_max.inf_sup_ord, prefer (one of) min.cobounded1,
min.cobounded2, max.cobounded1, max.cobounded2 directly.

For min_ac or max_ac, prefer more general collection ac_simps.

INCOMPATIBILITY.
* Theorem disambiguation Inf_le_Sup (on finite sets) ~>
Inf_fin_le_Sup_fin. INCOMPATIBILITY.

* Qualified constant names Wellfounded.acc, Wellfounded.accp.
INCOMPATIBILITY.

* Fact generalization and consolidation:
    neq_one_mod_two, mod_2_not_eq_zero_eq_one_int ~> not_mod_2_eq_0_eq_1
INCOMPATIBILITY.

* Purely algebraic definition of even. Fact generalization and
consolidation:
    nat_even_iff_2_dvd, int_even_iff_2_dvd ~> even_iff_2_dvd
    even_zero_(nat|int) ~> even_zero
INCOMPATIBILITY.

* Abolished neg_numeral.
  - Canonical representation for minus one is "- 1".
  - Canonical representation for other negative numbers is
    "- (numeral _)".
  - When devising rule sets for number calculation, consider the
    following canonical cases: 0, 1, numeral _, - 1, - numeral _.
  - HOLogic.dest_number also recognizes numerals in non-canonical
    forms like "numeral One", "- numeral One", "- 0" and even
    "- ... - _".
  - Syntax for negative numerals is mere input syntax.
INCOMPATIBILITY.

* Reduced name variants for rules on associativity and commutativity:
    add_assoc ~> add.assoc
    add_commute ~> add.commute
    add_left_commute ~> add.left_commute
    mult_assoc ~> mult.assoc
    mult_commute ~> mult.commute
    mult_left_commute ~> mult.left_commute
    nat_add_assoc ~> add.assoc
    nat_add_commute ~> add.commute
    nat_add_left_commute ~> add.left_commute
    nat_mult_assoc ~> mult.assoc
    nat_mult_commute ~> mult.commute
    eq_assoc ~> iff_assoc
    eq_left_commute ~> iff_left_commute
INCOMPATIBILITY.

* Fact collections add_ac and mult_ac are considered old-fashioned.
Prefer ac_simps instead, or specify rules
(add|mult).(assoc|commute|left_commute) individually.

* Elimination of fact duplicates:
    equals_zero_I ~> minus_unique
    diff_eq_0_iff_eq ~> right_minus_eq
    nat_infinite ~> infinite_UNIV_nat
    int_infinite ~> infinite_UNIV_int
INCOMPATIBILITY.
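The ac_simps recommendation above can be sketched as follows (the
lemma statement is invented for illustration, not taken from the
distribution):

```isabelle
(* prefer the general ac_simps collection over the old-fashioned
   add_ac/mult_ac collections *)
lemma "a + b + c = c + (b + (a::nat))"
  by (simp add: ac_simps)
```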
* Fact name consolidation:
    diff_def, diff_minus, ab_diff_minus ~> diff_conv_add_uminus
    minus_le_self_iff ~> neg_less_eq_nonneg
    le_minus_self_iff ~> less_eq_neg_nonpos
    neg_less_nonneg ~> neg_less_pos
    less_minus_self_iff ~> less_neg_neg [simp]
INCOMPATIBILITY.

* More simplification rules on unary and binary minus:
add_diff_cancel, add_diff_cancel_left, add_le_same_cancel1,
add_le_same_cancel2, add_less_same_cancel1, add_less_same_cancel2,
add_minus_cancel, diff_add_cancel, le_add_same_cancel1,
le_add_same_cancel2, less_add_same_cancel1, less_add_same_cancel2,
minus_add_cancel, uminus_add_conv_diff. These correspondingly have
been taken away from fact collections algebra_simps and field_simps.
INCOMPATIBILITY.

To restore proofs, the following patterns are helpful:

a) Arbitrary failing proof not involving "diff_def": consider
simplification with algebra_simps or field_simps.

b) Lifting rules from addition to subtraction: try instantiating the
corresponding addition rule at negated arguments, as in
"using ... [of ... "- _" ...] by simp".

c) Simplification with "diff_def": just drop "diff_def". Consider
simplification with algebra_simps or field_simps; or the brute way
with "simp add: diff_conv_add_uminus del: add_uminus_conv_diff".

* Introduce bdd_above and bdd_below in theory
Conditionally_Complete_Lattices, use them instead of explicitly
stating boundedness of sets.

* ccpo.admissible quantifies only over non-empty chains to allow more
syntax-directed proof rules; the case of the empty chain shows up as
additional case in fixpoint induction proofs. INCOMPATIBILITY.
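Pattern (c) above can be sketched like this (the lemma statement is
illustrative only; the simp modifiers are quoted verbatim from the
entry above):

```isabelle
(* brute-force unfolding of subtraction into addition of the
   unary minus, replacing old simplification with "diff_def" *)
lemma "a - b = a + - (b::int)"
  by (simp add: diff_conv_add_uminus del: add_uminus_conv_diff)
```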
* Removed and renamed theorems in Series:
    summable_le ~> suminf_le
    suminf_le ~> suminf_le_const
    series_pos_le ~> setsum_le_suminf
    series_pos_less ~> setsum_less_suminf
    suminf_ge_zero ~> suminf_nonneg
    suminf_gt_zero ~> suminf_pos
    suminf_gt_zero_iff ~> suminf_pos_iff
    summable_sumr_LIMSEQ_suminf ~> summable_LIMSEQ
    suminf_0_le ~> suminf_nonneg [rotate]
    pos_summable ~> summableI_nonneg_bounded
    ratio_test ~> summable_ratio_test

  removed series_zero, replaced by sums_finite

  removed auxiliary lemmas: sumr_offset, sumr_offset2, sumr_offset3,
  sumr_offset4, sumr_group, half, le_Suc_ex_iff,
  lemma_realpow_diff_sumr, real_setsum_nat_ivl_bounded, summable_le2,
  ratio_test_lemma2, sumr_minus_one_realpow_zerom,
  sumr_one_lb_realpow_zero, summable_convergent_sumr_iff,
  sumr_diff_mult_const

INCOMPATIBILITY.

* Replaced (F)DERIV syntax by has_derivative:
  - "(f has_derivative f') (at x within s)" replaces
    "FDERIV f x : s :> f'"
  - "(f has_field_derivative f') (at x within s)" replaces
    "DERIV f x : s :> f'"
  - "f differentiable at x within s" replaces
    "_ differentiable _ in _" syntax
  - removed constant isDiff
  - "DERIV f x :> f'" and "FDERIV f x :> f'" syntax is only available
    as input syntax.
  - "DERIV f x : s :> f'" and "FDERIV f x : s :> f'" syntax removed.
  - Renamed FDERIV_... lemmas to has_derivative_...
  - renamed deriv (the syntax constant used for "DERIV _ _ :> _") to
    DERIV
  - removed DERIV_intros, has_derivative_eq_intros
  - introduced derivative_intros and derivative_eq_intros, which now
    include rules for DERIV, has_derivative and has_vector_derivative
  - Other renamings:
      differentiable_def ~> real_differentiable_def
      differentiableE ~> real_differentiableE
      fderiv_def ~> has_derivative_at
      field_fderiv_def ~> field_has_derivative_at
      isDiff_der ~> differentiable_def
      deriv_fderiv ~> has_field_derivative_def
      deriv_def ~> DERIV_def
INCOMPATIBILITY.

* Include more theorems in continuous_intros. Remove the
continuous_on_intros and isCont_intros collections; these facts are
now in continuous_intros.
* Theorems about complex numbers are now stated only using Re and Im;
the Complex constructor is not used anymore. It is possible to use
primcorec to define the behaviour of a complex-valued function.

Removed theorems about the Complex constructor from the simpset; they
are available as the lemma collection legacy_Complex_simps. This
especially removes

    i_complex_of_real: "ii * complex_of_real r = Complex 0 r".

Instead the reverse direction is supported with

    Complex_eq: "Complex a b = a + \<i> * b"

Moved csqrt from Fundamental_Theorem_Algebra to Complex.

  Renamings:
    Re/Im ~> complex.sel
    complex_Re/Im_zero ~> zero_complex.sel
    complex_Re/Im_add ~> plus_complex.sel
    complex_Re/Im_minus ~> uminus_complex.sel
    complex_Re/Im_diff ~> minus_complex.sel
    complex_Re/Im_one ~> one_complex.sel
    complex_Re/Im_mult ~> times_complex.sel
    complex_Re/Im_inverse ~> inverse_complex.sel
    complex_Re/Im_scaleR ~> scaleR_complex.sel
    complex_Re/Im_i ~> ii.sel
    complex_Re/Im_cnj ~> cnj.sel
    Re/Im_cis ~> cis.sel
    complex_divide_def ~> divide_complex_def
    complex_norm_def ~> norm_complex_def
    cmod_def ~> norm_complex_def

  Removed theorems:
    complex_zero_def
    complex_add_def
    complex_minus_def
    complex_diff_def
    complex_one_def
    complex_mult_def
    complex_inverse_def
    complex_scaleR_def

INCOMPATIBILITY.

* Theory Lubs moved from HOL image to HOL-Library. It is replaced by
Conditionally_Complete_Lattices. INCOMPATIBILITY.

* HOL-Library: new theory src/HOL/Library/Tree.thy.

* HOL-Library: removed theory src/HOL/Library/Kleene_Algebra.thy; it
is subsumed by session Kleene_Algebra in AFP.

* HOL-Library / theory RBT: various constants and facts are hidden;
lifting setup is unregistered. INCOMPATIBILITY.

* HOL-Cardinals: new theory src/HOL/Cardinals/Ordinal_Arithmetic.thy.

* HOL-Word: bit representations prefer type bool over type bit.
INCOMPATIBILITY.

* HOL-Word:
  - Abandoned fact collection "word_arith_alts", which is a duplicate
    of "word_arith_wis".
  - Dropped first (duplicated) element in fact collections
    "sint_word_ariths", "word_arith_alts", "uint_word_ariths",
    "uint_word_arith_bintrs".

* HOL-Number_Theory:
  - consolidated the proofs of the binomial theorem
  - the function fib is again of type nat => nat and not overloaded
  - no more references to Old_Number_Theory in the HOL libraries
    (except the AFP)
INCOMPATIBILITY.

* HOL-Multivariate_Analysis:
  - Type class ordered_real_vector for ordered vector spaces.
  - New theory Complex_Basic_Analysis defining complex derivatives,
    holomorphic functions, etc., ported from HOL Light's canal.ml.
  - Changed order of ordered_euclidean_space to be compatible with
    pointwise ordering on products. It is therefore an instance of
    conditionally_complete_lattice and ordered_real_vector.
    INCOMPATIBILITY: use box instead of greaterThanLessThan or
    explicit set-comprehensions with eucl_less for other (half-)open
    intervals.
  - removed dependencies on type class ordered_euclidean_space with
    introduction of "cbox" on euclidean_space
  - renamed theorems:
      interval ~> box
      mem_interval ~> mem_box
      interval_eq_empty ~> box_eq_empty
      interval_ne_empty ~> box_ne_empty
      interval_sing(1) ~> cbox_sing
      interval_sing(2) ~> box_sing
      subset_interval_imp ~> subset_box_imp
      subset_interval ~> subset_box
      open_interval ~> open_box
      closed_interval ~> closed_cbox
      interior_closed_interval ~> interior_cbox
      bounded_closed_interval ~> bounded_cbox
      compact_interval ~> compact_cbox
      bounded_subset_closed_interval_symmetric ~> bounded_subset_cbox_symmetric
      bounded_subset_closed_interval ~> bounded_subset_cbox
      mem_interval_componentwiseI ~> mem_box_componentwiseI
      convex_box ~> convex_prod
      rel_interior_real_interval ~> rel_interior_real_box
      convex_interval ~> convex_box
      convex_hull_eq_real_interval ~> convex_hull_eq_real_cbox
      frechet_derivative_within_closed_interval ~> frechet_derivative_within_cbox
      content_closed_interval' ~> content_cbox'
      elementary_subset_interval ~> elementary_subset_box
      diameter_closed_interval ~> diameter_cbox
      frontier_closed_interval ~> frontier_cbox
      frontier_open_interval ~> frontier_box
      bounded_subset_open_interval_symmetric ~> bounded_subset_box_symmetric
      closure_open_interval ~> closure_box
      open_closed_interval_convex ~> open_cbox_convex
      open_interval_midpoint ~> box_midpoint
      content_image_affinity_interval ~> content_image_affinity_cbox
      is_interval_interval ~> is_interval_cbox + is_interval_box + is_interval_closed_interval
      bounded_interval ~> bounded_closed_interval + bounded_boxes
  - respective theorems for intervals over the reals:
      content_closed_interval + content_cbox
      has_integral + has_integral_real
      fine_division_exists + fine_division_exists_real
      has_integral_null + has_integral_null_real
      tagged_division_union_interval + tagged_division_union_interval_real
      has_integral_const + has_integral_const_real
      integral_const + integral_const_real
      has_integral_bound + has_integral_bound_real
      integrable_continuous + integrable_continuous_real
      integrable_subinterval + integrable_subinterval_real
      has_integral_reflect_lemma + has_integral_reflect_lemma_real
      integrable_reflect + integrable_reflect_real
      integral_reflect + integral_reflect_real
      image_affinity_interval + image_affinity_cbox
      image_smult_interval + image_smult_cbox
      integrable_const + integrable_const_ivl
      integrable_on_subinterval + integrable_on_subcbox
  - renamed theorems:
      derivative_linear ~> has_derivative_bounded_linear
      derivative_is_linear ~> has_derivative_linear
      bounded_linear_imp_linear ~> bounded_linear.linear

* HOL-Probability:
  - Renamed positive_integral to nn_integral:
    . Renamed all lemmas "*positive_integral*" to "*nn_integral*"
        positive_integral_positive ~> nn_integral_nonneg
    . Renamed abbreviation integral\<^sup>P to integral\<^sup>N.
  - replaced the Lebesgue integral on real numbers by the more
    general Bochner integral for functions into a real-normed vector
    space.
      integral_zero ~> integral_zero / integrable_zero
      integral_minus ~> integral_minus / integrable_minus
      integral_add ~> integral_add / integrable_add
      integral_diff ~> integral_diff / integrable_diff
      integral_setsum ~> integral_setsum / integrable_setsum
      integral_multc ~> integral_mult_left / integrable_mult_left
      integral_cmult ~> integral_mult_right / integrable_mult_right
      integral_triangle_inequality ~> integral_norm_bound
      integrable_nonneg ~> integrableI_nonneg
      integral_positive ~> integral_nonneg_AE
      integrable_abs_iff ~> integrable_abs_cancel
      positive_integral_lim_INF ~> nn_integral_liminf
      lebesgue_real_affine ~> lborel_real_affine
      borel_integral_has_integral ~> has_integral_lebesgue_integral
      integral_indicator ~> integral_real_indicator / integrable_real_indicator
      positive_integral_fst ~> nn_integral_fst'
      positive_integral_fst_measurable ~> nn_integral_fst
      positive_integral_snd_measurable ~> nn_integral_snd
      integrable_fst_measurable ~> integral_fst / integrable_fst / AE_integrable_fst
      integrable_snd_measurable ~> integral_snd / integrable_snd / AE_integrable_snd
      integral_monotone_convergence ~> integral_monotone_convergence / integrable_monotone_convergence
      integral_monotone_convergence_at_top ~> integral_monotone_convergence_at_top / integrable_monotone_convergence_at_top
      has_integral_iff_positive_integral_lebesgue ~> has_integral_iff_has_bochner_integral_lebesgue_nonneg
      lebesgue_integral_has_integral ~> has_integral_integrable_lebesgue_nonneg
      positive_integral_lebesgue_has_integral ~> integral_has_integral_lebesgue_nonneg / integrable_has_integral_lebesgue_nonneg
      lebesgue_integral_real_affine ~> nn_integral_real_affine
      has_integral_iff_positive_integral_lborel ~> integral_has_integral_nonneg / integrable_has_integral_nonneg

    The following theorems were removed:
      lebesgue_integral_nonneg
      lebesgue_integral_uminus
      lebesgue_integral_cmult
      lebesgue_integral_multc
      lebesgue_integral_cmult_nonneg
      integral_cmul_indicator
      integral_real

  - Formalized properties about
exponentially, Erlang, and normally distributed random variables.

* HOL-Decision_Procs: Separate command 'approximate' for
approximative computation in src/HOL/Decision_Procs/Approximation.
Minor INCOMPATIBILITY.


*** Scala ***

* The signature and semantics of Document.Snapshot.cumulate_markup /
select_markup have been clarified. Markup is now traversed in the
order of reports given by the prover: later markup is usually more
specific and may override results accumulated so far. The elements
guard is mandatory and checked precisely. Subtle INCOMPATIBILITY.

* Substantial reworking of internal PIDE protocol communication
channels. INCOMPATIBILITY.


*** ML ***

* Subtle change of semantics of Thm.eq_thm: theory stamps are not
compared (according to Thm.thm_ord), but assumed to be covered by the
current background theory. Thus equivalent data produced in different
branches of the theory graph usually coincides (e.g. relevant for
theory merge). Note that the softer Thm.eq_thm_prop is often more
appropriate than Thm.eq_thm.

* Proper context for basic Simplifier operations: rewrite_rule,
rewrite_goals_rule, rewrite_goals_tac etc. INCOMPATIBILITY, need to
pass runtime Proof.context (and ensure that the simplified entity
actually belongs to it).

* Proper context discipline for read_instantiate and instantiate_tac:
variables that are meant to become schematic need to be given as
fixed, and are generalized by the explicit context of local
variables. This corresponds to Isar attributes "where" and "of" with
'for' declaration. INCOMPATIBILITY, also due to potential change of
indices of schematic variables.

* Moved ML_Compiler.exn_trace and other operations on exceptions to
structure Runtime. Minor INCOMPATIBILITY.

* Discontinued old Toplevel.debug in favour of system option
"ML_exception_trace", which may be also declared within the context
via "declare [[ML_exception_trace = true]]". Minor INCOMPATIBILITY.

* Renamed configuration option "ML_trace" to "ML_source_trace".
Minor INCOMPATIBILITY.

* Configuration option "ML_print_depth" controls the pretty-printing
depth of the ML compiler within the context. The old print_depth in
ML is still available as default_print_depth, but rarely used. Minor
INCOMPATIBILITY.

* Toplevel function "use" refers to the raw ML bootstrap environment,
without Isar context or antiquotations. Potential INCOMPATIBILITY.
Note that 'ML_file' is the canonical command to load ML files into
the formal context.

* Simplified programming interface to define ML antiquotations, see
structure ML_Antiquotation. Minor INCOMPATIBILITY.

* ML antiquotation @{here} refers to its source position, which is
occasionally useful for experimentation and diagnostic purposes.

* ML antiquotation @{path} produces a Path.T value, similarly to
Path.explode, but with compile-time check against the file-system and
some PIDE markup. Note that unlike theory source, ML does not have a
well-defined master directory, so an absolute symbolic path
specification is usually required, e.g. "~~/src/HOL".

* ML antiquotation @{print} inlines a function to print an arbitrary
ML value, which is occasionally useful for diagnostic or
demonstration purposes.


*** System ***

* Proof General with its traditional helper scripts is now an
optional Isabelle component, e.g. see ProofGeneral-4.2-2 from the
Isabelle component repository http://isabelle.in.tum.de/components/.
Note that the "system" manual provides general explanations about
add-on components, especially those that are not bundled with the
release.

* The raw Isabelle process executable has been renamed from
"isabelle-process" to "isabelle_process", which conforms to common
shell naming conventions, and allows to define a shell function
within the Isabelle environment to avoid dynamic path lookup. Rare
incompatibility for old tools that do not use the ISABELLE_PROCESS
settings variable.
* Former "isabelle tty" has been superseded by "isabelle console",
with implicit build like "isabelle jedit", and without the mostly
obsolete Isar TTY loop.

* Simplified "isabelle display" tool. Settings variables DVI_VIEWER
and PDF_VIEWER now refer to the actual programs, not shell
command-lines. Discontinued option -c: invocation may be asynchronous
via desktop environment, without any special precautions. Potential
INCOMPATIBILITY with ambitious private settings.

* Removed obsolete "isabelle unsymbolize". Note that the usual format
for email communication is the Unicode rendering of Isabelle symbols,
as produced by Isabelle/jEdit, for example.

* Removed obsolete tool "wwwfind". Similar functionality may be
integrated into Isabelle/jEdit eventually.

* Improved 'display_drafts' concerning desktop integration and
repeated invocation in PIDE front-end: re-use single file
$ISABELLE_HOME_USER/tmp/drafts.pdf and corresponding views.

* Session ROOT specifications require explicit 'document_files' for
robust dependencies on LaTeX sources. Only these explicitly given
files are copied to the document output directory, before document
processing is started.

* Windows: support for regular TeX installation (e.g. MiKTeX) instead
of TeX Live from Cygwin.



New in Isabelle2013-2 (December 2013)
-------------------------------------

*** Prover IDE -- Isabelle/Scala/jEdit ***

* More robust editing of running commands with internal forks, e.g.
non-terminating 'by' steps.

* More relaxed Sledgehammer panel: avoid repeated application of
query after edits surrounding the command location.

* More status information about commands that are interrupted
accidentally (via physical event or Poly/ML runtime system signal,
e.g. out-of-memory).


*** System ***

* More robust termination of external processes managed by
Isabelle/ML: support cancellation of tasks within the range of
milliseconds, as required for PIDE document editing with
automatically tried tools (e.g. Sledgehammer).
* Reactivated Isabelle/Scala kill command for external processes on
Mac OS X, which was accidentally broken in Isabelle2013-1 due to a
workaround for some Debian/Ubuntu Linux versions from 2013.



New in Isabelle2013-1 (November 2013)
-------------------------------------

*** General ***

* Discontinued obsolete 'uses' within theory header. Note that
commands like 'ML_file' work without separate declaration of file
dependencies. Minor INCOMPATIBILITY.

* Discontinued redundant 'use' command, which was superseded by
'ML_file' in Isabelle2013. Minor INCOMPATIBILITY.

* Simplified subscripts within identifiers, using plain \<^sub>
instead of the second copy \<^isub> and \<^isup>. Superscripts are
only for literal tokens within notation; explicit mixfix annotations
for consts or fixed variables may be used as fall-back for unusual
names. Obsolete \ has been expanded to \<^sup>2 in Isabelle/HOL.
INCOMPATIBILITY, use "isabelle update_sub_sup" to standardize symbols
as a starting point for further manual cleanup. The ML reference
variable "legacy_isub_isup" may be set as temporary workaround, to
make the prover accept a subset of the old identifier syntax.

* Document antiquotations: term style "isub" has been renamed to
"sub". Minor INCOMPATIBILITY.

* Uniform management of "quick_and_dirty" as system option (see also
"isabelle options"), configuration option within the context (see
also Config.get in Isabelle/ML), and attribute in Isabelle/Isar.
Minor INCOMPATIBILITY, need to use more official Isabelle means to
access quick_and_dirty, instead of historical poking into mutable
reference.

* Renamed command 'print_configs' to 'print_options'. Minor
INCOMPATIBILITY.

* Proper diagnostic command 'print_state'. Old 'pr' (with its
implicit change of some global references) is retained for now as
control command, e.g. for ProofGeneral 3.7.x.

* Discontinued 'print_drafts' command with its old-fashioned PS
output and Unix command-line print spooling.
Minor INCOMPATIBILITY: use 'display_drafts' instead and print via the
regular document viewer.

* Updated and extended "isar-ref" and "implementation" manual,
eliminated old "ref" manual.


*** Prover IDE -- Isabelle/Scala/jEdit ***

* New manual "jedit" for Isabelle/jEdit, see isabelle doc or
Documentation panel.

* Dockable window "Documentation" provides access to Isabelle
documentation.

* Dockable window "Find" provides query operations for formal
entities (GUI front-end to 'find_theorems' command).

* Dockable window "Sledgehammer" manages asynchronous / parallel
sledgehammer runs over existing document sources, independently of
normal editing and checking process.

* Dockable window "Timing" provides an overview of relevant command
timing information, depending on option jedit_timing_threshold. The
same timing information is shown in the extended tooltip of the
command keyword, when hovering the mouse over it while the CONTROL or
COMMAND modifier is pressed.

* Improved dockable window "Theories": Continuous checking of proof
document (visible and required parts) may be controlled explicitly,
using check box or shortcut "C+e ENTER". Individual theory nodes may
be marked explicitly as required and checked in full, using check box
or shortcut "C+e SPACE".

* Improved completion mechanism, which is now managed by the
Isabelle/jEdit plugin instead of SideKick. Refined table of Isabelle
symbol abbreviations (see $ISABELLE_HOME/etc/symbols).

* Standard jEdit keyboard shortcut C+b complete-word is remapped to
isabelle.complete for explicit completion in Isabelle sources.
INCOMPATIBILITY wrt. jEdit defaults, may have to invent new shortcuts
to resolve conflict.

* Improved support of various "minor modes" for Isabelle NEWS,
options, session ROOT etc., with completion and SideKick tree view.

* Strictly monotonic document update, without premature cancellation
of running transactions that are still needed: avoid reset/restart of
such command executions while editing.
* Support for asynchronous print functions, as overlay to existing document content. * Support for automatic tools in HOL, which try to prove or disprove toplevel theorem statements. * Action isabelle.reset-font-size resets main text area font size according to Isabelle/Scala plugin option "jedit_font_reset_size" (see also "Plugin Options / Isabelle / General"). It can be bound to some keyboard shortcut by the user (e.g. C+0 and/or C+NUMPAD0). * File specifications in jEdit (e.g. file browser) may refer to $ISABELLE_HOME and $ISABELLE_HOME_USER on all platforms. Discontinued obsolete $ISABELLE_HOME_WINDOWS variable. * Improved support for Linux look-and-feel "GTK+", see also "Utilities / Global Options / Appearance". * Improved support of native Mac OS X functionality via "MacOSX" plugin, which is now enabled by default. *** Pure *** * Commands 'interpretation' and 'sublocale' are now target-sensitive. In particular, 'interpretation' allows for non-persistent interpretation within "context ... begin ... end" blocks offering a light-weight alternative to 'sublocale'. See "isar-ref" manual for details. * Improved locales diagnostic command 'print_dependencies'. * Discontinued obsolete 'axioms' command, which has been marked as legacy since Isabelle2009-2. INCOMPATIBILITY, use 'axiomatization' instead, while observing its uniform scope for polymorphism. * Discontinued empty name bindings in 'axiomatization'. INCOMPATIBILITY. * System option "proofs" has been discontinued. Instead the global state of Proofterm.proofs is persistently compiled into logic images as required, notably HOL-Proofs. Users no longer need to change Proofterm.proofs dynamically. Minor INCOMPATIBILITY. * Syntax translation functions (print_translation etc.) always depend on Proof.context. Discontinued former "(advanced)" option -- this is now the default. Minor INCOMPATIBILITY. 
* Former global reference trace_unify_fail is now available as
configuration option "unify_trace_failure" (global context only).

* SELECT_GOAL now retains the syntactic context of the overall goal
state (schematic variables etc.). Potential INCOMPATIBILITY in rare
situations.


*** HOL ***

* Stronger precedence of syntax for big intersection and union on
sets, in accordance with corresponding lattice operations.
INCOMPATIBILITY.

* Notation "{p:A. P}" now allows tuple patterns as well.

* Nested case expressions are now translated in a separate check
phase rather than during parsing. The data for case combinators is
separated from the datatype package. The declaration attribute
"case_translation" can be used to register new case combinators:

  declare [[case_translation case_combinator constructor1 ... constructorN]]

* Code generator:

  - 'code_printing' unifies 'code_const' / 'code_type' /
    'code_class' / 'code_instance'.

  - 'code_identifier' declares name hints for arbitrary identifiers
    in generated code, subsuming 'code_modulename'.

See the isar-ref manual for syntax diagrams, and the HOL theories
for examples.

* Attribute 'code': 'code' now declares concrete and abstract code
equations uniformly. Use explicit 'code equation' and 'code abstract'
to distinguish both when desired.

* Discontinued theories Code_Integer and Efficient_Nat in favour of a
more fine-grained stack of theories Code_Target_Int, Code_Binary_Nat,
Code_Target_Nat and Code_Target_Numeral. See the tutorial on code
generation for details. INCOMPATIBILITY.

* Numeric types are mapped by default to target language numerals:
natural (replaces former code_numeral) and integer (replaces former
code_int). Conversions are available as integer_of_natural /
natural_of_integer / integer_of_nat / nat_of_integer (in HOL) and
Code_Numeral.integer_of_natural / Code_Numeral.natural_of_integer
(in ML). INCOMPATIBILITY.
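The new numeral conversions can be tried out directly via the 'value'
command; a minimal sketch (the theory name and the concrete numbers
are arbitrary):

```isabelle
theory Numeral_Conv_Demo
imports Main
begin

(* hypothetical demo: convert between HOL's nat and the integer type
   used for code generation; 'value' evaluates via the code generator *)
value "integer_of_nat 42"
value "nat_of_integer 42"

end
```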
* Function package: For mutually recursive functions f and g, separate cases rules f.cases and g.cases are generated instead of unusable f_g.cases which exposed internal sum types. Potential INCOMPATIBILITY, in the case that the unusable rule was used nevertheless. * Function package: For each function f, new rules f.elims are generated, which eliminate equalities of the form "f x = t". * New command 'fun_cases' derives ad-hoc elimination rules for function equations as simplified instances of f.elims, analogous to inductive_cases. See ~~/src/HOL/ex/Fundefs.thy for some examples. * Lifting: - parametrized correspondence relations are now supported: + parametricity theorems for the raw term can be specified in the command lift_definition, which allow us to generate stronger transfer rules + setup_lifting generates stronger transfer rules if parametric correspondence relation can be generated + various new properties of the relator must be specified to support parametricity + parametricity theorem for the Quotient relation can be specified - setup_lifting generates domain rules for the Transfer package - stronger reflexivity prover of respectfulness theorems for type copies - ===> and --> are now local. The symbols can be introduced by interpreting the locale lifting_syntax (typically in an anonymous context) - Lifting/Transfer relevant parts of Library/Quotient_* are now in Main. Potential INCOMPATIBILITY - new commands for restoring and deleting Lifting/Transfer context: lifting_forget, lifting_update - the command print_quotmaps was renamed to print_quot_maps. 
INCOMPATIBILITY * Transfer: - better support for domains in Transfer: replace Domainp T by the actual invariant in a transferred goal - transfer rules can have as assumptions other transfer rules - Experimental support for transferring from the raw level to the abstract level: Transfer.transferred attribute - Attribute version of the transfer method: untransferred attribute * Reification and reflection: - Reification is now directly available in HOL-Main in structure "Reification". - Reflection now handles multiple lists with variables also. - The whole reflection stack has been decomposed into conversions. INCOMPATIBILITY. * Revised devices for recursive definitions over finite sets: - Only one fundamental fold combinator on finite set remains: Finite_Set.fold :: ('a => 'b => 'b) => 'b => 'a set => 'b This is now identity on infinite sets. - Locales ("mini packages") for fundamental definitions with Finite_Set.fold: folding, folding_idem. - Locales comm_monoid_set, semilattice_order_set and semilattice_neutr_order_set for big operators on sets. See theory Big_Operators for canonical examples. Note that foundational constants comm_monoid_set.F and semilattice_set.F correspond to former combinators fold_image and fold1 respectively. These are now gone. You may use those foundational constants as substitutes, but it is preferable to interpret the above locales accordingly. - Dropped class ab_semigroup_idem_mult (special case of lattice, no longer needed in connection with Finite_Set.fold etc.) - Fact renames: card.union_inter ~> card_Un_Int [symmetric] card.union_disjoint ~> card_Un_disjoint INCOMPATIBILITY. * Locale hierarchy for abstract orderings and (semi)lattices. * Complete_Partial_Order.admissible is defined outside the type class ccpo, but with mandatory prefix ccpo. Admissibility theorems lose the class predicate assumption or sort constraint when possible. INCOMPATIBILITY. 
* Introduce type class "conditionally_complete_lattice": Like a
complete lattice but does not assume the existence of the top and
bottom elements. Allows to generalize some lemmas about reals and
extended reals.

Removed SupInf and replaced it by the instantiation of
conditionally_complete_lattice for real. Renamed lemmas about
conditionally-complete lattices from Sup_... to cSup_... and from
Inf_... to cInf_... to avoid hiding of similar complete lattice
lemmas.

* Introduce type class linear_continuum as combination of
conditionally-complete lattices and inner dense linorders which have
more than one element. INCOMPATIBILITY.

* Introduced type classes order_top and order_bot. The old classes
top and bot only contain the syntax without assumptions.
INCOMPATIBILITY: Rename bot -> order_bot, top -> order_top

* Introduce type classes "no_top" and "no_bot" for orderings without
top and bottom elements.

* Split dense_linorder into inner_dense_order and no_top, no_bot.

* Complex_Main: Unify and move various concepts from
HOL-Multivariate_Analysis to HOL-Complex_Main.

  - Introduce type class (lin)order_topology and
    linear_continuum_topology. Allows to generalize theorems about
    limits and order. Instances are reals and extended reals.

  - continuous and continuous_on from Multivariate_Analysis:
    "continuous" is the continuity of a function at a filter.
    "isCont" is now an abbreviation: "isCont x f == continuous (at _) f".
    Generalized continuity lemmas from isCont to continuous on an
    arbitrary filter.

  - compact from Multivariate_Analysis. Use Bolzano's lemma to prove
    compactness of closed intervals on reals. Continuous functions
    attain infimum and supremum on compact sets. The inverse of a
    continuous function is continuous, when the function is
    continuous on a compact set.

  - connected from Multivariate_Analysis. Use it to prove the
    intermediate value theorem. Show connectedness of intervals on
    linear_continuum_topology.

  - first_countable_topology from Multivariate_Analysis.
    Is used to show equivalence of properties on the neighbourhood
    filter of x and on all sequences converging to x.

  - FDERIV: Definition of has_derivative moved to Deriv.thy. Moved
    theorems from Library/FDERIV.thy to Deriv.thy and base the
    definition of DERIV on FDERIV. Add variants of DERIV and FDERIV
    which are restricted to sets, i.e. to represent derivatives from
    left or right.

  - Removed the within-filter. It is replaced by the principal
    filter: F within X = inf F (principal X)

  - Introduce "at x within U" as a single constant, "at x" is now an
    abbreviation for "at x within UNIV"

  - Introduce named theorem collections tendsto_intros,
    continuous_intros, continuous_on_intros and FDERIV_intros.
    Theorems in tendsto_intros (or FDERIV_intros) are also available
    as tendsto_eq_intros (or FDERIV_eq_intros) where the right-hand
    side is replaced by a congruence rule. This allows to apply them
    as intro rules and then prove equivalence by the simplifier.

  - Restructured theories in HOL-Complex_Main:

    + Moved RealDef and RComplete into Real
    + Introduced Topological_Spaces and moved theorems about
      topological spaces, filters, limits and continuity to it
    + Renamed RealVector to Real_Vector_Spaces
    + Split Lim, SEQ, Series into Topological_Spaces,
      Real_Vector_Spaces, and Limits
    + Moved Ln and Log to Transcendental
    + Moved theorems about continuity from Deriv to Topological_Spaces

  - Remove various auxiliary lemmas.

INCOMPATIBILITY.

* Nitpick:
  - Added option "spy".
  - Reduce incidence of "too high arity" errors.

* Sledgehammer:
  - Renamed option:
      isar_shrink ~> isar_compress
    INCOMPATIBILITY.
  - Added options "isar_try0", "spy".
  - Better support for "isar_proofs".
  - MaSh has been fine-tuned and now runs as a local server.

* Improved support for ad hoc overloading of constants (see also
isar-ref manual and ~~/src/HOL/ex/Adhoc_Overloading_Examples.thy).

* Library/Polynomial.thy:
  - Use lifting for primitive definitions.
  - Explicit conversions from and to lists of coefficients, used for
    generated code.
  - Replaced recursion operator poly_rec by fold_coeffs.
  - Prefer pre-existing gcd operation for gcd.
  - Fact renames:
      poly_eq_iff ~> poly_eq_poly_eq_iff
      poly_ext ~> poly_eqI
      expand_poly_eq ~> poly_eq_iff
INCOMPATIBILITY.

* New Library/Simps_Case_Conv.thy: Provides commands simps_of_case
and case_of_simps to convert function definitions between a list of
equations with patterns on the lhs and a single equation with case
expressions on the rhs. See also Ex/Simps_Case_Conv_Examples.thy.

* New Library/FSet.thy: type of finite sets defined as a subtype of
sets defined by Lifting/Transfer.

* Discontinued theory src/HOL/Library/Eval_Witness.
INCOMPATIBILITY.

* Consolidation of library theories on product orders:

  Product_Lattice ~> Product_Order -- pointwise order on products
  Product_ord ~> Product_Lexorder -- lexicographic order on products

INCOMPATIBILITY.

* Imperative-HOL: The MREC combinator is considered legacy and no
longer included by default. INCOMPATIBILITY, use partial_function
instead, or import theory Legacy_Mrec as a fallback.

* HOL-Algebra: Discontinued theories ~~/src/HOL/Algebra/abstract and
~~/src/HOL/Algebra/poly. Existing theories should be based on
~~/src/HOL/Library/Polynomial instead. The latter provides
integration with HOL's type classes for rings. INCOMPATIBILITY.

* HOL-BNF:
  - Various improvements to BNF-based (co)datatype package, including
    new commands "primrec_new", "primcorec", and
    "datatype_new_compat", as well as documentation. See
    "datatypes.pdf" for details.
  - New "coinduction" method to avoid some boilerplate (compared to
    coinduct).
  - Renamed keywords:
      data ~> datatype_new
      codata ~> codatatype
      bnf_def ~> bnf
  - Renamed many generated theorems, including
      discs ~> disc
      map_comp' ~> map_comp
      map_id' ~> map_id
      sels ~> sel
      set_map' ~> set_map
      sets ~> set
INCOMPATIBILITY.


*** ML ***

* Spec_Check is a Quickcheck tool for Isabelle/ML.
The ML function "check_property" allows to check specifications of the form "ALL x y z. prop x y z". See also ~~/src/Tools/Spec_Check/ with its Examples.thy in particular. * Improved printing of exception trace in Poly/ML 5.5.1, with regular tracing output in the command transaction context instead of physical stdout. See also Toplevel.debug, Toplevel.debugging and ML_Compiler.exn_trace. * ML type "theory" is now immutable, without any special treatment of drafts or linear updates (which could lead to "stale theory" errors in the past). Discontinued obsolete operations like Theory.copy, Theory.checkpoint, and the auxiliary type theory_ref. Minor INCOMPATIBILITY. * More uniform naming of goal functions for skipped proofs: Skip_Proof.prove ~> Goal.prove_sorry Skip_Proof.prove_global ~> Goal.prove_sorry_global Minor INCOMPATIBILITY. * Simplifier tactics and tools use proper Proof.context instead of historic type simpset. Old-style declarations like addsimps, addsimprocs etc. operate directly on Proof.context. Raw type simpset retains its use as snapshot of the main Simplifier context, using simpset_of and put_simpset on Proof.context. INCOMPATIBILITY -- port old tools by making them depend on (ctxt : Proof.context) instead of (ss : simpset), then turn (simpset_of ctxt) into ctxt. * Modifiers for classical wrappers (e.g. addWrapper, delWrapper) operate on Proof.context instead of claset, for uniformity with addIs, addEs, addDs etc. Note that claset_of and put_claset allow to manage clasets separately from the context. * Discontinued obsolete ML antiquotations @{claset} and @{simpset}. INCOMPATIBILITY, use @{context} instead. * Antiquotation @{theory_context A} is similar to @{theory A}, but presents the result as initial Proof.context. *** System *** * Discontinued obsolete isabelle usedir, mkdir, make -- superseded by "isabelle build" in Isabelle2013. INCOMPATIBILITY. 
* Discontinued obsolete isabelle-process options -f and -u (former administrative aliases of option -e). Minor INCOMPATIBILITY. * Discontinued obsolete isabelle print tool, and PRINT_COMMAND settings variable. * Discontinued ISABELLE_DOC_FORMAT settings variable and historic document formats: dvi.gz, ps, ps.gz -- the default document format is always pdf. * Isabelle settings variable ISABELLE_BUILD_JAVA_OPTIONS allows to specify global resources of the JVM process run by isabelle build. * Toplevel executable $ISABELLE_HOME/bin/isabelle_scala_script allows to run Isabelle/Scala source files as standalone programs. * Improved "isabelle keywords" tool (for old-style ProofGeneral keyword tables): use Isabelle/Scala operations, which inspect outer syntax without requiring to build sessions first. * Sessions may be organized via 'chapter' specifications in the ROOT file, which determines a two-level hierarchy of browser info. The old tree-like organization via implicit sub-session relation (with its tendency towards erratic fluctuation of URLs) has been discontinued. The default chapter is called "Unsorted". Potential INCOMPATIBILITY for HTML presentation of theories. New in Isabelle2013 (February 2013) ----------------------------------- *** General *** * Theorem status about oracles and unfinished/failed future proofs is no longer printed by default, since it is incompatible with incremental / parallel checking of the persistent document model. ML function Thm.peek_status may be used to inspect a snapshot of the ongoing evaluation process. Note that in batch mode --- notably isabelle build --- the system ensures that future proofs of all accessible theorems in the theory context are finished (as before). * Configuration option show_markup controls direct inlining of markup into the printed representation of formal entities --- notably type and sort constraints. This enables Prover IDE users to retrieve that information via tooltips in the output window, for example. 
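For example, show_markup can be adjusted like any other configuration
option within the formal context; a minimal sketch:

```isabelle
(* disable inlined markup locally, e.g. to compare plain output *)
declare [[show_markup = false]]
```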
* Command 'ML_file' evaluates ML text from a file directly within
the theory, without any predeclaration via 'uses' in the theory
header.

* The old 'use' command and the corresponding keyword 'uses' in the
theory header are legacy features and will be discontinued soon.
Tools that load their additional source files may imitate the
'ML_file' implementation, such that the system can take care of
dependencies properly.

* Discontinued obsolete method fastsimp / tactic fast_simp_tac, which
has been called fastforce / fast_force_tac since Isabelle2011-1.

* Updated and extended "isar-ref" and "implementation" manual,
reduced remaining material in old "ref" manual.

* Improved support for auxiliary contexts that indicate block
structure for specifications. Nesting of "context fixes ... context
assumes ..." and "class ... context ...".

* Attribute "consumes" allows a negative value as well, which is
interpreted relatively to the total number of premises of the rule
in the target context. This form of declaration is stable when
exported from a nested 'context' with additional assumptions. It is
the preferred form for definitional packages, notably cases/rules
produced in HOL/inductive and HOL/function.

* More informative error messages for Isar proof commands involving
lazy enumerations (method applications etc.).

* Refined 'help' command to retrieve outer syntax commands according
to name patterns (with clickable results).


*** Prover IDE -- Isabelle/Scala/jEdit ***

* Parallel terminal proofs ('by') are enabled by default, likewise
proofs that are built into packages like 'datatype', 'function'.
This allows to "run ahead" checking the theory specifications on the
surface, while the prover is still crunching on internal
justifications. Unfinished / cancelled proofs are restarted as
required to complete full proof checking eventually.

* Improved output panel with tooltips, hyperlinks etc. based on the
same Rich_Text_Area as regular Isabelle/jEdit buffers.
Activation of tooltips leads to some window that supports the same recursively, which can lead to stacks of tooltips as the semantic document content is explored. ESCAPE closes the whole stack, individual windows may be closed separately, or detached to become independent jEdit dockables. * Improved support for commands that produce graph output: the text message contains a clickable area to open a new instance of the graph browser on demand. * More robust incremental parsing of outer syntax (partial comments, malformed symbols). Changing the balance of open/close quotes and comment delimiters works more conveniently with unfinished situations that frequently occur in user interaction. * More efficient painting and improved reactivity when editing large files. More scalable management of formal document content. * Smarter handling of tracing messages: prover process pauses after certain number of messages per command transaction, with some user dialog to stop or continue. This avoids swamping the front-end with potentially infinite message streams. * More plugin options and preferences, based on Isabelle/Scala. The jEdit plugin option panel provides access to some Isabelle/Scala options, including tuning parameters for editor reactivity and color schemes. * Dockable window "Symbols" provides some editing support for Isabelle symbols. * Dockable window "Monitor" shows ML runtime statistics. Note that continuous display of the chart slows down the system. * Improved editing support for control styles: subscript, superscript, bold, reset of style -- operating on single symbols or text selections. Cf. keyboard shortcuts C+e DOWN/UP/RIGHT/LEFT. * Actions isabelle.increase-font-size and isabelle.decrease-font-size adjust the main text area font size, and its derivatives for output, tooltips etc. Cf. keyboard shortcuts C-PLUS and C-MINUS, which often need to be adapted to local keyboard layouts. 
* More reactive completion popup by default: use \t (TAB) instead of \n (NEWLINE) to minimize intrusion into regular flow of editing. See also "Plugin Options / SideKick / General / Code Completion Options". * Implicit check and build dialog of the specified logic session image. For example, HOL, HOLCF, HOL-Nominal can be produced on demand, without bundling big platform-dependent heap images in the Isabelle distribution. * Uniform Java 7 platform on Linux, Mac OS X, Windows: recent updates from Oracle provide better multi-platform experience. This version is now bundled exclusively with Isabelle. *** Pure *** * Code generation for Haskell: restrict unqualified imports from Haskell Prelude to a small set of fundamental operations. * Command 'export_code': relative file names are interpreted relatively to master directory of current theory rather than the rather arbitrary current working directory. INCOMPATIBILITY. * Discontinued obsolete attribute "COMP". Potential INCOMPATIBILITY, use regular rule composition via "OF" / "THEN", or explicit proof structure instead. Note that Isabelle/ML provides a variety of operators like COMP, INCR_COMP, COMP_INCR, which need to be applied with some care where this is really required. * Command 'typ' supports an additional variant with explicit sort constraint, to infer and check the most general type conforming to a given sort. Example (in HOL): typ "_ * _ * bool * unit" :: finite * Command 'locale_deps' visualizes all locales and their relations as a Hasse diagram. *** HOL *** * Sledgehammer: - Added MaSh relevance filter based on machine-learning; see the Sledgehammer manual for details. - Polished Isar proofs generated with "isar_proofs" option. - Rationalized type encodings ("type_enc" option). - Renamed "kill_provers" subcommand to "kill_all". 
- Renamed options: isar_proof ~> isar_proofs isar_shrink_factor ~> isar_shrink max_relevant ~> max_facts relevance_thresholds ~> fact_thresholds * Quickcheck: added an optimisation for equality premises. It is switched on by default, and can be switched off by setting the configuration quickcheck_optimise_equality to false. * Quotient: only one quotient can be defined by quotient_type INCOMPATIBILITY. * Lifting: - generation of an abstraction function equation in lift_definition - quot_del attribute - renamed no_abs_code -> no_code (INCOMPATIBILITY.) * Simproc "finite_Collect" rewrites set comprehensions into pointfree expressions. * Preprocessing of the code generator rewrites set comprehensions into pointfree expressions. * The SMT solver Z3 has now by default a restricted set of directly supported features. For the full set of features (div/mod, nonlinear arithmetic, datatypes/records) with potential proof reconstruction failures, enable the configuration option "z3_with_extensions". Minor INCOMPATIBILITY. * Simplified 'typedef' specifications: historical options for implicit set definition and alternative name have been discontinued. The former behavior of "typedef (open) t = A" is now the default, but written just "typedef t = A". INCOMPATIBILITY, need to adapt theories accordingly. * Removed constant "chars"; prefer "Enum.enum" on type "char" directly. INCOMPATIBILITY. * Moved operation product, sublists and n_lists from theory Enum to List. INCOMPATIBILITY. * Theorem UN_o generalized to SUP_comp. INCOMPATIBILITY. * Class "comm_monoid_diff" formalises properties of bounded subtraction, with natural numbers and multisets as typical instances. * Added combinator "Option.these" with type "'a option set => 'a set". * Theory "Transitive_Closure": renamed lemmas reflcl_tranclp -> reflclp_tranclp rtranclp_reflcl -> rtranclp_reflclp INCOMPATIBILITY. 
* Theory "Rings": renamed lemmas (in class semiring) left_distrib ~> distrib_right right_distrib ~> distrib_left INCOMPATIBILITY. * Generalized the definition of limits: - Introduced the predicate filterlim (LIM x F. f x :> G) which expresses that when the input values x converge to F then the output f x converges to G. - Added filters for convergence to positive (at_top) and negative infinity (at_bot). - Moved infinity in the norm (at_infinity) from Multivariate_Analysis to Complex_Main. - Removed real_tendsto_inf, it is superseded by "LIM x F. f x :> at_top". INCOMPATIBILITY. * Theory "Library/Option_ord" provides instantiation of option type to lattice type classes. * Theory "Library/Multiset": renamed constant fold_mset ~> Multiset.fold fact fold_mset_commute ~> fold_mset_comm INCOMPATIBILITY. * Renamed theory Library/List_Prefix to Library/Sublist, with related changes as follows. - Renamed constants (and related lemmas) prefix ~> prefixeq strict_prefix ~> prefix - Replaced constant "postfix" by "suffixeq" with swapped argument order (i.e., "postfix xs ys" is now "suffixeq ys xs") and dropped old infix syntax "xs >>= ys"; use "suffixeq ys xs" instead. Renamed lemmas accordingly. - Added constant "list_hembeq" for homeomorphic embedding on lists. Added abbreviation "sublisteq" for special case "list_hembeq (op =)". - Theory Library/Sublist no longer provides "order" and "bot" type class instances for the prefix order (merely corresponding locale interpretations). The type class instances are now in theory Library/Prefix_Order. - The sublist relation of theory Library/Sublist_Order is now based on "Sublist.sublisteq". 
Renamed lemmas accordingly:

  le_list_append_le_same_iff ~> Sublist.sublisteq_append_le_same_iff
  le_list_append_mono ~> Sublist.list_hembeq_append_mono
  le_list_below_empty ~> Sublist.list_hembeq_Nil, Sublist.list_hembeq_Nil2
  le_list_Cons_EX ~> Sublist.list_hembeq_ConsD
  le_list_drop_Cons2 ~> Sublist.sublisteq_Cons2'
  le_list_drop_Cons_neq ~> Sublist.sublisteq_Cons2_neq
  le_list_drop_Cons ~> Sublist.sublisteq_Cons'
  le_list_drop_many ~> Sublist.sublisteq_drop_many
  le_list_filter_left ~> Sublist.sublisteq_filter_left
  le_list_rev_drop_many ~> Sublist.sublisteq_rev_drop_many
  le_list_rev_take_iff ~> Sublist.sublisteq_append
  le_list_same_length ~> Sublist.sublisteq_same_length
  le_list_take_many_iff ~> Sublist.sublisteq_append'
  less_eq_list.drop ~> less_eq_list_drop
  less_eq_list.induct ~> less_eq_list_induct
  not_le_list_length ~> Sublist.not_sublisteq_length

INCOMPATIBILITY.

* New theory Library/Countable_Set.

* Theory Library/Debug and Library/Parallel provide debugging and
parallel execution for code generated towards Isabelle/ML.

* Theory Library/FuncSet: Extended support for Pi and extensional,
and introduced the extensional dependent function space "PiE".
Replaced extensional_funcset by an abbreviation, and renamed lemmas
from extensional_funcset to PiE as follows:

  extensional_empty ~> PiE_empty
  extensional_funcset_empty_domain ~> PiE_empty_domain
  extensional_funcset_empty_range ~> PiE_empty_range
  extensional_funcset_arb ~> PiE_arb
  extensional_funcset_mem ~> PiE_mem
  extensional_funcset_extend_domainI ~> PiE_fun_upd
  extensional_funcset_restrict_domain ~> fun_upd_in_PiE
  extensional_funcset_extend_domain_eq ~> PiE_insert_eq
  card_extensional_funcset ~> card_PiE
  finite_extensional_funcset ~> finite_PiE

INCOMPATIBILITY.

* Theory Library/FinFun: theory of almost everywhere constant
functions (supersedes the AFP entry "Code Generation for Functions
as Data").

* Theory Library/Phantom: generic phantom type to make a type
parameter appear in a constant's type.
This alternative to adding TYPE('a) as another parameter avoids
unnecessary closures in generated code.

* Theory Library/RBT_Impl: efficient construction of red-black trees
from sorted associative lists. Merging two trees with rbt_union may
return a structurally different tree than before. Potential
INCOMPATIBILITY.

* Theory Library/IArray: immutable arrays with code generation.

* Theory Library/Finite_Lattice: theory of finite lattices.

* HOL/Multivariate_Analysis: replaced

  "basis :: 'a::euclidean_space => nat => real"
  "\\ :: (nat => real) => 'a::euclidean_space"

on euclidean spaces by using the inner product "_ \ _" with vectors
from the Basis set: "\\ i. f i" is superseded by
"SUM i : Basis. f i * r i".

With this change the following constants are also changed or
removed:

  DIM('a) :: nat ~> card (Basis :: 'a set) (is an abbreviation)
  a $$ i ~> inner a i (where i : Basis)
  cart_base i removed
  \, \' removed

Theorems about these constants were removed.

Renamed lemmas:

  component_le_norm ~> Basis_le_norm
  euclidean_eq ~> euclidean_eq_iff
  differential_zero_maxmin_component ~> differential_zero_maxmin_cart
  euclidean_simps ~> inner_simps
  independent_basis ~> independent_Basis
  span_basis ~> span_Basis
  in_span_basis ~> in_span_Basis
  norm_bound_component_le ~> norm_boound_Basis_le
  norm_bound_component_lt ~> norm_boound_Basis_lt
  component_le_infnorm ~> Basis_le_infnorm

INCOMPATIBILITY.

* HOL/Probability:
  - Added simproc "measurable" to automatically prove measurability.
  - Added induction rules for sigma sets with disjoint union
    (sigma_sets_induct_disjoint) and for Borel-measurable functions
    (borel_measurable_induct).
  - Added the Daniell-Kolmogorov theorem (the existence of the limit
    of a projective family).

* HOL/Cardinals: Theories of ordinals and cardinals (supersedes the
AFP entry "Ordinals_and_Cardinals").

* HOL/BNF: New (co)datatype package based on bounded natural
functors with support for mixed, nested recursion and interesting
non-free datatypes.
* HOL/Finite_Set and Relation: added new set and relation operations expressed by Finite_Set.fold. * New theory HOL/Library/RBT_Set: implementation of sets by red-black trees for the code generator. * HOL/Library/RBT and HOL/Library/Mapping have been converted to Lifting/Transfer. possible INCOMPATIBILITY. * HOL/Set: renamed Set.project -> Set.filter INCOMPATIBILITY. *** Document preparation *** * Dropped legacy antiquotations "term_style" and "thm_style", since styles may be given as arguments to "term" and "thm" already. Discontinued legacy styles "prem1" .. "prem19". * Default LaTeX rendering for \ is now based on eurosym package, instead of slightly exotic babel/greek. * Document variant NAME may use different LaTeX entry point document/root_NAME.tex if that file exists, instead of the common document/root.tex. * Simplified custom document/build script, instead of old-style document/IsaMakefile. Minor INCOMPATIBILITY. *** ML *** * The default limit for maximum number of worker threads is now 8, instead of 4, in correspondence to capabilities of contemporary hardware and Poly/ML runtime system. * Type Seq.results and related operations support embedded error messages within lazy enumerations, and thus allow to provide informative errors in the absence of any usable results. * Renamed Position.str_of to Position.here to emphasize that this is a formal device to inline positions into message text, but not necessarily printing visible text. *** System *** * Advanced support for Isabelle sessions and build management, see "system" manual for the chapter of that name, especially the "isabelle build" tool and its examples. The "isabelle mkroot" tool prepares session root directories for use with "isabelle build", similar to former "isabelle mkdir" for "isabelle usedir". Note that this affects document preparation as well. INCOMPATIBILITY, isabelle usedir / mkdir / make are rendered obsolete. 
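A minimal session specification for "isabelle build", in the style of
what "isabelle mkroot" produces, might look as follows in a ROOT file
(all names are hypothetical):

```
session "My_Session" = HOL +
  options [document = false]
  theories
    My_Theory
```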
* Discontinued obsolete Isabelle/build script, it is superseded by the
regular isabelle build tool. For example:

  isabelle build -s -b HOL

* Discontinued obsolete "isabelle makeall".

* Discontinued obsolete IsaMakefile and ROOT.ML files from the
Isabelle distribution, except for rudimentary src/HOL/IsaMakefile
that provides some traditional targets that invoke "isabelle build".
Note that this is inefficient! Applications of Isabelle/HOL involving
"isabelle make" should be upgraded to use "isabelle build" directly.

* The "isabelle options" tool prints Isabelle system options, as
required for "isabelle build", for example.

* The "isabelle logo" tool produces EPS and PDF format
simultaneously. Minor INCOMPATIBILITY in command-line options.

* The "isabelle install" tool now has a simpler command-line. Minor
INCOMPATIBILITY.

* The "isabelle components" tool helps to resolve add-on components
that are not bundled, or referenced from a bare-bones repository
version of Isabelle.

* Settings variable ISABELLE_PLATFORM_FAMILY refers to the general
platform family: "linux", "macos", "windows".

* The ML system is configured as regular component, and no longer
picked up from some surrounding directory. Potential INCOMPATIBILITY
for home-made settings.

* Improved ML runtime statistics (heap, threads, future tasks etc.).

* Discontinued support for Poly/ML 5.2.1, which was the last version
without exception positions and advanced ML compiler/toplevel
configuration.

* Discontinued special treatment of Proof General -- no longer guess
PROOFGENERAL_HOME based on accidental file-system layout. Minor
INCOMPATIBILITY: provide PROOFGENERAL_HOME and PROOFGENERAL_OPTIONS
settings manually, or use a Proof General version that has been
bundled as Isabelle component.
New in Isabelle2012 (May 2012)
------------------------------

*** General ***

* Prover IDE (PIDE) improvements:

  - more robust Sledgehammer integration (as before the sledgehammer
    command-line needs to be typed into the source buffer)
  - markup for bound variables
  - markup for types of term variables (displayed as tooltips)
  - support for user-defined Isar commands within the running session
  - improved support for Unicode outside original 16bit range
    e.g. glyph for \ (thanks to jEdit 4.5.1)

* Forward declaration of outer syntax keywords within the theory
header -- minor INCOMPATIBILITY for user-defined commands. Allow new
commands to be used in the same theory where defined.

* Auxiliary contexts indicate block structure for specifications with
additional parameters and assumptions. Such unnamed contexts may be
nested within other targets, like 'theory', 'locale', 'class',
'instantiation' etc. Results from the local context are generalized
accordingly and applied to the enclosing target context. Example:

  context
    fixes x y z :: 'a
    assumes xy: "x = y" and yz: "y = z"
  begin

  lemma my_trans: "x = z" using xy yz by simp

  end

  thm my_trans

The most basic application is to factor-out context elements of
several fixes/assumes/shows theorem statements, e.g. see
~~/src/HOL/Isar_Examples/Group_Context.thy

Any other local theory specification element works within the
"context ... begin ... end" block as well.

* Bundled declarations associate attributed fact expressions with a
given name in the context. These may be later included in other
contexts. This allows to manage context extensions casually, without
the logical dependencies of locales and locale interpretation. See
commands 'bundle', 'include', 'including' etc. in the isar-ref
manual.

* Commands 'lemmas' and 'theorems' allow local variables using 'for'
declaration, and results are standardized before being stored. Thus
old-style "standard" after instantiation or composition of facts
becomes obsolete.
Minor INCOMPATIBILITY, due to potential change of indices of
schematic variables.

* Rule attributes in local theory declarations (e.g. locale or class)
are now statically evaluated: the resulting theorem is stored instead
of the original expression. INCOMPATIBILITY in rare situations, where
the historic accident of dynamic re-evaluation in interpretations
etc. was exploited.

* New tutorial "Programming and Proving in Isabelle/HOL"
("prog-prove"). It completely supersedes "A Tutorial Introduction to
Structured Isar Proofs" ("isar-overview"), which has been removed. It
also supersedes "Isabelle/HOL, A Proof Assistant for Higher-Order
Logic" as the recommended beginners tutorial, but does not cover all
of the material of that old tutorial.

* Updated and extended reference manuals: "isar-ref",
"implementation", "system"; reduced remaining material in old "ref"
manual.

*** Pure ***

* Command 'definition' no longer exports the foundational "raw_def"
into the user context. Minor INCOMPATIBILITY, may use the regular
"def" result with attribute "abs_def" to imitate the old version.

* Attribute "abs_def" turns an equation of the form "f x y == t"
into "f == %x y. t", which ensures that "simp" or "unfold" steps
always expand it. This also works for object-logic equality.
(Formerly undocumented feature.)

* Sort constraints are now propagated in simultaneous statements,
just like type constraints. INCOMPATIBILITY in rare situations, where
distinct sorts used to be assigned accidentally. For example:

  lemma "P (x::'a::foo)" and "Q (y::'a::bar)"
    -- "now illegal"

  lemma "P (x::'a)" and "Q (y::'a::bar)"
    -- "now uniform 'a::bar instead of default sort for first
       occurrence (!)"

* Rule composition via attribute "OF" (or ML functions OF/MRS) is
more tolerant against multiple unifiers, as long as the final result
is unique. (As before, rules are composed in canonical right-to-left
order to accommodate newly introduced premises.)
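A minimal illustration of such rule composition (a hypothetical
snippet; conjI and refl are the usual HOL facts):

  thm conjI [OF refl refl]
    -- "both premises of conjI discharged with trivial equations"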
* Renamed some inner syntax categories:

  num ~> num_token
  xnum ~> xnum_token
  xstr ~> str_token

Minor INCOMPATIBILITY. Note that in practice "num_const" or
"num_position" etc. are mainly used instead (which also include
position information via constraints).

* Simplified configuration options for syntax ambiguity: see
"syntax_ambiguity_warning" and "syntax_ambiguity_limit" in isar-ref
manual. Minor INCOMPATIBILITY.

* Discontinued configuration option "syntax_positions": atomic terms
in parse trees are always annotated by position constraints.

* Old code generator for SML and its commands 'code_module',
'code_library', 'consts_code', 'types_code' have been discontinued.
Use commands of the generic code generator instead. INCOMPATIBILITY.

* Redundant attribute "code_inline" has been discontinued. Use
"code_unfold" instead. INCOMPATIBILITY.

* Dropped attribute "code_unfold_post" in favor of its dual
"code_abbrev", which yields a common pattern in definitions like

  definition [code_abbrev]: "f = t"

INCOMPATIBILITY.

* Obsolete 'types' command has been discontinued. Use 'type_synonym'
instead. INCOMPATIBILITY.

* Discontinued old "prems" fact, which used to refer to the
accidental collection of foundational premises in the context
(already marked as legacy since Isabelle2011).

*** HOL ***

* Type 'a set is now a proper type constructor (just as before
Isabelle2008). Definitions mem_def and Collect_def have disappeared.
Non-trivial INCOMPATIBILITY. For developments keeping predicates and
sets separate, it is often sufficient to rephrase some set S that has
been accidentally used as predicates by "%x. x : S", and some
predicate P that has been accidentally used as set by "{x. P x}".
Corresponding proofs in a first step should be pruned from any
tinkering with former theorems mem_def and Collect_def as far as
possible.

For developments which deliberately mix predicates and sets, a
planning step is necessary to determine what should become a
predicate and what a set.
It can be helpful to carry out that step in Isabelle2011-1 before
jumping right into the current release.

* Code generation by default implements sets as container type
rather than predicates. INCOMPATIBILITY.

* New type synonym 'a rel = ('a * 'a) set

* The representation of numerals has changed. Datatype "num"
represents strictly positive binary numerals, along with functions
"numeral :: num => 'a" and "neg_numeral :: num => 'a" to represent
positive and negated numeric literals, respectively. See also
definitions in ~~/src/HOL/Num.thy. Potential INCOMPATIBILITY, some
user theories may require adaptations as follows:

  - Theorems with number_ring or number_semiring constraints: These
    classes are gone; use comm_ring_1 or comm_semiring_1 instead.

  - Theories defining numeric types: Remove number, number_semiring,
    and number_ring instances. Defer all theorems about numerals
    until after classes one and semigroup_add have been instantiated.

  - Numeral-only simp rules: Replace each rule having a "number_of v"
    pattern with two copies, one for numeral and one for neg_numeral.

  - Theorems about subclasses of semiring_1 or ring_1: These classes
    automatically support numerals now, so more simp rules and
    simprocs may now apply within the proof.

  - Definitions and theorems using old constructors
    Pls/Min/Bit0/Bit1: Redefine using other integer operations.

* Transfer: New package intended to generalize the existing
"descending" method and related theorem attributes from the Quotient
package. (Not all functionality is implemented yet, but future
development will focus on Transfer as an eventual replacement for the
corresponding parts of the Quotient package.)

  - transfer_rule attribute: Maintains a collection of transfer
    rules, which relate constants at two different types. Transfer
    rules may relate different type instances of the same polymorphic
    constant, or they may relate an operation on a raw type to a
    corresponding operation on an abstract type (quotient or
    subtype).
    For example:

      ((A ===> B) ===> list_all2 A ===> list_all2 B) map map
      (cr_int ===> cr_int ===> cr_int)
        (%(x,y) (u,v). (x+u, y+v)) plus_int

  - transfer method: Replaces a subgoal on abstract types with an
    equivalent subgoal on the corresponding raw types. Constants are
    replaced with corresponding ones according to the transfer rules.
    Goals are generalized over all free variables by default; this is
    necessary for variables whose types change, but can be overridden
    for specific variables with e.g. "transfer fixing: x y z". The
    variant transfer' method allows replacing a subgoal with one that
    is logically stronger (rather than equivalent).

  - relator_eq attribute: Collects identity laws for relators of
    various type constructors, e.g. "list_all2 (op =) = (op =)". The
    transfer method uses these lemmas to infer transfer rules for
    non-polymorphic constants on the fly.

  - transfer_prover method: Assists with proving a transfer rule for
    a new constant, provided the constant is defined in terms of
    other constants that already have transfer rules. It should be
    applied after unfolding the constant definitions.

  - HOL/ex/Transfer_Int_Nat.thy: Example theory demonstrating
    transfer from type nat to type int.

* Lifting: New package intended to generalize the
quotient_definition facility of the Quotient package; designed to
work with Transfer.

  - lift_definition command: Defines operations on an abstract type
    in terms of a corresponding operation on a representation type.
    Example syntax:

      lift_definition dlist_insert :: "'a => 'a dlist => 'a dlist"
        is List.insert

    Users must discharge a respectfulness proof obligation when each
    constant is defined. (For a type copy, i.e. a typedef with UNIV,
    the proof is discharged automatically.) The obligation is
    presented in a user-friendly, readable form; a respectfulness
    theorem in the standard format and a transfer rule are generated
    by the package.

  - Integration with code_abstype: For typedefs (e.g.
subtypes corresponding to a datatype invariant, such as dlist),
lift_definition generates a code certificate theorem and sets up
code generation for each constant.

  - setup_lifting command: Sets up the Lifting package to work with a
    user-defined type. The user must provide either a quotient
    theorem or a type_definition theorem. The package configures
    transfer rules for equality and quantifiers on the type, and sets
    up the lift_definition command to work with the type.

  - Usage examples: See Quotient_Examples/Lift_DList.thy,
    Quotient_Examples/Lift_RBT.thy, Quotient_Examples/Lift_FSet.thy,
    Word/Word.thy and Library/Float.thy.

* Quotient package:

  - The 'quotient_type' command now supports a 'morphisms' option
    with rep and abs functions, similar to typedef.

  - 'quotient_type' sets up new types to work with the Lifting and
    Transfer packages, as with 'setup_lifting'.

  - The 'quotient_definition' command now requires the user to prove
    a respectfulness property at the point where the constant is
    defined, similar to lift_definition; INCOMPATIBILITY.

  - Renamed predicate 'Quotient' to 'Quotient3', and renamed theorems
    accordingly, INCOMPATIBILITY.

* New diagnostic command 'find_unused_assms' to find potentially
superfluous assumptions in theorems using Quickcheck.

* Quickcheck:

  - Quickcheck returns variable assignments as counterexamples, which
    allows to reveal the underspecification of functions under test.
    For example, refuting "hd xs = x", it presents the variable
    assignment xs = [] and x = a1 as a counterexample, assuming that
    any property is false whenever "hd []" occurs in it.

    These counterexamples are marked as potentially spurious, as
    Quickcheck also returns "xs = []" as a counterexample to the
    obvious theorem "hd xs = hd xs".

    After finding a potentially spurious counterexample, Quickcheck
    continues searching for genuine ones. By default, Quickcheck
    shows potentially spurious and genuine counterexamples.
    The option "genuine_only" sets quickcheck to only show genuine
    counterexamples.

  - The command 'quickcheck_generator' creates random and exhaustive
    value generators for a given type and operations. It generates
    values by using the operations as if they were constructors of
    that type.

  - Support for multisets.

  - Added "use_subtype" options.

  - Added "quickcheck_locale" configuration to specify how to process
    conjectures in a locale context.

* Nitpick: Fixed infinite loop caused by the 'peephole_optim' option
and affecting 'rat' and 'real'.

* Sledgehammer:

  - Integrated more tightly with SPASS, as described in the ITP 2012
    paper "More SPASS with Isabelle".
  - Made it try "smt" as a fallback if "metis" fails or times out.
  - Added support for the following provers: Alt-Ergo (via Why3 and
    TFF1), iProver, iProver-Eq.
  - Sped up the minimizer.
  - Added "lam_trans", "uncurry_aliases", and "minimize" options.
  - Renamed "slicing" ("no_slicing") option to "slice"
    ("dont_slice").
  - Renamed "sound" option to "strict".

* Metis: Added possibility to specify lambda translation scheme as a
parenthesized argument (e.g., "by (metis (lifting) ...)").

* SMT: Renamed "smt_fixed" option to "smt_read_only_certificates".

* Command 'try0': Renamed from 'try_methods'. INCOMPATIBILITY.

* New "case_product" attribute to generate a case rule doing multiple
case distinctions at the same time. E.g.

  list.exhaust [case_product nat.exhaust]

produces a rule which can be used to perform case distinction on both
a list and a nat.

* New "eventually_elim" method as a generalized variant of the
eventually_elim* rules. Supports structured proofs.

* Typedef with implicit set definition is considered legacy. Use
"typedef (open)" form instead, which will eventually become the
default.

* Record: code generation can be switched off manually with

  declare [[record_codegen = false]]  -- "default true"

* Datatype: type parameters allow explicit sort constraints.
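For example, a datatype declaration may now constrain its type
parameter to a sort directly (a hypothetical sketch; "box" is not
part of the distribution):

  datatype ('a::countable) box = Box 'a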
* Concrete syntax for case expressions includes constraints for
source positions, and thus produces Prover IDE markup for its
bindings. INCOMPATIBILITY for old-style syntax translations that
augment the pattern notation; e.g. see src/HOL/HOLCF/One.thy for
translations of one_case.

* Clarified attribute "mono_set": pure declaration without modifying
the result of the fact expression.

* More default pred/set conversions on a couple of relation
operations and predicates. Added powers of predicate relations.
Consolidation of some relation theorems:

  converse_def ~> converse_unfold
  rel_comp_def ~> relcomp_unfold
  symp_def ~> (modified, use symp_def and sym_def instead)
  transp_def ~> transp_trans
  Domain_def ~> Domain_unfold
  Range_def ~> Domain_converse [symmetric]

Generalized theorems INF_INT_eq, INF_INT_eq2, SUP_UN_eq, SUP_UN_eq2.

See theory "Relation" for examples for making use of pred/set
conversions by means of attributes "to_set" and "to_pred".

INCOMPATIBILITY.

* Renamed facts about the power operation on relations, i.e., relpow
to match the constant's name:

  rel_pow_1 ~> relpow_1
  rel_pow_0_I ~> relpow_0_I
  rel_pow_Suc_I ~> relpow_Suc_I
  rel_pow_Suc_I2 ~> relpow_Suc_I2
  rel_pow_0_E ~> relpow_0_E
  rel_pow_Suc_E ~> relpow_Suc_E
  rel_pow_E ~> relpow_E
  rel_pow_Suc_D2 ~> relpow_Suc_D2
  rel_pow_Suc_E2 ~> relpow_Suc_E2
  rel_pow_Suc_D2' ~> relpow_Suc_D2'
  rel_pow_E2 ~> relpow_E2
  rel_pow_add ~> relpow_add
  rel_pow_commute ~> relpow_commute
  rel_pow_empty ~> relpow_empty
  rtrancl_imp_UN_rel_pow ~> rtrancl_imp_UN_relpow
  rel_pow_imp_rtrancl ~> relpow_imp_rtrancl
  rtrancl_is_UN_rel_pow ~> rtrancl_is_UN_relpow
  rtrancl_imp_rel_pow ~> rtrancl_imp_relpow
  rel_pow_fun_conv ~> relpow_fun_conv
  rel_pow_finite_bounded1 ~> relpow_finite_bounded1
  rel_pow_finite_bounded ~> relpow_finite_bounded
  rtrancl_finite_eq_rel_pow ~> rtrancl_finite_eq_relpow
  trancl_finite_eq_rel_pow ~> trancl_finite_eq_relpow
  single_valued_rel_pow ~> single_valued_relpow

INCOMPATIBILITY.
* Theory Relation: Consolidated constant name for relation
composition and corresponding theorem names:

  - Renamed constant rel_comp to relcomp.

  - Dropped abbreviation pred_comp. Use relcompp instead.

  - Renamed theorems:

      rel_compI ~> relcompI
      rel_compEpair ~> relcompEpair
      rel_compE ~> relcompE
      pred_comp_rel_comp_eq ~> relcompp_relcomp_eq
      rel_comp_empty1 ~> relcomp_empty1
      rel_comp_mono ~> relcomp_mono
      rel_comp_subset_Sigma ~> relcomp_subset_Sigma
      rel_comp_distrib ~> relcomp_distrib
      rel_comp_distrib2 ~> relcomp_distrib2
      rel_comp_UNION_distrib ~> relcomp_UNION_distrib
      rel_comp_UNION_distrib2 ~> relcomp_UNION_distrib2
      single_valued_rel_comp ~> single_valued_relcomp
      rel_comp_def ~> relcomp_unfold
      converse_rel_comp ~> converse_relcomp
      pred_compI ~> relcomppI
      pred_compE ~> relcomppE
      pred_comp_bot1 ~> relcompp_bot1
      pred_comp_bot2 ~> relcompp_bot2
      transp_pred_comp_less_eq ~> transp_relcompp_less_eq
      pred_comp_mono ~> relcompp_mono
      pred_comp_distrib ~> relcompp_distrib
      pred_comp_distrib2 ~> relcompp_distrib2
      converse_pred_comp ~> converse_relcompp
      finite_rel_comp ~> finite_relcomp
      set_rel_comp ~> set_relcomp

INCOMPATIBILITY.

* Theory Divides: Discontinued redundant theorems about div and mod.
INCOMPATIBILITY, use the corresponding generic theorems instead.
  DIVISION_BY_ZERO ~> div_by_0, mod_by_0
  zdiv_self ~> div_self
  zmod_self ~> mod_self
  zdiv_zero ~> div_0
  zmod_zero ~> mod_0
  zdiv_zmod_equality ~> div_mod_equality2
  zdiv_zmod_equality2 ~> div_mod_equality
  zmod_zdiv_trivial ~> mod_div_trivial
  zdiv_zminus_zminus ~> div_minus_minus
  zmod_zminus_zminus ~> mod_minus_minus
  zdiv_zminus2 ~> div_minus_right
  zmod_zminus2 ~> mod_minus_right
  zdiv_minus1_right ~> div_minus1_right
  zmod_minus1_right ~> mod_minus1_right
  zdvd_mult_div_cancel ~> dvd_mult_div_cancel
  zmod_zmult1_eq ~> mod_mult_right_eq
  zpower_zmod ~> power_mod
  zdvd_zmod ~> dvd_mod
  zdvd_zmod_imp_zdvd ~> dvd_mod_imp_dvd
  mod_mult_distrib ~> mult_mod_left
  mod_mult_distrib2 ~> mult_mod_right

* Removed redundant theorems nat_mult_2 and nat_mult_2_right; use
generic mult_2 and mult_2_right instead. INCOMPATIBILITY.

* Finite_Set.fold now qualified. INCOMPATIBILITY.

* Consolidated theorem names concerning fold combinators:

  inf_INFI_fold_inf ~> inf_INF_fold_inf
  sup_SUPR_fold_sup ~> sup_SUP_fold_sup
  INFI_fold_inf ~> INF_fold_inf
  SUPR_fold_sup ~> SUP_fold_sup
  union_set ~> union_set_fold
  minus_set ~> minus_set_fold
  INFI_set_fold ~> INF_set_fold
  SUPR_set_fold ~> SUP_set_fold
  INF_code ~> INF_set_foldr
  SUP_code ~> SUP_set_foldr
  foldr.simps ~> foldr.simps (in point-free formulation)
  foldr_fold_rev ~> foldr_conv_fold
  foldl_fold ~> foldl_conv_fold
  foldr_foldr ~> foldr_conv_foldl
  foldl_foldr ~> foldl_conv_foldr
  fold_set_remdups ~> fold_set_fold_remdups
  fold_set ~> fold_set_fold
  fold1_set ~> fold1_set_fold

INCOMPATIBILITY.

* Dropped rarely useful theorems concerning fold combinators:
foldl_apply, foldl_fun_comm, foldl_rev, fold_weak_invariant,
rev_foldl_cons, fold_set_remdups, fold_set, fold_set1,
concat_conv_foldl, foldl_weak_invariant, foldl_invariant,
foldr_invariant, foldl_absorb0, foldl_foldr1_lemma, foldl_foldr1,
listsum_conv_fold, listsum_foldl, sort_foldl_insort, foldl_assoc,
foldr_conv_foldl, start_le_sum, elem_le_sum, sum_eq_0_conv.
INCOMPATIBILITY. For the common phrases "%xs. List.foldr plus xs 0"
and "List.foldl plus 0", prefer "List.listsum". Otherwise it can be
useful to boil down "List.foldr" and "List.foldl" to "List.fold" by
unfolding "foldr_conv_fold" and "foldl_conv_fold".

* Dropped lemmas minus_set_foldr, union_set_foldr, union_coset_foldr,
inter_coset_foldr, Inf_fin_set_foldr, Sup_fin_set_foldr,
Min_fin_set_foldr, Max_fin_set_foldr, Inf_set_foldr, Sup_set_foldr,
INF_set_foldr, SUP_set_foldr. INCOMPATIBILITY. Prefer corresponding
lemmas over fold rather than foldr, or make use of lemmas
fold_conv_foldr and fold_rev.

* Congruence rules Option.map_cong and Option.bind_cong for recursion
through option types.

* "Transitive_Closure.ntrancl": bounded transitive closure on
relations.

* Constant "Set.not_member" now qualified. INCOMPATIBILITY.

* Theory Int: Discontinued many legacy theorems specific to type int.
INCOMPATIBILITY, use the corresponding generic theorems instead.

  zminus_zminus ~> minus_minus
  zminus_0 ~> minus_zero
  zminus_zadd_distrib ~> minus_add_distrib
  zadd_commute ~> add_commute
  zadd_assoc ~> add_assoc
  zadd_left_commute ~> add_left_commute
  zadd_ac ~> add_ac
  zmult_ac ~> mult_ac
  zadd_0 ~> add_0_left
  zadd_0_right ~> add_0_right
  zadd_zminus_inverse2 ~> left_minus
  zmult_zminus ~> mult_minus_left
  zmult_commute ~> mult_commute
  zmult_assoc ~> mult_assoc
  zadd_zmult_distrib ~> left_distrib
  zadd_zmult_distrib2 ~> right_distrib
  zdiff_zmult_distrib ~> left_diff_distrib
  zdiff_zmult_distrib2 ~> right_diff_distrib
  zmult_1 ~> mult_1_left
  zmult_1_right ~> mult_1_right
  zle_refl ~> order_refl
  zle_trans ~> order_trans
  zle_antisym ~> order_antisym
  zle_linear ~> linorder_linear
  zless_linear ~> linorder_less_linear
  zadd_left_mono ~> add_left_mono
  zadd_strict_right_mono ~> add_strict_right_mono
  zadd_zless_mono ~> add_less_le_mono
  int_0_less_1 ~> zero_less_one
  int_0_neq_1 ~> zero_neq_one
  zless_le ~> less_le
  zpower_zadd_distrib ~> power_add
  zero_less_zpower_abs_iff ~> zero_less_power_abs_iff
  zero_le_zpower_abs ~> zero_le_power_abs

* Theory Deriv: Renamed

  DERIV_nonneg_imp_nonincreasing ~> DERIV_nonneg_imp_nondecreasing

* Theory Library/Multiset: Improved code generation of multisets.

* Theory HOL/Library/Set_Algebras: Addition and multiplication on
sets are expressed via type classes again. The special syntax \/\
has been replaced by plain +/*. Removed constant setsum_set, which is
now subsumed by Big_Operators.setsum. INCOMPATIBILITY.

* Theory HOL/Library/Diagonalize has been removed. INCOMPATIBILITY,
use theory HOL/Library/Nat_Bijection instead.

* Theory HOL/Library/RBT_Impl: Backing implementation of red-black
trees is now inside a type class context. Names of affected
operations and lemmas have been prefixed by rbt_. INCOMPATIBILITY for
theories working directly with raw red-black trees, adapt the names
as follows:

  Operations:
  bulkload -> rbt_bulkload
  del_from_left -> rbt_del_from_left
  del_from_right -> rbt_del_from_right
  del -> rbt_del
  delete -> rbt_delete
  ins -> rbt_ins
  insert -> rbt_insert
  insertw -> rbt_insert_with
  insert_with_key -> rbt_insert_with_key
  map_entry -> rbt_map_entry
  lookup -> rbt_lookup
  sorted -> rbt_sorted
  tree_greater -> rbt_greater
  tree_less -> rbt_less
  tree_less_symbol -> rbt_less_symbol
  union -> rbt_union
  union_with -> rbt_union_with
  union_with_key -> rbt_union_with_key

  Lemmas:
  balance_left_sorted -> balance_left_rbt_sorted
  balance_left_tree_greater -> balance_left_rbt_greater
  balance_left_tree_less -> balance_left_rbt_less
  balance_right_sorted -> balance_right_rbt_sorted
  balance_right_tree_greater -> balance_right_rbt_greater
  balance_right_tree_less -> balance_right_rbt_less
  balance_sorted -> balance_rbt_sorted
  balance_tree_greater -> balance_rbt_greater
  balance_tree_less -> balance_rbt_less
  bulkload_is_rbt -> rbt_bulkload_is_rbt
  combine_sorted -> combine_rbt_sorted
  combine_tree_greater -> combine_rbt_greater
  combine_tree_less -> combine_rbt_less
  delete_in_tree -> rbt_delete_in_tree
  delete_is_rbt -> rbt_delete_is_rbt
  del_from_left_tree_greater -> rbt_del_from_left_rbt_greater
  del_from_left_tree_less -> rbt_del_from_left_rbt_less
  del_from_right_tree_greater -> rbt_del_from_right_rbt_greater
  del_from_right_tree_less -> rbt_del_from_right_rbt_less
  del_in_tree -> rbt_del_in_tree
  del_inv1_inv2 -> rbt_del_inv1_inv2
  del_sorted -> rbt_del_rbt_sorted
  del_tree_greater -> rbt_del_rbt_greater
  del_tree_less -> rbt_del_rbt_less
  dom_lookup_Branch -> dom_rbt_lookup_Branch
  entries_lookup -> entries_rbt_lookup
  finite_dom_lookup -> finite_dom_rbt_lookup
  insert_sorted -> rbt_insert_rbt_sorted
  insertw_is_rbt -> rbt_insertw_is_rbt
  insertwk_is_rbt -> rbt_insertwk_is_rbt
  insertwk_sorted -> rbt_insertwk_rbt_sorted
  insertw_sorted -> rbt_insertw_rbt_sorted
  ins_sorted -> ins_rbt_sorted
  ins_tree_greater -> ins_rbt_greater
  ins_tree_less -> ins_rbt_less
  is_rbt_sorted -> is_rbt_rbt_sorted
  lookup_balance -> rbt_lookup_balance
  lookup_bulkload -> rbt_lookup_rbt_bulkload
  lookup_delete -> rbt_lookup_rbt_delete
  lookup_Empty -> rbt_lookup_Empty
  lookup_from_in_tree -> rbt_lookup_from_in_tree
  lookup_in_tree -> rbt_lookup_in_tree
  lookup_ins -> rbt_lookup_ins
  lookup_insert -> rbt_lookup_rbt_insert
  lookup_insertw -> rbt_lookup_rbt_insertw
  lookup_insertwk -> rbt_lookup_rbt_insertwk
  lookup_keys -> rbt_lookup_keys
  lookup_map -> rbt_lookup_map
  lookup_map_entry -> rbt_lookup_rbt_map_entry
  lookup_tree_greater -> rbt_lookup_rbt_greater
  lookup_tree_less -> rbt_lookup_rbt_less
  lookup_union -> rbt_lookup_rbt_union
  map_entry_color_of -> rbt_map_entry_color_of
  map_entry_inv1 -> rbt_map_entry_inv1
  map_entry_inv2 -> rbt_map_entry_inv2
  map_entry_is_rbt -> rbt_map_entry_is_rbt
  map_entry_sorted -> rbt_map_entry_rbt_sorted
  map_entry_tree_greater -> rbt_map_entry_rbt_greater
  map_entry_tree_less -> rbt_map_entry_rbt_less
  map_tree_greater -> map_rbt_greater
  map_tree_less -> map_rbt_less
  map_sorted -> map_rbt_sorted
  paint_sorted -> paint_rbt_sorted
  paint_lookup -> paint_rbt_lookup
  paint_tree_greater -> paint_rbt_greater
  paint_tree_less -> paint_rbt_less
  sorted_entries -> rbt_sorted_entries
  tree_greater_eq_trans -> rbt_greater_eq_trans
  tree_greater_nit -> rbt_greater_nit
  tree_greater_prop -> rbt_greater_prop
  tree_greater_simps -> rbt_greater_simps
  tree_greater_trans -> rbt_greater_trans
  tree_less_eq_trans -> rbt_less_eq_trans
  tree_less_nit -> rbt_less_nit
  tree_less_prop -> rbt_less_prop
  tree_less_simps -> rbt_less_simps
  tree_less_trans -> rbt_less_trans
  tree_ord_props -> rbt_ord_props
  union_Branch -> rbt_union_Branch
  union_is_rbt -> rbt_union_is_rbt
  unionw_is_rbt -> rbt_unionw_is_rbt
  unionwk_is_rbt -> rbt_unionwk_is_rbt
  unionwk_sorted -> rbt_unionwk_rbt_sorted

* Theory HOL/Library/Float: Floating point numbers are now defined as
a subset of the real numbers. All operations are defined using the
lifting framework and proofs use the transfer method.
INCOMPATIBILITY.

  Changed Operations:
  float_abs -> abs
  float_nprt -> nprt
  float_pprt -> pprt
  pow2 -> use powr
  round_down -> float_round_down
  round_up -> float_round_up
  scale -> exponent

  Removed Operations:
  ceiling_fl, lb_mult, lb_mod, ub_mult, ub_mod

  Renamed Lemmas:
  abs_float_def -> Float.compute_float_abs
  bitlen_ge0 -> bitlen_nonneg
  bitlen.simps -> Float.compute_bitlen
  float_components -> Float_mantissa_exponent
  float_divl.simps -> Float.compute_float_divl
  float_divr.simps -> Float.compute_float_divr
  float_eq_odd -> mult_powr_eq_mult_powr_iff
  float_power -> real_of_float_power
  lapprox_posrat_def -> Float.compute_lapprox_posrat
  lapprox_rat.simps -> Float.compute_lapprox_rat
  le_float_def' -> Float.compute_float_le
  le_float_def -> less_eq_float.rep_eq
  less_float_def' -> Float.compute_float_less
  less_float_def -> less_float.rep_eq
  normfloat_def -> Float.compute_normfloat
  normfloat_imp_odd_or_zero -> mantissa_not_dvd and mantissa_noteq_0
  normfloat -> normfloat_def
  normfloat_unique -> use normfloat_def
  number_of_float_Float -> Float.compute_float_numeral,
    Float.compute_float_neg_numeral
  one_float_def -> Float.compute_float_one
  plus_float_def -> Float.compute_float_plus
  rapprox_posrat_def -> Float.compute_rapprox_posrat
  rapprox_rat.simps -> Float.compute_rapprox_rat
  real_of_float_0 -> zero_float.rep_eq
  real_of_float_1 -> one_float.rep_eq
  real_of_float_abs -> abs_float.rep_eq
  real_of_float_add -> plus_float.rep_eq
  real_of_float_minus -> uminus_float.rep_eq
  real_of_float_mult -> times_float.rep_eq
  real_of_float_simp -> Float.rep_eq
  real_of_float_sub -> minus_float.rep_eq
  round_down.simps -> Float.compute_float_round_down
  round_up.simps -> Float.compute_float_round_up
  times_float_def -> Float.compute_float_times
  uminus_float_def -> Float.compute_float_uminus
  zero_float_def -> Float.compute_float_zero

  Lemmas not necessary anymore, use the transfer method:
  bitlen_B0, bitlen_B1, bitlen_ge1, bitlen_Min, bitlen_Pls,
  float_divl, float_divr, float_le_simp,
  float_less1_mantissa_bound, float_less_simp, float_less_zero,
  float_le_zero, float_pos_less1_e_neg, float_pos_m_pos,
  float_split, float_split2, floor_pos_exp, lapprox_posrat,
  lapprox_posrat_bottom, lapprox_rat, lapprox_rat_bottom,
  normalized_float, rapprox_posrat, rapprox_posrat_le1,
  rapprox_rat, real_of_float_ge0_exp, real_of_float_neg_exp,
  real_of_float_nge0_exp, round_down, floor_fl, round_up,
  zero_le_float, zero_less_float

* New theory HOL/Library/DAList provides an abstract type for
association lists with distinct keys.

* Session HOL/IMP: Added new theory of abstract interpretation of
annotated commands.

* Session HOL-Import: Re-implementation from scratch is faster,
simpler, and more scalable. Requires a proof bundle, which is
available as an external component. Discontinued old (and mostly
dead) Importer for HOL4 and HOL Light. INCOMPATIBILITY.

* Session HOL-Word: Discontinued many redundant theorems specific to
type 'a word. INCOMPATIBILITY, use the corresponding generic
theorems instead.
  word_sub_alt ~> word_sub_wi
  word_add_alt ~> word_add_def
  word_mult_alt ~> word_mult_def
  word_minus_alt ~> word_minus_def
  word_0_alt ~> word_0_wi
  word_1_alt ~> word_1_wi
  word_add_0 ~> add_0_left
  word_add_0_right ~> add_0_right
  word_mult_1 ~> mult_1_left
  word_mult_1_right ~> mult_1_right
  word_add_commute ~> add_commute
  word_add_assoc ~> add_assoc
  word_add_left_commute ~> add_left_commute
  word_mult_commute ~> mult_commute
  word_mult_assoc ~> mult_assoc
  word_mult_left_commute ~> mult_left_commute
  word_left_distrib ~> left_distrib
  word_right_distrib ~> right_distrib
  word_left_minus ~> left_minus
  word_diff_0_right ~> diff_0_right
  word_diff_self ~> diff_self
  word_sub_def ~> diff_minus
  word_diff_minus ~> diff_minus
  word_add_ac ~> add_ac
  word_mult_ac ~> mult_ac
  word_plus_ac0 ~> add_0_left add_0_right add_ac
  word_times_ac1 ~> mult_1_left mult_1_right mult_ac
  word_order_trans ~> order_trans
  word_order_refl ~> order_refl
  word_order_antisym ~> order_antisym
  word_order_linear ~> linorder_linear
  lenw1_zero_neq_one ~> zero_neq_one
  word_number_of_eq ~> number_of_eq
  word_of_int_add_hom ~> wi_hom_add
  word_of_int_sub_hom ~> wi_hom_sub
  word_of_int_mult_hom ~> wi_hom_mult
  word_of_int_minus_hom ~> wi_hom_neg
  word_of_int_succ_hom ~> wi_hom_succ
  word_of_int_pred_hom ~> wi_hom_pred
  word_of_int_0_hom ~> word_0_wi
  word_of_int_1_hom ~> word_1_wi

* Session HOL-Word: New proof method "word_bitwise" for splitting
machine word equalities and inequalities into logical circuits,
defined in HOL/Word/WordBitwise.thy. Supports addition, subtraction,
multiplication, shifting by constants, bitwise operators and numeric
constants. Requires fixed-length word types, not 'a word. Solves
many standard word identities outright and converts more into first
order problems amenable to blast or similar. See also examples in
HOL/Word/Examples/WordExamples.thy.

* Session HOL-Probability: Introduced the type "'a measure" to
represent measures, this replaces the records 'a algebra and
'a measure_space.
The locales based on subset_class now have two locale parameters: the
space \<Omega> and the set of measurable sets M.  The product of
probability spaces now uses the same constant as the finite product of
sigma-finite measure spaces "PiM :: ('i => 'a) measure".  Most
constants are now defined outside of locales and gain an additional
parameter, like null_sets, almost_eventually or \<mu>'.  Measure space
constructions for distributions and densities now have their own
constants distr and density.  Instead of using locales to describe
measure spaces with a finite space, the measures count_space and
point_measure are introduced.  INCOMPATIBILITY.

Renamed constants:
  measure -> emeasure
  finite_measure.\<mu>' -> measure
  product_algebra_generator -> prod_algebra
  product_prob_space.emb -> prod_emb
  product_prob_space.infprod_algebra -> PiM

Removed locales:
  completeable_measure_space
  finite_measure_space
  finite_prob_space
  finite_product_finite_prob_space
  finite_product_sigma_algebra
  finite_sigma_algebra
  measure_space
  pair_finite_prob_space
  pair_finite_sigma_algebra
  pair_finite_space
  pair_sigma_algebra
  product_sigma_algebra

Removed constants:
  conditional_space
  distribution -> use distr measure, or distributed predicate
  image_space
  joint_distribution -> use distr measure, or distributed predicate
  pair_measure_generator
  product_prob_space.infprod_algebra -> use PiM
  subvimage

Replacement theorems:
  finite_additivity_sufficient -> ring_of_sets.countably_additiveI_finite
  finite_measure.empty_measure -> measure_empty
  finite_measure.finite_continuity_from_above -> finite_measure.finite_Lim_measure_decseq
  finite_measure.finite_continuity_from_below -> finite_measure.finite_Lim_measure_incseq
  finite_measure.finite_measure_countably_subadditive -> finite_measure.finite_measure_subadditive_countably
  finite_measure.finite_measure_eq -> finite_measure.emeasure_eq_measure
  finite_measure.finite_measure -> finite_measure.emeasure_finite
  finite_measure.finite_measure_finite_singleton ->
    finite_measure.finite_measure_eq_setsum_singleton
  finite_measure.positive_measure' -> measure_nonneg
  finite_measure.real_measure -> finite_measure.emeasure_real
  finite_product_prob_space.finite_measure_times -> finite_product_prob_space.finite_measure_PiM_emb
  finite_product_sigma_algebra.in_P -> sets_PiM_I_finite
  finite_product_sigma_algebra.P_empty -> space_PiM_empty, sets_PiM_empty
  information_space.conditional_entropy_eq -> information_space.conditional_entropy_simple_distributed
  information_space.conditional_entropy_positive -> information_space.conditional_entropy_nonneg_simple
  information_space.conditional_mutual_information_eq_mutual_information -> information_space.conditional_mutual_information_eq_mutual_information_simple
  information_space.conditional_mutual_information_generic_positive -> information_space.conditional_mutual_information_nonneg_simple
  information_space.conditional_mutual_information_positive -> information_space.conditional_mutual_information_nonneg_simple
  information_space.entropy_commute -> information_space.entropy_commute_simple
  information_space.entropy_eq -> information_space.entropy_simple_distributed
  information_space.entropy_generic_eq -> information_space.entropy_simple_distributed
  information_space.entropy_positive -> information_space.entropy_nonneg_simple
  information_space.entropy_uniform_max -> information_space.entropy_uniform
  information_space.KL_eq_0_imp -> information_space.KL_eq_0_iff_eq
  information_space.KL_eq_0 -> information_space.KL_same_eq_0
  information_space.KL_ge_0 -> information_space.KL_nonneg
  information_space.mutual_information_eq -> information_space.mutual_information_simple_distributed
  information_space.mutual_information_positive -> information_space.mutual_information_nonneg_simple
  Int_stable_cuboids -> Int_stable_atLeastAtMost
  Int_stable_product_algebra_generator -> positive_integral
  measure_preserving -> equality "distr M N f = N" "f : measurable M N"
  measure_space.additive -> emeasure_additive
  measure_space.AE_iff_null_set -> AE_iff_null
  measure_space.almost_everywhere_def -> eventually_ae_filter
  measure_space.almost_everywhere_vimage -> AE_distrD
  measure_space.continuity_from_above -> INF_emeasure_decseq
  measure_space.continuity_from_above_Lim -> Lim_emeasure_decseq
  measure_space.continuity_from_below_Lim -> Lim_emeasure_incseq
  measure_space.continuity_from_below -> SUP_emeasure_incseq
  measure_space_density -> emeasure_density
  measure_space.density_is_absolutely_continuous -> absolutely_continuousI_density
  measure_space.integrable_vimage -> integrable_distr
  measure_space.integral_translated_density -> integral_density
  measure_space.integral_vimage -> integral_distr
  measure_space.measure_additive -> plus_emeasure
  measure_space.measure_compl -> emeasure_compl
  measure_space.measure_countable_increasing -> emeasure_countable_increasing
  measure_space.measure_countably_subadditive -> emeasure_subadditive_countably
  measure_space.measure_decseq -> decseq_emeasure
  measure_space.measure_Diff -> emeasure_Diff
  measure_space.measure_Diff_null_set -> emeasure_Diff_null_set
  measure_space.measure_eq_0 -> emeasure_eq_0
  measure_space.measure_finitely_subadditive -> emeasure_subadditive_finite
  measure_space.measure_finite_singleton -> emeasure_eq_setsum_singleton
  measure_space.measure_incseq -> incseq_emeasure
  measure_space.measure_insert -> emeasure_insert
  measure_space.measure_mono -> emeasure_mono
  measure_space.measure_not_negative -> emeasure_not_MInf
  measure_space.measure_preserving_Int_stable -> measure_eqI_generator_eq
  measure_space.measure_setsum -> setsum_emeasure
  measure_space.measure_setsum_split -> setsum_emeasure_cover
  measure_space.measure_space_vimage -> emeasure_distr
  measure_space.measure_subadditive_finite -> emeasure_subadditive_finite
  measure_space.measure_subadditive -> subadditive
  measure_space.measure_top -> emeasure_space
  measure_space.measure_UN_eq_0 -> emeasure_UN_eq_0
  measure_space.measure_Un_null_set -> emeasure_Un_null_set
  measure_space.positive_integral_translated_density -> positive_integral_density
  measure_space.positive_integral_vimage -> positive_integral_distr
  measure_space.real_continuity_from_above -> Lim_measure_decseq
  measure_space.real_continuity_from_below -> Lim_measure_incseq
  measure_space.real_measure_countably_subadditive -> measure_subadditive_countably
  measure_space.real_measure_Diff -> measure_Diff
  measure_space.real_measure_finite_Union -> measure_finite_Union
  measure_space.real_measure_setsum_singleton -> measure_eq_setsum_singleton
  measure_space.real_measure_subadditive -> measure_subadditive
  measure_space.real_measure_Union -> measure_Union
  measure_space.real_measure_UNION -> measure_UNION
  measure_space.simple_function_vimage -> simple_function_comp
  measure_space.simple_integral_vimage -> simple_integral_distr
  measure_unique_Int_stable -> measure_eqI_generator_eq
  measure_unique_Int_stable_vimage -> measure_eqI_generator_eq
  pair_sigma_algebra.measurable_cut_fst -> sets_Pair1
  pair_sigma_algebra.measurable_cut_snd -> sets_Pair2
  pair_sigma_algebra.measurable_pair_image_fst -> measurable_Pair1
  pair_sigma_algebra.measurable_pair_image_snd -> measurable_Pair2
  pair_sigma_algebra.measurable_product_swap -> measurable_pair_swap_iff
  pair_sigma_algebra.pair_sigma_algebra_measurable -> measurable_pair_swap
  pair_sigma_algebra.pair_sigma_algebra_swap_measurable -> measurable_pair_swap'
  pair_sigma_algebra.sets_swap -> sets_pair_swap
  pair_sigma_finite.measure_cut_measurable_fst -> pair_sigma_finite.measurable_emeasure_Pair1
  pair_sigma_finite.measure_cut_measurable_snd -> pair_sigma_finite.measurable_emeasure_Pair2
  pair_sigma_finite.measure_preserving_swap -> pair_sigma_finite.distr_pair_swap
  pair_sigma_finite.pair_measure_alt2 -> pair_sigma_finite.emeasure_pair_measure_alt2
  pair_sigma_finite.pair_measure_alt -> pair_sigma_finite.emeasure_pair_measure_alt
  pair_sigma_finite.pair_measure_times ->
    pair_sigma_finite.emeasure_pair_measure_Times
  prob_space.indep_distribution_eq_measure -> prob_space.indep_vars_iff_distr_eq_PiM
  prob_space.indep_var_distributionD -> prob_space.indep_var_distribution_eq
  prob_space.measure_space_1 -> prob_space.emeasure_space_1
  prob_space.prob_space_vimage -> prob_space_distr
  prob_space.random_variable_restrict -> measurable_restrict
  prob_space_unique_Int_stable -> measure_eqI_prob_space
  product_algebraE -> prod_algebraE_all
  product_algebra_generator_der -> prod_algebra_eq_finite
  product_algebra_generator_into_space -> prod_algebra_sets_into_space
  product_algebraI -> sets_PiM_I_finite
  product_measure_exists -> product_sigma_finite.sigma_finite
  product_prob_space.finite_index_eq_finite_product -> product_prob_space.sets_PiM_generator
  product_prob_space.finite_measure_infprod_emb_Pi -> product_prob_space.measure_PiM_emb
  product_prob_space.infprod_spec -> product_prob_space.emeasure_PiM_emb_not_empty
  product_prob_space.measurable_component -> measurable_component_singleton
  product_prob_space.measurable_emb -> measurable_prod_emb
  product_prob_space.measurable_into_infprod_algebra -> measurable_PiM_single
  product_prob_space.measurable_singleton_infprod -> measurable_component_singleton
  product_prob_space.measure_emb -> emeasure_prod_emb
  product_prob_space.measure_preserving_restrict -> product_prob_space.distr_restrict
  product_sigma_algebra.product_algebra_into_space -> space_closed
  product_sigma_finite.measure_fold -> product_sigma_finite.distr_merge
  product_sigma_finite.measure_preserving_component_singelton -> product_sigma_finite.distr_singleton
  product_sigma_finite.measure_preserving_merge -> product_sigma_finite.distr_merge
  sequence_space.measure_infprod -> sequence_space.measure_PiM_countable
  sets_product_algebra -> sets_PiM
  sigma_algebra.measurable_sigma -> measurable_measure_of
  sigma_finite_measure.disjoint_sigma_finite -> sigma_finite_disjoint
  sigma_finite_measure.RN_deriv_vimage -> sigma_finite_measure.RN_deriv_distr
  sigma_product_algebra_sigma_eq -> sigma_prod_algebra_sigma_eq
  space_product_algebra -> space_PiM

* Session HOL-TPTP: support for parsing and importing TPTP problems
(all languages) into Isabelle/HOL.


*** FOL ***

* New "case_product" attribute (see HOL).


*** ZF ***

* Greater support for structured proofs involving induction or case
analysis.

* Much greater use of mathematical symbols.

* Removal of many ML theorem bindings.  INCOMPATIBILITY.


*** ML ***

* Antiquotation @{keyword "name"} produces a parser for outer syntax
from a minor keyword introduced via theory header declaration.

* Antiquotation @{command_spec "name"} produces the
Outer_Syntax.command_spec from a major keyword introduced via theory
header declaration; it can be passed to Outer_Syntax.command etc.

* Local_Theory.define no longer hard-wires default theorem name
"foo_def", but retains the binding as given.  If that is Binding.empty /
Attrib.empty_binding, the result is not registered as a user-level
fact.  The Local_Theory.define_internal variant allows specifying a
non-empty name (used for the foundation in the background theory),
while omitting the fact binding in the user context.  Potential
INCOMPATIBILITY for derived definitional packages: need to specify the
naming policy for primitive definitions more explicitly.

* Renamed Thm.capply to Thm.apply, and Thm.cabs to Thm.lambda in
conformance with similar operations in structure Term and Logic.

* Antiquotation @{attributes [...]} embeds attribute source
representation into the ML text, which is particularly useful with
declarations like Local_Theory.note.

* Structure Proof_Context follows standard naming scheme.  Old
ProofContext has been discontinued.  INCOMPATIBILITY.

* Refined Local_Theory.declaration {syntax, pervasive}, with subtle
change of semantics: the update is applied to the auxiliary local
theory context as well.
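The @{command_spec} antiquotation described above can be used roughly
like this; a minimal unchecked sketch, assuming "hello" has been
declared as a major keyword in the theory header (the command name and
its behaviour are purely hypothetical):

```isabelle
ML {*
  (*register a new outer syntax command "hello": the parser accepts no
    arguments and yields a transition that merely prints a message*)
  val _ =
    Outer_Syntax.command @{command_spec "hello"} "print a greeting"
      (Scan.succeed (Toplevel.keep (fn _ => writeln "Hello, world!")))
*}
```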
* Modernized some old-style infix operations:

  addeqcongs ~> Simplifier.add_eqcong
  deleqcongs ~> Simplifier.del_eqcong
  addcongs ~> Simplifier.add_cong
  delcongs ~> Simplifier.del_cong
  setmksimps ~> Simplifier.set_mksimps
  setmkcong ~> Simplifier.set_mkcong
  setmksym ~> Simplifier.set_mksym
  setmkeqTrue ~> Simplifier.set_mkeqTrue
  settermless ~> Simplifier.set_termless
  setsubgoaler ~> Simplifier.set_subgoaler
  addsplits ~> Splitter.add_split
  delsplits ~> Splitter.del_split


*** System ***

* The USER_HOME settings variable points to the cross-platform user
home directory, which coincides with HOME on POSIX systems only.
Likewise, the Isabelle path specification "~" now expands to
$USER_HOME, instead of the former $HOME.  A different default for
USER_HOME may be set explicitly in the shell environment, before
Isabelle settings are evaluated.  Minor INCOMPATIBILITY: need to adapt
Isabelle paths where the generic user home was intended.

* ISABELLE_HOME_WINDOWS refers to ISABELLE_HOME in Windows file name
notation, which is useful for the jEdit file browser, for example.

* ISABELLE_JDK_HOME settings variable points to a JDK with javac and
jar (not just a JRE).



New in Isabelle2011-1 (October 2011)
------------------------------------

*** General ***

* Improved Isabelle/jEdit Prover IDE (PIDE), which can be invoked as
"isabelle jedit" or "ISABELLE_HOME/Isabelle" on the command line.

  - Management of multiple theory files directly from the editor
    buffer store -- bypassing the file-system (no requirement to save
    files for checking).

  - Markup of formal entities within the text buffer, with semantic
    highlighting, tooltips and hyperlinks to jump to defining source
    positions.

  - Improved text rendering, with sub/superscripts in the source
    buffer (including support for copy/paste wrt. output panel, HTML
    theory output and other non-Isabelle text boxes).

  - Refined scheduling of proof checking and printing of results,
    based on interactive editor view.
    (Note: jEdit folding and narrowing allows restricting buffer
    perspectives explicitly.)

  - Reduced CPU performance requirements, usable on machines with few
    cores.

  - Reduced memory requirements due to pruning of unused document
    versions (garbage collection).

See also ~~/src/Tools/jEdit/README.html for further information,
including some remaining limitations.

* Theory loader: source files are exclusively located via the master
directory of each theory node (where the .thy file itself resides).
The global load path (such as src/HOL/Library) has been discontinued.
Note that the path element ~~ may be used to reference theories in the
Isabelle home folder -- for instance, "~~/src/HOL/Library/FuncSet".
INCOMPATIBILITY.

* Theory loader: source files are identified by content via SHA1
digests.  Discontinued former path/modtime identification and optional
ISABELLE_FILE_IDENT plugin scripts.

* Parallelization of nested Isar proofs is subject to
Goal.parallel_proofs_threshold (default 100).  See also isabelle
usedir option -Q.

* Name space: former unsynchronized references are now proper
configuration options, with more conventional names:

  long_names ~> names_long
  short_names ~> names_short
  unique_names ~> names_unique

Minor INCOMPATIBILITY, need to declare options in context like this:

  declare [[names_unique = false]]

* Literal facts `prop` may contain dummy patterns, e.g. `_ = _`.  Note
that the result needs to be unique, which means fact specifications
may have to be refined after enriching a proof context.

* Attribute "case_names" has been refined: the assumptions in each
case can now be named by following the case name with [name1 name2
...].

* Isabelle/Isar reference manual has been updated and extended:

  - "Synopsis" provides a catalog of main Isar language concepts.

  - Formal references in syntax diagrams, via @{rail} antiquotation.

  - Updated material from classic "ref" manual, notably about
    "Classical Reasoner".
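The refined "case_names" attribute described above can be used roughly
like this; a minimal unchecked sketch (the derived rule name
"nat_induct'" is illustrative, nat.induct is the standard induction
rule):

```isabelle
lemmas nat_induct' [case_names zero succ[IH]] = nat.induct

lemma "P (n::nat)"
proof (induct n rule: nat_induct')
  case zero
  show ?case sorry
next
  case (succ n)
  note succ.IH  (*the named assumption of this case*)
  show ?case sorry
qed
```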
*** HOL ***

* Class bot and top require an underlying partial order rather than a
preorder: uniqueness of bot and top is guaranteed.  INCOMPATIBILITY.

* Class complete_lattice: generalized a couple of lemmas from sets;
generalized theorems INF_cong and SUP_cong.  New type classes for
complete boolean algebras and complete linear orders.  Lemmas
Inf_less_iff, less_Sup_iff, INF_less_iff, less_SUP_iff now reside in
class complete_linorder.

Changed proposition of lemmas Inf_bool_def, Sup_bool_def, Inf_fun_def,
Sup_fun_def, Inf_apply, Sup_apply.

Removed redundant lemmas (the right hand side gives hints how to
replace them for (metis ...), or (simp only: ...) proofs):

  Inf_singleton ~> Inf_insert [where A="{}", unfolded Inf_empty inf_top_right]
  Sup_singleton ~> Sup_insert [where A="{}", unfolded Sup_empty sup_bot_right]
  Inf_binary ~> Inf_insert, Inf_empty, and inf_top_right
  Sup_binary ~> Sup_insert, Sup_empty, and sup_bot_right
  Int_eq_Inter ~> Inf_insert, Inf_empty, and inf_top_right
  Un_eq_Union ~> Sup_insert, Sup_empty, and sup_bot_right
  Inter_def ~> INF_def, image_def
  Union_def ~> SUP_def, image_def
  INT_eq ~> INF_def, and image_def
  UN_eq ~> SUP_def, and image_def
  INF_subset ~> INF_superset_mono [OF _ order_refl]

More consistent and comprehensive names:

  INTER_eq_Inter_image ~> INF_def
  UNION_eq_Union_image ~> SUP_def
  INFI_def ~> INF_def
  SUPR_def ~> SUP_def
  INF_leI ~> INF_lower
  INF_leI2 ~> INF_lower2
  le_INFI ~> INF_greatest
  le_SUPI ~> SUP_upper
  le_SUPI2 ~> SUP_upper2
  SUP_leI ~> SUP_least
  INFI_bool_eq ~> INF_bool_eq
  SUPR_bool_eq ~> SUP_bool_eq
  INFI_apply ~> INF_apply
  SUPR_apply ~> SUP_apply
  INTER_def ~> INTER_eq
  UNION_def ~> UNION_eq

INCOMPATIBILITY.

* Renamed theory Complete_Lattice to Complete_Lattices.
INCOMPATIBILITY.

* Theory Complete_Lattices: lemmas Inf_eq_top_iff, INF_eq_top_iff,
INF_image, Inf_insert, INF_top, Inf_top_conv, INF_top_conv, SUP_bot,
Sup_bot_conv, SUP_bot_conv, Sup_eq_top_iff, SUP_eq_top_iff, SUP_image,
Sup_insert are now declared as [simp].  INCOMPATIBILITY.
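The replacement hints above can be applied like this; a minimal
unchecked sketch of a proof step that formerly used Inf_singleton,
following the hint verbatim:

```isabelle
lemma "Inf {a} = (a::'a::complete_lattice)"
  by (simp only: Inf_insert [where A="{}", unfolded Inf_empty inf_top_right])
```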
* Theory Lattice: lemmas compl_inf_bot, compl_le_compl_iff,
compl_sup_top, inf_idem, inf_left_idem, inf_sup_absorb, sup_idem,
sup_inf_absorb, sup_left_idem are now declared as [simp].  Minor
INCOMPATIBILITY.

* Added syntactic classes "inf" and "sup" for the respective
constants.  INCOMPATIBILITY: Changes in the argument order of the
(mostly internal) locale predicates for some derived classes.

* Theorem collections ball_simps and bex_simps do not contain theorems
referring to UNION any longer; these have been moved to collection
UN_ball_bex_simps.  INCOMPATIBILITY.

* Theory Archimedean_Field: floor is now defined as a parameter of a
separate type class floor_ceiling.

* Theory Finite_Set: more coherent development of fold_set locales:

  locale fun_left_comm ~> locale comp_fun_commute
  locale fun_left_comm_idem ~> locale comp_fun_idem

Both use a point-free characterization; interpretation proofs may need
adjustment.  INCOMPATIBILITY.

* Theory Limits: Type "'a net" has been renamed to "'a filter", in
accordance with standard mathematical terminology.  INCOMPATIBILITY.

* Theory Complex_Main: The locale interpretations for the
bounded_linear and bounded_bilinear locales have been removed, in
order to reduce the number of duplicate lemmas.  Users must use the
original names for distributivity theorems, potential INCOMPATIBILITY.

  divide.add ~> add_divide_distrib
  divide.diff ~> diff_divide_distrib
  divide.setsum ~> setsum_divide_distrib
  mult.add_right ~> right_distrib
  mult.diff_right ~> right_diff_distrib
  mult_right.setsum ~> setsum_right_distrib
  mult_left.diff ~> left_diff_distrib

* Theory Complex_Main: Several redundant theorems have been removed or
replaced by more general versions.  INCOMPATIBILITY.
  real_diff_def ~> minus_real_def
  real_divide_def ~> divide_real_def
  real_less_def ~> less_le
  real_abs_def ~> abs_real_def
  real_sgn_def ~> sgn_real_def
  real_mult_commute ~> mult_commute
  real_mult_assoc ~> mult_assoc
  real_mult_1 ~> mult_1_left
  real_add_mult_distrib ~> left_distrib
  real_zero_not_eq_one ~> zero_neq_one
  real_mult_inverse_left ~> left_inverse
  INVERSE_ZERO ~> inverse_zero
  real_le_refl ~> order_refl
  real_le_antisym ~> order_antisym
  real_le_trans ~> order_trans
  real_le_linear ~> linear
  real_le_eq_diff ~> le_iff_diff_le_0
  real_add_left_mono ~> add_left_mono
  real_mult_order ~> mult_pos_pos
  real_mult_less_mono2 ~> mult_strict_left_mono
  real_of_int_real_of_nat ~> real_of_int_of_nat_eq
  real_0_le_divide_iff ~> zero_le_divide_iff
  realpow_two_disj ~> power2_eq_iff
  real_squared_diff_one_factored ~> square_diff_one_factored
  realpow_two_diff ~> square_diff_square_factored
  reals_complete2 ~> complete_real
  real_sum_squared_expand ~> power2_sum
  exp_ln_eq ~> ln_unique
  expi_add ~> exp_add
  expi_zero ~> exp_zero
  lemma_DERIV_subst ~> DERIV_cong
  LIMSEQ_Zfun_iff ~> tendsto_Zfun_iff
  LIMSEQ_const ~> tendsto_const
  LIMSEQ_norm ~> tendsto_norm
  LIMSEQ_add ~> tendsto_add
  LIMSEQ_minus ~> tendsto_minus
  LIMSEQ_minus_cancel ~> tendsto_minus_cancel
  LIMSEQ_diff ~> tendsto_diff
  bounded_linear.LIMSEQ ~> bounded_linear.tendsto
  bounded_bilinear.LIMSEQ ~> bounded_bilinear.tendsto
  LIMSEQ_mult ~> tendsto_mult
  LIMSEQ_inverse ~> tendsto_inverse
  LIMSEQ_divide ~> tendsto_divide
  LIMSEQ_pow ~> tendsto_power
  LIMSEQ_setsum ~> tendsto_setsum
  LIMSEQ_setprod ~> tendsto_setprod
  LIMSEQ_norm_zero ~> tendsto_norm_zero_iff
  LIMSEQ_rabs_zero ~> tendsto_rabs_zero_iff
  LIMSEQ_imp_rabs ~> tendsto_rabs
  LIMSEQ_add_minus ~> tendsto_add [OF _ tendsto_minus]
  LIMSEQ_add_const ~> tendsto_add [OF _ tendsto_const]
  LIMSEQ_diff_const ~> tendsto_diff [OF _ tendsto_const]
  LIMSEQ_Complex ~> tendsto_Complex
  LIM_ident ~> tendsto_ident_at
  LIM_const ~> tendsto_const
  LIM_add ~> tendsto_add
  LIM_add_zero ~> tendsto_add_zero
  LIM_minus ~>
    tendsto_minus
  LIM_diff ~> tendsto_diff
  LIM_norm ~> tendsto_norm
  LIM_norm_zero ~> tendsto_norm_zero
  LIM_norm_zero_cancel ~> tendsto_norm_zero_cancel
  LIM_norm_zero_iff ~> tendsto_norm_zero_iff
  LIM_rabs ~> tendsto_rabs
  LIM_rabs_zero ~> tendsto_rabs_zero
  LIM_rabs_zero_cancel ~> tendsto_rabs_zero_cancel
  LIM_rabs_zero_iff ~> tendsto_rabs_zero_iff
  LIM_compose ~> tendsto_compose
  LIM_mult ~> tendsto_mult
  LIM_scaleR ~> tendsto_scaleR
  LIM_of_real ~> tendsto_of_real
  LIM_power ~> tendsto_power
  LIM_inverse ~> tendsto_inverse
  LIM_sgn ~> tendsto_sgn
  isCont_LIM_compose ~> isCont_tendsto_compose
  bounded_linear.LIM ~> bounded_linear.tendsto
  bounded_linear.LIM_zero ~> bounded_linear.tendsto_zero
  bounded_bilinear.LIM ~> bounded_bilinear.tendsto
  bounded_bilinear.LIM_prod_zero ~> bounded_bilinear.tendsto_zero
  bounded_bilinear.LIM_left_zero ~> bounded_bilinear.tendsto_left_zero
  bounded_bilinear.LIM_right_zero ~> bounded_bilinear.tendsto_right_zero
  LIM_inverse_fun ~> tendsto_inverse [OF tendsto_ident_at]

* Theory Complex_Main: The definition of infinite series was
generalized.  Now it is defined on the type class {topological_space,
comm_monoid_add}.  Hence it is usable also for extended real numbers.

* Theory Complex_Main: The complex exponential function "expi" is now
a type-constrained abbreviation for "exp :: complex => complex"; thus
several polymorphic lemmas about "exp" are now applicable to "expi".

* Code generation:

  - Theory Library/Code_Char_ord provides native ordering of
    characters in the target language.

  - Commands code_module and code_library are legacy, use export_code
    instead.

  - Method "evaluation" is legacy, use method "eval" instead.

  - Legacy evaluator "SML" is deactivated by default.  May be
    reactivated by the following theory command:

      setup {* Value.add_evaluator ("SML", Codegen.eval_term) *}

* Declare ext [intro] by default.  Rare INCOMPATIBILITY.
* New proof method "induction" that gives induction hypotheses the
name "IH", thus distinguishing them from further hypotheses that come
from rule induction.  The latter are still called "hyps".  Method
"induction" is a thin wrapper around "induct" and follows the same
syntax.

* Method "fastsimp" has been renamed to "fastforce", but "fastsimp" is
still available as a legacy feature for some time.

* Nitpick:
  - Added "need" and "total_consts" options.
  - Reintroduced "show_skolems" option by popular demand.
  - Renamed attribute: nitpick_def ~> nitpick_unfold.
    INCOMPATIBILITY.

* Sledgehammer:
  - Use quasi-sound (and efficient) translations by default.
  - Added support for the following provers: E-ToFoF, LEO-II,
    Satallax, SNARK, Waldmeister, and Z3 with TPTP syntax.
  - Automatically preplay and minimize proofs before showing them if
    this can be done within reasonable time.
  - sledgehammer available_provers ~> sledgehammer supported_provers.
    INCOMPATIBILITY.
  - Added "preplay_timeout", "slicing", "type_enc", "sound",
    "max_mono_iters", and "max_new_mono_instances" options.
  - Removed "explicit_apply" and "full_types" options as well as "Full
    Types" Proof General menu item.  INCOMPATIBILITY.

* Metis:
  - Removed "metisF" -- use "metis" instead.  INCOMPATIBILITY.
  - Obsoleted "metisFT" -- use "metis (full_types)" instead.
    INCOMPATIBILITY.

* Command 'try':
  - Renamed 'try_methods' and added "simp:", "intro:", "dest:", and
    "elim:" options.  INCOMPATIBILITY.
  - Introduced 'try' that not only runs 'try_methods' but also
    'solve_direct', 'sledgehammer', 'quickcheck', and 'nitpick'.

* Quickcheck:
  - Added "eval" option to evaluate terms for the found counterexample
    (currently only supported by the default (exhaustive) tester).
  - Added post-processing of terms to obtain readable counterexamples
    (currently only supported by the default (exhaustive) tester).
  - New counterexample generator quickcheck[narrowing] enables
    narrowing-based testing.
    Requires the Glasgow Haskell compiler with its installation
    location defined in the Isabelle settings environment as
    ISABELLE_GHC.
  - Removed quickcheck tester "SML" based on the SML code generator
    (formerly in HOL/Library).

* Function package: discontinued option "tailrec".  INCOMPATIBILITY,
use 'partial_function' instead.

* Theory Library/Extended_Reals now replaces the positive extended
reals found in probability theory.  This file is extended by
Multivariate_Analysis/Extended_Real_Limits.

* Theory Library/Old_Recdef: the old 'recdef' package has been moved
here, from where it must be imported explicitly if it is really
required.  INCOMPATIBILITY.

* Theory Library/Wfrec: the well-founded recursion combinator "wfrec"
has been moved here.  INCOMPATIBILITY.

* Theory Library/Saturated provides a type of numbers with saturated
arithmetic.

* Theory Library/Product_Lattice defines a pointwise ordering for the
product type 'a * 'b, and provides instance proofs for various order
and lattice type classes.

* Theory Library/Countable now provides the "countable_datatype" proof
method for proving "countable" class instances for datatypes.

* Theory Library/Cset_Monad allows do notation for computable sets
(cset) via the generic monad ad-hoc overloading facility.

* Library: Theories of common data structures are split into theories
for implementation, an invariant-ensuring type, and connection to an
abstract type.  INCOMPATIBILITY.

  - RBT is split into RBT and RBT_Mapping.
  - AssocList is split and renamed into AList and AList_Mapping.
  - DList is split into DList_Impl, DList, and DList_Cset.
  - Cset is split into Cset and List_Cset.

* Theory Library/Nat_Infinity has been renamed to
Library/Extended_Nat, with name changes of the following types and
constants:

  type inat ~> type enat
  Fin ~> enat
  Infty ~> infinity (overloaded)
  iSuc ~> eSuc
  the_Fin ~> the_enat

Every theorem name containing "inat", "Fin", "Infty", or "iSuc" has
been renamed accordingly.  INCOMPATIBILITY.
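The "induction" method mentioned above can be used like this; a
minimal unchecked sketch (the function "listsum" is purely
illustrative):

```isabelle
fun listsum :: "nat list => nat" where
  "listsum [] = 0"
| "listsum (x # xs) = x + listsum xs"

lemma "listsum (xs @ ys) = listsum xs + listsum ys"
proof (induction xs)
  case Nil
  then show ?case by simp
next
  case (Cons x xs)
  then show ?case by (simp add: Cons.IH)  (*induction hypothesis named "IH"*)
qed
```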
* Session Multivariate_Analysis: The euclidean_space type class now
fixes a constant "Basis :: 'a set" consisting of the standard
orthonormal basis for the type.  Users now have the option of
quantifying over this set instead of using the "basis" function, e.g.
"ALL x:Basis. P x" vs "ALL i<DIM('a). P (basis i)".  Renamed theorems:

  Cart_eq ~> vec_eq_iff
  dist_nth_le_cart ~> dist_vec_nth_le
  tendsto_vector ~> vec_tendstoI
  Cauchy_vector ~> vec_CauchyI

* Session Multivariate_Analysis: Several duplicate theorems have been
removed, and other theorems have been renamed or replaced with more
general versions.  INCOMPATIBILITY.

  finite_choice ~> finite_set_choice
  eventually_conjI ~> eventually_conj
  eventually_and ~> eventually_conj_iff
  eventually_false ~> eventually_False
  setsum_norm ~> norm_setsum
  Lim_sequentially ~> LIMSEQ_def
  Lim_ident_at ~> LIM_ident
  Lim_const ~> tendsto_const
  Lim_cmul ~> tendsto_scaleR [OF tendsto_const]
  Lim_neg ~> tendsto_minus
  Lim_add ~> tendsto_add
  Lim_sub ~> tendsto_diff
  Lim_mul ~> tendsto_scaleR
  Lim_vmul ~> tendsto_scaleR [OF _ tendsto_const]
  Lim_null_norm ~> tendsto_norm_zero_iff [symmetric]
  Lim_linear ~> bounded_linear.tendsto
  Lim_component ~> tendsto_euclidean_component
  Lim_component_cart ~> tendsto_vec_nth
  Lim_inner ~> tendsto_inner [OF tendsto_const]
  dot_lsum ~> inner_setsum_left
  dot_rsum ~> inner_setsum_right
  continuous_cmul ~> continuous_scaleR [OF continuous_const]
  continuous_neg ~> continuous_minus
  continuous_sub ~> continuous_diff
  continuous_vmul ~> continuous_scaleR [OF _ continuous_const]
  continuous_mul ~> continuous_scaleR
  continuous_inv ~> continuous_inverse
  continuous_at_within_inv ~> continuous_at_within_inverse
  continuous_at_inv ~> continuous_at_inverse
  continuous_at_norm ~> continuous_norm [OF continuous_at_id]
  continuous_at_infnorm ~> continuous_infnorm [OF continuous_at_id]
  continuous_at_component ~> continuous_component [OF continuous_at_id]
  continuous_on_neg ~> continuous_on_minus
  continuous_on_sub ~> continuous_on_diff
  continuous_on_cmul ~> continuous_on_scaleR [OF
    continuous_on_const]
  continuous_on_vmul ~> continuous_on_scaleR [OF _ continuous_on_const]
  continuous_on_mul ~> continuous_on_scaleR
  continuous_on_mul_real ~> continuous_on_mult
  continuous_on_inner ~> continuous_on_inner [OF continuous_on_const]
  continuous_on_norm ~> continuous_on_norm [OF continuous_on_id]
  continuous_on_inverse ~> continuous_on_inv
  uniformly_continuous_on_neg ~> uniformly_continuous_on_minus
  uniformly_continuous_on_sub ~> uniformly_continuous_on_diff
  subset_interior ~> interior_mono
  subset_closure ~> closure_mono
  closure_univ ~> closure_UNIV
  real_arch_lt ~> reals_Archimedean2
  real_arch ~> reals_Archimedean3
  real_abs_norm ~> abs_norm_cancel
  real_abs_sub_norm ~> norm_triangle_ineq3
  norm_cauchy_schwarz_abs ~> Cauchy_Schwarz_ineq2

* Session HOL-Probability:
  - Caratheodory's extension lemma is now proved for ring_of_sets.
  - Infinite products of probability measures are now available.
  - Sigma closure is independent, if the generator is independent.
  - Use extended reals instead of positive extended reals.
    INCOMPATIBILITY.

* Session HOLCF: Discontinued legacy theorem names, INCOMPATIBILITY.
  expand_fun_below ~> fun_below_iff
  below_fun_ext ~> fun_belowI
  expand_cfun_eq ~> cfun_eq_iff
  ext_cfun ~> cfun_eqI
  expand_cfun_below ~> cfun_below_iff
  below_cfun_ext ~> cfun_belowI
  monofun_fun_fun ~> fun_belowD
  monofun_fun_arg ~> monofunE
  monofun_lub_fun ~> adm_monofun [THEN admD]
  cont_lub_fun ~> adm_cont [THEN admD]
  cont2cont_Rep_CFun ~> cont2cont_APP
  cont_Rep_CFun_app ~> cont_APP_app
  cont_Rep_CFun_app_app ~> cont_APP_app_app
  cont_cfun_fun ~> cont_Rep_cfun1 [THEN contE]
  cont_cfun_arg ~> cont_Rep_cfun2 [THEN contE]
  contlub_cfun ~> lub_APP [symmetric]
  contlub_LAM ~> lub_LAM [symmetric]
  thelubI ~> lub_eqI
  UU_I ~> bottomI
  lift_distinct1 ~> lift.distinct(1)
  lift_distinct2 ~> lift.distinct(2)
  Def_not_UU ~> lift.distinct(2)
  Def_inject ~> lift.inject
  below_UU_iff ~> below_bottom_iff
  eq_UU_iff ~> eq_bottom_iff


*** Document preparation ***

* Antiquotation @{rail} layouts railroad syntax diagrams, see also the
isar-ref manual, both for description and actual application of the
same.

* Antiquotation @{value} evaluates the given term and presents its
result.

* Antiquotations: term style "isub" provides ad-hoc conversion of
variables x1, y23 into subscripted form x\<^isub>1, y\<^isub>2\<^isub>3.

* Predefined LaTeX macros for Isabelle symbols \<bind> and \<then>
(e.g. see ~~/src/HOL/Library/Monad_Syntax.thy).

* Localized \isabellestyle switch can be used within blocks or groups
like this:

  \isabellestyle{it}  %preferred default
  {\isabellestylett @{text "typewriter stuff"}}

* Discontinued special treatment of hard tabulators.  Implicit
tab-width is now defined as 1.  Potential INCOMPATIBILITY for visual
layouts.


*** ML ***

* The inner syntax of sort/type/term/prop supports inlined YXML
representations within quoted string tokens.  By encoding logical
entities via Term_XML (in ML or Scala) concrete syntax can be
bypassed, which is particularly useful for producing bits of text
under external program control.
* Antiquotations for ML and document preparation are managed as theory
data, which requires explicit setup.

* Isabelle_Process.is_active allows tools to check if the official
process wrapper is running (Isabelle/Scala/jEdit) or the old TTY loop
(better known as Proof General).

* Structure Proof_Context follows the standard naming scheme.  Old
ProofContext is still available for some time as a legacy alias.

* Structure Timing provides various operations for timing; supersedes
former start_timing/end_timing etc.

* Path.print is the official way to show file-system paths to users
(including quotes etc.).

* Inner syntax: identifiers in parse trees of generic categories
"logic", "aprop", "idt" etc. carry position information (disguised as
type constraints).  Occasional INCOMPATIBILITY with non-compliant
translations that choke on unexpected type constraints.  Positions can
be stripped in ML translations via Syntax.strip_positions /
Syntax.strip_positions_ast, or via the syntax constant
"_strip_positions" within parse trees.  As a last resort, positions
can be disabled via the configuration option Syntax.positions, which
is called "syntax_positions" in Isar attribute syntax.

* Discontinued special status of various ML structures that contribute
to structure Syntax (Ast, Lexicon, Mixfix, Parser, Printer etc.): less
pervasive content, no inclusion in structure Syntax.  INCOMPATIBILITY,
refer directly to Ast.Constant, Lexicon.is_identifier,
Syntax_Trans.mk_binder_tr etc.

* Typed print translation: discontinued show_sorts argument, which is
already available via the context of "advanced" translation.

* Refined PARALLEL_GOALS tactical: degrades gracefully for schematic
goal states; the body tactic needs to address all subgoals uniformly.

* Slightly more special eq_list/eq_set, with a shortcut involving
pointer equality (assumes that the eq relation is reflexive).

* Classical tactics use proper Proof.context instead of historic types
claset/clasimpset.
Old-style declarations like addIs, addEs, addDs operate directly on
Proof.context.  Raw type claset retains its use as snapshot of the
classical context, which can be recovered via (put_claset HOL_cs) etc.
Type clasimpset has been discontinued.  INCOMPATIBILITY, classical
tactics and derived proof methods require proper Proof.context.


*** System ***

* Discontinued support for Poly/ML 5.2, which was the last version
without proper multithreading and TimeLimit implementation.

* Discontinued old lib/scripts/polyml-platform, which has been
obsolete since Isabelle2009-2.

* Various optional external tools are referenced more robustly and
uniformly by explicit Isabelle settings as follows:

  ISABELLE_CSDP (formerly CSDP_EXE)
  ISABELLE_GHC (formerly EXEC_GHC or GHC_PATH)
  ISABELLE_OCAML (formerly EXEC_OCAML)
  ISABELLE_SWIPL (formerly EXEC_SWIPL)
  ISABELLE_YAP (formerly EXEC_YAP)

Note that automated detection from the file-system or search path has
been discontinued.  INCOMPATIBILITY.

* Scala layer provides JVM method invocation service for static
methods of type (String)String, see Invoke_Scala.method in ML.  For
example:

  Invoke_Scala.method "java.lang.System.getProperty" "java.home"

Together with YXML.string_of_body/parse_body and XML.Encode/Decode
this allows to pass structured values between ML and Scala.

* The IsabelleText font includes some further glyphs to support the
Prover IDE.  Potential INCOMPATIBILITY: users who happen to have
installed a local copy (which is normally *not* required) need to
delete or update it from ~~/lib/fonts/.


New in Isabelle2011 (January 2011)
----------------------------------

*** General ***

* Experimental Prover IDE based on Isabelle/Scala and jEdit (see
src/Tools/jEdit).  This also serves as IDE for Isabelle/ML, with
useful tooltips and hyperlinks produced from its static analysis.
The bundled component provides an executable Isabelle tool that can be
run like this:

  Isabelle2011/bin/isabelle jedit

* Significantly improved Isabelle/Isar implementation manual.

* System settings: ISABELLE_HOME_USER now includes ISABELLE_IDENTIFIER
(and thus refers to something like $HOME/.isabelle/Isabelle2011),
while the default heap location within that directory lacks that extra
suffix.  This isolates multiple Isabelle installations from each
other, avoiding problems with old settings in new versions.
INCOMPATIBILITY, need to copy/upgrade old user settings manually.

* Source files are always encoded as UTF-8, instead of old-fashioned
ISO-Latin-1.  INCOMPATIBILITY.  Isabelle LaTeX documents might require
the following package declarations:

  \usepackage[utf8]{inputenc}
  \usepackage{textcomp}

* Explicit treatment of UTF-8 sequences as Isabelle symbols, such that
a Unicode character is treated as a single symbol, not a sequence of
non-ASCII bytes as before.  Since Isabelle/ML string literals may
contain symbols without further backslash escapes, Unicode can now be
used here as well.  Recall that Symbol.explode in ML provides a
consistent view on symbols, while raw explode (or String.explode)
merely gives a byte-oriented representation.

* Theory loader: source files are primarily located via the master
directory of each theory node (where the .thy file itself resides).
The global load path is still partially available as legacy feature.
Minor INCOMPATIBILITY due to subtle change in file lookup: use
explicit paths, relative to the theory.

* Special treatment of ML file names has been discontinued.
Historically, optional extensions .ML or .sml were added on demand --
at the cost of clarity of file dependencies.  Recall that Isabelle/ML
files exclusively use the .ML extension.  Minor INCOMPATIBILITY.

* Various options that affect pretty printing etc. are now properly
handled within the context via configuration options, instead of
unsynchronized references or print modes.
There are both ML Config.T entities and Isar declaration attributes to
access these.

  ML (Config.T)                 Isar (attribute)

  eta_contract                  eta_contract
  show_brackets                 show_brackets
  show_sorts                    show_sorts
  show_types                    show_types
  show_question_marks           show_question_marks
  show_consts                   show_consts
  show_abbrevs                  show_abbrevs

  Syntax.ast_trace              syntax_ast_trace
  Syntax.ast_stat               syntax_ast_stat
  Syntax.ambiguity_level        syntax_ambiguity_level

  Goal_Display.goals_limit      goals_limit
  Goal_Display.show_main_goal   show_main_goal

  Method.rule_trace             rule_trace

  Thy_Output.display            thy_output_display
  Thy_Output.quotes             thy_output_quotes
  Thy_Output.indent             thy_output_indent
  Thy_Output.source             thy_output_source
  Thy_Output.break              thy_output_break

Note that corresponding "..._default" references in ML may only be
changed globally at the ROOT session setup, but *not* within a theory.
The option "show_abbrevs" supersedes the former print mode
"no_abbrevs" with inverted meaning.

* More systematic naming of some configuration options.
INCOMPATIBILITY.

  trace_simp ~> simp_trace
  debug_simp ~> simp_debug

* Support for real valued configuration options, using simplistic
floating-point notation that coincides with the inner syntax for
float_token.

* Support for real valued preferences (with approximative PGIP type):
front-ends need to accept "pgint" values in float notation.
INCOMPATIBILITY.

* The IsabelleText font now includes Cyrillic, Hebrew, Arabic from
DejaVu Sans.

* Discontinued support for Poly/ML 5.0 and 5.1 versions.


*** Pure ***

* Command 'type_synonym' (with single argument) replaces somewhat
outdated 'types', which is still available as legacy feature for some
time.

* Command 'nonterminal' (with 'and' separated list of arguments)
replaces somewhat outdated 'nonterminals'.  INCOMPATIBILITY.

* Command 'notepad' replaces former 'example_proof' for
experimentation in Isar without any result.  INCOMPATIBILITY.
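For example (an illustrative sketch, not taken from the distribution),
configuration options from the table above may be modified via their
Isar attributes, either locally within a proof or as a theory-level
declaration:

  using [[simp_trace = true]]

  declare [[show_types = true]]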
* Locale interpretation commands 'interpret' and 'sublocale' accept
lists of equations to map definitions in a locale to appropriate
entities in the context of the interpretation.  The 'interpretation'
command already provided this functionality.

* Diagnostic command 'print_dependencies' prints the locale instances
that would be activated if the specified expression was interpreted in
the current context.  Variant "print_dependencies!" assumes a context
without interpretations.

* Diagnostic command 'print_interps' prints interpretations in proofs
in addition to interpretations in theories.

* Discontinued obsolete 'global' and 'local' commands to manipulate
the theory name space.  Rare INCOMPATIBILITY.  The ML functions
Sign.root_path and Sign.local_path may be applied directly where this
feature is still required for historical reasons.

* Discontinued obsolete 'constdefs' command.  INCOMPATIBILITY, use
'definition' instead.

* The "prems" fact, which refers to the accidental collection of
foundational premises in the context, is now explicitly marked as
legacy feature and will be discontinued soon.  Consider using "assms"
of the head statement or reference facts by explicit names.

* Document antiquotations @{class} and @{type} print classes and type
constructors.

* Document antiquotation @{file} checks file/directory entries within
the local file system.


*** HOL ***

* Coercive subtyping: functions can be declared as coercions and type
inference will add them as necessary upon input of a term.  Theory
Complex_Main declares real :: nat => real and real :: int => real as
coercions.  A coercion function f is declared like this:

  declare [[coercion f]]

To lift coercions through type constructors (e.g. from nat => real to
nat list => real list), map functions can be declared, e.g.

  declare [[coercion_map map]]

Currently coercion inference is activated only in theories including
real numbers, i.e. descendants of Complex_Main.
This is controlled by the configuration option "coercion_enabled",
e.g. it can be enabled in other theories like this:

  declare [[coercion_enabled]]

* Command 'partial_function' provides basic support for recursive
function definitions over complete partial orders.  Concrete instances
are provided for i) the option type, ii) tail recursion on arbitrary
types, and iii) the heap monad of Imperative_HOL.  See
src/HOL/ex/Fundefs.thy and src/HOL/Imperative_HOL/ex/Linked_Lists.thy
for examples.

* Function package: f.psimps rules are no longer implicitly declared
as [simp].  INCOMPATIBILITY.

* Datatype package: theorems generated for executable equality (class
"eq") carry proper names and are treated as default code equations.

* Inductive package: now offers command 'inductive_simps' to
automatically derive instantiated and simplified equations for
inductive predicates, similar to 'inductive_cases'.

* Command 'enriched_type' allows to register properties of the
functorial structure of types.

* Improved infrastructure for term evaluation using code generator
techniques, in particular static evaluation conversions.

* Code generator: Scala (2.8 or higher) has been added to the target
languages.

* Code generator: globbing constant expressions "*" and "Theory.*"
have been replaced by the more idiomatic "_" and "Theory._".
INCOMPATIBILITY.

* Code generator: export_code without explicit file declaration prints
to standard output.  INCOMPATIBILITY.

* Code generator: do not print function definitions for case
combinators any longer.

* Code generator: simplification with rules determined with
src/Tools/Code/code_simp.ML and method "code_simp".

* Code generator for records: more idiomatic representation of record
types.  Warning: records are not covered by ancient SML code
generation any longer.  INCOMPATIBILITY.  In cases of need, a suitable
rep_datatype declaration helps to succeed then:

  record 'a foo = ...
  ...
  rep_datatype foo_ext ...
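A minimal 'partial_function' definition over the option type might
look like this (an illustrative sketch with a hypothetical function
name, not taken from the distribution):

  partial_function (option) collatz :: "nat => nat option"
  where
    "collatz n =
      (if n <= 1 then Some n
       else if n mod 2 = 0 then collatz (n div 2)
       else collatz (3 * n + 1))"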
* Records: logical foundation type for records does not carry a
'_type' suffix any longer (obsolete due to authentic syntax).
INCOMPATIBILITY.

* Quickcheck now by default uses exhaustive testing instead of random
testing.  Random testing can be invoked by "quickcheck [random]",
exhaustive testing by "quickcheck [exhaustive]".

* Quickcheck instantiates polymorphic types with small finite
datatypes by default.  This enables a simple execution mechanism to
handle quantifiers and function equality over the finite datatypes.

* Quickcheck random generator has been renamed from "code" to
"random".  INCOMPATIBILITY.

* Quickcheck now has a configurable time limit which is set to 30
seconds by default.  This can be changed by adding [timeout = n] to
the quickcheck command.  The time limit for Auto Quickcheck is still
set independently.

* Quickcheck in locales considers interpretations of that locale for
counter example search.

* Sledgehammer:
  - Added "smt" and "remote_smt" provers based on the "smt" proof
    method.  See the Sledgehammer manual for details ("isabelle doc
    sledgehammer").
  - Renamed commands:
      sledgehammer atp_info ~> sledgehammer running_provers
      sledgehammer atp_kill ~> sledgehammer kill_provers
      sledgehammer available_atps ~> sledgehammer available_provers
    INCOMPATIBILITY.
  - Renamed options:
      sledgehammer [atps = ...] ~> sledgehammer [provers = ...]
      sledgehammer [atp = ...] ~> sledgehammer [prover = ...]
      sledgehammer [timeout = 77 s] ~> sledgehammer [timeout = 77]
    (and "ms" and "min" are no longer supported)
    INCOMPATIBILITY.

* Nitpick:
  - Renamed options:
      nitpick [timeout = 77 s] ~> nitpick [timeout = 77]
      nitpick [tac_timeout = 777 ms] ~> nitpick [tac_timeout = 0.777]
    INCOMPATIBILITY.
  - Added support for partial quotient types.
  - Added local versions of the "Nitpick.register_xxx" functions.
  - Added "whack" option.
  - Allow registration of quotient types as codatatypes.
  - Improved "merge_type_vars" option to merge more types.
  - Removed unsound "fast_descrs" option.
  - Added custom symmetry breaking for datatypes, making it possible
    to reach higher cardinalities.
  - Prevent the expansion of too large definitions.

* Proof methods "metis" and "meson" now have configuration options
"meson_trace", "metis_trace", and "metis_verbose" that can be enabled
to diagnose these tools.  E.g.

  using [[metis_trace = true]]

* Auto Solve: Renamed "Auto Solve Direct".  The tool is now available
manually as command 'solve_direct'.

* The default SMT solver Z3 must be enabled explicitly (due to
licensing issues) by setting the environment variable
Z3_NON_COMMERCIAL in etc/settings of the component, for example.  For
commercial applications, the SMT solver CVC3 is provided as fall-back;
changing the SMT solver is done via the configuration option
"smt_solver".

* Remote SMT solvers need to be referred to by the "remote_" prefix,
i.e. "remote_cvc3" and "remote_z3".

* Added basic SMT support for datatypes, records, and typedefs using
the oracle mode (no proofs).  Direct support of pairs has been dropped
in exchange (pass theorems fst_conv snd_conv pair_collapse to the SMT
support for a similar behavior).  Minor INCOMPATIBILITY.

* Changed SMT configuration options:
  - Renamed:
      z3_proofs ~> smt_oracle (with inverted meaning)
      z3_trace_assms ~> smt_trace_used_facts
    INCOMPATIBILITY.
  - Added:
      smt_verbose
      smt_random_seed
      smt_datatypes
      smt_infer_triggers
      smt_monomorph_limit
      cvc3_options
      remote_cvc3_options
      remote_z3_options
      yices_options

* Boogie output files (.b2i files) need to be declared in the theory
header.

* Simplification procedure "list_to_set_comprehension" rewrites list
comprehensions applied to List.set to set comprehensions.  Occasional
INCOMPATIBILITY, may be deactivated like this:

  declare [[simproc del: list_to_set_comprehension]]

* Removed old version of primrec package.  INCOMPATIBILITY.

* Removed simplifier congruence rule of "prod_case", as has long been
the case with "split".  INCOMPATIBILITY.

* String.literal is a type, but not a datatype.
INCOMPATIBILITY.

* Removed [split_format ... and ... and ...] version of
[split_format].  Potential INCOMPATIBILITY.

* Predicate "sorted" now defined inductively, with nice induction
rules.  INCOMPATIBILITY: former sorted.simps now named sorted_simps.

* Constant "contents" renamed to "the_elem", to free the generic name
contents for other uses.  INCOMPATIBILITY.

* Renamed class eq and constant eq (for code generation) to class
equal and constant equal, plus renaming of related facts and various
tuning.  INCOMPATIBILITY.

* Dropped type classes mult_mono and mult_mono1.  INCOMPATIBILITY.

* Removed output syntax "'a ~=> 'b" for "'a => 'b option".
INCOMPATIBILITY.

* Renamed theory Fset to Cset, type Fset.fset to Cset.set, in order to
avoid confusion with finite sets.  INCOMPATIBILITY.

* Abandoned locales equiv, congruent and congruent2 for equivalence
relations.  INCOMPATIBILITY: use equivI rather than equiv_intro (same
for congruent(2)).

* Some previously unqualified names have been qualified:

  types
    bool ~> HOL.bool
    nat ~> Nat.nat

  constants
    Trueprop ~> HOL.Trueprop
    True ~> HOL.True
    False ~> HOL.False
    op & ~> HOL.conj
    op | ~> HOL.disj
    op --> ~> HOL.implies
    op = ~> HOL.eq
    Not ~> HOL.Not
    The ~> HOL.The
    All ~> HOL.All
    Ex ~> HOL.Ex
    Ex1 ~> HOL.Ex1
    Let ~> HOL.Let
    If ~> HOL.If
    Ball ~> Set.Ball
    Bex ~> Set.Bex
    Suc ~> Nat.Suc
    Pair ~> Product_Type.Pair
    fst ~> Product_Type.fst
    snd ~> Product_Type.snd
    curry ~> Product_Type.curry
    op : ~> Set.member
    Collect ~> Set.Collect

INCOMPATIBILITY.

* More canonical naming convention for some fundamental definitions:

  bot_bool_eq ~> bot_bool_def
  top_bool_eq ~> top_bool_def
  inf_bool_eq ~> inf_bool_def
  sup_bool_eq ~> sup_bool_def
  bot_fun_eq ~> bot_fun_def
  top_fun_eq ~> top_fun_def
  inf_fun_eq ~> inf_fun_def
  sup_fun_eq ~> sup_fun_def

INCOMPATIBILITY.

* More stylized fact names:

  expand_fun_eq ~> fun_eq_iff
  expand_set_eq ~> set_eq_iff
  set_ext ~> set_eqI
  nat_number ~> eval_nat_numeral

INCOMPATIBILITY.
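As an illustrative sketch of the stylized fact names above, function
extensionality now goes through fun_eq_iff where expand_fun_eq was
used before:

  lemma "(%n::nat. n + 0) = (%n. n)"
    by (simp add: fun_eq_iff)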
* Refactoring of code-generation specific operations in theory List:

  constants
    null ~> List.null

  facts
    mem_iff ~> member_def
    null_empty ~> null_def

INCOMPATIBILITY.  Note that these were not supposed to be used
regularly unless for striking reasons; their main purpose was code
generation.  Various operations from the Haskell prelude are used for
generating Haskell code.

* Term "bij f" is now an abbreviation of "bij_betw f UNIV UNIV".  Term
"surj f" is now an abbreviation of "range f = UNIV".  The theorems
bij_def and surj_def are unchanged.  INCOMPATIBILITY.

* Abolished some non-alphabetic type names: "prod" and "sum" replace
"*" and "+" respectively.  INCOMPATIBILITY.

* Name "Plus" of disjoint sum operator "<+>" is now hidden.  Write
"Sum_Type.Plus" instead.

* Constant "split" has been merged with constant "prod_case"; names of
ML functions, facts etc. involving split have been retained so far,
though.  INCOMPATIBILITY.

* Dropped old infix syntax "_ mem _" for List.member; use "_ : set _"
instead.  INCOMPATIBILITY.

* Removed lemma "Option.is_none_none" which duplicates "is_none_def".
INCOMPATIBILITY.

* Former theory Library/Enum is now part of the HOL-Main image.
INCOMPATIBILITY: all constants of the Enum theory now have to be
referred to by their qualified names.

  enum ~> Enum.enum
  nlists ~> Enum.nlists
  product ~> Enum.product

* Theory Library/Monad_Syntax provides do-syntax for monad types.
Syntax in Library/State_Monad has been changed to avoid ambiguities.
INCOMPATIBILITY.

* Theory Library/SetsAndFunctions has been split into
Library/Function_Algebras and Library/Set_Algebras; canonical names
for instance definitions for functions; various improvements.
INCOMPATIBILITY.

* Theory Library/Multiset provides stable quicksort implementation of
sort_key.

* Theory Library/Multiset: renamed empty_idemp ~> empty_neutral.
INCOMPATIBILITY.

* Session Multivariate_Analysis: introduced a type class for euclidean
space.
Most theorems are now stated in terms of euclidean spaces instead of
finite cartesian products.

  types
    real ^ 'n ~> 'a::real_vector
              ~> 'a::euclidean_space
              ~> 'a::ordered_euclidean_space
    (depends on your needs)

  constants
    _ $ _ ~> _ $$ _
    \ x. _ ~> \\ x. _
    CARD('n) ~> DIM('a)

Also note that the indices are now natural numbers and not from some
finite type.  Finite cartesian products of euclidean spaces, products
of euclidean spaces, and the real and complex numbers are instantiated
to be euclidean_spaces.  INCOMPATIBILITY.

* Session Probability: introduced pextreal as positive extended real
numbers.  Use pextreal as value for measures.  Introduces the
Radon-Nikodym derivative, product spaces and Fubini's theorem for
arbitrary sigma-finite measures.  Introduces Lebesgue measure based on
the integral in Multivariate Analysis.  INCOMPATIBILITY.

* Session Imperative_HOL: revamped, corrected dozens of inadequacies.
INCOMPATIBILITY.

* Session SPARK (with image HOL-SPARK) provides commands to load and
prove verification conditions generated by the SPARK Ada program
verifier.  See also src/HOL/SPARK and src/HOL/SPARK/Examples.


*** HOL-Algebra ***

* Theorems for additive ring operations (locale abelian_monoid and
descendants) are generated by interpretation from their multiplicative
counterparts.  Names (in particular theorem names) have the mandatory
qualifier 'add'.  Previous theorem names are redeclared for
compatibility.

* Structure "int_ring" is now an abbreviation (previously a
definition).  This fits more naturally with advanced interpretations.


*** HOLCF ***

* The domain package now runs in definitional mode by default: The
former command 'new_domain' is now called 'domain'.  To use the domain
package in its original axiomatic mode, use 'domain (unsafe)'.
INCOMPATIBILITY.

* The new class "domain" is now the default sort.  Class "predomain"
is an unpointed version of "domain".  Theories can be updated by
replacing sort annotations as shown below.  INCOMPATIBILITY.
  'a::type ~> 'a::countable
  'a::cpo  ~> 'a::predomain
  'a::pcpo ~> 'a::domain

* The old type class "rep" has been superseded by class "domain".
Accordingly, users of the definitional package must remove any
"default_sort rep" declarations.  INCOMPATIBILITY.

* The domain package (definitional mode) now supports unpointed
predomain argument types, as long as they are marked 'lazy'.  (Strict
arguments must be in class "domain".)  For example, the following
domain definition now works:

  domain natlist = nil | cons (lazy "nat discr") (lazy "natlist")

* Theory HOLCF/Library/HOL_Cpo provides cpo and predomain class
instances for types from main HOL: bool, nat, int, char, 'a + 'b,
'a option, and 'a list.  Additionally, it configures fixrec and the
domain package to work with these types.  For example:

  fixrec isInl :: "('a + 'b) u -> tr"
    where "isInl$(up$(Inl x)) = TT" | "isInl$(up$(Inr y)) = FF"

  domain V = VFun (lazy "V -> V")
           | VCon (lazy "nat") (lazy "V list")

* The "(permissive)" option of fixrec has been replaced with a
per-equation "(unchecked)" option.  See
src/HOL/HOLCF/Tutorial/Fixrec_ex.thy for examples.  INCOMPATIBILITY.

* The "bifinite" class no longer fixes a constant "approx"; the class
now just asserts that such a function exists.  INCOMPATIBILITY.

* Former type "alg_defl" has been renamed to "defl".  HOLCF no longer
defines an embedding of type 'a defl into udom by default; instances
of "bifinite" and "domain" classes are available in
src/HOL/HOLCF/Library/Defl_Bifinite.thy.

* The syntax "REP('a)" has been replaced with "DEFL('a)".

* The predicate "directed" has been removed.  INCOMPATIBILITY.

* The type class "finite_po" has been removed.  INCOMPATIBILITY.

* The function "cprod_map" has been renamed to "prod_map".
INCOMPATIBILITY.

* The monadic bind operator on each powerdomain has new binder syntax
similar to sets, e.g. "\\x\xs. t" represents "upper_bind\xs\(\ x. t)".

* The infix syntax for binary union on each powerdomain has changed
from e.g.
"+\" to "\\", for consistency with set syntax.  INCOMPATIBILITY.

* The constant "UU" has been renamed to "bottom".  The syntax "UU" is
still supported as an input translation.

* Renamed some theorems (the original names are also still available).

  expand_fun_below ~> fun_below_iff
  below_fun_ext ~> fun_belowI
  expand_cfun_eq ~> cfun_eq_iff
  ext_cfun ~> cfun_eqI
  expand_cfun_below ~> cfun_below_iff
  below_cfun_ext ~> cfun_belowI
  cont2cont_Rep_CFun ~> cont2cont_APP

* The Abs and Rep functions for various types have changed names.
Related theorem names have also changed to match.  INCOMPATIBILITY.

  Rep_CFun ~> Rep_cfun
  Abs_CFun ~> Abs_cfun
  Rep_Sprod ~> Rep_sprod
  Abs_Sprod ~> Abs_sprod
  Rep_Ssum ~> Rep_ssum
  Abs_Ssum ~> Abs_ssum

* Lemmas with names of the form *_defined_iff or *_strict_iff have
been renamed to *_bottom_iff.  INCOMPATIBILITY.

* Various changes to bisimulation/coinduction with domain package:
  - Definitions of "bisim" constants no longer mention definedness.
  - With mutual recursion, "bisim" predicate is now curried.
  - With mutual recursion, each type gets a separate coind theorem.
  - Variable names in bisim_def and coinduct rules have changed.
INCOMPATIBILITY.

* Case combinators generated by the domain package for type "foo" are
now named "foo_case" instead of "foo_when".  INCOMPATIBILITY.

* Several theorems have been renamed to more accurately reflect the
names of constants and types involved.  INCOMPATIBILITY.
  thelub_const ~> lub_const
  lub_const ~> is_lub_const
  thelubI ~> lub_eqI
  is_lub_lub ~> is_lubD2
  lubI ~> is_lub_lub
  unique_lub ~> is_lub_unique
  is_ub_lub ~> is_lub_rangeD1
  lub_bin_chain ~> is_lub_bin_chain
  lub_fun ~> is_lub_fun
  thelub_fun ~> lub_fun
  thelub_cfun ~> lub_cfun
  thelub_Pair ~> lub_Pair
  lub_cprod ~> is_lub_prod
  thelub_cprod ~> lub_prod
  minimal_cprod ~> minimal_prod
  inst_cprod_pcpo ~> inst_prod_pcpo
  UU_I ~> bottomI
  compact_UU ~> compact_bottom
  deflation_UU ~> deflation_bottom
  finite_deflation_UU ~> finite_deflation_bottom

* Many legacy theorem names have been discontinued.  INCOMPATIBILITY.

  sq_ord_less_eq_trans ~> below_eq_trans
  sq_ord_eq_less_trans ~> eq_below_trans
  refl_less ~> below_refl
  trans_less ~> below_trans
  antisym_less ~> below_antisym
  antisym_less_inverse ~> po_eq_conv [THEN iffD1]
  box_less ~> box_below
  rev_trans_less ~> rev_below_trans
  not_less2not_eq ~> not_below2not_eq
  less_UU_iff ~> below_UU_iff
  flat_less_iff ~> flat_below_iff
  adm_less ~> adm_below
  adm_not_less ~> adm_not_below
  adm_compact_not_less ~> adm_compact_not_below
  less_fun_def ~> below_fun_def
  expand_fun_less ~> fun_below_iff
  less_fun_ext ~> fun_belowI
  less_discr_def ~> below_discr_def
  discr_less_eq ~> discr_below_eq
  less_unit_def ~> below_unit_def
  less_cprod_def ~> below_prod_def
  prod_lessI ~> prod_belowI
  Pair_less_iff ~> Pair_below_iff
  fst_less_iff ~> fst_below_iff
  snd_less_iff ~> snd_below_iff
  expand_cfun_less ~> cfun_below_iff
  less_cfun_ext ~> cfun_belowI
  injection_less ~> injection_below
  less_up_def ~> below_up_def
  not_Iup_less ~> not_Iup_below
  Iup_less ~> Iup_below
  up_less ~> up_below
  Def_inject_less_eq ~> Def_below_Def
  Def_less_is_eq ~> Def_below_iff
  spair_less_iff ~> spair_below_iff
  less_sprod ~> below_sprod
  spair_less ~> spair_below
  sfst_less_iff ~> sfst_below_iff
  ssnd_less_iff ~> ssnd_below_iff
  fix_least_less ~> fix_least_below
  dist_less_one ~> dist_below_one
  less_ONE ~> below_ONE
  ONE_less_iff ~> ONE_below_iff
  less_sinlD ~> below_sinlD
  less_sinrD ~> below_sinrD


*** FOL and ZF
***

* All constant names are now qualified internally and use proper
identifiers, e.g. "IFOL.eq" instead of "op =".  INCOMPATIBILITY.


*** ML ***

* Antiquotation @{assert} inlines a function bool -> unit that raises
Fail if the argument is false.  Due to inlining the source position of
failed assertions is included in the error output.

* Discontinued antiquotation @{theory_ref}, which is obsolete since ML
text is in practice always evaluated with a stable theory checkpoint.
Minor INCOMPATIBILITY, use (Theory.check_thy @{theory}) instead.

* Antiquotation @{theory A} refers to theory A from the ancestry of
the current context, not any accidental theory loader state as before.
Potential INCOMPATIBILITY, subtle change in semantics.

* Syntax.pretty_priority (default 0) configures the required priority
of pretty-printed output and thus affects insertion of parentheses.

* Syntax.default_root (default "any") configures the inner syntax
category (nonterminal symbol) for parsing of terms.

* Former exception Library.UnequalLengths now coincides with
ListPair.UnequalLengths.

* Renamed structure MetaSimplifier to Raw_Simplifier.  Note that the
main functionality is provided by structure Simplifier.

* Renamed raw "explode" function to "raw_explode" to emphasize its
meaning.  Note that internally to Isabelle, Symbol.explode is used in
almost all situations.

* Discontinued obsolete function sys_error and exception SYS_ERROR.
See implementation manual for further details on exceptions in
Isabelle/ML.

* Renamed setmp_noncritical to Unsynchronized.setmp to emphasize its
meaning.

* Renamed structure PureThy to Pure_Thy and moved most of its
operations to structure Global_Theory, to emphasize that this is
rarely-used global-only stuff.

* Discontinued Output.debug.  Minor INCOMPATIBILITY, use plain writeln
instead (or tracing for high-volume output).

* Configuration option show_question_marks only affects regular pretty
printing of types and terms, not raw Term.string_of_vname.
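For example (illustrative), the @{assert} antiquotation described
above can embed a quick sanity check into ML text within a theory:

  ML {* @{assert} (rev [1, 2] = [2, 1]) *}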
* ML_Context.thm and ML_Context.thms are no longer pervasive.  Rare
INCOMPATIBILITY, superseded by static antiquotations @{thm} and
@{thms} for most purposes.

* ML structure Unsynchronized is never opened, not even in Isar
interaction mode as before.  Old Unsynchronized.set etc. have been
discontinued -- use plain := instead.  This should be *rare* anyway,
since modern tools always work via official context data, notably
configuration options.

* Parallel and asynchronous execution requires special care concerning
interrupts.  Structure Exn provides some convenience functions that
avoid working directly with raw Interrupt.  User code must not absorb
interrupts -- intermediate handling (for cleanup etc.) needs to be
followed by re-raising of the original exception.  Another common
source of mistakes is "handle _" patterns, which make the meaning of
the program subject to physical effects of the environment.


New in Isabelle2009-2 (June 2010)
---------------------------------

*** General ***

* Authentic syntax for *all* logical entities (type classes, type
constructors, term constants): provides simple and robust
correspondence between formal entities and concrete syntax.  Within
the parse tree / AST representations, "constants" are decorated by
their category (class, type, const) and spelled out explicitly with
their full internal name.

Substantial INCOMPATIBILITY concerning low-level syntax declarations
and translations (translation rules and translation functions in ML).
Some hints on upgrading:

  - Many existing uses of 'syntax' and 'translations' can be replaced
    by more modern 'type_notation', 'notation' and 'abbreviation',
    which are independent of this issue.
  - 'translations' require markup within the AST; the term syntax
    provides the following special forms:

      CONST c   -- produces syntax version of constant c from context
      XCONST c  -- literally c, checked as constant from context
      c         -- literally c, if declared by 'syntax'

    Plain identifiers are treated as AST variables -- occasionally the
    system indicates accidental variables via the error "rhs contains
    extra variables".

    Type classes and type constructors are marked according to their
    concrete syntax.  Some old translation rules need to be written
    for the "type" category, using type constructor application
    instead of pseudo-term application of the default category
    "logic".

  - 'parse_translation' etc. in ML may use the following
    antiquotations:

      @{class_syntax c}   -- type class c within parse tree / AST
      @{term_syntax c}    -- type constructor c within parse tree / AST
      @{const_syntax c}   -- ML version of "CONST c" above
      @{syntax_const c}   -- literally c (checked wrt. 'syntax'
        declarations)

  - Literal types within 'typed_print_translations', i.e. those *not*
    represented as pseudo-terms are represented verbatim.  Use
    @{class c} or @{type_name c} here instead of the above syntax
    antiquotations.

Note that old non-authentic syntax was based on unqualified base
names, so all of the above "constant" names would coincide.  Recall
that 'print_syntax' and ML_command "set Syntax.trace_ast" help to
diagnose syntax problems.

* Type constructors admit general mixfix syntax, not just infix.

* Concrete syntax may be attached to local entities without a proof
body, too.  This works via regular mixfix annotations for 'fix',
'def', 'obtain' etc. or via the explicit 'write' command, which is
similar to the 'notation' command in theory specifications.

* Discontinued unnamed infix syntax (legacy feature for many years) --
need to specify constant name and syntax separately.  Internal ML
datatype constructors have been renamed from InfixName to Infix etc.
Minor INCOMPATIBILITY.
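To illustrate the "CONST" special form described above, a hypothetical
notation for set membership could be declared like this (an
illustrative sketch, not part of the distribution), with "CONST"
producing the authentic constant Set.member:

  syntax
    "_member" :: "'a => 'a set => bool"  (infix "IN" 50)
  translations
    "x IN A" == "CONST Set.member x A"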
* Schematic theorem statements need to be explicitly marked up as
such, via commands 'schematic_lemma', 'schematic_theorem',
'schematic_corollary'. Thus the relevance of the proof is made
syntactically clear, which impacts performance in a parallel or
asynchronous interactive environment. Minor INCOMPATIBILITY.

* Use of cumulative prems via "!" in some proof methods has been
discontinued (old legacy feature).

* References 'trace_simp' and 'debug_simp' have been replaced by
configuration options stored in the context. Enabling tracing (the
case of debugging is similar) in proofs works via

  using [[trace_simp = true]]

Tracing is then active for all invocations of the simplifier in
subsequent goal refinement steps. Tracing may also still be enabled
or disabled via the ProofGeneral settings menu.

* Separate commands 'hide_class', 'hide_type', 'hide_const',
'hide_fact' replace the former 'hide' KIND command. Minor
INCOMPATIBILITY.

* Improved parallelism of proof term normalization: usedir -p2 -q0 is
more efficient than combinations with -q1 or -q2.


*** Pure ***

* Proofterms record type-class reasoning explicitly, using the
"unconstrain" operation internally. This eliminates all sort
constraints from a theorem and proof, introducing explicit
OFCLASS-premises. On the proof term level, this operation is
automatically applied at theorem boundaries, such that closed proofs
are always free of sort constraints. INCOMPATIBILITY for tools that
inspect proof terms.

* Local theory specifications may depend on extra type variables that
are not present in the result type -- arguments TYPE('a) :: 'a itself
are added internally. For example:

  definition unitary :: bool where "unitary = (ALL (x::'a) y. x = y)"

* Predicates of locales introduced by classes carry a mandatory
"class" prefix. INCOMPATIBILITY.

* Vacuous class specifications observe default sort. INCOMPATIBILITY.

* Old 'axclass' command has been discontinued. INCOMPATIBILITY, use
'class' instead.
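A former 'axclass' specification is now phrased with the 'class'
command; a minimal sketch (hypothetical class and fact names, for
illustration only):

  class preordered =
    fixes leq :: "'a => 'a => bool"
    assumes leq_refl: "leq x x"
      and leq_trans: "leq x y ==> leq y z ==> leq x z"

Unlike 'axclass', the 'class' command additionally sets up a locale
for the class context, so the same declaration serves both algebraic
reasoning and overloaded instantiation.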
* Command 'code_reflect' allows incorporating generated ML code into
the runtime environment; replaces the immature code_datatype
antiquotation. INCOMPATIBILITY.

* Code generator: simple concept for abstract datatypes obeying
invariants.

* Code generator: details of internal data cache have no impact on
the user space functionality any longer.

* Methods "unfold_locales" and "intro_locales" ignore non-locale
subgoals. This is more appropriate for interpretations with 'where'.
INCOMPATIBILITY.

* Command 'example_proof' opens an empty proof body. This allows
experimenting with Isar, without producing any persistent result.

* Commands 'type_notation' and 'no_type_notation' declare type syntax
within a local theory context, with explicit checking of the
constructors involved (in contrast to the raw 'syntax' versions).

* Commands 'types' and 'typedecl' now work within a local theory
context -- without introducing dependencies on parameters or
assumptions, which is not possible in Isabelle/Pure.

* Command 'defaultsort' has been renamed to 'default_sort'; it works
within a local theory context. Minor INCOMPATIBILITY.


*** HOL ***

* Command 'typedef' now works within a local theory context --
without introducing dependencies on parameters or assumptions, which
is not possible in Isabelle/Pure/HOL. Note that the logical
environment may contain multiple interpretations of local typedefs
(with different non-emptiness proofs), even in a global theory
context.

* New package for quotient types. Commands 'quotient_type' and
'quotient_definition' may be used for defining types and constants by
quotient constructions.
An example is the type of integers created by quotienting pairs of
natural numbers:

  fun intrel :: "(nat * nat) => (nat * nat) => bool"
  where
    "intrel (x, y) (u, v) = (x + v = u + y)"

  quotient_type int = "nat * nat" / intrel
    by (auto simp add: equivp_def expand_fun_eq)

  quotient_definition
    "0::int" is "(0::nat, 0::nat)"

The method "lifting" can be used to lift theorems from the underlying
"raw" type to the quotient type. The example
src/HOL/Quotient_Examples/FSet.thy includes such a quotient
construction and provides a reasoning infrastructure for finite sets.

* Renamed Library/Quotient.thy to Library/Quotient_Type.thy to avoid
clash with new theory Quotient in Main HOL.

* Moved the SMT binding into the main HOL session, eliminating
separate HOL-SMT session.

* List membership infix mem operation is only an input abbreviation.
INCOMPATIBILITY.

* Theory Library/Word.thy has been removed. Use library Word/Word.thy
for future developments; former Library/Word.thy is still present in
the AFP entry RSAPSS.

* Theorem Int.int_induct renamed to Int.int_of_nat_induct and is no
longer shadowed. INCOMPATIBILITY.

* Dropped theorem duplicate comp_arith; use semiring_norm instead.
INCOMPATIBILITY.

* Dropped theorem RealPow.real_sq_order; use power2_le_imp_le
instead. INCOMPATIBILITY.

* Dropped normalizing_semiring etc; use the facts in semiring classes
instead. INCOMPATIBILITY.

* Dropped several real-specific versions of lemmas about floor and
ceiling; use the generic lemmas from theory "Archimedean_Field"
instead. INCOMPATIBILITY.
  floor_number_of_eq          ~> floor_number_of
  le_floor_eq_number_of       ~> number_of_le_floor
  le_floor_eq_zero            ~> zero_le_floor
  le_floor_eq_one             ~> one_le_floor
  floor_less_eq_number_of     ~> floor_less_number_of
  floor_less_eq_zero          ~> floor_less_zero
  floor_less_eq_one           ~> floor_less_one
  less_floor_eq_number_of     ~> number_of_less_floor
  less_floor_eq_zero          ~> zero_less_floor
  less_floor_eq_one           ~> one_less_floor
  floor_le_eq_number_of       ~> floor_le_number_of
  floor_le_eq_zero            ~> floor_le_zero
  floor_le_eq_one             ~> floor_le_one
  floor_subtract_number_of    ~> floor_diff_number_of
  floor_subtract_one          ~> floor_diff_one
  ceiling_number_of_eq        ~> ceiling_number_of
  ceiling_le_eq_number_of     ~> ceiling_le_number_of
  ceiling_le_zero_eq          ~> ceiling_le_zero
  ceiling_le_eq_one           ~> ceiling_le_one
  less_ceiling_eq_number_of   ~> number_of_less_ceiling
  less_ceiling_eq_zero        ~> zero_less_ceiling
  less_ceiling_eq_one         ~> one_less_ceiling
  ceiling_less_eq_number_of   ~> ceiling_less_number_of
  ceiling_less_eq_zero        ~> ceiling_less_zero
  ceiling_less_eq_one         ~> ceiling_less_one
  le_ceiling_eq_number_of     ~> number_of_le_ceiling
  le_ceiling_eq_zero          ~> zero_le_ceiling
  le_ceiling_eq_one           ~> one_le_ceiling
  ceiling_subtract_number_of  ~> ceiling_diff_number_of
  ceiling_subtract_one        ~> ceiling_diff_one

* Theory "Finite_Set": various folding_XXX locales facilitate the
application of the various fold combinators on finite sets.

* Library theory "RBT" renamed to "RBT_Impl"; new library theory
"RBT" provides abstract red-black tree type which is backed by
"RBT_Impl" as implementation. INCOMPATIBILITY.

* Theory Library/Coinductive_List has been removed -- superseded by
AFP/thys/Coinductive.

* Theory PReal, including the type "preal" and related operations,
has been removed. INCOMPATIBILITY.

* Real: new development using Cauchy Sequences.

* Split off theory "Big_Operators" containing setsum, setprod,
Inf_fin, Sup_fin, Min, Max from theory Finite_Set. INCOMPATIBILITY.

* Theory "Rational" renamed to "Rat", for consistency with "Nat",
"Int" etc.
INCOMPATIBILITY.

* Constant Rat.normalize needs to be qualified. INCOMPATIBILITY.

* New set of rules "ac_simps" provides combined assoc / commute
rewrites for all interpretations of the appropriate generic locales.

* Renamed theory "OrderedGroup" to "Groups" and split theory
"Ring_and_Field" into theories "Rings" and "Fields"; for more
appropriate and more consistent names suitable for name prefixes
within the HOL theories. INCOMPATIBILITY.

* Some generic constants have been put to appropriate theories:
  - less_eq, less: Orderings
  - zero, one, plus, minus, uminus, times, abs, sgn: Groups
  - inverse, divide: Rings
INCOMPATIBILITY.

* More consistent naming of type classes involving orderings (and
lattices):

  lower_semilattice                 ~> semilattice_inf
  upper_semilattice                 ~> semilattice_sup
  dense_linear_order                ~> dense_linorder
  pordered_ab_group_add             ~> ordered_ab_group_add
  pordered_ab_group_add_abs         ~> ordered_ab_group_add_abs
  pordered_ab_semigroup_add         ~> ordered_ab_semigroup_add
  pordered_ab_semigroup_add_imp_le  ~> ordered_ab_semigroup_add_imp_le
  pordered_cancel_ab_semigroup_add  ~> ordered_cancel_ab_semigroup_add
  pordered_cancel_comm_semiring     ~> ordered_cancel_comm_semiring
  pordered_cancel_semiring          ~> ordered_cancel_semiring
  pordered_comm_monoid_add          ~> ordered_comm_monoid_add
  pordered_comm_ring                ~> ordered_comm_ring
  pordered_comm_semiring            ~> ordered_comm_semiring
  pordered_ring                     ~> ordered_ring
  pordered_ring_abs                 ~> ordered_ring_abs
  pordered_semiring                 ~> ordered_semiring

  ordered_ab_group_add              ~> linordered_ab_group_add
  ordered_ab_semigroup_add          ~> linordered_ab_semigroup_add
  ordered_cancel_ab_semigroup_add   ~> linordered_cancel_ab_semigroup_add
  ordered_comm_semiring_strict      ~> linordered_comm_semiring_strict
  ordered_field                     ~> linordered_field
  ordered_field_no_lb               ~> linordered_field_no_lb
  ordered_field_no_ub               ~> linordered_field_no_ub
  ordered_field_dense_linear_order  ~> dense_linordered_field
  ordered_idom                      ~> linordered_idom
  ordered_ring                      ~> linordered_ring
  ordered_ring_le_cancel_factor     ~> linordered_ring_le_cancel_factor
  ordered_ring_less_cancel_factor   ~> linordered_ring_less_cancel_factor
  ordered_ring_strict               ~> linordered_ring_strict
  ordered_semidom                   ~> linordered_semidom
  ordered_semiring                  ~> linordered_semiring
  ordered_semiring_1                ~> linordered_semiring_1
  ordered_semiring_1_strict         ~> linordered_semiring_1_strict
  ordered_semiring_strict           ~> linordered_semiring_strict

The following slightly odd type classes have been moved to a separate
theory Library/Lattice_Algebras:

  lordered_ab_group_add             ~> lattice_ab_group_add
  lordered_ab_group_add_abs         ~> lattice_ab_group_add_abs
  lordered_ab_group_add_meet        ~> semilattice_inf_ab_group_add
  lordered_ab_group_add_join        ~> semilattice_sup_ab_group_add
  lordered_ring                     ~> lattice_ring

INCOMPATIBILITY.

* Refined field classes:
  - classes division_ring_inverse_zero, field_inverse_zero,
    linordered_field_inverse_zero include rule inverse 0 = 0 --
    subsumes former division_by_zero class;
  - numerous lemmas have been ported from field to division_ring.
INCOMPATIBILITY.

* Refined algebra theorem collections:
  - dropped theorem group group_simps, use algebra_simps instead;
  - dropped theorem group ring_simps, use field_simps instead;
  - proper theorem collection field_simps subsumes former theorem
    groups field_eq_simps and field_simps;
  - dropped lemma eq_minus_self_iff which is a duplicate for
    equal_neg_zero.
INCOMPATIBILITY.
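As a sketch of the porting involved, a proof step that formerly
referred to "group_simps" or "ring_simps" now uses the combined
collection (the statement below is a hypothetical example; the
distribution laws are part of algebra_simps):

  lemma "a * (b + c) = a * b + a * (c::'a::comm_ring)"
    by (simp add: algebra_simps)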
* Theory Finite_Set and List: some lemmas have been generalized from
sets to lattices:

  fun_left_comm_idem_inter  ~> fun_left_comm_idem_inf
  fun_left_comm_idem_union  ~> fun_left_comm_idem_sup
  inter_Inter_fold_inter    ~> inf_Inf_fold_inf
  union_Union_fold_union    ~> sup_Sup_fold_sup
  Inter_fold_inter          ~> Inf_fold_inf
  Union_fold_union          ~> Sup_fold_sup
  inter_INTER_fold_inter    ~> inf_INFI_fold_inf
  union_UNION_fold_union    ~> sup_SUPR_fold_sup
  INTER_fold_inter          ~> INFI_fold_inf
  UNION_fold_union          ~> SUPR_fold_sup

* Theory "Complete_Lattice": lemmas top_def and bot_def have been
replaced by the more convenient lemmas Inf_empty and Sup_empty.
Dropped lemmas Inf_insert_simp and Sup_insert_simp, which are
subsumed by Inf_insert and Sup_insert. Lemmas Inf_UNIV and Sup_UNIV
replace former Inf_Univ and Sup_Univ. Lemmas inf_top_right and
sup_bot_right subsume inf_top and sup_bot respectively.
INCOMPATIBILITY.

* Reorganized theory Multiset: swapped notation of pointwise and
multiset order:
  - pointwise ordering is instance of class order with standard
    syntax <= and <;
  - multiset ordering has syntax <=# and <#; partial order properties
    are provided by means of interpretation with prefix
    multiset_order;
  - less duplication, less historical organization of sections,
    conversion from associations lists to multisets, rudimentary code
    generation;
  - use insert_DiffM2 [symmetric] instead of elem_imp_eq_diff_union,
    if needed.
Renamed:
  multiset_eq_conv_count_eq  ~> multiset_ext_iff
  multi_count_ext            ~> multiset_ext
  diff_union_inverse2        ~> diff_union_cancelR
INCOMPATIBILITY.

* Theory Permutation: replaced local "remove" by List.remove1.

* Code generation: ML and OCaml code is decorated with signatures.

* Theory List: added transpose.

* Library/Nat_Bijection.thy is a collection of bijective functions
between nat and other types, which supersedes the older libraries
Library/Nat_Int_Bij.thy and HOLCF/NatIso.thy. INCOMPATIBILITY.
Constants:
  Nat_Int_Bij.nat2_to_nat     ~> prod_encode
  Nat_Int_Bij.nat_to_nat2     ~> prod_decode
  Nat_Int_Bij.int_to_nat_bij  ~> int_encode
  Nat_Int_Bij.nat_to_int_bij  ~> int_decode
  Countable.pair_encode       ~> prod_encode
  NatIso.prod2nat             ~> prod_encode
  NatIso.nat2prod             ~> prod_decode
  NatIso.sum2nat              ~> sum_encode
  NatIso.nat2sum              ~> sum_decode
  NatIso.list2nat             ~> list_encode
  NatIso.nat2list             ~> list_decode
  NatIso.set2nat              ~> set_encode
  NatIso.nat2set              ~> set_decode

Lemmas:
  Nat_Int_Bij.bij_nat_to_int_bij   ~> bij_int_decode
  Nat_Int_Bij.nat2_to_nat_inj      ~> inj_prod_encode
  Nat_Int_Bij.nat2_to_nat_surj     ~> surj_prod_encode
  Nat_Int_Bij.nat_to_nat2_inj      ~> inj_prod_decode
  Nat_Int_Bij.nat_to_nat2_surj     ~> surj_prod_decode
  Nat_Int_Bij.i2n_n2i_id           ~> int_encode_inverse
  Nat_Int_Bij.n2i_i2n_id           ~> int_decode_inverse
  Nat_Int_Bij.surj_nat_to_int_bij  ~> surj_int_encode
  Nat_Int_Bij.surj_int_to_nat_bij  ~> surj_int_decode
  Nat_Int_Bij.inj_nat_to_int_bij   ~> inj_int_encode
  Nat_Int_Bij.inj_int_to_nat_bij   ~> inj_int_decode
  Nat_Int_Bij.bij_nat_to_int_bij   ~> bij_int_encode
  Nat_Int_Bij.bij_int_to_nat_bij   ~> bij_int_decode

* Sledgehammer:
  - Renamed ATP commands:
      atp_info      ~> sledgehammer running_atps
      atp_kill      ~> sledgehammer kill_atps
      atp_messages  ~> sledgehammer messages
      atp_minimize  ~> sledgehammer minimize
      print_atps    ~> sledgehammer available_atps
    INCOMPATIBILITY.
  - Added user's manual ("isabelle doc sledgehammer").
  - Added option syntax and "sledgehammer_params" to customize
    Sledgehammer's behavior. See the manual for details.
  - Modified the Isar proof reconstruction code so that it produces
    direct proofs rather than proofs by contradiction. (This feature
    is still experimental.)
  - Made Isar proof reconstruction work for SPASS, remote ATPs, and
    in full-typed mode.
  - Added support for TPTP syntax for SPASS via the "spass_tptp" ATP.

* Nitpick:
  - Added and implemented "binary_ints" and "bits" options.
  - Added "std" option and implemented support for nonstandard
    models.
  - Added and implemented "finitize" option to improve the precision
    of infinite datatypes based on a monotonicity analysis.
  - Added support for quotient types.
  - Added support for "specification" and "ax_specification"
    constructs.
  - Added support for local definitions (for "function" and
    "termination" proofs).
  - Added support for term postprocessors.
  - Optimized "Multiset.multiset" and "FinFun.finfun".
  - Improved efficiency of "destroy_constrs" optimization.
  - Fixed soundness bugs related to "destroy_constrs" optimization
    and record getters.
  - Fixed soundness bug related to higher-order constructors.
  - Fixed soundness bug when "full_descrs" is enabled.
  - Improved precision of set constructs.
  - Added "atoms" option.
  - Added cache to speed up repeated Kodkod invocations on the same
    problems.
  - Renamed "MiniSatJNI", "zChaffJNI", "BerkMinAlloy", and
    "SAT4JLight" to "MiniSat_JNI", "zChaff_JNI", "BerkMin_Alloy", and
    "SAT4J_Light". INCOMPATIBILITY.
  - Removed "skolemize", "uncurry", "sym_break", "flatten_prop",
    "sharing_depth", and "show_skolems" options. INCOMPATIBILITY.
  - Removed "nitpick_intro" attribute. INCOMPATIBILITY.

* Method "induct" now takes instantiations of the form t, where t is
not a variable, as a shorthand for "x == t", where x is a fresh
variable. If this is not intended, t has to be enclosed in
parentheses. By default, the equalities generated by definitional
instantiations are pre-simplified, which may cause parameters of
inductive cases to disappear, or may even delete some of the
inductive cases. Use "induct (no_simp)" instead of "induct" to
restore the old behaviour. The (no_simp) option is also understood by
the "cases" and "nominal_induct" methods, which now perform
pre-simplification, too. INCOMPATIBILITY.


*** HOLCF ***

* Variable names in lemmas generated by the domain package have
changed; the naming scheme is now consistent with the HOL datatype
package. Some proof scripts may be affected, INCOMPATIBILITY.
* The domain package no longer defines the function "foo_copy" for
recursive domain "foo". The reach lemma is now stated directly in
terms of "foo_take". Lemmas and proofs that mention "foo_copy" must
be reformulated in terms of "foo_take", INCOMPATIBILITY.

* Most definedness lemmas generated by the domain package (previously
of the form "x ~= UU ==> foo$x ~= UU") now have an if-and-only-if
form like "foo$x = UU <-> x = UU", which works better as a simp rule.
Proofs that used definedness lemmas as intro rules may break,
potential INCOMPATIBILITY.

* Induction and casedist rules generated by the domain package now
declare proper case_names (one called "bottom", and one named for
each constructor). INCOMPATIBILITY.

* For mutually-recursive domains, separate "reach" and "take_lemma"
rules are generated for each domain, INCOMPATIBILITY.

  foo_bar.reach        ~> foo.reach  bar.reach
  foo_bar.take_lemmas  ~> foo.take_lemma  bar.take_lemma

* Some lemmas generated by the domain package have been renamed for
consistency with the datatype package, INCOMPATIBILITY.

  foo.ind         ~> foo.induct
  foo.finite_ind  ~> foo.finite_induct
  foo.coind       ~> foo.coinduct
  foo.casedist    ~> foo.exhaust
  foo.exhaust     ~> foo.nchotomy

* For consistency with other definition packages, the fixrec package
now generates qualified theorem names, INCOMPATIBILITY.

  foo_simps   ~> foo.simps
  foo_unfold  ~> foo.unfold
  foo_induct  ~> foo.induct

* The "fixrec_simp" attribute has been removed. The "fixrec_simp"
method and internal fixrec proofs now use the default simpset
instead. INCOMPATIBILITY.

* The "contlub" predicate has been removed. Proof scripts should use
lemma contI2 in place of monocontlub2cont, INCOMPATIBILITY.

* The "admw" predicate has been removed, INCOMPATIBILITY.

* The constants cpair, cfst, and csnd have been removed in favor of
Pair, fst, and snd from Isabelle/HOL, INCOMPATIBILITY.
*** ML ***

* Antiquotations for basic formal entities:

  @{class NAME}         -- type class
  @{class_syntax NAME}  -- syntax representation of the above

  @{type_name NAME}     -- logical type
  @{type_abbrev NAME}   -- type abbreviation
  @{nonterminal NAME}   -- type of concrete syntactic category
  @{type_syntax NAME}   -- syntax representation of any of the above

  @{const_name NAME}    -- logical constant (INCOMPATIBILITY)
  @{const_abbrev NAME}  -- abbreviated constant
  @{const_syntax NAME}  -- syntax representation of any of the above

* Antiquotation @{syntax_const NAME} ensures that NAME refers to a
raw syntax constant (cf. 'syntax' command).

* Antiquotation @{make_string} inlines a function to print arbitrary
values similar to the ML toplevel. The result is compiler dependent
and may fall back on "?" in certain situations.

* Diagnostic commands 'ML_val' and 'ML_command' may refer to
antiquotations @{Isar.state} and @{Isar.goal}. This replaces impure
Isar.state() and Isar.goal(), which belong to the old TTY loop and do
not work with the asynchronous Isar document model.

* Configuration options now admit dynamic default values, depending
on the context or even global references.

* SHA1.digest digests strings according to SHA-1 (see RFC 3174). It
uses an efficient external library if available (for Poly/ML).

* Renamed some important ML structures, while keeping the old names
for some time as aliases within the structure Legacy:

  OuterKeyword  ~>  Keyword
  OuterLex      ~>  Token
  OuterParse    ~>  Parse
  OuterSyntax   ~>  Outer_Syntax
  PrintMode     ~>  Print_Mode
  SpecParse     ~>  Parse_Spec
  ThyInfo       ~>  Thy_Info
  ThyLoad       ~>  Thy_Load
  ThyOutput     ~>  Thy_Output
  TypeInfer     ~>  Type_Infer

Note that "open Legacy" simplifies porting of sources, but forgetting
to remove it again will complicate porting again in the future.

* Most operations that refer to a global context are named
accordingly, e.g. Simplifier.global_context or
ProofContext.init_global.
There are some situations where a global context actually works, but
under normal circumstances one needs to pass the proper local context
through the code!

* Discontinued old TheoryDataFun with its copy/init operation -- data
needs to be pure. Functor Theory_Data_PP retains the traditional
Pretty.pp argument to merge, which is absent in the standard
Theory_Data version.

* Sorts.certify_sort and derived "cert" operations for types and
terms no longer minimize sorts. Thus certification at the boundary of
the inference kernel becomes invariant under addition of class
relations, which is an important monotonicity principle. Sorts are
now minimized in the syntax layer only, at the boundary between the
end-user and the system. Subtle INCOMPATIBILITY, may have to use
Sign.minimize_sort explicitly in rare situations.

* Renamed old-style Drule.standard to Drule.export_without_context,
to emphasize that this is in no way a standard operation.
INCOMPATIBILITY.

* Subgoal.FOCUS (and variants): resulting goal state is normalized as
usual for resolution. Rare INCOMPATIBILITY.

* Renamed varify/unvarify operations to varify_global/unvarify_global
to emphasize that these only work in a global situation (which is
quite rare).

* Curried take and drop in library.ML; negative length is interpreted
as infinity (as in chop). Subtle INCOMPATIBILITY.

* Proof terms: type substitutions on proof constants now use
canonical order of type variables. INCOMPATIBILITY for tools working
with proof terms.

* Raw axioms/defs may no longer carry sort constraints, and raw defs
may no longer carry premises. User-level specifications are
transformed accordingly by Thm.add_axiom/add_def.


*** System ***

* Discontinued special HOL_USEDIR_OPTIONS for the main HOL image;
ISABELLE_USEDIR_OPTIONS applies uniformly to all sessions. Note that
proof terms are enabled unconditionally in the new HOL-Proofs image.

* Discontinued old ISABELLE and ISATOOL environment settings (legacy
feature since Isabelle2009).
Use ISABELLE_PROCESS and ISABELLE_TOOL, respectively.

* Old lib/scripts/polyml-platform is superseded by the
ISABELLE_PLATFORM setting variable, which defaults to the 32 bit
variant, even on a 64 bit machine. The following example setting
prefers 64 bit if available:

  ML_PLATFORM="${ISABELLE_PLATFORM64:-$ISABELLE_PLATFORM}"

* The preliminary Isabelle/jEdit application demonstrates the
emerging Isabelle/Scala layer for advanced prover interaction and
integration. See src/Tools/jEdit or "isabelle jedit" provided by the
properly built component.

* "IsabelleText" is a Unicode font derived from Bitstream Vera Mono
and Bluesky TeX fonts. It provides the usual Isabelle symbols,
similar to the default assignment of the document preparation system
(cf. isabellesym.sty). The Isabelle/Scala class Isabelle_System
provides some operations for direct access to the font without asking
the user for manual installation.



New in Isabelle2009-1 (December 2009)
-------------------------------------

*** General ***

* Discontinued old form of "escaped symbols" such as \\. Only one
backslash should be used, even in ML sources.


*** Pure ***

* Locale interpretation propagates mixins along the locale hierarchy.
The only mixins currently available are the equations used to map
local definitions to terms of the target domain of an interpretation.

* Reactivated diagnostic command 'print_interps'. Use "print_interps
loc" to print all interpretations of locale "loc" in the theory.
Interpretations in proofs are not shown.

* Thoroughly revised locales tutorial. New section on conditional
interpretation.

* On instantiation of classes, remaining undefined class parameters
are formally declared. INCOMPATIBILITY.


*** Document preparation ***

* New generalized style concept for printing terms: @{foo (style) ...}
instead of @{foo_style style ...} (old form is still retained for
backward compatibility). Styles can also be applied for
antiquotations prop, term_type and typeof.
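For instance, with the existing "concl" style the old and new
antiquotation forms compare as follows (a sketch only; the choice of
fact conjI is arbitrary):

  text {* old: @{thm_style concl conjI} *}
  text {* new: @{thm (concl) conjI} *}

Both print just the conclusion of the rule, rather than the whole
implication.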
*** HOL ***

* New proof method "smt" for a combination of first-order logic with
equality, linear and nonlinear (natural/integer/real) arithmetic, and
fixed-size bitvectors; there is also basic support for higher-order
features (esp. lambda abstractions). It is an incomplete decision
procedure based on external SMT solvers using the oracle mechanism;
for the SMT solver Z3, this method is proof-producing. Certificates
are provided to avoid calling the external solvers solely for
re-checking proofs. Due to a remote SMT service there is no need for
installing SMT solvers locally. See src/HOL/SMT.

* New commands to load and prove verification conditions generated by
the Boogie program verifier or derived systems (e.g. the Verifying C
Compiler (VCC) or Spec#). See src/HOL/Boogie.

* New counterexample generator tool 'nitpick' based on the Kodkod
relational model finder. See src/HOL/Tools/Nitpick and
src/HOL/Nitpick_Examples.

* New commands 'code_pred' and 'values' to invoke the predicate
compiler and to enumerate values of inductive predicates.

* A tabled implementation of the reflexive transitive closure.

* New implementation of quickcheck uses generic code generator;
default generators are provided for all suitable HOL types, records
and datatypes. Old quickcheck can be re-activated importing theory
Library/SML_Quickcheck.

* New testing tool Mirabelle for automated proof tools. Applies
several tools and tactics like sledgehammer, metis, or quickcheck, to
every proof step in a theory. To be used in batch mode via the
"mirabelle" utility.

* New proof method "sos" (sum of squares) for nonlinear real
arithmetic (originally due to John Harrison). It requires theory
Library/Sum_Of_Squares. It is not a complete decision procedure but
works well in practice on quantifier-free real arithmetic with +, -,
*, ^, =, <= and <, i.e. boolean combinations of equalities and
inequalities between polynomials. It makes use of external
semidefinite programming solvers.
Method "sos" generates a certificate that can be pasted into the
proof, thus avoiding the need to call an external tool every time the
proof is checked. See src/HOL/Library/Sum_Of_Squares.

* New method "linarith" invokes existing linear arithmetic decision
procedure only.

* New command 'atp_minimal' reduces result produced by Sledgehammer.

* New Sledgehammer option "Full Types" in Proof General settings
menu. Causes full type information to be output to the ATPs. This
slows ATPs down considerably but eliminates a source of unsound
"proofs" that fail later.

* New method "metisFT": A version of metis that uses full type
information in order to avoid failures of proof reconstruction.

* New evaluator "approximate" approximates a real-valued term using
the same method as the approximation method.

* Method "approximate" now supports arithmetic expressions as
boundaries of intervals and implements interval splitting and Taylor
series expansion.

* ML antiquotation @{code_datatype} inserts definition of a datatype
generated by the code generator; e.g. see src/HOL/Predicate.thy.

* New theory SupInf of the supremum and infimum operators for sets of
reals.

* New theory Probability, which contains a development of measure
theory, eventually leading to Lebesgue integration and probability.

* Extended Multivariate Analysis to include derivation and Brouwer's
fixpoint theorem.

* Reorganization of number theory, INCOMPATIBILITY:
  - new number theory development for nat and int, in theories
    Divides and GCD as well as in new session Number_Theory
  - some constants and facts now suffixed with _nat and _int
    accordingly
  - former session NumberTheory now named Old_Number_Theory,
    including theories Legacy_GCD and Primes (prefer Number_Theory if
    possible)
  - moved theory Pocklington from src/HOL/Library to
    src/HOL/Old_Number_Theory

* Theory GCD includes functions Gcd/GCD and Lcm/LCM for the gcd and
lcm of finite and infinite sets. It is shown that they form a
complete lattice.
* Class semiring_div requires superclass no_zero_divisors and proof
of div_mult_mult1; theorems div_mult_mult1, div_mult_mult2,
div_mult_mult1_if, div_mult_mult1 and div_mult_mult2 have been
generalized to class semiring_div, subsuming former theorems
zdiv_zmult_zmult1, zdiv_zmult_zmult1_if, zdiv_zmult_zmult1 and
zdiv_zmult_zmult2. div_mult_mult1 is now [simp] by default.
INCOMPATIBILITY.

* Refinements to lattice classes and sets:
  - less default intro/elim rules in locale variant, more default
    intro/elim rules in class variant: more uniformity
  - lemma ge_sup_conv renamed to le_sup_iff, in accordance with
    le_inf_iff
  - dropped lemma alias inf_ACI for inf_aci (same for sup_ACI and
    sup_aci)
  - renamed ACI to inf_sup_aci
  - new class "boolean_algebra"
  - class "complete_lattice" moved to separate theory
    "Complete_Lattice"; corresponding constants (and abbreviations)
    renamed and with authentic syntax:
      Set.Inf    ~> Complete_Lattice.Inf
      Set.Sup    ~> Complete_Lattice.Sup
      Set.INFI   ~> Complete_Lattice.INFI
      Set.SUPR   ~> Complete_Lattice.SUPR
      Set.Inter  ~> Complete_Lattice.Inter
      Set.Union  ~> Complete_Lattice.Union
      Set.INTER  ~> Complete_Lattice.INTER
      Set.UNION  ~> Complete_Lattice.UNION
  - authentic syntax for Set.Pow and Set.image
  - mere abbreviations:
      Set.empty               (for bot)
      Set.UNIV                (for top)
      Set.inter               (for inf, formerly Set.Int)
      Set.union               (for sup, formerly Set.Un)
      Complete_Lattice.Inter  (for Inf)
      Complete_Lattice.Union  (for Sup)
      Complete_Lattice.INTER  (for INFI)
      Complete_Lattice.UNION  (for SUPR)
  - object-logic definitions as far as appropriate
INCOMPATIBILITY.

Care is required when theorems Int_subset_iff or Un_subset_iff are
explicitly deleted as default simp rules; then also their lattice
counterparts le_inf_iff and le_sup_iff have to be deleted to achieve
the desired effect.

* Rules inf_absorb1, inf_absorb2, sup_absorb1, sup_absorb2 are no
longer simp rules by default; the same applies to min_max.inf_absorb1
etc. INCOMPATIBILITY.
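Affected proofs now need to provide the absorption laws explicitly,
e.g. (a hypothetical goal, which is essentially inf_absorb1 itself):

  lemma "x <= y ==> inf x (y::'a::lattice) = x"
    by (simp add: inf_absorb1)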
* Rules sup_Int_eq and sup_Un_eq are no longer declared as
pred_set_conv by default. INCOMPATIBILITY.

* Power operations on relations and functions are now one dedicated
constant "compow" with infix syntax "^^". Power operation on
multiplicative monoids retains syntax "^" and is now defined generic
in class power. INCOMPATIBILITY.

* Relation composition "R O S" now has a more standard argument
order: "R O S = {(x, z). EX y. (x, y) : R & (y, z) : S}".
INCOMPATIBILITY, rewrite propositions with "S O R" --> "R O S".
Proofs may occasionally break, since the O_assoc rule was not
rewritten like this. Fix using O_assoc[symmetric]. The same applies
to the curried version "R OO S".

* Function "Inv" is renamed to "inv_into" and function "inv" is now
an abbreviation for "inv_into UNIV". Lemmas are renamed accordingly.
INCOMPATIBILITY.

* Most rules produced by inductive and datatype package have
mandatory prefixes. INCOMPATIBILITY.

* Changed "DERIV_intros" to a dynamic fact, which can be augmented by
the attribute of the same name. Each of the theorems in the list
DERIV_intros assumes composition with an additional function and
matches a variable to the derivative, which has to be solved by the
Simplifier. Hence (auto intro!: DERIV_intros) computes the derivative
of most elementary terms. Former Maclauren.DERIV_tac and
Maclauren.deriv_tac should be replaced by (auto intro!:
DERIV_intros). INCOMPATIBILITY.

* Code generator attributes follow the usual underscore convention:

  code_unfold  replaces  code unfold
  code_post    replaces  code post
  etc.

INCOMPATIBILITY.

* Renamed methods:

  sizechange     -> size_change
  induct_scheme  -> induction_schema

INCOMPATIBILITY.

* Discontinued abbreviation "arbitrary" of constant "undefined".
INCOMPATIBILITY, use "undefined" directly.

* Renamed theorems:

  Suc_eq_add_numeral_1       -> Suc_eq_plus1
  Suc_eq_add_numeral_1_left  -> Suc_eq_plus1_left
  Suc_plus1                  -> Suc_eq_plus1
  *anti_sym                  -> *antisym*
  vector_less_eq_def         -> vector_le_def

INCOMPATIBILITY.
* Added theorem List.map_map as [simp]. Removed List.map_compose.
INCOMPATIBILITY.

* Removed predicate "M hassize n" (<--> card M = n & finite M).
INCOMPATIBILITY.


*** HOLCF ***

* Theory Representable defines a class "rep" of domains that are
representable (via an ep-pair) in the universal domain type "udom".
Instances are provided for all type constructors defined in HOLCF.

* The 'new_domain' command is a purely definitional version of the
domain package, for representable domains. Syntax is identical to the
old domain package. The 'new_domain' package also supports indirect
recursion using previously-defined type constructors. See
src/HOLCF/ex/New_Domain.thy for examples.

* Method "fixrec_simp" unfolds one step of a fixrec-defined constant
on the left-hand side of an equation, and then performs
simplification. Rewriting is done using rules declared with the
"fixrec_simp" attribute. The "fixrec_simp" method is intended as a
replacement for "fixpat"; see src/HOLCF/ex/Fixrec_ex.thy for
examples.

* The pattern-match compiler in 'fixrec' can now handle constructors
with HOL function types. Pattern-match combinators for the Pair
constructor are pre-configured.

* The 'fixrec' package now produces better fixed-point induction
rules for mutually-recursive definitions: Induction rules have
conclusions of the form "P foo bar" instead of "P <foo, bar>".

* The constant "sq_le" (with infix syntax "<<" or "\") has been
renamed to "below". The name "below" now replaces "less" in many
theorem names. (Legacy theorem names using "less" are still supported
as well.)

* The 'fixrec' package now supports "bottom patterns". Bottom
patterns can be used to generate strictness rules, or to make
functions more strict (much like the bang-patterns supported by the
Glasgow Haskell Compiler). See src/HOLCF/ex/Fixrec_ex.thy for
examples.


*** ML ***

* Support for Poly/ML 5.3.0, with improved reporting of compiler
errors and run-time exceptions, including detailed source positions.
* Structure Name_Space (formerly NameSpace) now manages uniquely
identified entries, with some additional information such as source
position, logical grouping etc.

* Theory and context data is now introduced by the simplified and
modernized functors Theory_Data, Proof_Data, Generic_Data. Data needs
to be pure, but the old TheoryDataFun for mutable data (with explicit
copy operation) is still available for some time.

* Structure Synchronized (cf. src/Pure/Concurrent/synchronized.ML)
provides a high-level programming interface to synchronized state
variables with atomic update. This works via pure function
application within a critical section -- its runtime should be as
short as possible; beware of deadlocks if critical code is nested,
either directly or indirectly via other synchronized variables!

* Structure Unsynchronized (cf. src/Pure/ML-Systems/unsynchronized.ML)
wraps raw ML references, explicitly indicating their non-thread-safe
behaviour. The Isar toplevel keeps this structure open, to
accommodate Proof General as well as quick and dirty interactive
experiments with references.

* PARALLEL_CHOICE and PARALLEL_GOALS provide basic support for
parallel tactical reasoning.

* Tacticals Subgoal.FOCUS, Subgoal.FOCUS_PREMS, Subgoal.FOCUS_PARAMS
are similar to SUBPROOF, but are slightly more flexible: only the
specified parts of the subgoal are imported into the context, and the
body tactic may introduce new subgoals and schematic variables.

* Old tactical METAHYPS, which does not observe the proof context, has
been renamed to Old_Goals.METAHYPS and awaits deletion. Use SUBPROOF
or Subgoal.FOCUS etc.

* Renamed functor TableFun to Table, and GraphFun to Graph. (Since
functors have their own ML name space there is no point in marking
them separately.) Minor INCOMPATIBILITY.

* Renamed NamedThmsFun to Named_Thms. INCOMPATIBILITY.

* Renamed several structures FooBar to Foo_Bar. Occasional
INCOMPATIBILITY.
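A minimal sketch of the Synchronized interface described above (the
variable name and counter logic are made up for illustration):

    ML {*
      (* hypothetical example: a thread-safe counter *)
      val counter = Synchronized.var "counter" (0: int);
      fun next () =
        Synchronized.change_result counter (fn i => (i + 1, i + 1));
    *}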
* Operations of structure Skip_Proof no longer require quick_and_dirty
mode, which avoids critical setmp.

* Eliminated old Attrib.add_attributes, Method.add_methods and related
combinators for "args". INCOMPATIBILITY, need to use the simplified
Attrib/Method.setup introduced in Isabelle2009.

* Proper context for simpset_of, claset_of, clasimpset_of. May fall
back on global_simpset_of, global_claset_of, global_clasimpset_of as
last resort. INCOMPATIBILITY.

* Display.pretty_thm now requires a proper context (cf. former
ProofContext.pretty_thm). May fall back on Display.pretty_thm_global
or even Display.pretty_thm_without_context as last resort.
INCOMPATIBILITY.

* Discontinued Display.pretty_ctyp/cterm etc. INCOMPATIBILITY, use
Syntax.pretty_typ/term directly, preferably with a proper context
instead of the global theory.


*** System ***

* Further fine tuning of parallel proof checking, scales up to 8 cores
(max. speedup factor 5.0). See also Goal.parallel_proofs in ML and
usedir option -q.

* Support for additional "Isabelle components" via etc/components, see
also the system manual.

* The isabelle makeall tool now operates on all components with an
IsaMakefile, not just the hardwired "logics".

* Removed "compress" option from isabelle-process and isabelle usedir;
this is always enabled.

* Discontinued support for Poly/ML 4.x versions.

* Isabelle tool "wwwfind" provides a web interface for 'find_theorems'
on a given logic image. This requires the lighttpd webserver and is
currently supported on Linux only.


New in Isabelle2009 (April 2009)
--------------------------------

*** General ***

* Simplified main Isabelle executables, with fewer surprises on
case-insensitive file-systems (such as Mac OS).

  - The main Isabelle tool wrapper is now called "isabelle" instead of
    "isatool".

  - The former "isabelle" alias for "isabelle-process" has been
    removed (should rarely occur to regular users).
  - The former "isabelle-interface" and its alias "Isabelle" have been
    removed (interfaces are now regular Isabelle tools).

Within scripts and make files, the Isabelle environment variables
ISABELLE_TOOL and ISABELLE_PROCESS replace the old ISATOOL and
ISABELLE, respectively. (The latter are still available as a legacy
feature.)

The old isabelle-interface wrapper could react in confusing ways if
the interface was uninstalled or changed otherwise. Individual
interface tool configuration is now more explicit, see also the
Isabelle system manual. In particular, Proof General is now available
via "isabelle emacs".

INCOMPATIBILITY, need to adapt derivative scripts. Users may need to
purge installed copies of Isabelle executables and re-run "isabelle
install -p ...", or use symlinks.

* The default for ISABELLE_HOME_USER is now ~/.isabelle instead of the
old ~/isabelle, which was slightly non-standard and apt to cause
surprises on case-insensitive file-systems (such as Mac OS).
INCOMPATIBILITY, need to move existing ~/isabelle/etc,
~/isabelle/heaps, ~/isabelle/browser_info to the new place. Special
care is required when using older releases of Isabelle. Note that
ISABELLE_HOME_USER can be changed in Isabelle/etc/settings of any
Isabelle distribution, in order to use the new ~/.isabelle uniformly.

* Proofs of fully specified statements are run in parallel on
multi-core systems. A speedup factor of 2.5 to 3.2 can be expected on
a regular 4-core machine, if the initial heap space is made reasonably
large (cf. Poly/ML option -H). (Requires Poly/ML 5.2.1 or later.)

* The main reference manuals ("isar-ref", "implementation", and
"system") have been updated and extended. Formally checked references
as hyperlinks are now available uniformly.


*** Pure ***

* Complete re-implementation of locales. INCOMPATIBILITY in several
respects. The most important changes are listed below. See the
Tutorial on Locales ("locales" manual) for details.
- In locale expressions, instantiation replaces renaming. Parameters
must be declared in a for clause. To aid compatibility with previous
parameter inheritance, in locale declarations, parameters that are not
'touched' (instantiation position "_" or omitted) are implicitly added
with their syntax at the beginning of the for clause.

- Syntax from abbreviations and definitions in locales is available
in locale expressions and context elements. The latter is
particularly useful in locale declarations.

- More flexible mechanisms to qualify names generated by locale
expressions. Qualifiers (prefixes) may be specified in locale
expressions, and can be marked as mandatory (syntax: "name!:") or
optional (syntax: "name?:"). The default for plain "name:" depends on
the situation where a locale expression is used: in commands 'locale'
and 'sublocale' prefixes are optional, in 'interpretation' and
'interpret' prefixes are mandatory. The old implicit qualifiers
derived from the parameter names of a locale are no longer generated.

- Command "sublocale l < e" replaces "interpretation l < e". The
instantiation clause in "interpretation" and "interpret" (square
brackets) is no longer available. Use locale expressions.

- When converting proof scripts, mandatory qualifiers in
'interpretation' and 'interpret' should be retained by default, even
if this is an INCOMPATIBILITY compared to former behavior. In the
worst case, use the "name?:" form for non-mandatory ones. Qualifiers
in locale expressions range over a single locale instance only.

- Dropped locale element "includes". This is a major INCOMPATIBILITY.
In existing theorem specifications replace the includes element by the
respective context elements of the included locale, omitting those
that are already present in the theorem specification. Multiple
assume elements of a locale should be replaced by a single one
involving the locale predicate.
In the proof body, declarations (most notably theorems) may be
regained by interpreting the respective locales in the proof context
as required (command "interpret").

If using "includes" in replacement of a target solely because the
parameter types in the theorem are not as general as in the target,
consider declaring a new locale with additional type constraints on
the parameters (context element "constrains").

- Discontinued "locale (open)". INCOMPATIBILITY.

- Locale interpretation commands no longer attempt to simplify the
goal. INCOMPATIBILITY: in rare situations the generated goal differs.
Use methods intro_locales and unfold_locales to clarify.

- Locale interpretation commands no longer accept interpretation
attributes. INCOMPATIBILITY.

* Class declaration: the so-called "base sort" must no longer be given
in the import list, but is inferred from the specification.
Particularly in HOL, write

    class foo = ...

instead of

    class foo = type + ...

* Class target: global versions of theorems stemming from definitions
do not carry a parameter prefix any longer. INCOMPATIBILITY.

* The class 'instance' command no longer accepts attached definitions.
INCOMPATIBILITY, use the proper 'instantiation' target instead.

* Recovered hiding of consts, which was accidentally broken in
Isabelle2007. Potential INCOMPATIBILITY, ``hide const c'' really
makes c inaccessible; consider using ``hide (open) const c'' instead.

* Slightly more coherent Pure syntax, with updated documentation in
the isar-ref manual. Removed locales meta_term_syntax and
meta_conjunction_syntax: TERM and &&& (formerly &&) are now permanent,
INCOMPATIBILITY in rare situations. Note that &&& should not be used
directly in regular applications.

* There is a new syntactic category "float_const" for signed decimal
fractions (e.g. 123.45 or -123.45).

* Removed exotic 'token_translation' command. INCOMPATIBILITY, use the
ML interface with the 'setup' command instead.

* Command 'local_setup' is similar to 'setup', but operates on a local
theory context.
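A hedged sketch of the mandatory-qualifier syntax for interpretation
described in the locale section above (the locale "partial_order" and
its instance are hypothetical, assuming such a locale with a single
ordering parameter exists):

    interpretation int: partial_order "op <= :: int => int => bool"
      by unfold_locales auto

Afterwards, theorems of the locale are available under the mandatory
qualifier, e.g. int.refl in this made-up example.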
* The 'axiomatization' command now only works within a global theory
context. INCOMPATIBILITY.

* Goal-directed proof now enforces strict proof irrelevance wrt. sort
hypotheses. Sorts required in the course of reasoning need to be
covered by the constraints in the initial statement, completed by the
type instance information of the background theory. Non-trivial sort
hypotheses, which rarely occur in practice, may be specified via
vacuous propositions of the form SORT_CONSTRAINT('a::c). For example:

  lemma assumes "SORT_CONSTRAINT('a::empty)" shows False ...

The result contains an implicit sort hypothesis as before --
SORT_CONSTRAINT premises are eliminated as part of the canonical rule
normalization.

* Generalized Isar history, with support for linear undo, direct state
addressing etc.

* Changed defaults for unify configuration options:

  unify_trace_bound = 50 (formerly 25)
  unify_search_bound = 60 (formerly 30)

* Different bookkeeping for code equations (INCOMPATIBILITY):

  a) On theory merge, the last set of code equations for a particular
     constant is taken (in accordance with the policy applied by other
     parts of the code generator framework).

  b) Code equations stemming from explicit declarations (e.g. code
     attribute) gain priority over default code equations stemming
     from definition, primrec, fun etc.

* Keyword 'code_exception' is now named 'code_abort'.
INCOMPATIBILITY.

* Unified theorem tables for both code generators. Thus [code func]
has disappeared and only [code] remains. INCOMPATIBILITY.

* Command 'find_consts' searches for constants based on type and name
patterns, e.g.

    find_consts "_ => bool"

By default, matching is against subtypes, but it may be restricted to
the whole type. Searching by name is possible.
Multiple queries are conjunctive and queries may be negated by
prefixing them with a hyphen:

    find_consts strict: "_ => bool" name: "Int" -"int => int"

* New 'find_theorems' criterion "solves" matches theorems that
directly solve the current goal (modulo higher-order unification).

* Auto solve feature for main theorem statements: whenever a new goal
is stated, "find_theorems solves" is called; any theorems that could
solve the lemma directly are listed as part of the goal state. Cf.
associated options in the Proof General Isabelle settings menu,
enabled by default, with a reasonable timeout for pathological cases
of higher-order unification.


*** Document preparation ***

* Antiquotation @{lemma} now imitates a regular terminal proof,
demanding keyword 'by' and supporting the full method expression
syntax just like the Isar command 'by'.


*** HOL ***

* Integrated main parts of the former image HOL-Complex with HOL.
Entry points Main and Complex_Main remain as before.

* Logic image HOL-Plain provides a minimal HOL with the most important
tools available (inductive, datatype, primrec, ...). This facilitates
experimentation and tool development. Note that user applications
(and library theories) should never refer to anything below theory
Main, as before.

* Logic image HOL-Main stops at theory Main, and thus facilitates
experimentation due to shorter build times.

* Logic image HOL-NSA contains theories of nonstandard analysis which
were previously part of the former HOL-Complex image. Entry point
Hyperreal remains valid, but theories formerly using Complex_Main
should now use the new entry point Hypercomplex.

* Generic ATP manager for Sledgehammer, based on ML threads instead of
Posix processes. Avoids potentially expensive forking of the ML
process. The new thread-based implementation also works on non-Unix
platforms (Cygwin). Provers are no longer hardwired, but defined
within the theory via plain ML wrapper functions. Basic Sledgehammer
commands are covered in the isar-ref manual.
* Wrapper scripts for the remote SystemOnTPTP service allow using
Sledgehammer without a local ATP installation (Vampire etc.). Other
provers may be included via suitable ML wrappers, see also
src/HOL/ATP_Linkup.thy.

* ATP selection (E/Vampire/Spass) is now done via Proof General's
settings menu.

* The metis method no longer fails because the theorem is too trivial
(contains the empty clause).

* The metis method now fails in the usual manner, rather than raising
an exception, if it determines that it cannot prove the theorem.

* Method "coherent" implements a prover for coherent logic (see also
src/Tools/coherent.ML).

* Constants "undefined" and "default" replace "arbitrary". Usually
"undefined" is the right choice to replace "arbitrary", though
logically there is no difference. INCOMPATIBILITY.

* Command "value" now integrates different evaluation mechanisms. The
result of the first successful evaluation mechanism is printed. In
square brackets a particular named evaluation mechanism may be
specified (currently, [SML], [code] or [nbe]). See further
src/HOL/ex/Eval_Examples.thy.

* Normalization by evaluation now allows non-leftlinear equations.
Declare with attribute [code nbe].

* Methods "case_tac" and "induct_tac" now refer to the very same rules
as the structured Isar versions "cases" and "induct", cf. the
corresponding "cases" and "induct" attributes. Mutual induction rules
are now presented as a list of individual projections (e.g.
foo_bar.inducts for types foo and bar); the old format with explicit
HOL conjunction is no longer supported. INCOMPATIBILITY, in rare
situations a different rule is selected --- notably nested tuple
elimination instead of former prod.exhaust: use explicit (case_tac t
rule: prod.exhaust) here.

* Attributes "cases", "induct", "coinduct" support the "del" option.

* Removed fact "case_split_thm", which duplicates "case_split".

* The option datatype has been moved to a new theory Option.
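The 'value' command described above can be used as follows (a
hypothetical sketch; the printed result depends on the evaluation
mechanism chosen):

    value "rev [1::nat, 2, 3]"    (* first successful mechanism *)
    value [nbe] "rev [a, b, c]"   (* force normalization by evaluation *)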
Renamed option_map to Option.map, and o2s to Option.set,
INCOMPATIBILITY.

* New predicate "strict_mono" classifies strict functions on partial
orders. With strict functions on linear orders, reasoning about
(in)equalities is facilitated by theorems "strict_mono_eq",
"strict_mono_less_eq" and "strict_mono_less".

* Some set operations are now proper qualified constants with
authentic syntax. INCOMPATIBILITY:

    op Int ~> Set.Int
    op Un ~> Set.Un
    INTER ~> Set.INTER
    UNION ~> Set.UNION
    Inter ~> Set.Inter
    Union ~> Set.Union
    {} ~> Set.empty
    UNIV ~> Set.UNIV

* Class complete_lattice with operations Inf, Sup, INFI, SUPR now in
theory Set.

* Auxiliary class "itself" has disappeared -- classes without any
parameter are treated as expected by the 'class' command.

* Leibniz's series for pi, and the arctangent and logarithm series.

* Common decision procedures (Cooper, MIR, Ferrack, Approximation,
Dense_Linear_Order) are now in directory HOL/Decision_Procs.

* Theory src/HOL/Decision_Procs/Approximation provides the new proof
method "approximation". It proves formulas on real values by using
interval arithmetic. The formulas may also contain the transcendental
functions sin, cos, tan, atan, ln, exp and the constant pi. For
examples see src/HOL/Decision_Procs/ex/Approximation_Ex.thy.

* Theory "Reflection" now resides in HOL/Library.

* The entry point to the Word library is now simply named "Word".
INCOMPATIBILITY.
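A hedged sketch of the "approximation" method described above (the
concrete bound is illustrative, and the numeric argument is assumed
to control the precision of the interval computation):

    lemma "3.141 < pi"
      by (approximation 20)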
* Made source layout more coherent with logical distribution
structure:

    src/HOL/Library/RType.thy ~> src/HOL/Typerep.thy
    src/HOL/Library/Code_Message.thy ~> src/HOL/
    src/HOL/Library/GCD.thy ~> src/HOL/
    src/HOL/Library/Order_Relation.thy ~> src/HOL/
    src/HOL/Library/Parity.thy ~> src/HOL/
    src/HOL/Library/Univ_Poly.thy ~> src/HOL/
    src/HOL/Real/ContNotDenum.thy ~> src/HOL/Library/
    src/HOL/Real/Lubs.thy ~> src/HOL/
    src/HOL/Real/PReal.thy ~> src/HOL/
    src/HOL/Real/Rational.thy ~> src/HOL/
    src/HOL/Real/RComplete.thy ~> src/HOL/
    src/HOL/Real/RealDef.thy ~> src/HOL/
    src/HOL/Real/RealPow.thy ~> src/HOL/
    src/HOL/Real/Real.thy ~> src/HOL/
    src/HOL/Complex/Complex_Main.thy ~> src/HOL/
    src/HOL/Complex/Complex.thy ~> src/HOL/
    src/HOL/Complex/FrechetDeriv.thy ~> src/HOL/Library/
    src/HOL/Complex/Fundamental_Theorem_Algebra.thy ~> src/HOL/Library/
    src/HOL/Hyperreal/Deriv.thy ~> src/HOL/
    src/HOL/Hyperreal/Fact.thy ~> src/HOL/
    src/HOL/Hyperreal/Integration.thy ~> src/HOL/
    src/HOL/Hyperreal/Lim.thy ~> src/HOL/
    src/HOL/Hyperreal/Ln.thy ~> src/HOL/
    src/HOL/Hyperreal/Log.thy ~> src/HOL/
    src/HOL/Hyperreal/MacLaurin.thy ~> src/HOL/
    src/HOL/Hyperreal/NthRoot.thy ~> src/HOL/
    src/HOL/Hyperreal/Series.thy ~> src/HOL/
    src/HOL/Hyperreal/SEQ.thy ~> src/HOL/
    src/HOL/Hyperreal/Taylor.thy ~> src/HOL/
    src/HOL/Hyperreal/Transcendental.thy ~> src/HOL/
    src/HOL/Real/Float ~> src/HOL/Library/
    src/HOL/Real/HahnBanach ~> src/HOL/HahnBanach
    src/HOL/Real/RealVector.thy ~> src/HOL/
    src/HOL/arith_data.ML ~> src/HOL/Tools
    src/HOL/hologic.ML ~> src/HOL/Tools
    src/HOL/simpdata.ML ~> src/HOL/Tools
    src/HOL/int_arith1.ML ~> src/HOL/Tools/int_arith.ML
    src/HOL/int_factor_simprocs.ML ~> src/HOL/Tools
    src/HOL/nat_simprocs.ML ~> src/HOL/Tools
    src/HOL/Real/float_arith.ML ~> src/HOL/Tools
    src/HOL/Real/float_syntax.ML ~> src/HOL/Tools
    src/HOL/Real/rat_arith.ML ~> src/HOL/Tools
    src/HOL/Real/real_arith.ML ~> src/HOL/Tools
    src/HOL/Library/Array.thy ~> src/HOL/Imperative_HOL
    src/HOL/Library/Heap_Monad.thy ~> src/HOL/Imperative_HOL
    src/HOL/Library/Heap.thy ~> src/HOL/Imperative_HOL
    src/HOL/Library/Imperative_HOL.thy ~> src/HOL/Imperative_HOL
    src/HOL/Library/Ref.thy ~> src/HOL/Imperative_HOL
    src/HOL/Library/Relational.thy ~> src/HOL/Imperative_HOL

* If methods "eval" and "evaluation" encounter a structured proof
state with !!/==>, only the conclusion is evaluated to True (if
possible), avoiding strange error messages.

* Method "sizechange" automates termination proofs using (a
modification of) the size-change principle. Requires a SAT solver.
See src/HOL/ex/Termination.thy for examples.

* Simplifier: the simproc for let expressions now unfolds if the
bound variable occurs at most once in the let expression body.
INCOMPATIBILITY.

* Method "arith": Linear arithmetic now ignores all inequalities when
fast_arith_neq_limit is exceeded, instead of giving up entirely.

* New attribute "arith" for facts that should always be used
automatically by arithmetic. It is intended to be used locally in
proofs, e.g.

  assumes [arith]: "x > 0"

Global usage is discouraged because of possible performance impact.

* New classes "top" and "bot" with corresponding operations "top" and
"bot" in theory Orderings; instantiation of class "complete_lattice"
requires instantiation of classes "top" and "bot". INCOMPATIBILITY.

* Changed definition lemma "less_fun_def" in order to provide an
instance for preorders on functions; use lemma "less_le" instead.
INCOMPATIBILITY.

* Theory Orderings: class "wellorder" moved here, with explicit
induction rule "less_induct" as assumption. For instantiation of
"wellorder" by means of predicate "wf", use rule wf_wellorderI.
INCOMPATIBILITY.

* Theory Orderings: added class "preorder" as superclass of "order".
INCOMPATIBILITY: Instantiation proofs for order, linorder etc.
slightly changed. Some theorems named order_class.* are now named
preorder_class.*.

* Theory Relation: renamed "refl" to "refl_on", "reflexive" to
"refl", "diag" to "Id_on".
* Theory Finite_Set: added a new fold combinator of type

  ('a => 'b => 'b) => 'b => 'a set => 'b

Occasionally this is more convenient than the old fold combinator,
which is now defined in terms of the new one and renamed to
fold_image.

* Theories Ring_and_Field and OrderedGroup: The lemmas "group_simps"
and "ring_simps" have been replaced by "algebra_simps" (which can be
extended with further lemmas!). At the moment both still exist but
the former will disappear at some point.

* Theory Power: Lemma power_Suc is now declared as a simp rule in
class recpower. Type-specific simp rules for various recpower types
have been removed. INCOMPATIBILITY, rename old lemmas as follows:

  rat_power_0 -> power_0
  rat_power_Suc -> power_Suc
  realpow_0 -> power_0
  realpow_Suc -> power_Suc
  complexpow_0 -> power_0
  complexpow_Suc -> power_Suc
  power_poly_0 -> power_0
  power_poly_Suc -> power_Suc

* Theories Ring_and_Field and Divides: The definition of "op dvd" has
been moved to the separate class dvd in Ring_and_Field; a couple of
lemmas on dvd have been generalized to class comm_semiring_1.
Likewise a number of lemmas from Divides have been generalized from
nat to class semiring_div. INCOMPATIBILITY. This involves the
following theorem renames resulting from duplicate elimination:

  dvd_def_mod ~> dvd_eq_mod_eq_0
  zero_dvd_iff ~> dvd_0_left_iff
  dvd_0 ~> dvd_0_right
  DIVISION_BY_ZERO_DIV ~> div_by_0
  DIVISION_BY_ZERO_MOD ~> mod_by_0
  mult_div ~> div_mult_self2_is_id
  mult_mod ~> mod_mult_self2_is_0

* Theory IntDiv: removed many lemmas that are instances of class-based
generalizations (from Divides and Ring_and_Field).
INCOMPATIBILITY, rename old lemmas as follows:

  dvd_diff -> nat_dvd_diff
  dvd_zminus_iff -> dvd_minus_iff
  mod_add1_eq -> mod_add_eq
  mod_mult1_eq -> mod_mult_right_eq
  mod_mult1_eq' -> mod_mult_left_eq
  mod_mult_distrib_mod -> mod_mult_eq
  nat_mod_add_left_eq -> mod_add_left_eq
  nat_mod_add_right_eq -> mod_add_right_eq
  nat_mod_div_trivial -> mod_div_trivial
  nat_mod_mod_trivial -> mod_mod_trivial
  zdiv_zadd_self1 -> div_add_self1
  zdiv_zadd_self2 -> div_add_self2
  zdiv_zmult_self1 -> div_mult_self2_is_id
  zdiv_zmult_self2 -> div_mult_self1_is_id
  zdvd_triv_left -> dvd_triv_left
  zdvd_triv_right -> dvd_triv_right
  zdvd_zmult_cancel_disj -> dvd_mult_cancel_left
  zmod_eq0_zdvd_iff -> dvd_eq_mod_eq_0[symmetric]
  zmod_zadd_left_eq -> mod_add_left_eq
  zmod_zadd_right_eq -> mod_add_right_eq
  zmod_zadd_self1 -> mod_add_self1
  zmod_zadd_self2 -> mod_add_self2
  zmod_zadd1_eq -> mod_add_eq
  zmod_zdiff1_eq -> mod_diff_eq
  zmod_zdvd_zmod -> mod_mod_cancel
  zmod_zmod_cancel -> mod_mod_cancel
  zmod_zmult_self1 -> mod_mult_self2_is_0
  zmod_zmult_self2 -> mod_mult_self1_is_0
  zmod_1 -> mod_by_1
  zdiv_1 -> div_by_1
  zdvd_abs1 -> abs_dvd_iff
  zdvd_abs2 -> dvd_abs_iff
  zdvd_refl -> dvd_refl
  zdvd_trans -> dvd_trans
  zdvd_zadd -> dvd_add
  zdvd_zdiff -> dvd_diff
  zdvd_zminus_iff -> dvd_minus_iff
  zdvd_zminus2_iff -> minus_dvd_iff
  zdvd_zmultD -> dvd_mult_right
  zdvd_zmultD2 -> dvd_mult_left
  zdvd_zmult_mono -> mult_dvd_mono
  zdvd_0_right -> dvd_0_right
  zdvd_0_left -> dvd_0_left_iff
  zdvd_1_left -> one_dvd
  zminus_dvd_iff -> minus_dvd_iff

* Theory Rational: 'Fract k 0' now equals '0'. INCOMPATIBILITY.

* The real numbers offer decimal input syntax: 12.34 is translated
into 1234/10^2. This translation is not reversed upon output.

* Theory Library/Polynomial defines an abstract type 'a poly of
univariate polynomials with coefficients of type 'a. In addition to
the standard ring operations, it also supports div and mod. Code
generation is also supported, using list-style constructors.
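The "algebra_simps" collection mentioned above can be used as a
drop-in replacement for the former group_simps/ring_simps; a hedged
sketch (the lemma itself is a made-up example):

    lemma fixes a b c :: int
      shows "a * (b - c) = a * b - a * c"
      by (simp add: algebra_simps)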
* Theory Library/Inner_Product defines a class real_inner for real
inner product spaces, with an overloaded operation inner :: 'a => 'a
=> real. Class real_inner is a subclass of real_normed_vector from
theory RealVector.

* Theory Library/Product_Vector provides instances for the product
type 'a * 'b of several classes from RealVector and Inner_Product.
Definitions of addition, subtraction, scalar multiplication, norms,
and inner products are included.

* Theory Library/Bit defines the field "bit" of integers modulo 2. In
addition to the field operations, numerals and case syntax are also
supported.

* Theory Library/Diagonalize provides a constructive version of
Cantor's first diagonalization argument.

* Theory Library/GCD: Curried operations gcd, lcm (for nat) and zgcd,
zlcm (for int); brought together from various gcd/lcm developments in
the HOL distribution. Constants zgcd and zlcm replace former igcd and
ilcm; corresponding theorems renamed accordingly. INCOMPATIBILITY,
may recover tupled syntax as follows:

  hide (open) const gcd
  abbreviation gcd where "gcd == (%(a, b). GCD.gcd a b)"
  notation (output) GCD.gcd ("gcd '(_, _')")

The same works for lcm, zgcd, zlcm.

* Theory Library/Nat_Infinity: added addition, numeral syntax and more
instantiations for algebraic structures. Removed some duplicate
theorems. Changes in simp rules. INCOMPATIBILITY.

* ML antiquotation @{code} takes a constant as argument, generates
corresponding code in the background, and inserts the name of the
resulting ML value/function/datatype constructor binding in place.
All occurrences of @{code} within a single ML block are generated
simultaneously. This provides a generic and safe interface for
instrumentalizing code generation. See
src/HOL/Decision_Procs/Ferrack.thy for a more ambitious application.
In the future you ought to refrain from ad-hoc compiling generated
SML code on the ML toplevel.
Note that (for technical reasons) @{code} cannot refer to constants
for which user-defined serializations are set. Refer to the
corresponding ML counterpart directly in those cases.

* Command 'rep_datatype': instead of theorem names the command now
takes a list of terms denoting the constructors of the type to be
represented as a datatype. The characteristic theorems have to be
proven. INCOMPATIBILITY. Also observe that the following theorems
have disappeared in favour of existing ones:

  unit_induct ~> unit.induct
  prod_induct ~> prod.induct
  sum_induct ~> sum.induct
  Suc_Suc_eq ~> nat.inject
  Suc_not_Zero Zero_not_Suc ~> nat.distinct


*** HOL-Algebra ***

* New locales for orders and lattices where the equivalence relation
is not restricted to equality. INCOMPATIBILITY: all order and lattice
locales use a record structure with field eq for the equivalence.

* New theory of factorial domains.

* Units_l_inv and Units_r_inv are now simp rules by default.
INCOMPATIBILITY. Simplifier proofs that require deletion of l_inv
and/or r_inv will now also require deletion of these lemmas.

* Renamed the following theorems, INCOMPATIBILITY:

  UpperD ~> Upper_memD
  LowerD ~> Lower_memD
  least_carrier ~> least_closed
  greatest_carrier ~> greatest_closed
  greatest_Lower_above ~> greatest_Lower_below
  one_zero ~> carrier_one_zero
  one_not_zero ~> carrier_one_not_zero  (collision with assumption)


*** HOL-Nominal ***

* Nominal datatypes can now contain type variables.

* Commands 'nominal_inductive' and 'equivariance' work with local
theory targets.

* Nominal primrec can now work with local theory targets and its
specification syntax now conforms to the general format as seen in
'inductive' etc.

* Method "perm_simp" honours the standard simplifier attributes
(no_asm), (no_asm_use) etc.

* The new predicate #* is defined like freshness, except that the
left-hand side can be a set or list of atoms.

* Experimental command 'nominal_inductive2' derives strong induction
principles for inductive definitions.
In contrast to 'nominal_inductive', which can only deal with a fixed
number of binders, it can deal with arbitrary expressions standing
for sets of atoms to be avoided. The only inductive definition we
have at the moment that needs this generalisation is the typing rule
for Lets in the algorithm W:

  Gamma |- t1 : T1   (x,close Gamma T1)::Gamma |- t2 : T2   x#Gamma
  -----------------------------------------------------------------
                  Gamma |- Let x be t1 in t2 : T2

In this rule one wants to avoid all the binders that are introduced
by "close Gamma T1". We are looking for other examples where this
feature might be useful. Please let us know.


*** HOLCF ***

* Reimplemented the simplification procedure for proving continuity
subgoals. The new simproc is extensible; users can declare additional
continuity introduction rules with the attribute [cont2cont].

* The continuity simproc now uses a different introduction rule for
solving continuity subgoals on terms with lambda abstractions. In
some rare cases the new simproc may fail to solve subgoals that the
old one could solve, and "simp add: cont2cont_LAM" may be necessary.
Potential INCOMPATIBILITY.

* Command 'fixrec': specification syntax now conforms to the general
format as seen in 'inductive' etc. See src/HOLCF/ex/Fixrec_ex.thy for
examples. INCOMPATIBILITY.


*** ZF ***

* Proof of Zorn's Lemma for partial orders.


*** ML ***

* Multithreading for Poly/ML 5.1/5.2 is no longer supported, only
Poly/ML 5.2.1 or later. Important note: the TimeLimit facility
depends on multithreading, so timeouts will not work before Poly/ML
5.2.1!

* High-level support for concurrent ML programming, see
src/Pure/Concurrent. The data-oriented model of "future values" is
particularly convenient to organize independent functional
computations. The concept of "synchronized variables" provides a
higher-order interface for components with shared state, avoiding the
delicate details of mutexes and condition variables. (Requires
Poly/ML 5.2.1 or later.)
* ML bindings produced via Isar commands are stored within the Isar
context (theory or proof). Consequently, commands like 'use' and 'ML'
become thread-safe and work with undo as expected (concerning
top-level bindings, not side-effects on global references).
INCOMPATIBILITY, need to provide a proper Isar context when invoking
the compiler at runtime; really global bindings need to be given
outside a theory. (Requires Poly/ML 5.2 or later.)

* Command 'ML_prf' is analogous to 'ML' but works within a proof
context. Top-level ML bindings are stored within the proof context in
a purely sequential fashion, disregarding the nested proof structure.
ML bindings introduced by 'ML_prf' are discarded at the end of the
proof. (Requires Poly/ML 5.2 or later.)

* Simplified ML attribute and method setup, cf. functions Attrib.setup
and Method.setup, as well as Isar commands 'attribute_setup' and
'method_setup'. INCOMPATIBILITY for 'method_setup', need to simplify
existing code accordingly, or use plain 'setup' together with old
Method.add_method.

* Simplified ML oracle interface Thm.add_oracle promotes 'a -> cterm
to 'a -> thm, while results are always tagged with an authentic oracle
name. The Isar command 'oracle' is now polymorphic, no argument type
is specified. INCOMPATIBILITY, need to simplify existing oracle code
accordingly. Note that extra performance may be gained by producing
the cterm carefully, avoiding slow Thm.cterm_of.

* Simplified interface for defining document antiquotations via
ThyOutput.antiquotation, ThyOutput.output, and optionally
ThyOutput.maybe_pretty_source. INCOMPATIBILITY, need to simplify user
antiquotations accordingly, see src/Pure/Thy/thy_output.ML for common
examples.

* More systematic treatment of long names, abstract name bindings, and
name space operations. Basic operations on qualified names have been
moved from structure NameSpace to Long_Name, e.g. Long_Name.base_name,
Long_Name.append.
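A hedged sketch of the simplified 'method_setup' mentioned above (the
method name and tactic are made up; assumes the Isabelle2009-era
interface where the parser yields a function from the proof context
to a method):

    method_setup my_simp = {*
      Scan.succeed (fn ctxt =>
        (* hypothetical: apply the context's simpset to one subgoal *)
        SIMPLE_METHOD' (simp_tac (local_simpset_of ctxt)))
    *} "rewrite a subgoal by simplification"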
Old type bstring has been mostly replaced by abstract type binding
(see structure Binding), which supports precise qualification by
packages and local theory targets, as well as proper tracking of
source positions. INCOMPATIBILITY, need to wrap old bstring values
into Binding.name, or better pass through abstract bindings
everywhere. See further src/Pure/General/long_name.ML,
src/Pure/General/binding.ML and src/Pure/General/name_space.ML.

* Result facts (from PureThy.note_thms, ProofContext.note_thms,
LocalTheory.note etc.) now refer to the *full* internal name, not the
bstring as before. INCOMPATIBILITY, not detected by ML
type-checking!

* Disposed old type and term read functions (Sign.read_def_typ,
Sign.read_typ, Sign.read_def_terms, Sign.read_term,
Thm.read_def_cterms, Thm.read_cterm etc.). INCOMPATIBILITY, should
use regular Syntax.read_typ, Syntax.read_term,
Syntax.read_typ_global, Syntax.read_term_global etc.; see also
OldGoals.read_term as last resort for legacy applications.

* Disposed old declarations, tactics, tactic combinators that refer
to the simpset or claset of an implicit theory (such as Addsimps,
Simp_tac, SIMPSET). INCOMPATIBILITY, should use @{simpset} etc. in
embedded ML text, or local_simpset_of with a proper context passed as
explicit runtime argument.

* Rules and tactics that read instantiations (read_instantiate,
res_inst_tac, thin_tac, subgoal_tac etc.) now demand a proper proof
context, which is required for parsing and type-checking. Moreover,
the variables are specified as plain indexnames, not string encodings
thereof. INCOMPATIBILITY.

* Generic Toplevel.add_hook interface allows to analyze the result of
transactions. E.g. see src/Pure/ProofGeneral/proof_general_pgip.ML
for theorem dependency output of transactions resulting in a new
theory state.

* ML antiquotations: block-structured compilation context indicated
by \ ...
\; additional antiquotation forms:

  @{binding name}                         - basic name binding
  @{let ?pat = term}                      - term abbreviation (HO matching)
  @{note name = fact}                     - fact abbreviation
  @{thm fact}                             - singleton fact (with attributes)
  @{thms fact}                            - general fact (with attributes)
  @{lemma prop by method}                 - singleton goal
  @{lemma prop by meth1 meth2}            - singleton goal
  @{lemma prop1 ... propN by method}      - general goal
  @{lemma prop1 ... propN by meth1 meth2} - general goal
  @{lemma (open) ...}                     - open derivation


*** System ***

* The Isabelle "emacs" tool provides a specific interface to invoke
Proof General / Emacs, with more explicit failure if that is not
installed (the old isabelle-interface script silently falls back on
isabelle-process). The PROOFGENERAL_HOME setting determines the
installation location of the Proof General distribution.

* Isabelle/lib/classes/Pure.jar provides basic support to integrate
the Isabelle process into a JVM/Scala application. See
Isabelle/lib/jedit/plugin for a minimal example. (The obsolete Java
process wrapper has been discontinued.)

* Added homegrown Isabelle font with unicode layout, see lib/fonts.

* Various status messages (with exact source position information)
are emitted, if proper markup print mode is enabled. This allows
user-interface components to provide detailed feedback on internal
prover operations.


New in Isabelle2008 (June 2008)
-------------------------------

*** General ***

* The Isabelle/Isar Reference Manual (isar-ref) has been reorganized
and updated, with formally checked references as hyperlinks.

* Theory loader: use_thy (and similar operations) no longer set the
implicit ML context, which was occasionally hard to predict and in
conflict with concurrency. INCOMPATIBILITY, use ML within Isar which
provides a proper context already.

* Theory loader: old-style ML proof scripts being *attached* to a thy
file are no longer supported. INCOMPATIBILITY, regular 'uses' and
'use' within a theory file will do the job.
* Name space merge now observes canonical order, i.e. the second
space is inserted into the first one, while existing entries in the
first space take precedence. INCOMPATIBILITY in rare situations, may
try to swap theory imports.

* Syntax: symbol \ is now considered a letter. Potential
INCOMPATIBILITY in identifier syntax etc.

* Outer syntax: string tokens no longer admit escaped white space,
which was an accidental (undocumented) feature. INCOMPATIBILITY, use
white space without escapes.

* Outer syntax: string tokens may contain arbitrary character codes
specified via 3 decimal digits (as in SML). E.g. "foo\095bar" for
"foo_bar".


*** Pure ***

* Context-dependent token translations. Default setup reverts
locally fixed variables, and adds hilite markup for undeclared frees.

* Unused theorems can be found using the new command 'unused_thms'.
There are three ways of invoking it:

(1) unused_thms
    Only finds unused theorems in the current theory.

(2) unused_thms thy_1 ... thy_n -
    Finds unused theorems in the current theory and all of its
    ancestors, excluding the theories thy_1 ... thy_n and all of
    their ancestors.

(3) unused_thms thy_1 ... thy_n - thy'_1 ... thy'_m
    Finds unused theorems in the theories thy'_1 ... thy'_m and all
    of their ancestors, excluding the theories thy_1 ... thy_n and
    all of their ancestors.

In order to increase the readability of the list produced by
unused_thms, theorems that have been created by a particular instance
of a theory command such as 'inductive' or 'function' are considered
to belong to the same "group", meaning that if at least one theorem
in this group is used, the other theorems in the same group are no
longer reported as unused. Moreover, if all theorems in the group are
unused, only one theorem in the group is displayed.

Note that proof objects have to be switched on in order for
unused_thms to work properly (i.e. !proofs must be >= 1, which is
usually the case when using Proof General with the default settings).
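For instance, with the third invocation form above (the theory name
My_Theory is purely illustrative):

```isabelle
unused_thms Main - My_Theory
```

This reports theorems of My_Theory and its ancestors that are never
used, ignoring everything already present in Main and its ancestors.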
* Authentic naming of facts disallows ad-hoc overwriting of previous
theorems within the same name space. INCOMPATIBILITY, need to remove
duplicate fact bindings, or even accidental fact duplications. Note
that tools may maintain dynamically scoped facts systematically,
using PureThy.add_thms_dynamic.

* Command 'hide' now allows to hide from "fact" name space as well.

* Eliminated destructive theorem database, simpset, claset, and
clasimpset. Potential INCOMPATIBILITY, really need to observe linear
update of theories within ML code.

* Eliminated theories ProtoPure and CPure, leaving just one Pure
theory. INCOMPATIBILITY, object-logics depending on former Pure
require additional setup PureThy.old_appl_syntax_setup;
object-logics depending on former CPure need to refer to Pure.

* Commands 'use' and 'ML' are now purely functional, operating on
theory/local_theory. Removed former 'ML_setup' (on theory), use 'ML'
instead. Added 'ML_val' as mere diagnostic replacement for 'ML'.
INCOMPATIBILITY.

* Command 'setup': discontinued implicit version with ML reference.

* Instantiation target allows for simultaneous specification of class
instance operations together with an instantiation proof.
Type-checking phase allows to refer to class operations uniformly.
See src/HOL/Complex/Complex.thy for an Isar example and
src/HOL/Library/Eval.thy for an ML example.

* Indexing of literal facts: be more serious about including only
facts from the visible specification/proof context, but not the
background context (locale etc.). Affects `prop` notation and method
"fact". INCOMPATIBILITY: need to name facts explicitly in rare
situations.

* Methods "cases", "induct", "coinduct": removed
obsolete/undocumented "(open)" option, which used to expose internal
bound variables to the proof text.

* Isar statements: removed obsolete case "rule_context".
INCOMPATIBILITY, better use explicit fixes/assumes.
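A minimal sketch of the instantiation target mentioned above (the
class "my_class" and operation "my_op" are hypothetical):

```isabelle
instantiation nat :: my_class
begin

definition my_op_nat_def: "my_op (n::nat) = n"

instance proof
qed  (* discharge the class axioms of my_class here *)

end
```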
* Locale proofs: default proof step now includes 'unfold_locales';
hence 'proof' without argument may be used to unfold locale
predicates.


*** Document preparation ***

* Simplified pdfsetup.sty: color/hyperref is used unconditionally for
both pdf and dvi (hyperlinks usually work in xdvi as well); removed
obsolete thumbpdf setup (contemporary PDF viewers do this on the
spot); renamed link color from "darkblue" to "linkcolor" (default
value unchanged, can be redefined via \definecolor); no longer sets
"a4paper" option (unnecessary or even intrusive).

* Antiquotation @{lemma A method} proves proposition A by the given
method (either a method name or a method name plus (optional) method
arguments in parentheses) and prints A just like @{prop A}.


*** HOL ***

* New primrec package. Specification syntax conforms in style to
definition/function/.... No separate induction rule is provided. The
"primrec" command distinguishes old-style and new-style
specifications by syntax. The former primrec package is now named
OldPrimrecPackage. When adjusting theories, beware: constants
stemming from new-style primrec specifications have authentic syntax.

* Metis prover is now an order of magnitude faster, and also works
with multithreading.

* Metis: the maximum number of clauses that can be produced from a
theorem is now given by the attribute max_clauses. Theorems that
exceed this number are ignored, with a warning printed.

* Sledgehammer no longer produces structured proofs by default. To
enable, declare [[sledgehammer_full = true]]. Attributes
reconstruction_modulus, reconstruction_sorts renamed
sledgehammer_modulus, sledgehammer_sorts. INCOMPATIBILITY.

* Method "induct_scheme" derives user-specified induction rules from
well-founded induction and completeness of patterns. This factors
out some operations that are done internally by the function package
and makes them available separately. See
src/HOL/ex/Induction_Scheme.thy for examples.
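The new-style 'primrec' specification syntax mentioned above follows
the definition/function format, e.g. a standard list-append sketch:

```isabelle
primrec app :: "'a list => 'a list => 'a list" where
  "app [] ys = ys"
| "app (x # xs) ys = x # app xs ys"
```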
* More flexible generation of measure functions for termination
proofs: Measure functions can be declared by proving a rule of the
form "is_measure f" and giving it the [measure_function] attribute.
The "is_measure" predicate is logically meaningless (always true),
and just guides the heuristic. To find suitable measure functions,
the termination prover sets up the goal "is_measure ?f" of the
appropriate type and generates all solutions by Prolog-style backward
proof using the declared rules.

This setup also deals with rules like

  "is_measure f ==> is_measure (list_size f)"

which accommodates nested datatypes that recurse through lists.
Similar rules are predeclared for products and option types.

* Turned the type of sets "'a set" into an abbreviation for
"'a => bool".

INCOMPATIBILITIES:

- Definitions of overloaded constants on sets have to be replaced by
  definitions on => and bool.

- Some definitions of overloaded operators on sets can now be proved
  using the definitions of the operators on => and bool. Therefore,
  the following theorems have been renamed:

    subset_def   -> subset_eq
    psubset_def  -> psubset_eq
    set_diff_def -> set_diff_eq
    Compl_def    -> Compl_eq
    Sup_set_def  -> Sup_set_eq
    Inf_set_def  -> Inf_set_eq
    sup_set_def  -> sup_set_eq
    inf_set_def  -> inf_set_eq

- Due to the incompleteness of the HO unification algorithm, some
  rules such as subst may require manual instantiation, if one of the
  unknowns in the rule is a set.

- Higher order unification and forward proofs: The proof pattern

    have "P (S::'a set)" <...>
    then have "EX S. P S" ..

  no longer works (due to the incompleteness of the HO unification
  algorithm) and must be replaced by the pattern

    have "EX S. P S"
    proof
      show "P S" <...>
    qed

- Calculational reasoning with subst (or similar rules): The proof
  pattern

    have "P (S::'a set)" <...>
    also have "S = T" <...>
    finally have "P T" .
  no longer works (for similar reasons as the previous example) and
  must be replaced by something like

    have "P (S::'a set)" <...>
    moreover have "S = T" <...>
    ultimately have "P T" by simp

- Tactics or packages written in ML code: Code performing pattern
  matching on types via

    Type ("set", [T]) => ...

  must be rewritten. Moreover, functions like strip_type or
  binder_types no longer return the right value when applied to a
  type of the form

    T1 => ... => Tn => U => bool

  rather than

    T1 => ... => Tn => U set

* Merged theories Wellfounded_Recursion, Accessible_Part and
Wellfounded_Relations to theory Wellfounded.

* Explicit class "eq" for executable equality. INCOMPATIBILITY.

* Class finite no longer treats UNIV as class parameter. Use class
enum from theory Library/Enum instead to achieve a similar effect.
INCOMPATIBILITY.

* Theory List: rule list_induct2 now has explicitly named cases "Nil"
and "Cons". INCOMPATIBILITY.

* HOL (and FOL): renamed variables in rules imp_elim and swap.
Potential INCOMPATIBILITY.

* Theory Product_Type: duplicated lemmas split_Pair_apply and
injective_fst_snd removed, use split_eta and prod_eqI instead.
Renamed upd_fst to apfst and upd_snd to apsnd. INCOMPATIBILITY.

* Theory Nat: removed redundant lemmas that merely duplicate lemmas
of the same name in theory Orderings:

  less_trans
  less_linear
  le_imp_less_or_eq
  le_less_trans
  less_le_trans
  less_not_sym
  less_asym

Renamed less_imp_le to less_imp_le_nat, and less_irrefl to
less_irrefl_nat. Potential INCOMPATIBILITY due to more general types
and different variable names.

* Library/Option_ord.thy: Canonical order on option type.

* Library/RBT.thy: Red-black trees, an efficient implementation of
finite maps.

* Library/Countable.thy: Type class for countable types.

* Theory Int: The representation of numerals has changed. The infix
operator BIT and the bit datatype with constructors B0 and B1 have
disappeared.
INCOMPATIBILITY, use "Int.Bit0 x" and "Int.Bit1 y" in place of "x BIT
bit.B0" and "y BIT bit.B1", respectively. Theorems involving BIT, B0,
or B1 have been renamed with "Bit0" or "Bit1" accordingly.

* Theory Nat: the definitions of <= and < on natural numbers no
longer depend on well-founded relations. INCOMPATIBILITY.
Definitions le_def and less_def have disappeared. Consider lemmas
not_less [symmetric, where ?'a = nat] and less_eq [symmetric]
instead.

* Theory Finite_Set: locales ACf, ACe, ACIf, ACIfSL and ACIfSLlin
(whose purpose mainly is for various fold_set functionals) have been
abandoned in favor of the existing algebraic classes
ab_semigroup_mult, comm_monoid_mult, ab_semigroup_idem_mult,
lower_semilattice (resp. upper_semilattice) and linorder.
INCOMPATIBILITY.

* Theory Transitive_Closure: induct and cases rules now declare
proper case_names ("base" and "step"). INCOMPATIBILITY.

* Theorem Inductive.lfp_ordinal_induct generalized to complete
lattices. The former set-specific version is available as
Inductive.lfp_ordinal_induct_set.

* Renamed theorems "power.simps" to "power_int.simps".
INCOMPATIBILITY.

* Class semiring_div provides basic abstract properties of semirings
with division and modulo operations. Subsumes former class dvd_mod.

* Merged theories IntDef, Numeral and IntArith into unified theory
Int. INCOMPATIBILITY.

* Theory Library/Code_Index: type "index" now represents natural
numbers rather than integers. INCOMPATIBILITY.

* New class "uminus" with operation "uminus" (split off from class
"minus", which now only has the binary operation "minus").
INCOMPATIBILITY.

* Constants "card", "internal_split", "option_map" now with authentic
syntax. INCOMPATIBILITY.
* Definitions subset_def, psubset_def, set_diff_def, Compl_def,
le_bool_def, less_bool_def, le_fun_def, less_fun_def, inf_bool_def,
sup_bool_def, Inf_bool_def, Sup_bool_def, inf_fun_def, sup_fun_def,
Inf_fun_def, Sup_fun_def, inf_set_def, sup_set_def, Inf_set_def,
Sup_set_def, le_def, less_def, option_map_def now with object
equality. INCOMPATIBILITY.

* Records. Removed K_record, and replaced it by pure lambda term
%x. c. The simplifier setup is now more robust against eta
expansion. INCOMPATIBILITY: in cases explicitly referring to
K_record.

* Library/Multiset: {#a, b, c#} abbreviates {#a#} + {#b#} + {#c#}.

* Library/ListVector: new theory of arithmetic vector operations.

* Library/Order_Relation: new theory of various orderings as sets of
pairs. Defines preorders, partial orders, linear orders and
well-orders on sets and on types.


*** ZF ***

* Renamed some theories to allow loading both ZF and HOL in the same
session:

  Datatype  -> Datatype_ZF
  Inductive -> Inductive_ZF
  Int       -> Int_ZF
  IntDiv    -> IntDiv_ZF
  Nat       -> Nat_ZF
  List      -> List_ZF
  Main      -> Main_ZF

INCOMPATIBILITY: ZF theories that import individual theories below
Main might need to be adapted. Regular theory Main is still
available, as trivial extension of Main_ZF.


*** ML ***

* ML within Isar: antiquotation @{const name} or @{const
name(typargs)} produces statically-checked Const term.

* Functor NamedThmsFun: data is available to the user as dynamic fact
(of the same name). Removed obsolete print command.

* Removed obsolete "use_legacy_bindings" function.

* The ``print mode'' is now a thread-local value derived from a
global template (the former print_mode reference), thus access
becomes non-critical. The global print_mode reference is for session
management only; user-code should use print_mode_value,
print_mode_active, PrintMode.setmp etc. INCOMPATIBILITY.

* Functions system/system_out provide a robust way to invoke external
shell commands, with propagation of interrupts (requires Poly/ML
5.2.1).
Do not use OS.Process.system etc. from the basis library!


*** System ***

* Default settings: PROOFGENERAL_OPTIONS no longer impose xemacs ---
in accordance with Proof General 3.7, which prefers GNU emacs.

* isatool tty runs Isabelle process with plain tty interaction;
optional line editor may be specified via ISABELLE_LINE_EDITOR
setting, the default settings attempt to locate "ledit" and "rlwrap".

* isatool browser now works with Cygwin as well, using general
"javapath" function defined in Isabelle process environment.

* YXML notation provides a simple and efficient alternative to
standard XML transfer syntax. See src/Pure/General/yxml.ML and
isatool yxml as described in the Isabelle system manual.

* JVM class isabelle.IsabelleProcess (located in Isabelle/lib/classes)
provides general wrapper for managing an Isabelle process in a robust
fashion, with ``cooked'' output from stdout/stderr.

* Rudimentary Isabelle plugin for jEdit (see Isabelle/lib/jedit),
based on Isabelle/JVM process wrapper (see Isabelle/lib/classes).

* Removed obsolete THIS_IS_ISABELLE_BUILD feature. NB: the
documented way of changing the user's settings is via
ISABELLE_HOME_USER/etc/settings, which is a fully featured bash
script.

* Multithreading.max_threads := 0 refers to the number of actual CPU
cores of the underlying machine, which is a good starting point for
optimal performance tuning. The corresponding usedir option -M
allows "max" as an alias for "0". WARNING: does not work on certain
versions of Mac OS (with Poly/ML 5.1).

* isabelle-process: non-ML sessions are run with "nice", to reduce
the adverse effect of Isabelle flooding interactive front-ends
(notably ProofGeneral / XEmacs).


New in Isabelle2007 (November 2007)
-----------------------------------

*** General ***

* More uniform information about legacy features, notably a
warning/error of "Legacy feature: ...", depending on the state of the
tolerate_legacy_features flag (default true).
FUTURE INCOMPATIBILITY: legacy features will disappear eventually.

* Theory syntax: the header format ``theory A = B + C:'' has been
discontinued in favour of ``theory A imports B C begin''. Use
isatool fixheaders to convert existing theory files. INCOMPATIBILITY.

* Theory syntax: the old non-Isar theory file format has been
discontinued altogether. Note that ML proof scripts may still be
used with Isar theories; migration is usually quite simple with the
ML function use_legacy_bindings. INCOMPATIBILITY.

* Theory syntax: some popular names (e.g. 'class', 'declaration',
'fun', 'help', 'if') are now keywords. INCOMPATIBILITY, use double
quotes.

* Theory loader: be more serious about observing the static theory
header specifications (including optional directories), but not the
accidental file locations of previously successful loads. The strict
update policy of former update_thy is now already performed by
use_thy, so the former has been removed; use_thys updates several
theories simultaneously, just as 'imports' within a theory header
specification, but without merging the results. Potential
INCOMPATIBILITY: may need to refine theory headers and commands
ROOT.ML which depend on load order.

* Theory loader: optional support for content-based file
identification, instead of the traditional scheme of full physical
path plus date stamp; configured by the ISABELLE_FILE_IDENT setting
(cf. the system manual). The new scheme allows to work with
non-finished theories in persistent session images, such that source
files may be moved later on without requiring reloads.

* Theory loader: old-style ML proof scripts being *attached* to a thy
file (with the same base name as the theory) are considered a legacy
feature, which will disappear eventually. Even now, the theory
loader no longer maintains dependencies on such files.

* Syntax: the scope for resolving ambiguities via type-inference is
now limited to individual terms, instead of whole simultaneous
specifications as before.
This greatly reduces the complexity of the syntax module and improves
flexibility by separating parsing and type-checking.
INCOMPATIBILITY: additional type-constraints (explicit 'fixes' etc.)
are required in rare situations.

* Syntax: constants introduced by new-style packages ('definition',
'abbreviation' etc.) are passed through the syntax module in
``authentic mode''. This means that associated mixfix annotations
really stick to such constants, independently of potential name space
ambiguities introduced later on. INCOMPATIBILITY: constants in parse
trees are represented slightly differently, may need to adapt syntax
translations accordingly. Use CONST marker in 'translations' and
@{const_syntax} antiquotation in 'parse_translation' etc.

* Legacy goal package: reduced interface to the bare minimum required
to keep existing proof scripts running. Most other user-level
functions are now part of the OldGoals structure, which is *not* open
by default (consider isatool expandshort before open OldGoals).
Removed top_sg, prin, printyp, pprint_term/typ altogether, because
these tend to cause confusion about the actual goal (!) context being
used here, which is not necessarily the same as the_context().

* Command 'find_theorems': supports "*" wild-card in "name:"
criterion; "with_dups" option. Certain ProofGeneral versions might
support a specific search form (see ProofGeneral/CHANGES).

* The ``prems limit'' option (cf. ProofContext.prems_limit) is now -1
by default, which means that "prems" (and also "fixed variables") are
suppressed from proof state output. Note that the ProofGeneral
settings mechanism allows to change and save options persistently,
but older versions of Isabelle will fail to start up if a negative
prems limit is imposed.

* Local theory targets may be specified by non-nested blocks of
``context/locale/class ... begin'' followed by ``end''.
The body may contain definitions, theorems etc., including any
derived mechanism that has been implemented on top of these
primitives. This concept generalizes the existing ``theorem
(in ...)'' towards more versatility and scalability.

* Proof General interface: proper undo of final 'end' command;
discontinued Isabelle/classic mode (ML proof scripts).


*** Document preparation ***

* Added antiquotation @{theory name} which prints the given name,
after checking that it refers to a valid ancestor theory in the
current context.

* Added antiquotations @{ML_type text} and @{ML_struct text} which
check the given source text as ML type/structure, printing verbatim.

* Added antiquotation @{abbrev "c args"} which prints the
abbreviation "c args == rhs" given in the current context. (Any
number of arguments may be given on the LHS.)


*** Pure ***

* The 'class' package offers a combination of axclass and locale to
achieve Haskell-like type classes in Isabelle. Definitions and
theorems within a class context produce both relative results (with
implicit parameters according to the locale context), and polymorphic
constants with qualified polymorphism (according to the class
context). Within the body context of a 'class' target, a separate
syntax layer ("user space type system") takes care of converting
between global polymorphic consts and internal locale representation.
See src/HOL/ex/Classpackage.thy for examples (as well as main HOL).
"isatool doc classes" provides a tutorial.

* Generic code generator framework allows to generate executable code
for ML and Haskell (including Isabelle classes). A short usage
sketch:

  internal compilation:
    export_code in SML
  writing SML code to a file:
    export_code in SML
  writing OCaml code to a file:
    export_code in OCaml
  writing Haskell code to a bunch of files:
    export_code in Haskell
  evaluating closed propositions to True/False using code generation:
    method ``eval''

Reasonable default setup of framework in HOL.
Theorem attributes for selecting and transforming function equations
theorems:

  [code fun]:        select a theorem as function equation for a
                     specific constant
  [code fun del]:    deselect a theorem as function equation for a
                     specific constant
  [code inline]:     select an equation theorem for unfolding
                     (inlining) in place
  [code inline del]: deselect an equation theorem for unfolding
                     (inlining) in place

User-defined serializations (target in {SML, OCaml, Haskell}):

  code_const {(target) }+
  code_type {(target) }+
  code_instance {(target)}+
    where instance ::= ::
  code_class {(target) }+
    where class target syntax ::= {where { == }+}?

code_instance and code_class are only effective for target Haskell.

For example usage see src/HOL/ex/Codegenerator.thy and
src/HOL/ex/Codegenerator_Pretty.thy. A separate tutorial on code
generation from Isabelle/HOL theories is available via "isatool doc
codegen".

* Code generator: consts in 'consts_code' Isar commands are now
referred to by usual term syntax (including optional type
annotations).

* Command 'no_translations' removes translation rules from theory
syntax.

* Overloaded definitions are now actually checked for acyclic
dependencies. The overloading scheme is slightly more general than
that of Haskell98, although Isabelle does not demand an exact
correspondence to type class and instance declarations.
INCOMPATIBILITY, use ``defs (unchecked overloaded)'' to admit more
exotic versions of overloading -- at the discretion of the user!

Polymorphic constants are represented via type arguments, i.e. the
instantiation that matches an instance against the most general
declaration given in the signature. For example, with the
declaration c :: 'a => 'a => 'a, an instance c :: nat => nat => nat
is represented as c(nat). Overloading is essentially simultaneous
structural recursion over such type arguments. Incomplete
specification patterns impose global constraints on all occurrences,
e.g.
c('a * 'a) on the LHS means that more general c('a * 'b) will be
disallowed on any RHS. Command 'print_theory' outputs the normalized
system of recursive equations, see section "definitions".

* Configuration options are maintained within the theory or proof
context (with name and type bool/int/string), providing a very simple
interface to a poor-man's version of general context data. Tools may
declare options in ML (e.g. using Attrib.config_int) and then refer
to these values using Config.get etc. Users may change options via
an associated attribute of the same name. This form of context
declaration works particularly well with commands 'declare' or
'using', for example ``declare [[foo = 42]]''. Thus it has become
very easy to avoid global references, which would not observe Isar
toplevel undo/redo and fail to work with multithreading.

Various global ML references of Pure and HOL have been turned into
configuration options:

  Unify.search_bound           unify_search_bound
  Unify.trace_bound            unify_trace_bound
  Unify.trace_simp             unify_trace_simp
  Unify.trace_types            unify_trace_types
  Simplifier.simp_depth_limit  simp_depth_limit
  Blast.depth_limit            blast_depth_limit
  DatatypeProp.dtK             datatype_distinctness_limit
  fast_arith_neq_limit         fast_arith_neq_limit
  fast_arith_split_limit       fast_arith_split_limit

* Named collections of theorems may be easily installed as context
data using the functor NamedThmsFun (see also
src/Pure/Tools/named_thms.ML). The user may add or delete facts via
attributes; there is also a toplevel print command. This facility is
just a common case of general context data, which is the preferred
way for anything more complex than just a list of facts in canonical
order.

* Isar: command 'declaration' augments a local theory by generic
declaration functions written in ML. This enables arbitrary content
being added to the context, depending on a morphism that tells the
difference of the original declaration context wrt. the application
context encountered later on.
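Using one of the options from the table above, such a context
declaration looks like this (the value 50 is purely illustrative):

```isabelle
declare [[simp_depth_limit = 50]]
```

The same attribute syntax also works locally within a proof, e.g.
``using [[simp_depth_limit = 50]]''.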
* Isar: proper interfaces for simplification procedures. Command
'simproc_setup' declares named simprocs (with match patterns, and
body text in ML). Attribute "simproc" adds/deletes simprocs in the
current context. ML antiquotation @{simproc name} retrieves named
simprocs.

* Isar: an extra pair of brackets around attribute declarations
abbreviates a theorem reference involving an internal dummy fact,
which will be ignored later --- only the effect of the attribute on
the background context will persist. This form of in-place
declarations is particularly useful with commands like 'declare' and
'using', for example ``have A using [[simproc a]] by simp''.

* Isar: method "assumption" (and implicit closing of subproofs) now
takes simple non-atomic goal assumptions into account: after applying
an assumption as a rule the resulting subgoals are solved by atomic
assumption steps. This is particularly useful to finish 'obtain'
goals, such as "!!x. (!!x. P x ==> thesis) ==> P x ==> thesis",
without referring to the original premise "!!x. P x ==> thesis" in
the Isar proof context. POTENTIAL INCOMPATIBILITY: method
"assumption" is more permissive.

* Isar: implicit use of prems from the Isar proof context is
considered a legacy feature. Common applications like ``have A .''
may be replaced by ``have A by fact'' or ``note `A`''. In general,
referencing facts explicitly here improves readability and
maintainability of proof texts.

* Isar: improper proof element 'guess' is like 'obtain', but derives
the obtained context from the course of reasoning! For example:

  assume "EX x y. A x & B y"  -- "any previous fact"
  then guess x and y by clarify

This technique is potentially adventurous, depending on the facts and
proof tools being involved here.

* Isar: known facts from the proof context may be specified as
literal propositions, using ASCII back-quote syntax. This works
wherever named facts used to be allowed so far, in proof commands,
proof methods, attributes etc.
Literal facts are retrieved from the context according to unification
of type and term parameters. For example, provided that "A" and
"A ==> B" and "!!x. P x ==> Q x" are known theorems in the current
context, then these are valid literal facts: `A` and `A ==> B` and
`!!x. P x ==> Q x` as well as `P a ==> Q a` etc.

There is also a proof method "fact" which does the same composition
for explicit goal states, e.g. the following proof texts coincide
with certain special cases of literal facts:

  have "A" by fact                 ==  note `A`
  have "A ==> B" by fact           ==  note `A ==> B`
  have "!!x. P x ==> Q x" by fact  ==  note `!!x. P x ==> Q x`
  have "P a ==> Q a" by fact       ==  note `P a ==> Q a`

* Isar: ":" (colon) is no longer a symbolic identifier character in
outer syntax. Thus symbolic identifiers may be used without
additional white space in declarations like this: ``assume *: A''.

* Isar: 'print_facts' prints all local facts of the current context,
both named and unnamed ones.

* Isar: 'def' now admits simultaneous definitions, e.g.:

  def x == "t" and y == "u"

* Isar: added command 'unfolding', which is structurally similar to
'using', but affects both the goal state and facts by unfolding given
rewrite rules. Thus many occurrences of the 'unfold' method or
'unfolded' attribute may be replaced by first-class proof text.

* Isar: methods 'unfold' / 'fold', attributes 'unfolded' / 'folded',
and command 'unfolding' now all support object-level equalities
(potentially conditional). The underlying notion of rewrite rule is
analogous to the 'rule_format' attribute, but *not* that of the
Simplifier (which is usually more generous).

* Isar: the new attribute [rotated n] (default n = 1) rotates the
premises of a theorem by n. Useful in conjunction with drule.

* Isar: the goal restriction operator [N] (default N = 1) evaluates a
method expression within a sandbox consisting of the first N
sub-goals, which need to exist.
For example, ``simp_all [3]'' simplifies the first three sub-goals,
while (rule foo, simp_all)[] simplifies all new goals that emerge from
applying rule foo to the originally first one.

* Isar: schematic goals are no longer restricted to higher-order
patterns; e.g. ``lemma "?P(?x)" by (rule TrueI)'' now works as
expected.

* Isar: the conclusion of a long theorem statement is now either
'shows' (a simultaneous conjunction, as before), or 'obtains'
(essentially a disjunction of cases with local parameters and
assumptions). The latter allows to express general elimination rules
adequately; in this notation common elimination rules look like this:

  lemma exE:    -- "EX x. P x ==> (!!x. P x ==> thesis) ==> thesis"
    assumes "EX x. P x"
    obtains x where "P x"

  lemma conjE:  -- "A & B ==> (A ==> B ==> thesis) ==> thesis"
    assumes "A & B"
    obtains A and B

  lemma disjE:  -- "A | B ==> (A ==> thesis) ==> (B ==> thesis) ==> thesis"
    assumes "A | B"
    obtains A | B

The subsequent classical rules even refer to the formal "thesis"
explicitly:

  lemma classical:     -- "(~ thesis ==> thesis) ==> thesis"
    obtains "~ thesis"

  lemma Peirce's_Law:  -- "((thesis ==> something) ==> thesis) ==> thesis"
    obtains "thesis ==> something"

The actual proof of an 'obtains' statement is analogous to that of the
Isar proof element 'obtain', only that there may be several cases.
Optional case names may be specified in parentheses; these will be
available both in the present proof and as annotations in the
resulting rule, for later use with the 'cases' method (cf. attribute
case_names).

* Isar: the assumptions of a long theorem statement are available as
"assms" fact in the proof context. This is more appropriate than the
(historical) "prems", which refers to all assumptions of the current
context, including those from the target locale, proof body etc.

* Isar: 'print_statement' prints theorems from the current theory or
proof context in long statement form, according to the syntax of a
top-level lemma.
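As a minimal sketch of the "assms" fact (the propositions "A" and "B"
are made-up placeholders, not part of any library):

```isabelle
lemma
  assumes "A" and "B"
  shows "A & B"
  using assms by (rule conjI)
```

Here "assms" collects exactly the two stated assumptions, which are
then chained into the introduction rule.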
* Isar: 'obtain' takes an optional case name for the local context
introduction rule (default "that").

* Isar: removed obsolete 'concl is' patterns. INCOMPATIBILITY, use
explicit (is "_ ==> ?foo") in the rare cases where this still happens
to occur.

* Pure: syntax "CONST name" produces a fully internalized constant
according to the current context. This is particularly useful for
syntax translations that should refer to internal constant
representations independently of name spaces.

* Pure: syntax constant for foo (binder "FOO ") is called "foo_binder"
instead of "FOO ". This allows multiple binder declarations to
coexist in the same context. INCOMPATIBILITY.

* Isar/locales: 'notation' provides a robust interface to the 'syntax'
primitive that also works in a locale context (both for constants and
fixed variables). Type declaration and internal syntactic
representation of given constants are retrieved from the context.
Likewise, the 'no_notation' command allows to remove given syntax
annotations from the current context.

* Isar/locales: new derived specification elements 'axiomatization',
'definition', 'abbreviation', which support type-inference, admit
object-level specifications (equality, equivalence). See also the
isar-ref manual. Examples:

  axiomatization
    eq (infix "===" 50) where
    eq_refl: "x === x" and eq_subst: "x === y ==> P x ==> P y"

  definition "f x y = x + y + 1"
  definition g where "g x = f x x"

  abbreviation
    neq (infix "=!=" 50) where
    "x =!= y == ~ (x === y)"

These specifications may also be used in a locale context. Then the
constants being introduced depend on certain fixed parameters, and the
constant name is qualified by the locale base name. An internal
abbreviation takes care of convenient input and output, making the
parameters implicit and using the original short name. See also
src/HOL/ex/Abstract_NAT.thy for an example of deriving polymorphic
entities from a monomorphic theory.
Presently, abbreviations are only available 'in' a target locale, but
not inherited by general import expressions. Also note that
'abbreviation' may be used as a type-safe replacement for 'syntax' +
'translations' in common applications. The "no_abbrevs" print mode
prevents folding of abbreviations in term output.

Concrete syntax is attached to specified constants in internal form,
independently of name spaces. The parse tree representation is
slightly different -- use 'notation' instead of raw 'syntax', and
'translations' with explicit "CONST" markup to accommodate this.

* Pure/Isar: unified syntax for new-style specification mechanisms
(e.g. 'definition', 'abbreviation', or 'inductive' in HOL) admits full
type inference and dummy patterns ("_"). For example:

  definition "K x _ = x"

  inductive conj for A B
  where "A ==> B ==> conj A B"

* Pure: command 'print_abbrevs' prints all constant abbreviations of
the current context. Print mode "no_abbrevs" prevents inversion of
abbreviations on output.

* Isar/locales: improved parameter handling:
  - use of locales "var" and "struct" no longer necessary;
  - parameter renamings are no longer required to be injective.
    For example, this allows to define endomorphisms as locale
    endom = homom mult mult h.

* Isar/locales: changed the way locales with predicates are defined.
Instead of accumulating the specification, the imported expression is
now an interpretation. INCOMPATIBILITY: different normal form of
locale expressions. In particular, in interpretations of locales with
predicates, goals representing already interpreted fragments are not
removed automatically. Use methods `intro_locales' and
`unfold_locales'; see below.

* Isar/locales: new methods `intro_locales' and `unfold_locales'
provide backward reasoning on locale predicates. The methods are
aware of interpretations and discharge corresponding goals.
`intro_locales' is less aggressive than `unfold_locales' and does not
unfold predicates to assumptions.
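As a minimal sketch of `unfold_locales' (the locale "semi" and its
instance are made-up examples; "add_assoc" is the library fact for
associativity of addition):

```isabelle
locale semi =
  fixes prod :: "'a => 'a => 'a"  (infixl "**" 70)
  assumes assoc: "(x ** y) ** z = x ** (y ** z)"

interpretation semi ["op + :: nat => nat => nat"]
  by unfold_locales (simp add: add_assoc)
```

Here `unfold_locales' reduces the interpretation goal to the plain
associativity specification, which is then discharged by simp.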
* Isar/locales: the order in which locale fragments are accumulated
has changed. This enables to override declarations from fragments due
to interpretations -- for example, unwanted simp rules.

* Isar/locales: interpretation in theories and proof contexts has been
extended. One may now specify (and prove) equations, which are
unfolded in interpreted theorems. This is useful for replacing
defined concepts (constants depending on locale parameters) by
concepts already existing in the target context. Example:

  interpretation partial_order ["op <= :: [int, int] => bool"]
    where "partial_order.less (op <=) (x::int) y = (x < y)"

Typically, the constant `partial_order.less' is created by a
definition specification element in the context of locale
partial_order.

* Method "induct": improved internal context management to support
local fixes and defines on-the-fly. Thus explicit meta-level
connectives !! and ==> are rarely required anymore in inductive goals
(using object-logic connectives for this purpose has been long obsolete
anyway). Common proof patterns are explained in
src/HOL/Induct/Common_Patterns.thy, see also
src/HOL/Isar_examples/Puzzle.thy and src/HOL/Lambda for realistic
examples.

* Method "induct": improved handling of simultaneous goals. Instead of
introducing object-level conjunction, the statement is now split into
several conclusions, while the corresponding symbolic cases are nested
accordingly. INCOMPATIBILITY, proofs need to be structured
explicitly, see src/HOL/Induct/Common_Patterns.thy, for example.

* Method "induct": mutual induction rules are now specified as a list
of rules sharing the same induction cases. HOL packages usually
provide foo_bar.inducts for mutually defined items foo and bar
(e.g. inductive predicates/sets or datatypes). INCOMPATIBILITY, users
need to specify mutual induction rules differently, i.e.
like this:

  (induct rule: foo_bar.inducts)
  (induct set: foo bar)
  (induct pred: foo bar)
  (induct type: foo bar)

The ML function ProjectRule.projections turns old-style rules into the
new format.

* Method "coinduct": dual of induction, see
src/HOL/Library/Coinductive_List.thy for various examples.

* Method "cases", "induct", "coinduct": the ``(open)'' option is
considered a legacy feature.

* Attribute "symmetric" produces result with standardized schematic
variables (index 0). Potential INCOMPATIBILITY.

* Simplifier: by default the simplifier trace only shows top level
rewrites now. That is, trace_simp_depth_limit is set to 1 by default.
Thus there is less danger of being flooded by the trace. The trace
indicates where parts have been suppressed.

* Provers/classical: removed obsolete classical version of elim_format
attribute; classical elim/dest rules are now treated uniformly when
manipulating the claset.

* Provers/classical: stricter checks to ensure that supplied intro,
dest and elim rules are well-formed; dest and elim rules must have at
least one premise.

* Provers/classical: attributes dest/elim/intro take an optional
weight argument for the rule (just as the Pure versions). Weights are
ignored by automated tools, but determine the search order of single
rule steps.

* Syntax: input syntax now supports dummy variable binding "%_. b",
where the body does not mention the bound variable. Note that dummy
patterns implicitly depend on their context of bounds, which makes
"{_. _}" match any set comprehension as expected. Potential
INCOMPATIBILITY -- parse translations need to cope with syntactic
constant "_idtdummy" in the binding position.

* Syntax: removed obsolete syntactic constant "_K" and its associated
parse translation. INCOMPATIBILITY -- use dummy abstraction instead,
for example "A -> B" => "Pi A (%_. B)".

* Pure: 'class_deps' command visualizes the subclass relation, using
the graph browser tool.
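As a minimal sketch of the dummy variable binding (the constant "c" is
a made-up placeholder):

```isabelle
lemma "(%_. c) x = c" by simp
```

The abstraction "%_. c" never mentions its bound variable, which is
exactly the situation the new input syntax supports.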
* Pure: 'print_theory' now suppresses certain internal declarations by
default; use '!' option for full details.


*** HOL ***

* Method "metis" proves goals by applying the Metis general-purpose
resolution prover (see also http://gilith.com/software/metis/).
Examples are in the directory MetisExamples. WARNING: the
Isabelle/HOL-Metis integration does not yet work properly with
multi-threading.

* Command 'sledgehammer' invokes external automatic theorem provers as
background processes. It generates calls to the "metis" method if
successful. These can be pasted into the proof. Users do not have to
wait for the automatic provers to return. WARNING: does not really
work with multi-threading.

* New "auto_quickcheck" feature tests outermost goal statements for
potential counter-examples. Controlled by ML references
auto_quickcheck (default true) and auto_quickcheck_time_limit (default
5000 milliseconds). Fails silently if the statement is outside the
executable fragment, or if any other code generator problem occurs.

* New constant "undefined" with axiom "undefined x = undefined".

* Added class "HOL.eq", allowing for code generation with polymorphic
equality.

* Some renaming of class constants due to canonical name prefixing in
the new 'class' package:

  HOL.abs ~> HOL.abs_class.abs
  HOL.divide ~> HOL.divide_class.divide
  0 ~> HOL.zero_class.zero
  1 ~> HOL.one_class.one
  op + ~> HOL.plus_class.plus
  op - ~> HOL.minus_class.minus
  uminus ~> HOL.minus_class.uminus
  op * ~> HOL.times_class.times
  op < ~> HOL.ord_class.less
  op <= ~> HOL.ord_class.less_eq
  Nat.power ~> Power.power_class.power
  Nat.size ~> Nat.size_class.size
  Numeral.number_of ~> Numeral.number_class.number_of
  FixedPoint.Inf ~> Lattices.complete_lattice_class.Inf
  FixedPoint.Sup ~> Lattices.complete_lattice_class.Sup
  Orderings.min ~> Orderings.ord_class.min
  Orderings.max ~> Orderings.ord_class.max
  Divides.op div ~> Divides.div_class.div
  Divides.op mod ~> Divides.div_class.mod
  Divides.op dvd ~> Divides.div_class.dvd

INCOMPATIBILITY.
Adaptations may be required in the following cases:

a) User-defined constants using any of the names "plus", "minus",
"times", "less" or "less_eq". The standard syntax translations for
"+", "-" and "*" may go wrong. INCOMPATIBILITY: use more specific
names.

b) Variables named "plus", "minus", "times", "less", "less_eq".
INCOMPATIBILITY: use more specific names.

c) Permutative equations (e.g. "a + b = b + a"). Since the change of
names also changes the order of terms, permutative rewrite rules may
get applied in a different order. Experience shows that this is
rarely the case (only two adaptations in the whole Isabelle
distribution). INCOMPATIBILITY: rewrite proofs.

d) ML code directly referring to constant names. This in general only
affects hand-written proof tactics, simprocs and so on.
INCOMPATIBILITY: grep your source code and replace names. Consider
using the @{const_name} antiquotation.

* New class "default" with associated constant "default".

* Function "sgn" is now overloaded and available on int, real, complex
(and other numeric types), using class "sgn". Two possible defs of
sgn are given as equational assumptions in the classes sgn_if and
sgn_div_norm; ordered_idom now also inherits from sgn_if.
INCOMPATIBILITY.

* Locale "partial_order" now unified with class "order" (cf. theory
Orderings), added parameter "less". INCOMPATIBILITY.

* Renamings in classes "order" and "linorder": facts "refl", "trans"
and "cases" to "order_refl", "order_trans" and "linorder_cases", to
avoid clashes with HOL "refl" and "trans". INCOMPATIBILITY.

* Classes "order" and "linorder": potential INCOMPATIBILITY due to
changed order of proof goals in instance proofs.

* The transitivity reasoner for partial and linear orders is set up
for classes "order" and "linorder". Instances of the reasoner are
available in all contexts importing or interpreting the corresponding
locales. Method "order" invokes the reasoner separately; the reasoner
is also integrated with the Simplifier as a solver.
Diagnostic command 'print_orders' shows the available instances of the
reasoner in the current context.

* Localized monotonicity predicate in theory "Orderings"; integrated
lemmas max_of_mono and min_of_mono with this predicate.
INCOMPATIBILITY.

* Formulation of theorem "dense" changed slightly due to integration
with new class dense_linear_order.

* Uniform lattice theory development in HOL.

  constants "meet" and "join" now named "inf" and "sup"
  constant "Meet" now named "Inf"

  classes "meet_semilorder" and "join_semilorder" now named
  "lower_semilattice" and "upper_semilattice"

  class "lorder" now named "lattice"
  class "comp_lat" now named "complete_lattice"

  Instantiation of lattice classes allows explicit definitions
  for "inf" and "sup" operations (or "Inf" and "Sup" for complete
  lattices).

  INCOMPATIBILITY. Theorem renames:

  meet_left_le ~> inf_le1
  meet_right_le ~> inf_le2
  join_left_le ~> sup_ge1
  join_right_le ~> sup_ge2
  meet_join_le ~> inf_sup_ord
  le_meetI ~> le_infI
  join_leI ~> le_supI
  le_meet ~> le_inf_iff
  le_join ~> ge_sup_conv
  meet_idempotent ~> inf_idem
  join_idempotent ~> sup_idem
  meet_comm ~> inf_commute
  join_comm ~> sup_commute
  meet_leI1 ~> le_infI1
  meet_leI2 ~> le_infI2
  le_joinI1 ~> le_supI1
  le_joinI2 ~> le_supI2
  meet_assoc ~> inf_assoc
  join_assoc ~> sup_assoc
  meet_left_comm ~> inf_left_commute
  meet_left_idempotent ~> inf_left_idem
  join_left_comm ~> sup_left_commute
  join_left_idempotent ~> sup_left_idem
  meet_aci ~> inf_aci
  join_aci ~> sup_aci
  le_def_meet ~> le_iff_inf
  le_def_join ~> le_iff_sup
  join_absorp2 ~> sup_absorb2
  join_absorp1 ~> sup_absorb1
  meet_absorp1 ~> inf_absorb1
  meet_absorp2 ~> inf_absorb2
  meet_join_absorp ~> inf_sup_absorb
  join_meet_absorp ~> sup_inf_absorb
  distrib_join_le ~> distrib_sup_le
  distrib_meet_le ~> distrib_inf_le
  add_meet_distrib_left ~> add_inf_distrib_left
  add_join_distrib_left ~> add_sup_distrib_left
  is_join_neg_meet ~> is_join_neg_inf
  is_meet_neg_join ~> is_meet_neg_sup
  add_meet_distrib_right ~> add_inf_distrib_right
  add_join_distrib_right ~> add_sup_distrib_right
  add_meet_join_distribs ~> add_sup_inf_distribs
  join_eq_neg_meet ~> sup_eq_neg_inf
  meet_eq_neg_join ~> inf_eq_neg_sup
  add_eq_meet_join ~> add_eq_inf_sup
  meet_0_imp_0 ~> inf_0_imp_0
  join_0_imp_0 ~> sup_0_imp_0
  meet_0_eq_0 ~> inf_0_eq_0
  join_0_eq_0 ~> sup_0_eq_0
  neg_meet_eq_join ~> neg_inf_eq_sup
  neg_join_eq_meet ~> neg_sup_eq_inf
  join_eq_if ~> sup_eq_if
  mono_meet ~> mono_inf
  mono_join ~> mono_sup
  meet_bool_eq ~> inf_bool_eq
  join_bool_eq ~> sup_bool_eq
  meet_fun_eq ~> inf_fun_eq
  join_fun_eq ~> sup_fun_eq
  meet_set_eq ~> inf_set_eq
  join_set_eq ~> sup_set_eq
  meet1_iff ~> inf1_iff
  meet2_iff ~> inf2_iff
  meet1I ~> inf1I
  meet2I ~> inf2I
  meet1D1 ~> inf1D1
  meet2D1 ~> inf2D1
  meet1D2 ~> inf1D2
  meet2D2 ~> inf2D2
  meet1E ~> inf1E
  meet2E ~> inf2E
  join1_iff ~> sup1_iff
  join2_iff ~> sup2_iff
  join1I1 ~> sup1I1
  join2I1 ~> sup2I1
  join1I2 ~> sup1I2
  join2I2 ~> sup2I2
  join1CI ~> sup1CI
  join2CI ~> sup2CI
  join1E ~> sup1E
  join2E ~> sup2E
  is_meet_Meet ~> is_meet_Inf
  Meet_bool_def ~> Inf_bool_def
  Meet_fun_def ~> Inf_fun_def
  Meet_greatest ~> Inf_greatest
  Meet_lower ~> Inf_lower
  Meet_set_def ~> Inf_set_def
  Sup_def ~> Sup_Inf
  Sup_bool_eq ~> Sup_bool_def
  Sup_fun_eq ~> Sup_fun_def
  Sup_set_eq ~> Sup_set_def
  listsp_meetI ~> listsp_infI
  listsp_meet_eq ~> listsp_inf_eq
  meet_min ~> inf_min
  join_max ~> sup_max

* Added syntactic class "size"; overloaded constant "size" now has
type "'a::size => nat".

* Internal reorganisation of `size' of datatypes: size theorems
"foo.size" are no longer subsumed by "foo.simps" (but are still
simplification rules by default!); theorems "prod.size" now named
"*.size".

* Class "div" now inherits from class "times" rather than "type".
INCOMPATIBILITY.

* HOL/Finite_Set: "name-space" locales Lattice, Distrib_lattice,
Linorder etc. have disappeared; operations defined in terms of
fold_set now are named Inf_fin, Sup_fin. INCOMPATIBILITY.

* HOL/Nat: neq0_conv no longer declared as iff. INCOMPATIBILITY.
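As a minimal sketch of the renamed lattice facts, using the new name
inf_sup_absorb (formerly meet_join_absorp):

```isabelle
lemma "inf x (sup x y) = (x::'a::lattice)"
  by (rule inf_sup_absorb)
```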
* HOL-Word: New extensive library and type for generic, fixed size
machine words, with arithmetic, bit-wise, shifting and rotating
operations, reflection into int, nat, and bool lists, automation for
linear arithmetic (by automatic reflection into nat or int), including
lemmas on overflow and monotonicity. Instantiated to all appropriate
arithmetic type classes, supporting automatic simplification of
numerals on all operations.

* Library/Boolean_Algebra: locales for abstract boolean algebras.

* Library/Numeral_Type: numbers as types, e.g. TYPE(32).

* Code generator library theories:
  - Code_Integer represents HOL integers by big integer literals in
    target languages.
  - Code_Char represents HOL characters by character literals in
    target languages.
  - Code_Char_chr like Code_Char, but also offers treatment of
    character codes; includes Code_Integer.
  - Executable_Set allows to generate code for finite sets using
    lists.
  - Executable_Rat implements rational numbers as triples (sign,
    numerator, denominator).
  - Executable_Real implements a subset of real numbers, namely those
    representable by rational numbers.
  - Efficient_Nat implements natural numbers by integers, which in
    general will result in higher efficiency; pattern matching with
    0/Suc is eliminated; includes Code_Integer.
  - Code_Index provides an additional datatype index which is mapped
    to target-language built-in integers.
  - Code_Message provides an additional datatype message_string which
    is isomorphic to strings; messages are mapped to target-language
    strings.

* New package for inductive predicates. An n-ary predicate p with m
parameters z_1, ..., z_m can now be defined via

  inductive
    p :: "U_1 => ... => U_m => T_1 => ... => T_n => bool"
    for z_1 :: U_1 and ... and z_n :: U_m
  where
    rule_1: "... ==> p z_1 ... z_m t_1_1 ... t_1_n"
  | ...

with full support for type-inference, rather than

  consts s :: "U_1 => ... => U_m => (T_1 * ... * T_n) set"

  abbreviation p :: "U_1 => ... => U_m => T_1 => ...
=> T_n => bool" where "p z_1 ... z_m x_1 ... x_n == (x_1, ..., x_n) : s z_1 ... z_m" inductive "s z_1 ... z_m" intros rule_1: "... ==> (t_1_1, ..., t_1_n) : s z_1 ... z_m" ... For backward compatibility, there is a wrapper allowing inductive sets to be defined with the new package via inductive_set s :: "U_1 => ... => U_m => (T_1 * ... * T_n) set" for z_1 :: U_1 and ... and z_n :: U_m where rule_1: "... ==> (t_1_1, ..., t_1_n) : s z_1 ... z_m" | ... or inductive_set s :: "U_1 => ... => U_m => (T_1 * ... * T_n) set" and p :: "U_1 => ... => U_m => T_1 => ... => T_n => bool" for z_1 :: U_1 and ... and z_n :: U_m where "p z_1 ... z_m x_1 ... x_n == (x_1, ..., x_n) : s z_1 ... z_m" | rule_1: "... ==> p z_1 ... z_m t_1_1 ... t_1_n" | ... if the additional syntax "p ..." is required. Numerous examples can be found in the subdirectories src/HOL/Auth, src/HOL/Bali, src/HOL/Induct, and src/HOL/MicroJava. INCOMPATIBILITIES: - Since declaration and definition of inductive sets or predicates is no longer separated, abbreviations involving the newly introduced sets or predicates must be specified together with the introduction rules after the 'where' keyword (see above), rather than before the actual inductive definition. - The variables in induction and elimination rules are now quantified in the order of their occurrence in the introduction rules, rather than in alphabetical order. Since this may break some proofs, these proofs either have to be repaired, e.g. by reordering the variables a_i_1 ... a_i_{k_i} in Isar 'case' statements of the form case (rule_i a_i_1 ... a_i_{k_i}) or the old order of quantification has to be restored by explicitly adding meta-level quantifiers in the introduction rules, i.e. | rule_i: "!!a_i_1 ... a_i_{k_i}. ... ==> p z_1 ... z_m t_i_1 ... t_i_n" - The format of the elimination rules is now p z_1 ... z_m x_1 ... x_n ==> (!!a_1_1 ... a_1_{k_1}. x_1 = t_1_1 ==> ... ==> x_n = t_1_n ==> ... ==> P) ==> ... 
==> P for predicates and (x_1, ..., x_n) : s z_1 ... z_m ==> (!!a_1_1 ... a_1_{k_1}. x_1 = t_1_1 ==> ... ==> x_n = t_1_n ==> ... ==> P) ==> ... ==> P for sets rather than x : s z_1 ... z_m ==> (!!a_1_1 ... a_1_{k_1}. x = (t_1_1, ..., t_1_n) ==> ... ==> P) ==> ... ==> P This may require terms in goals to be expanded to n-tuples (e.g. using case_tac or simplification with the split_paired_all rule) before the above elimination rule is applicable. - The elimination or case analysis rules for (mutually) inductive sets or predicates are now called "p_1.cases" ... "p_k.cases". The list of rules "p_1_..._p_k.elims" is no longer available. * New package "function"/"fun" for general recursive functions, supporting mutual and nested recursion, definitions in local contexts, more general pattern matching and partiality. See HOL/ex/Fundefs.thy for small examples, and the separate tutorial on the function package. The old recdef "package" is still available as before, but users are encouraged to use the new package. * Method "lexicographic_order" automatically synthesizes termination relations as lexicographic combinations of size measures. * Case-expressions allow arbitrary constructor-patterns (including "_") and take their order into account, like in functional programming. Internally, this is translated into nested case-expressions; missing cases are added and mapped to the predefined constant "undefined". In complicated cases printing may no longer show the original input but the internal form. Lambda-abstractions allow the same form of pattern matching: "% pat1 => e1 | ..." is an abbreviation for "%x. case x of pat1 => e1 | ..." where x is a new variable. * IntDef: The constant "int :: nat => int" has been removed; now "int" is an abbreviation for "of_nat :: nat => int". The simplification rules for "of_nat" have been changed to work like "int" did previously. 
Potential INCOMPATIBILITY:
  - "of_nat (Suc m)" simplifies to "1 + of_nat m" instead of
    "of_nat m + 1"
  - of_nat_diff and of_nat_mult are no longer default simp rules

* Method "algebra" solves polynomial equations over (semi)rings using
Groebner bases. The (semi)ring structure is defined by locales and
the tool setup depends on that generic context. Installing the method
for a specific type involves instantiating the locale and possibly
adding declarations for computation on the coefficients. The method
is already instantiated for natural numbers and for the axiomatic
class of idoms with numerals. See also the paper by Chaieb and Wenzel
at CALCULEMUS 2007 for the general principles underlying this
architecture of context-aware proof-tools.

* Method "ferrack" implements quantifier elimination over
special-purpose dense linear orders using locales (analogous to
"algebra"). The method is already installed for class
{ordered_field,recpower,number_ring} which subsumes real, hyperreal,
rat, etc.

* Former constant "List.op @" now named "List.append". Use ML
antiquotations @{const_name List.append} or @{term " ... @ ... "} to
circumvent possible incompatibilities when working on ML level.

* primrec: missing cases mapped to "undefined" instead of
"arbitrary".

* New function listsum :: 'a list => 'a for arbitrary monoids.
Special syntax: "SUM x <- xs. f x" (and latex variants)

* New syntax for Haskell-like list comprehension (input only), e.g.
[(x,y). x <- xs, y <- ys, x ~= y], see also src/HOL/List.thy.

* The special syntax for function "filter" has changed from
[x : xs. P] to [x <- xs. P] to avoid an ambiguity caused by list
comprehension syntax, and for uniformity. INCOMPATIBILITY.

* [a..b] is now defined for arbitrary linear orders. It used to be
defined on nat only, as an abbreviation for [a..<Suc b].
INCOMPATIBILITY.

* New syntax "A <-> B" for equality on bool (with priority 25 like
-->); output depends on the "iff" print_mode, the default is "A = B"
(with priority 50).
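As a minimal sketch of the new filter syntax (the concrete list is a
made-up example):

```isabelle
lemma "[x <- [1::nat, 2, 3]. x ~= 2] = [1, 3]" by simp
```

This uses the revised [x <- xs. P] notation in place of the former
[x : xs. P].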
* Relations less (<) and less_eq (<=) are also available on type bool.
Modified syntax to disallow nesting without explicit parentheses,
e.g. "(x < y) < z" or "x < (y < z)", but NOT "x < y < z". Potential
INCOMPATIBILITY.

* "LEAST x:A. P" expands to "LEAST x. x:A & P" (input only).

* Relation composition operator "op O" now has precedence 75 and binds
stronger than union and intersection. INCOMPATIBILITY.

* The old set interval syntax "{m..n(}" (and relatives) has been
removed. Use "{m..<n}" (and relatives) instead. INCOMPATIBILITY.

* Improved SAT solver integration: goals of the form "... ==> False"
and equivalences (i.e. "=" on type bool) are handled, variable names
of the form "lit_" are no longer reserved, significant speedup.

* Methods "sat" and "satx" can now replay MiniSat proof traces.
zChaff is still supported as well.

* 'inductive' and 'datatype': provide projections of mutual rules,
bundled as foo_bar.inducts.

* Library: moved theories Parity, GCD, Binomial, Infinite_Set to
Library.

* Library: moved theory Accessible_Part to main HOL.

* Library: added theory Coinductive_List of potentially infinite lists
as greatest fixed-point.

* Library: added theory AssocList which implements (finite) maps as
association lists.

* Method "evaluation" solves goals (i.e. a boolean expression)
efficiently by compiling it to ML. The goal is "proved" (via an
oracle) if it evaluates to True.

* Linear arithmetic now splits certain operators (e.g. min, max, abs)
also when invoked by the simplifier. This results in the Simplifier
being more powerful on arithmetic goals. INCOMPATIBILITY.
Configuration option fast_arith_split_limit=0 recovers the old
behavior.

* Support for hex (0x20) and binary (0b1001) numerals.

* New method: reify eqs (t), where eqs are equations for an
interpretation I :: 'a list => 'b => 'c and t::'c is an optional
parameter, computes a term s::'b and a list xs::'a list and proves the
theorem I xs s = t. This is also known as reification or quoting.
The resulting theorem is applied to the subgoal to substitute t with
I xs s.
If t is omitted, the subgoal itself is reified.

* New method: reflection corr_thm eqs (t). The parameters eqs and (t)
are as explained above. corr_thm is a theorem for
I vs (f t) = I vs t, where f is supposed to be a computable function
(in the sense of code generation). The method uses reify to compute s
and xs as above, then applies corr_thm and uses normalization by
evaluation to "prove" f s = r and finally gets the theorem t = r,
which is again applied to the subgoal. An example is available in
src/HOL/ex/ReflectionEx.thy.

* Reflection: automatic reification now handles binding; an example is
available in src/HOL/ex/ReflectionEx.thy.

* HOL-Statespace: ``State Spaces: The Locale Way'' introduces a
command 'statespace' that is similar to 'record', but introduces an
abstract specification based on the locale infrastructure instead of
HOL types. This leads to extra flexibility in composing state spaces,
in particular multiple inheritance and renaming of components.


*** HOL-Complex ***

* Hyperreal: Functions root and sqrt are now defined on negative real
inputs so that root n (- x) = - root n x and sqrt (- x) = - sqrt x.
Nonnegativity side conditions have been removed from many lemmas, so
that more subgoals may now be solved by simplification; potential
INCOMPATIBILITY.

* Real: new type classes formalize real normed vector spaces and
algebras, using new overloaded constants scaleR :: real => 'a => 'a
and norm :: 'a => real.

* Real: constant of_real :: real => 'a::real_algebra_1 injects from
reals into other types. The overloaded constant Reals :: 'a set is
now defined as range of_real; potential INCOMPATIBILITY.

* Real: proper support for ML code generation, including 'quickcheck'.
Reals are implemented as arbitrary precision rationals.

* Hyperreal: Several constants that previously worked only for the
reals have been generalized, so they now work over arbitrary vector
spaces. Type annotations may need to be added in some cases;
potential INCOMPATIBILITY.
  Infinitesimal  :: ('a::real_normed_vector) star set
  HFinite        :: ('a::real_normed_vector) star set
  HInfinite      :: ('a::real_normed_vector) star set
  approx         :: ('a::real_normed_vector) star => 'a star => bool
  monad          :: ('a::real_normed_vector) star => 'a star set
  galaxy         :: ('a::real_normed_vector) star => 'a star set
  (NS)LIMSEQ     :: [nat => 'a::real_normed_vector, 'a] => bool
  (NS)convergent :: (nat => 'a::real_normed_vector) => bool
  (NS)Bseq       :: (nat => 'a::real_normed_vector) => bool
  (NS)Cauchy     :: (nat => 'a::real_normed_vector) => bool
  (NS)LIM        :: ['a::real_normed_vector => 'b::real_normed_vector,
                     'a, 'b] => bool
  is(NS)Cont     :: ['a::real_normed_vector => 'b::real_normed_vector,
                     'a] => bool
  deriv          :: ['a::real_normed_field => 'a, 'a, 'a] => bool
  sgn            :: 'a::real_normed_vector => 'a
  exp            :: 'a::{recpower,real_normed_field,banach} => 'a

* Complex: Some complex-specific constants are now abbreviations for
overloaded ones: complex_of_real = of_real, cmod = norm, hcmod =
hnorm. Other constants have been entirely removed in favor of the
polymorphic versions (INCOMPATIBILITY):

  approx        <-- capprox
  HFinite       <-- CFinite
  HInfinite     <-- CInfinite
  Infinitesimal <-- CInfinitesimal
  monad         <-- cmonad
  galaxy        <-- cgalaxy
  (NS)LIM       <-- (NS)CLIM, (NS)CRLIM
  is(NS)Cont    <-- is(NS)Contc, is(NS)contCR
  (ns)deriv     <-- (ns)cderiv


*** HOL-Algebra ***

* Formalisation of ideals and the quotient construction over rings.

* Order and lattice theory no longer based on records.
INCOMPATIBILITY.

* Renamed lemmas least_carrier -> least_closed and greatest_carrier ->
greatest_closed. INCOMPATIBILITY.

* Method algebra is now set up via an attribute. For examples see
Ring.thy. INCOMPATIBILITY: the method is now weaker on combinations
of algebraic structures.

* Renamed theory CRing to Ring.


*** HOL-Nominal ***

* Substantial, yet incomplete support for nominal datatypes (binding
structures) based on HOL-Nominal logic. See src/HOL/Nominal and
src/HOL/Nominal/Examples.
Prospective users should consult http://isabelle.in.tum.de/nominal/ *** ML *** * ML basics: just one true type int, which coincides with IntInf.int (even on SML/NJ). * ML within Isar: antiquotations allow to embed statically-checked formal entities in the source, referring to the context available at compile-time. For example: ML {* @{sort "{zero,one}"} *} ML {* @{typ "'a => 'b"} *} ML {* @{term "%x. x"} *} ML {* @{prop "x == y"} *} ML {* @{ctyp "'a => 'b"} *} ML {* @{cterm "%x. x"} *} ML {* @{cprop "x == y"} *} ML {* @{thm asm_rl} *} ML {* @{thms asm_rl} *} ML {* @{type_name c} *} ML {* @{type_syntax c} *} ML {* @{const_name c} *} ML {* @{const_syntax c} *} ML {* @{context} *} ML {* @{theory} *} ML {* @{theory Pure} *} ML {* @{theory_ref} *} ML {* @{theory_ref Pure} *} ML {* @{simpset} *} ML {* @{claset} *} ML {* @{clasimpset} *} The same works for sources being ``used'' within an Isar context. * ML in Isar: improved error reporting; extra verbosity with ML_Context.trace enabled. * Pure/General/table.ML: the join operations now works via exceptions DUP/SAME instead of type option. This is simpler in simple cases, and admits slightly more efficient complex applications. * Pure: 'advanced' translation functions (parse_translation etc.) now use Context.generic instead of just theory. * Pure: datatype Context.generic joins theory/Proof.context and provides some facilities for code that works in either kind of context, notably GenericDataFun for uniform theory and proof data. * Pure: simplified internal attribute type, which is now always Context.generic * thm -> Context.generic * thm. Global (theory) vs. local (Proof.context) attributes have been discontinued, while minimizing code duplication. Thm.rule_attribute and Thm.declaration_attribute build canonical attributes; see also structure Context for further operations on Context.generic, notably GenericDataFun. INCOMPATIBILITY, need to adapt attribute type declarations and definitions. 
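As a minimal sketch of the uniform attribute type (the attribute here
is a deliberately trivial, made-up example):

```isabelle
ML {*
  (* identity rule attribute: ignores the generic context,
     returns the theorem unchanged *)
  val my_attr : attribute =
    Thm.rule_attribute (fn _: Context.generic => fn th => th)
*}
```

The same Thm.rule_attribute combinator works uniformly in theory and
proof contexts, which is the point of the unified type.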
* Context data interfaces (Theory/Proof/GenericDataFun): removed
name/print, uninitialized data defaults to ad-hoc copy of empty value,
init only required for impure data.  INCOMPATIBILITY: empty really
needs to be empty (no dependencies on theory content!)

* Pure/kernel: consts certification ignores sort constraints given in
signature declarations.  (This information is not relevant to the
logic, but only for type inference.)  SIGNIFICANT INTERNAL CHANGE,
potential INCOMPATIBILITY.

* Pure: axiomatic type classes are now purely definitional, with
explicit proofs of class axioms and super class relations performed
internally.  See Pure/axclass.ML for the main internal interfaces --
notably AxClass.define_class supersedes AxClass.add_axclass, and
AxClass.axiomatize_class/classrel/arity supersede
Sign.add_classes/classrel/arities.

* Pure/Isar: Args/Attrib parsers operate on Context.generic --
global/local versions on theory vs. Proof.context have been
discontinued; Attrib.syntax and Method.syntax have been adapted
accordingly.  INCOMPATIBILITY, need to adapt parser expressions for
attributes, methods, etc.

* Pure: several functions of signature "... -> theory -> theory * ..."
have been reoriented to "... -> theory -> ... * theory" in order to
allow natural usage in combination with the ||>, ||>>, |-> and
fold_map combinators.

* Pure: official theorem names (closed derivations) and additional
comments (tags) are now strictly separate.  Name hints -- which are
maintained as tags -- may be attached any time without affecting the
derivation.

* Pure: primitive rule lift_rule now takes goal cterm instead of an
actual goal state (thm).  Use Thm.lift_rule (Thm.cprem_of st i) to
achieve the old behaviour.

* Pure: the "Goal" constant is now called "prop", supporting a
slightly more general idea of ``protecting'' meta-level rule
statements.

* Pure: Logic.(un)varify only works in a global context, which is now
enforced instead of silently assumed.
INCOMPATIBILITY, may use Logic.legacy_(un)varify as temporary
workaround.

* Pure: structure Name provides scalable operations for generating
internal variable names, notably Name.variants etc.  This replaces
some popular functions from term.ML:

  Term.variant       ->  Name.variant
  Term.variantlist   ->  Name.variant_list
  Term.invent_names  ->  Name.invent_list

Note that low-level renaming rarely occurs in new code -- operations
from structure Variable are used instead (see below).

* Pure: structure Variable provides fundamental operations for proper
treatment of fixed/schematic variables in a context.  For example,
Variable.import introduces fixes for schematics of given facts and
Variable.export reverses the effect (up to renaming) -- this replaces
various freeze_thaw operations.

* Pure: structure Goal provides simple interfaces for
init/conclude/finish and tactical prove operations (replacing former
Tactic.prove).  Goal.prove is the canonical way to prove results
within a given context; Goal.prove_global is a degraded version for
theory level goals, including a global Drule.standard.  Note that
OldGoals.prove_goalw_cterm has long been obsolete, since it is
ill-behaved in a local proof context (e.g. with local fixes/assumes or
in a locale context).

* Pure/Syntax: generic interfaces for parsing (Syntax.parse_term etc.)
and type checking (Syntax.check_term etc.), with common combinations
(Syntax.read_term etc.).  These supersede former Sign.read_term etc.
which are considered legacy and await removal.

* Pure/Syntax: generic interfaces for type unchecking
(Syntax.uncheck_terms etc.) and unparsing (Syntax.unparse_term etc.),
with common combinations (Syntax.pretty_term, Syntax.string_of_term
etc.).  Former Sign.pretty_term, Sign.string_of_term etc. are still
available for convenience, but refer to the very same operations using
a mere theory instead of a full context.

* Isar: simplified treatment of user-level errors, using exception
ERROR of string uniformly.
Function error now merely raises ERROR, without any side effect on
output channels.  The Isar toplevel takes care of proper display of
ERROR exceptions.  ML code may use plain handle/can/try; cat_error may
be used to concatenate errors like this:

  ... handle ERROR msg => cat_error msg "..."

Toplevel ML code (run directly or through the Isar toplevel) may be
embedded into the Isar toplevel with exception display/debug like
this:

  Isar.toplevel (fn () => ...)

INCOMPATIBILITY, removed special transform_error facilities, removed
obsolete variants of user-level exceptions (ERROR_MESSAGE,
Context.PROOF, ProofContext.CONTEXT, Proof.STATE, ProofHistory.FAIL)
-- use plain ERROR instead.

* Isar: theory setup now has type (theory -> theory), instead of a
list.  INCOMPATIBILITY, may use #> to compose setup functions.

* Isar: ML toplevel pretty printer for type Proof.context, subject to
ProofContext.debug/verbose flags.

* Isar: Toplevel.theory_to_proof admits transactions that modify the
theory before entering a proof state.  Transactions now always see a
quasi-functional intermediate checkpoint, both in interactive and
batch mode.

* Isar: simplified interfaces for outer syntax.  Renamed
OuterSyntax.add_keywords to OuterSyntax.keywords.  Removed
OuterSyntax.add_parsers -- this functionality is now included in
OuterSyntax.command etc.  INCOMPATIBILITY.

* Simplifier: the simpset of a running simplification process now
contains a proof context (cf. Simplifier.the_context), which is the
very context that the initial simpset has been retrieved from (by
simpset_of/local_simpset_of).  Consequently, all plug-in components
(solver, looper etc.) may depend on arbitrary proof data.

* Simplifier.inherit_context inherits the proof context (plus the
local bounds) of the current simplification process; any simproc
etc. that calls the Simplifier recursively should do this!  Removed
former Simplifier.inherit_bounds, which is already included here --
INCOMPATIBILITY.
Tools based on low-level rewriting may even have to specify an
explicit context using Simplifier.context/theory_context.

* Simplifier/Classical Reasoner: more abstract interfaces
change_simpset/claset for modifying the simpset/claset reference of a
theory; raw versions simpset/claset_ref etc. have been discontinued --
INCOMPATIBILITY.

* Provers: more generic wrt. syntax of object-logics, avoid hardwired
"Trueprop" etc.


*** System ***

* settings: the default heap location within ISABELLE_HOME_USER now
includes ISABELLE_IDENTIFIER.  This simplifies use of multiple
Isabelle installations.

* isabelle-process: option -S (secure mode) disables some critical
operations, notably runtime compilation and evaluation of ML source
code.

* Basic Isabelle mode for jEdit, see Isabelle/lib/jedit/.

* Support for parallel execution, using native multicore support of
Poly/ML 5.1.  The theory loader exploits parallelism when processing
independent theories, according to the given theory header
specifications.  The maximum number of worker threads is specified via
usedir option -M or the "max-threads" setting in Proof General.  A
speedup factor of 1.5--3.5 can be expected on a 4-core machine, and up
to 6 on an 8-core machine.  User code needs to observe certain
guidelines for thread-safe programming, see appendix A in the Isar
Implementation manual.



New in Isabelle2005 (October 2005)
----------------------------------

*** General ***

* Theory headers: the new header syntax for Isar theories is

  theory <name>
  imports <theory1> ... <theoryN>
  uses <file1> ... <fileM>
  begin

where the 'uses' part is optional.  The previous syntax

  theory <name> = <theory1> + ... + <theoryN>:

will disappear in the next release.  Use isatool fixheaders to convert
existing theory files.  Note that there is no change in ancient
non-Isar theories now, but these will disappear soon.

* Theory loader: parent theories can now also be referred to via
relative and absolute paths.

* Command 'find_theorems' searches for a list of criteria instead of a
list of constants.
Known criteria are: intro, elim, dest, name:string, simp:term, and any
term.  Criteria can be preceded by '-' to select theorems that do not
match.  Intro, elim, dest select theorems that match the current goal,
name:s selects theorems whose fully qualified name contains s, and
simp:term selects all simplification rules whose lhs matches term.
Any other term is interpreted as pattern and selects all theorems
matching the pattern.  Available in ProofGeneral under 'ProofGeneral
-> Find Theorems' or C-c C-f.  Example:

  C-c C-f (100) "(_::nat) + _ + _" intro -name: "HOL."

prints the last 100 theorems matching the pattern "(_::nat) + _ + _",
matching the current goal as introduction rule and not having "HOL."
in their name (i.e. not being defined in theory HOL).

* Command 'thms_containing' has been discontinued in favour of
'find_theorems'; INCOMPATIBILITY.

* Communication with Proof General is now 8-bit clean, which means
that Unicode text in UTF-8 encoding may be used within theory texts
(both formal and informal parts).  Cf. option -U of the Isabelle Proof
General interface.  Here are some simple examples (cf. src/HOL/ex):

  http://isabelle.in.tum.de/library/HOL/ex/Hebrew.html
  http://isabelle.in.tum.de/library/HOL/ex/Chinese.html

* Improved efficiency of the Simplifier and, to a lesser degree, the
Classical Reasoner.  Typical big applications run around 2 times
faster.


*** Document preparation ***

* Commands 'display_drafts' and 'print_drafts' perform simple output
of raw sources.  Only those symbols that do not require additional
LaTeX packages (depending on comments in isabellesym.sty) are
displayed properly, everything else is left verbatim.  isatool display
and isatool print are used as front ends (these are subject to the
DVI/PDF_VIEWER and PRINT_COMMAND settings, respectively).

* Command tags control specific markup of certain regions of text,
notably folding and hiding.
Predefined tags include "theory" (for theory begin and end), "proof"
for proof commands, and "ML" for commands involving ML code; the
additional tags "visible" and "invisible" are unused by default.
Users may give explicit tag specifications in the text, e.g. ''by
%invisible (auto)''.  The interpretation of tags is determined by the
LaTeX job during document preparation: see option -V of isatool
usedir, or options -n and -t of isatool document, or even the LaTeX
macros \isakeeptag, \isafoldtag, \isadroptag.

Several document versions may be produced at the same time via isatool
usedir (the generated index.html will link all of them).  Typical
specifications include ''-V document=theory,proof,ML'' to present
theory/proof/ML parts faithfully, ''-V outline=/proof,/ML'' to fold
proof and ML commands, and ''-V mutilated=-theory,-proof,-ML'' to omit
these parts without any formal replacement text.  The Isabelle site
default settings produce ''document'' and ''outline'' versions as
specified above.

* Several new antiquotations:

  @{term_type term} prints a term with its type annotated;

  @{typeof term} prints the type of a term;

  @{const const} is the same as @{term const}, but checks that the
  argument is a known logical constant;

  @{term_style style term} and @{thm_style style thm} print a term or
  theorem applying a "style" to it;

  @{ML text} prints type-checked ML expressions verbatim.

Predefined styles are 'lhs' and 'rhs' printing the lhs/rhs of
definitions, equations, inequations etc., 'concl' printing only the
conclusion of a meta-logical statement theorem, and 'prem1' ..
'prem19' to print the specified premise.  TermStyle.add_style provides
an ML interface for introducing further styles.  See also the "LaTeX
Sugar" document for practical applications.

* Markup commands 'chapter', 'section', 'subsection', 'subsubsection',
and 'text' support optional locale specification '(in loc)', which
specifies the default context for interpreting antiquotations.
For example: 'text (in lattice) {* @{thm inf_assoc}*}'.

* Option 'locale=NAME' of antiquotations specifies an alternative
context interpreting the subsequent argument.  For example: @{thm
[locale=lattice] inf_assoc}.

* Proper output of proof terms (@{prf ...} and @{full_prf ...}) within
a proof context.

* Proper output of antiquotations for theory commands involving a
proof context (such as 'locale' or 'theorem (in loc) ...').

* Delimiters of outer tokens (string etc.) now produce separate LaTeX
macros (\isachardoublequoteopen, isachardoublequoteclose etc.).

* isatool usedir: new option -C (default true) controls whether option
-D should include a copy of the original document directory; -C false
prevents unwanted effects such as copying of administrative CVS data.


*** Pure ***

* Considerably improved version of 'constdefs' command.  Now performs
automatic type-inference of declared constants; additional support for
local structure declarations (cf. locales and HOL records), see also
isar-ref manual.  Potential INCOMPATIBILITY: need to observe strictly
sequential dependencies of definitions within a single 'constdefs'
section; moreover, the declared name needs to be an identifier.  If
all fails, consider falling back on 'consts' and 'defs' separately.

* Improved indexed syntax and implicit structures.  First of all,
indexed syntax provides a notational device for subscripted
application, using the new syntax \<^bsub>term\<^esub> for arbitrary
expressions.  Secondly, in a local context with structure
declarations, number indexes \<^sub>n or the empty index (default
number 1) refer to a certain fixed variable implicitly; option
show_structs controls printing of implicit structures.  Typical
applications of these concepts involve record types and locales.

* New command 'no_syntax' removes grammar declarations (and
translations) resulting from the given syntax specification, which is
interpreted in the same manner as for the 'syntax' command.
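The 'no_syntax' command above can be sketched as follows (a minimal
hypothetical example; the mixfix annotation is made up for
illustration).  The removal specification mirrors the original
'syntax' declaration verbatim:

```
syntax
  "_example" :: "'a => 'a"    ("EXAMPLE _" [10] 10)

(* later: remove the grammar declaration again, giving the very
   same specification as in the original 'syntax' command *)
no_syntax
  "_example" :: "'a => 'a"    ("EXAMPLE _" [10] 10)
```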
* 'Advanced' translation functions (parse_translation etc.) may depend
on the signature of the theory context being presently used for
parsing/printing, see also isar-ref manual.

* Improved 'oracle' command provides a type-safe interface to turn an
ML expression of type theory -> T -> term into a primitive rule of
type theory -> T -> thm (i.e. the functionality of Thm.invoke_oracle
is already included here); see also FOL/ex/IffExample.thy;
INCOMPATIBILITY.

* axclass: name space prefix for class "c" is now "c_class" (was "c"
before); "cI" is no longer bound, use "c.intro" instead.
INCOMPATIBILITY.  This change avoids clashes of fact bindings for
axclasses vs. locales.

* Improved internal renaming of symbolic identifiers -- attach primes
instead of base 26 numbers.

* New flag show_question_marks controls printing of leading question
marks in schematic variable names.

* In schematic variable names, *any* symbol following \<^isub> or
\<^isup> is now treated as part of the base name.  For example, the
following works without printing of awkward ".0" indexes:

  lemma "x\<^isub>1 = x\<^isub>2 ==> x\<^isub>2 = x\<^isub>1" by simp

* Inner syntax includes (*(*nested*) comments*).

* Pretty printer now supports unbreakable blocks, specified in mixfix
annotations as "(00...)".

* Clear separation of logical types and nonterminals, where the latter
may only occur in 'syntax' specifications or type abbreviations.
Before that distinction was only partially implemented via type class
"logic" vs. "{}".  Potential INCOMPATIBILITY in rare cases of improper
use of 'types'/'consts' instead of 'nonterminals'/'syntax'.  Some very
exotic syntax specifications may require further adaptation
(e.g. Cube/Cube.thy).

* Removed obsolete type class "logic", use the top sort {} instead.
Note that non-logical types should be declared as 'nonterminals'
rather than 'types'.  INCOMPATIBILITY for new object-logic
specifications.
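The 'nonterminals' vs. 'types' distinction above can be sketched as
follows (a hypothetical example; the category and mixfix are made up):
a purely syntactic category is declared with 'nonterminals' and then
used only inside 'syntax' specifications:

```
(* a syntactic category, not a logical type: it never carries terms *)
nonterminals my_args

syntax
  "_my_args" :: "'a => my_args => my_args"    ("_,/ _")
```

Declaring such a category via 'types' instead would improperly place
it among the logical types and is exactly the case flagged as a
potential INCOMPATIBILITY above.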
* Attributes 'induct' and 'cases': type or set names may now be
locally fixed variables as well.

* Simplifier: can now control the depth to which conditional rewriting
is traced via the PG menu Isabelle -> Settings -> Trace Simp Depth
Limit.

* Simplifier: simplification procedures may now take the current
simpset into account (cf. Simplifier.simproc(_i) / mk_simproc
interface), which is very useful for calling the Simplifier
recursively.  Minor INCOMPATIBILITY: the 'prems' argument of simprocs
is gone -- use prems_of_ss on the simpset instead.  Moreover, the
low-level mk_simproc no longer applies Logic.varify internally, to
allow for use in a context of fixed variables.

* thin_tac now works even if the assumption being deleted contains !!
or ==>.  More generally, erule now works even if the major premise of
the elimination rule contains !! or ==>.

* Method 'rules' has been renamed to 'iprover'.  INCOMPATIBILITY.

* Reorganized bootstrapping of the Pure theories; CPure is now derived
from Pure, which contains all common declarations already.  Both
theories are defined via plain Isabelle/Isar .thy files.
INCOMPATIBILITY: elements of CPure (such as the CPure.intro /
CPure.elim / CPure.dest attributes) now appear in the Pure name space;
use isatool fixcpure to adapt your theory and ML sources.

* New syntax 'name(i-j, i-, i, ...)' for referring to specific
selections of theorems in named facts via index ranges.

* 'print_theorems': in theory mode, really print the difference
wrt. the last state (works for interactive theory development only);
in proof mode print all local facts (cf. 'print_facts').

* 'hide': option '(open)' hides only base names.

* More efficient treatment of intermediate checkpoints in interactive
theory development.

* Code generator is now invoked via code_module (incremental code
generation) and code_library (modular code generation, ML structures
for each theory).  INCOMPATIBILITY: new keywords 'file' and 'contains'
must be quoted when used as identifiers.
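A hedged sketch of the code_module invocation above (the module name
"Example", binding "test", and the concrete term are all hypothetical;
the exact keyword layout is best checked against the code generator
documentation of this release):

```
(* generate an ML structure "Example" containing a binding "test"
   for the compiled value of the given closed term *)
code_module Example
contains
  test = "rev [1::nat, 2, 3]"
```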
* New 'value' command for reading, evaluating and printing terms using
the code generator.  INCOMPATIBILITY: command keyword 'value' must be
quoted when used as identifier.


*** Locales ***

* New commands for the interpretation of locale expressions in
theories (1), locales (2) and proof contexts (3).  These generate
proof obligations from the expression specification.  After the
obligations have been discharged, theorems of the expression are added
to the theory, target locale or proof context.  The synopsis of the
commands is as follows:

  (1) interpretation expr inst
  (2) interpretation target < expr
  (3) interpret expr inst

Interpretation in theories and proof contexts requires a parameter
instantiation of terms from the current context.  This is applied to
specifications and theorems of the interpreted expression.
Interpretation in locales only permits parameter renaming through the
locale expression.  Interpretation is smart in that interpretations
that are active already do not occur in proof obligations, neither are
instantiated theorems stored in duplicate.  Use 'print_interps' to
inspect active interpretations of a particular locale.  For details,
see the Isar Reference manual.  Examples can be found in
HOL/Finite_Set.thy and HOL/Algebra/UnivPoly.thy.

INCOMPATIBILITY: former 'instantiate' has been withdrawn, use
'interpret' instead.

* New context element 'constrains' for adding type constraints to
parameters.

* Context expressions: renaming of parameters with syntax
redeclaration.

* Locale declaration: 'includes' disallowed.

* Proper static binding of attribute syntax -- i.e. types / terms /
facts mentioned as arguments are always those of the locale definition
context, independently of the context of later invocations.  Moreover,
locale operations (renaming and type / term instantiation) are applied
to attribute arguments as expected.
INCOMPATIBILITY of the ML interface: always pass Attrib.src instead of
actual attributes; rare situations may require Attrib.attribute to
embed those attributes into Attrib.src that lack concrete syntax.
Attribute implementations need to cooperate properly with the static
binding mechanism.  Basic parsers Args.XXX_typ/term/prop and
Attrib.XXX_thm etc. already do the right thing without further
intervention.  Only unusual applications -- such as "where" or "of"
(cf. src/Pure/Isar/attrib.ML), which process arguments depending both
on the context and the facts involved -- may have to assign parsed
values to argument tokens explicitly.

* Changed parameter management in theorem generation for long goal
statements with 'includes'.  INCOMPATIBILITY: produces a different
theorem statement in rare situations.

* Locale inspection command 'print_locale' omits notes elements.  Use
'print_locale!' to have them included in the output.


*** Provers ***

* Provers/hypsubst.ML: improved version of the subst method, for
single-step rewriting: it now works in bound variable contexts.  New
is 'subst (asm)', for rewriting an assumption.  INCOMPATIBILITY: may
rewrite a different subterm than the original subst method, which is
still available as 'simplesubst'.

* Provers/quasi.ML: new transitivity reasoners for transitivity only
and quasi orders.

* Provers/trancl.ML: new transitivity reasoner for transitive and
reflexive-transitive closure of relations.

* Provers/blast.ML: new reference depth_limit to make blast's depth
limit (previously hard-coded with a value of 20) user-definable.

* Provers/simplifier.ML has been moved to Pure, where Simplifier.setup
is performed already.  Object-logics merely need to finish their
initial simpset configuration as before.  INCOMPATIBILITY.
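The improved subst method above can be sketched as follows (a hedged
example with an arbitrary fixed type; the equation name "eq" is
hypothetical).  A single application of subst rewrites exactly one
occurrence in the goal, after which it is closed by reflexivity:

```
lemma assumes eq: "x = (y::nat)" shows "x + z = y + z"
  by (subst eq) (rule refl)
```

The variant 'subst (asm) eq' would instead rewrite inside an
assumption of the goal rather than its conclusion.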
*** HOL ***

* Symbolic syntax of Hilbert Choice Operator is now as follows:

  syntax (epsilon)
    "_Eps" :: "[pttrn, bool] => 'a"    ("(3\_./ _)" [0, 10] 10)

The symbol \ is displayed as the alternative epsilon of LaTeX and
x-symbol; use option '-m epsilon' to get it actually printed.
Moreover, the mathematically important symbolic identifier \ becomes
available as variable, constant etc.  INCOMPATIBILITY.

* "x > y" abbreviates "y < x" and "x >= y" abbreviates "y <= x".
Similarly for all quantifiers: "ALL x > y" etc.  The x-symbol for >=
is \.  New transitivity rules have been added to HOL/Orderings.thy to
support corresponding Isar calculations.

* "{x:A. P}" abbreviates "{x. x:A & P}", and similarly for "\" instead
of ":".

* theory SetInterval: changed the syntax for open intervals:

  Old       New
  {..n(}    {..<n}
  {)n..}    {n<..}
  {m..n(}   {m..<n}
  {)m..n}   {m<..n}
  {)m..n(}  {m<..<n}

For conversion, the following Emacs search and replace patterns may be
used (they are not perfect but work quite well):

  {)\([^\.]*\)\.\.  ->  {\1<\.\.
  \.\.\([^(}]*\)(}  ->  \.\.<\1}

* Theory Commutative_Ring (in Library): method comm_ring for proving
equalities in commutative rings; method 'algebra' provides a generic
interface.

* Theory Finite_Set: changed the syntax for 'setsum', summation over
finite sets: "setsum (%x. e) A", which used to be "\x:A. e", is now
either "SUM x:A. e" or "\x \ A. e".  The bound variable can be a tuple
pattern.  Some new syntax forms are available:

  "\<Sum>x | P. e"      for  "setsum (%x. e) {x. P}"
  "\<Sum>x = a..b. e"   for  "setsum (%x. e) {a..b}"
  "\<Sum>x = a..<b. e"  for  "setsum (%x. e) {a..<b}"
  "\<Sum>x < k. e"      for  "setsum (%x. e) {..<k}"

The latter form "\<Sum>x < k. e" used to be based on a separate
function "Summation", which has been discontinued.

* theory Finite_Set: in structured induction proofs, the insert case
is now 'case (insert x F)' instead of the old counterintuitive 'case
(insert F x)'.

* The 'refute' command has been extended to support a much larger
fragment of HOL, including axiomatic type classes, constdefs and
typedefs, inductive datatypes and recursion.

* New tactics 'sat' and 'satx' to prove propositional tautologies.
Requires zChaff with proof generation to be installed.  See
HOL/ex/SAT_Examples.thy for examples.
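The new ASCII form of the setsum syntax above can be sketched as
follows (a hedged example; whether plain simp closes such a goal
depends on the simp rules available in this version):

```
lemma "(SUM x:{1, 2, 3::nat}. x) = 6"
  by simp
```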
* Datatype induction via method 'induct' now preserves the name of the
induction variable.  For example, when proving P(xs::'a list) by
induction on xs, the induction step is now P(xs) ==> P(a#xs) rather
than P(list) ==> P(a#list) as previously.  Potential INCOMPATIBILITY
in unstructured proof scripts.

* Reworked implementation of records.  Improved scalability for
records with many fields, avoiding performance problems for type
inference.  Records are no longer composed of nested field types, but
of nested extension types.  Therefore the record type only grows
linearly in the number of extensions and not in the number of fields.
The top-level (users) view on records is preserved.  Potential
INCOMPATIBILITY only in strange cases, where the theory depends on the
old record representation.  The type generated for a record is called
<record_name>_ext_type.

Flag record_quick_and_dirty_sensitive can be enabled to skip the
proofs triggered by a record definition or a simproc (if
quick_and_dirty is enabled).  Definitions of large records can take
quite long.

New simproc record_upd_simproc for simplification of multiple record
updates enabled by default.  Moreover, trivial updates are also
removed: r(|x := x r|) = r.  INCOMPATIBILITY: old proofs break
occasionally, since simplification is more powerful by default.

* typedef: proper support for polymorphic sets, which contain extra
type-variables in the term.

* Simplifier: automatically reasons about transitivity chains
involving "trancl" (r^+) and "rtrancl" (r^*) by setting up tactics
provided by Provers/trancl.ML as additional solvers.  INCOMPATIBILITY:
old proofs break occasionally as simplification may now solve more
goals than previously.

* Simplifier: converts x <= y into x = y if assumption y <= x is
present.  Works for all partial orders (class "order"), in particular
numbers and sets.  For linear orders (e.g. numbers) it treats
~ x < y just like y <= x.

* Simplifier: new simproc for "let x = a in f x".
If a is a free or bound variable or a constant then the let is
unfolded.  Otherwise first a is simplified to b, and then f b is
simplified to g.  If possible we abstract b from g arriving at "let x
= b in h x", otherwise we unfold the let and arrive at g.  The simproc
can be enabled/disabled by the reference use_let_simproc.  Potential
INCOMPATIBILITY since simplification is more powerful by default.

* Classical reasoning: the meson method now accepts theorems as
arguments.

* Prover support: pre-release of the Isabelle-ATP linkup, which runs
background jobs to provide advice on the provability of subgoals.

* Theory OrderedGroup and Ring_and_Field: various additions and
improvements to facilitate calculations involving equalities and
inequalities.  The following theorems have been eliminated or modified
(INCOMPATIBILITY):

  abs_eq            now named abs_of_nonneg
  abs_of_ge_0       now named abs_of_nonneg
  abs_minus_eq      now named abs_of_nonpos
  imp_abs_id        now named abs_of_nonneg
  imp_abs_neg_id    now named abs_of_nonpos
  mult_pos          now named mult_pos_pos
  mult_pos_le       now named mult_nonneg_nonneg
  mult_pos_neg_le   now named mult_nonneg_nonpos
  mult_pos_neg2_le  now named mult_nonneg_nonpos2
  mult_neg          now named mult_neg_neg
  mult_neg_le       now named mult_nonpos_nonpos

* The following lemmas in Ring_and_Field have been added to the
simplifier:

  zero_le_square
  not_square_less_zero

The following lemmas have been deleted from Real/RealPow:

  realpow_zero_zero
  realpow_two
  realpow_less
  zero_le_power
  realpow_two_le
  abs_realpow_two
  realpow_two_abs

* Theory Parity: added rules for simplifying exponents.

* Theory List: The following theorems have been eliminated or modified
(INCOMPATIBILITY):

  list_all_Nil   now named list_all.simps(1)
  list_all_Cons  now named list_all.simps(2)
  list_all_conv  now named list_all_iff
  set_mem_eq     now named mem_iff

* Theories SetsAndFunctions and BigO (see HOL/Library) support
asymptotic "big O" calculations.  See the notes in BigO.thy.
*** HOL-Complex ***

* Theory RealDef: better support for embedding natural numbers and
integers in the reals.  The following theorems have been eliminated or
modified (INCOMPATIBILITY):

  exp_ge_add_one_self  now requires no hypotheses
  real_of_int_add      reversed direction of equality (use [symmetric])
  real_of_int_minus    reversed direction of equality (use [symmetric])
  real_of_int_diff     reversed direction of equality (use [symmetric])
  real_of_int_mult     reversed direction of equality (use [symmetric])

* Theory RComplete: expanded support for floor and ceiling functions.

* Theory Ln is new, with properties of the natural logarithm.

* Hyperreal: There is a new type constructor "star" for making
nonstandard types.  The old type names are now type synonyms:

  hypreal = real star
  hypnat = nat star
  hcomplex = complex star

* Hyperreal: Many groups of similarly-defined constants have been
replaced by polymorphic versions (INCOMPATIBILITY):

  star_of <-- hypreal_of_real, hypnat_of_nat, hcomplex_of_complex
  starset <-- starsetNat, starsetC
  *s* <-- *sNat*, *sc*
  starset_n <-- starsetNat_n, starsetC_n
  *sn* <-- *sNatn*, *scn*
  InternalSets <-- InternalNatSets, InternalCSets
  starfun <-- starfun{Nat,Nat2,C,RC,CR}
  *f* <-- *fNat*, *fNat2*, *fc*, *fRc*, *fcR*
  starfun_n <-- starfun{Nat,Nat2,C,RC,CR}_n
  *fn* <-- *fNatn*, *fNat2n*, *fcn*, *fRcn*, *fcRn*
  InternalFuns <-- InternalNatFuns, InternalNatFuns2, Internal{C,RC,CR}Funs

* Hyperreal: Many type-specific theorems have been removed in favor of
theorems specific to various axiomatic type classes (INCOMPATIBILITY):

  add_commute <-- {hypreal,hypnat,hcomplex}_add_commute
  add_assoc <-- {hypreal,hypnat,hcomplex}_add_assocs
  OrderedGroup.add_0 <-- {hypreal,hypnat,hcomplex}_add_zero_left
  OrderedGroup.add_0_right <-- {hypreal,hcomplex}_add_zero_right
  right_minus <-- hypreal_add_minus
  left_minus <-- {hypreal,hcomplex}_add_minus_left
  mult_commute <-- {hypreal,hypnat,hcomplex}_mult_commute
  mult_assoc <-- {hypreal,hypnat,hcomplex}_mult_assoc
  mult_1_left <-- {hypreal,hypnat}_mult_1, hcomplex_mult_one_left
  mult_1_right <-- hcomplex_mult_one_right
  mult_zero_left <-- hcomplex_mult_zero_left
  left_distrib <-- {hypreal,hypnat,hcomplex}_add_mult_distrib
  right_distrib <-- hypnat_add_mult_distrib2
  zero_neq_one <-- {hypreal,hypnat,hcomplex}_zero_not_eq_one
  right_inverse <-- hypreal_mult_inverse
  left_inverse <-- hypreal_mult_inverse_left, hcomplex_mult_inv_left
  order_refl <-- {hypreal,hypnat}_le_refl
  order_trans <-- {hypreal,hypnat}_le_trans
  order_antisym <-- {hypreal,hypnat}_le_anti_sym
  order_less_le <-- {hypreal,hypnat}_less_le
  linorder_linear <-- {hypreal,hypnat}_le_linear
  add_left_mono <-- {hypreal,hypnat}_add_left_mono
  mult_strict_left_mono <-- {hypreal,hypnat}_mult_less_mono2
  add_nonneg_nonneg <-- hypreal_le_add_order

* Hyperreal: Separate theorems having to do with type-specific
versions of constants have been merged into theorems that apply to the
new polymorphic constants (INCOMPATIBILITY):

  STAR_UNIV_set <-- {STAR_real,NatStar_real,STARC_complex}_set
  STAR_empty_set <-- {STAR,NatStar,STARC}_empty_set
  STAR_Un <-- {STAR,NatStar,STARC}_Un
  STAR_Int <-- {STAR,NatStar,STARC}_Int
  STAR_Compl <-- {STAR,NatStar,STARC}_Compl
  STAR_subset <-- {STAR,NatStar,STARC}_subset
  STAR_mem <-- {STAR,NatStar,STARC}_mem
  STAR_mem_Compl <-- {STAR,STARC}_mem_Compl
  STAR_diff <-- {STAR,STARC}_diff
  STAR_star_of_image_subset <-- {STAR_hypreal_of_real,
    NatStar_hypreal_of_real, STARC_hcomplex_of_complex}_image_subset
  starset_n_Un <-- starset{Nat,C}_n_Un
  starset_n_Int <-- starset{Nat,C}_n_Int
  starset_n_Compl <-- starset{Nat,C}_n_Compl
  starset_n_diff <-- starset{Nat,C}_n_diff
  InternalSets_Un <-- Internal{Nat,C}Sets_Un
  InternalSets_Int <-- Internal{Nat,C}Sets_Int
  InternalSets_Compl <-- Internal{Nat,C}Sets_Compl
  InternalSets_diff <-- Internal{Nat,C}Sets_diff
  InternalSets_UNIV_diff <-- Internal{Nat,C}Sets_UNIV_diff
  InternalSets_starset_n <-- Internal{Nat,C}Sets_starset{Nat,C}_n
  starset_starset_n_eq <-- starset{Nat,C}_starset{Nat,C}_n_eq
  starset_n_starset <-- starset{Nat,C}_n_starset{Nat,C}
  starfun_n_starfun <-- starfun{Nat,Nat2,C,RC,CR}_n_starfun{Nat,Nat2,C,RC,CR}
  starfun <-- starfun{Nat,Nat2,C,RC,CR}
  starfun_mult <-- starfun{Nat,Nat2,C,RC,CR}_mult
  starfun_add <-- starfun{Nat,Nat2,C,RC,CR}_add
  starfun_minus <-- starfun{Nat,Nat2,C,RC,CR}_minus
  starfun_diff <-- starfun{C,RC,CR}_diff
  starfun_o <-- starfun{NatNat2,Nat2,_stafunNat,C,C_starfunRC,_starfunCR}_o
  starfun_o2 <-- starfun{NatNat2,_stafunNat,C,C_starfunRC,_starfunCR}_o2
  starfun_const_fun <-- starfun{Nat,Nat2,C,RC,CR}_const_fun
  starfun_inverse <-- starfun{Nat,C,RC,CR}_inverse
  starfun_eq <-- starfun{Nat,Nat2,C,RC,CR}_eq
  starfun_eq_iff <-- starfun{C,RC,CR}_eq_iff
  starfun_Id <-- starfunC_Id
  starfun_approx <-- starfun{Nat,CR}_approx
  starfun_capprox <-- starfun{C,RC}_capprox
  starfun_abs <-- starfunNat_rabs
  starfun_lambda_cancel <-- starfun{C,CR,RC}_lambda_cancel
  starfun_lambda_cancel2 <-- starfun{C,CR,RC}_lambda_cancel2
  starfun_mult_HFinite_approx <-- starfunCR_mult_HFinite_capprox
  starfun_mult_CFinite_capprox <-- starfun{C,RC}_mult_CFinite_capprox
  starfun_add_capprox <-- starfun{C,RC}_add_capprox
  starfun_add_approx <-- starfunCR_add_approx
  starfun_inverse_inverse <-- starfunC_inverse_inverse
  starfun_divide <-- starfun{C,CR,RC}_divide
  starfun_n <-- starfun{Nat,C}_n
  starfun_n_mult <-- starfun{Nat,C}_n_mult
  starfun_n_add <-- starfun{Nat,C}_n_add
  starfun_n_add_minus <-- starfunNat_n_add_minus
  starfun_n_const_fun <-- starfun{Nat,C}_n_const_fun
  starfun_n_minus <-- starfun{Nat,C}_n_minus
  starfun_n_eq <-- starfun{Nat,C}_n_eq
  star_n_add <-- {hypreal,hypnat,hcomplex}_add
  star_n_minus <-- {hypreal,hcomplex}_minus
  star_n_diff <-- {hypreal,hcomplex}_diff
  star_n_mult <-- {hypreal,hcomplex}_mult
  star_n_inverse <-- {hypreal,hcomplex}_inverse
  star_n_le <-- {hypreal,hypnat}_le
  star_n_less <-- {hypreal,hypnat}_less
  star_n_zero_num <-- {hypreal,hypnat,hcomplex}_zero_num
  star_n_one_num <-- {hypreal,hypnat,hcomplex}_one_num
  star_n_abs <-- hypreal_hrabs
  star_n_divide <-- hcomplex_divide
  star_of_add <-- {hypreal_of_real,hypnat_of_nat,hcomplex_of_complex}_add
  star_of_minus <-- {hypreal_of_real,hcomplex_of_complex}_minus
  star_of_diff <-- hypreal_of_real_diff
  star_of_mult <-- {hypreal_of_real,hypnat_of_nat,hcomplex_of_complex}_mult
  star_of_one <-- {hypreal_of_real,hcomplex_of_complex}_one
  star_of_zero <-- {hypreal_of_real,hypnat_of_nat,hcomplex_of_complex}_zero
  star_of_le <-- {hypreal_of_real,hypnat_of_nat}_le_iff
  star_of_less <-- {hypreal_of_real,hypnat_of_nat}_less_iff
  star_of_eq <-- {hypreal_of_real,hypnat_of_nat,hcomplex_of_complex}_eq_iff
  star_of_inverse <-- {hypreal_of_real,hcomplex_of_complex}_inverse
  star_of_divide <-- {hypreal_of_real,hcomplex_of_complex}_divide
  star_of_of_nat <-- {hypreal_of_real,hcomplex_of_complex}_of_nat
  star_of_of_int <-- {hypreal_of_real,hcomplex_of_complex}_of_int
  star_of_number_of <-- {hypreal,hcomplex}_number_of
  star_of_number_less <-- number_of_less_hypreal_of_real_iff
  star_of_number_le <-- number_of_le_hypreal_of_real_iff
  star_of_eq_number <-- hypreal_of_real_eq_number_of_iff
  star_of_less_number <-- hypreal_of_real_less_number_of_iff
  star_of_le_number <-- hypreal_of_real_le_number_of_iff
  star_of_power <-- hypreal_of_real_power
  star_of_eq_0 <-- hcomplex_of_complex_zero_iff

* Hyperreal: new method "transfer" that implements the transfer
principle of nonstandard analysis.  With a subgoal that mentions
nonstandard types like "'a star", the command "apply transfer"
replaces it with an equivalent one that mentions only standard types.
To be successful, all free variables must have standard types;
nonstandard variables must have explicit universal quantifiers.

* Hyperreal: A theory of Taylor series.


*** HOLCF ***

* Discontinued special version of 'constdefs' (which used to support
continuous functions) in favor of the general Pure one with full
type-inference.
* New simplification procedure for solving continuity conditions; it
is much faster on terms with many nested lambda abstractions (cubic
instead of exponential time).

* New syntax for domain package: selector names are now optional.
Parentheses should be omitted unless the argument is lazy, for example:

  domain 'a stream = cons "'a" (lazy "'a stream")

* New command 'fixrec' for defining recursive functions with pattern
matching; defining multiple functions with mutual recursion is also
supported. Patterns may include the constants cpair, spair, up, sinl,
sinr, or any data constructor defined by the domain package. The given
equations are proven as rewrite rules. See HOLCF/ex/Fixrec_ex.thy for
syntax and examples.

* New commands 'cpodef' and 'pcpodef' for defining predicate subtypes
of cpo and pcpo types. Syntax is exactly like the 'typedef' command,
but the proof obligation additionally includes an admissibility
requirement. The packages generate instances of class cpo or pcpo,
with continuity and strictness theorems for Rep and Abs.

* HOLCF: Many theorems have been renamed according to a more standard
naming scheme (INCOMPATIBILITY):

  foo_inject:      "foo$x = foo$y ==> x = y"
  foo_eq:          "(foo$x = foo$y) = (x = y)"
  foo_less:        "(foo$x << foo$y) = (x << y)"
  foo_strict:      "foo$UU = UU"
  foo_defined:     "... ==> foo$x ~= UU"
  foo_defined_iff: "(foo$x = UU) = (x = UU)"

*** ZF ***

* ZF/ex: theories Group and Ring provide examples in abstract algebra,
including the First Isomorphism Theorem (on quotienting by the kernel
of a homomorphism).

* ZF/Simplifier: installed a second copy of the type solver that
actually makes use of TC rules declared to Isar proof contexts (or
locales); the old version is still required for ML proof scripts.

*** Cube ***

* Converted to Isar theory format; use locales instead of axiomatic
theories.
*** ML ***

* Pure/library.ML: added ##>, ##>>, #>> -- higher-order counterparts
for ||>, ||>>, |>>.

* Pure/library.ML no longer defines its own option datatype, but uses
that of the SML basis, which has constructors NONE and SOME instead of
None and Some, as well as exception Option.Option instead of OPTION.
The functions the, if_none, is_some, is_none have been adapted
accordingly, while Option.map replaces apsome.

* Pure/library.ML: the exception LIST has been given up in favour of
the standard exceptions Empty and Subscript, as well as
Library.UnequalLengths. Functions like Library.hd and Library.tl are
superseded by the standard hd and tl functions etc. A number of basic
list functions are no longer exported to the ML toplevel, as they are
variants of predefined functions. The following suggests how one can
translate existing code:

  rev_append xs ys = List.revAppend (xs, ys)
  nth_elem (i, xs) = List.nth (xs, i)
  last_elem xs = List.last xs
  flat xss = List.concat xss
  seq fs = List.app fs
  partition P xs = List.partition P xs
  mapfilter f xs = List.mapPartial f xs

* Pure/library.ML: several combinators for linear functional
transformations, notably reverse application and composition:

  x |> f
  f #> g
  (x, y) |-> f
  f #-> g

* Pure/library.ML: introduced/changed precedence of infix operators:

  infix 1 |> |-> ||> ||>> |>> |>>> #> #->;
  infix 2 ?;
  infix 3 o oo ooo oooo;
  infix 4 ~~ upto downto;

Maybe INCOMPATIBILITY when any of those is used in conjunction with
other infix operators.

* Pure/library.ML: natural list combinators fold, fold_rev, and
fold_map support linear functional transformations and nesting. For
example:

  fold f [x1, ..., xN] y = y |> f x1 |> ... |> f xN
  (fold o fold) f [xs1, ..., xsN] y = y |> fold f xs1 |> ... |> fold f xsN
  fold f [x1, ..., xN] = f x1 #> ... #> f xN
  (fold o fold) f [xs1, ..., xsN] = fold f xs1 #> ...
#> fold f xsN

* Pure/library.ML: the following selectors on type 'a option are
available:

  the:         'a option -> 'a  (*partial*)
  these:       'a option -> 'a  where 'a = 'b list
  the_default: 'a -> 'a option -> 'a
  the_list:    'a option -> 'a list

* Pure/General: structure AList (cf. Pure/General/alist.ML) provides
basic operations for association lists, following natural argument
order; moreover the explicit equality predicate passed here avoids
potentially expensive polymorphic runtime equality checks. The old
functions may be expressed as follows:

  assoc = uncurry (AList.lookup (op =))
  assocs = these oo AList.lookup (op =)
  overwrite = uncurry (AList.update (op =)) o swap

* Pure/General: structure AList (cf. Pure/General/alist.ML) provides

  val make: ('a -> 'b) -> 'a list -> ('a * 'b) list
  val find: ('a * 'b -> bool) -> ('c * 'b) list -> 'a -> 'c list

replacing make_keylist and keyfilter (occasionally used). Naive
rewrites:

  make_keylist = AList.make
  keyfilter = AList.find (op =)

* eq_fst and eq_snd now take an explicit equality parameter, thus
avoiding eqtypes. Naive rewrites:

  eq_fst = eq_fst (op =)
  eq_snd = eq_snd (op =)

* Removed deprecated apl and apr (rarely used). Naive rewrites:

  apl (n, op) =>>= curry op n
  apr (op, m) =>>= fn n => op (n, m)

* Pure/General: structure OrdList (cf. Pure/General/ord_list.ML)
provides a reasonably efficient light-weight implementation of sets as
lists.

* Pure/General: generic tables (cf. Pure/General/table.ML) provide a
few new operations; existing lookup and update are now curried to
follow natural argument order (for use with fold etc.);
INCOMPATIBILITY, use (uncurry Symtab.lookup) etc. as last resort.

* Pure/General: output via the Isabelle channels of
writeln/warning/error etc. is now passed through Output.output, with a
hook for arbitrary transformations depending on the print_mode
(cf. Output.add_mode -- the first active mode that provides an output
function wins).
Already formatted output may be embedded into further text via
Output.raw; the result of Pretty.string_of/str_of and derived
functions (string_of_term/cterm/thm etc.) is already marked raw to
accommodate easy composition of diagnostic messages etc. Programmers
rarely need to care about Output.output or Output.raw at all, with
some notable exceptions: Output.output is required when bypassing the
standard channels (writeln etc.), or in token translations to produce
properly formatted results; Output.raw is required when capturing
already output material that will eventually be presented to the user
a second time. For the default print mode, both Output.output and
Output.raw have no effect.

* Pure/General: Output.time_accumulator NAME creates an operator
('a -> 'b) -> 'a -> 'b to measure runtime and count invocations; the
cumulative results are displayed at the end of a batch session.

* Pure/General: File.sysify_path and File.quote_sysify_path have been
replaced by File.platform_path and File.shell_path (with appropriate
hooks). This provides a clean interface for unusual systems where the
internal and external process views of file names differ.

* Pure: more efficient orders for basic syntactic entities: added
fast_string_ord, fast_indexname_ord, fast_term_ord; changed sort_ord
and typ_ord to use fast_string_ord and fast_indexname_ord (term_ord is
NOT affected); structures Symtab, Vartab, Typtab, Termtab use the fast
orders now -- potential INCOMPATIBILITY for code that depends on a
particular order for Symtab.keys, Symtab.dest, etc. (consider using
Library.sort_strings on the result).

* Pure/term.ML: combinators fold_atyps, fold_aterms, fold_term_types,
fold_types traverse types/terms from left to right, observing natural
argument order. Supersedes previous foldl_XXX versions; add_frees,
add_vars etc. have been adapted as well: INCOMPATIBILITY.

* Pure: name spaces have been refined, with significant changes of the
internal interfaces -- INCOMPATIBILITY.
Renamed cond_extern(_table) to extern(_table). The plain name entry
path is superseded by a general 'naming' context, which also includes
the 'policy' to produce a fully qualified name and external accesses
of a fully qualified name; NameSpace.extend is superseded by the
context-dependent Sign.declare_name. Several theory and proof context
operations modify the naming context. Especially note
Theory.restore_naming and ProofContext.restore_naming to get back to a
sane state; note that Theory.add_path is no longer sufficient to
recover from Theory.absolute_path in particular.

* Pure: new flags short_names (default false) and unique_names
(default true) for controlling output of qualified names. If
short_names is set, names are printed unqualified. If unique_names is
reset, the name prefix is reduced to the minimum required to achieve
the original result when interning again, even if there is an overlap
with earlier declarations.

* Pure/TheoryDataFun: change of the argument structure; 'prep_ext' is
now 'extend', and 'merge' gets an additional Pretty.pp argument
(useful for printing error messages). INCOMPATIBILITY.

* Pure: major reorganization of the theory context. Type Sign.sg and
Theory.theory are now identified, referring to the universal
Context.theory (see Pure/context.ML). Actual signature and theory
content is managed as theory data. The old code and interfaces were
spread over many files and structures; the new arrangement introduces
considerable INCOMPATIBILITY to gain more clarity:

  Context -- theory management operations (name, identity, inclusion,
    parents, ancestors, merge, etc.), plus generic theory data;

  Sign -- logical signature and syntax operations (declaring consts,
    types, etc.), plus certify/read for common entities;

  Theory -- logical theory operations (stating axioms, definitions,
    oracles), plus a copy of logical signature operations (consts,
    types, etc.); also a few basic management operations (Theory.copy,
    Theory.merge, etc.)
The most basic sign_of operations (Theory.sign_of, Thm.sign_of_thm
etc.) as well as the sign field in Thm.rep_thm etc. have been retained
for convenience -- they merely return the theory.

* Pure: type Type.tsig is superseded by theory in most interfaces.

* Pure: the Isar proof context type is already defined early in Pure
as Context.proof (note that ProofContext.context and Proof.context are
aliases, where the latter is the preferred name). This enables other
Isabelle components to refer to that type even before Isar is present.

* Pure/sign/theory: discontinued named name spaces (i.e. classK,
typeK, constK, axiomK, oracleK), but provides explicit operations for
each of these kinds. For example, Sign.intern typeK is now
Sign.intern_type, Theory.hide_space Sign.typeK is now
Theory.hide_types. Also note that former
Theory.hide_classes/types/consts are now
Theory.hide_classes_i/types_i/consts_i, while the non '_i' versions
internalize their arguments! INCOMPATIBILITY.

* Pure: the get_thm interface (of PureThy and ProofContext) expects
datatype thmref (with constructors Name and NameSelection) instead of
a plain string -- INCOMPATIBILITY.

* Pure: cases produced by proof methods specify options, where NONE
means to remove case bindings -- INCOMPATIBILITY in
(RAW_)METHOD_CASES.

* Pure: the following operations retrieve axioms or theorems from a
theory node or theory hierarchy, respectively:

  Theory.axioms_of: theory -> (string * term) list
  Theory.all_axioms_of: theory -> (string * term) list
  PureThy.thms_of: theory -> (string * thm) list
  PureThy.all_thms_of: theory -> (string * thm) list

* Pure: print_tac now outputs the goal through the trace channel.

* Isar toplevel: improved diagnostics, mostly for Poly/ML only.
Reference Toplevel.debug (default false) controls detailed printing
and tracing of low-level exceptions; Toplevel.profiling (default 0)
controls execution profiling -- set to 1 for time and 2 for space
(both increase the runtime).
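The linear-transformation style behind the Pure/library.ML entries
above can be illustrated by a small self-contained Standard ML sketch.
These are simplified stand-in definitions for illustration only, not
the actual Isabelle sources:

  (* reverse application: x |> f applies f to x *)
  infix 1 |>;
  fun x |> f = f x;

  (* forward composition: (f #> g) x = g (f x) *)
  infix 1 #>;
  fun (f #> g) x = x |> f |> g;

  (* natural list combinator fold, following natural argument order *)
  fun fold _ [] y = y
    | fold f (x :: xs) y = fold f xs (f x y);

  (* fold f [x1, x2, x3] y = y |> f x1 |> f x2 |> f x3 *)
  val six = fold (fn x => fn acc => acc + x) [1, 2, 3] 0;  (* 6 *)

  (* (fold o fold) iterates over a list of lists *)
  val ten = (fold o fold) (fn x => fn acc => acc + x) [[1, 2], [3, 4]] 0;  (* 10 *)

Note how the accumulator is always the last argument, so partial
applications compose directly with #> and fit the nesting pattern
(fold o fold) described above.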
* Isar session: The initial use of ROOT.ML is now always timed,
i.e. the log will show the actual process times, in contrast to the
elapsed wall-clock time that the outer shell wrapper produces.

* Simplifier: improved handling of bound variables (nameless
representation, avoids allocating new strings). Simprocs that invoke
the Simplifier recursively should use Simplifier.inherit_bounds to
avoid local name clashes. Failure to do so produces warnings
"Simplifier: renamed bound variable ..."; set Simplifier.debug_bounds
for further details.

* ML functions legacy_bindings and use_legacy_bindings produce ML fact
bindings for all theorems stored within a given theory; this may help
in porting non-Isar theories to Isar ones, while keeping ML proof
scripts for the time being.

* ML operator HTML.with_charset specifies the charset being used for
generated HTML files. For example:

  HTML.with_charset "utf-8" use_thy "Hebrew";
  HTML.with_charset "utf-8" use_thy "Chinese";

*** System ***

* Allow symlinks to all proper Isabelle executables (Isabelle,
isabelle, isatool etc.).

* ISABELLE_DOC_FORMAT setting specifies the preferred document format
(for isatool doc, isatool mkdir, display_drafts etc.).

* isatool usedir: option -f allows specification of the ML file to be
used by Isabelle; the default is ROOT.ML.

* New isatool version outputs the version identifier of the Isabelle
distribution being used.

* HOL: new isatool dimacs2hol converts files in DIMACS CNF format
(containing Boolean satisfiability problems) into Isabelle/HOL
theories.


New in Isabelle2004 (April 2004)
--------------------------------

*** General ***

* Provers/order.ML: new efficient reasoner for partial and linear
orders. Replaces linorder.ML.

* Pure: Greek letters (except small lambda, \), as well as Gothic
(\...\\...\), calligraphic (\...\), and Euler (\...\), are now
considered normal letters, and can therefore be used anywhere an ASCII
letter (a...zA...Z) could be used until now.
COMPATIBILITY: This obviously changes the parsing of some terms,
especially where a symbol has been used as a binder, say '\x. ...',
which is now a type error since \x will be parsed as an identifier.
Fix it by inserting a space around former symbols. Call 'isatool
fixgreek' to try to fix parsing errors in existing theory and ML
files.

* Pure: Macintosh and Windows line-breaks are now allowed in theory
files.

* Pure: single-letter sub/superscripts (\<^isub> and \<^isup>) are now
allowed in identifiers. Similar to Greek letters, \<^isub> is now
considered a normal (but invisible) letter. For multiple-letter
subscripts repeat \<^isub> like this: x\<^isub>1\<^isub>2.

* Pure: There are now sub-/superscripts that can span more than one
character. Text between \<^bsub> and \<^esub> is set in subscript in
ProofGeneral and LaTeX, text between \<^bsup> and \<^esup> in
superscript. The new control characters are not identifier parts.

* Pure: Control-symbols of the form \<^raw:...> will literally print
the content of "..." to the LaTeX file instead of \isacntrl... . The
"..." may consist of any printable characters excluding the end
bracket >.

* Pure: Using the new Isar command "finalconsts" (or the ML functions
Theory.add_finals or Theory.add_finals_i) it is now possible to
declare constants "final", which prevents their being given a
definition later. It is useful for constants whose behaviour is fixed
axiomatically rather than definitionally, such as the meta-logic
connectives.

* Pure: 'instance' now handles general arities with general sorts
(i.e. intersections of classes).

* Presentation: generated HTML now uses a CSS style sheet to make
layout (somewhat) independent of content. It is copied from
lib/html/isabelle.css. It can be changed to alter the colors/layout of
generated pages.

*** Isar ***

* Tactic emulation methods rule_tac, erule_tac, drule_tac, frule_tac,
cut_tac, subgoal_tac and thin_tac:

  - Now understand static (Isar) contexts.
As a consequence, users of Isar locales are no longer forced to write
Isar proof scripts. For details see the Isar Reference Manual,
paragraph 4.3.2: Further tactic emulations.

  - INCOMPATIBILITY: names of variables to be instantiated may no
    longer be enclosed in quotes. Instead, precede the variable name
    with `?'. This is consistent with the instantiation attribute
    "where".

* Attributes "where" and "of":

  - Now take type variables of the instantiated theorem into account
    when reading the instantiation string. This fixes a bug that
    caused instantiated theorems to have overly specific types in some
    circumstances.

  - "where" permits explicit instantiations of type variables.

* Calculation commands "moreover" and "also" no longer interfere with
current facts ("this"), admitting arbitrary combinations with "then"
and derived forms.

* Locales:

  - Goal statements involving the context element "includes" no
    longer generate theorems with internal delta predicates (those
    ending on "_axioms") in the premise. Resolve the particular
    premise with .intro to obtain the old form.

  - Fixed a bug in type inference ("unify_frozen") that prevented
    mixing target specifications with "includes" elements in a goal
    statement.

  - Rule sets .intro and .axioms are no longer declared as [intro?]
    and [elim?] (respectively) by default.

  - Experimental command for instantiation of locales in proof
    contexts: instantiate
: "\i n. (i::int) mod 2 ^ n = 0 \ 0 < i mod 2 ^ n" by (metis le_less take_bit_eq_mod take_bit_nonnegative) have less_power: "\n i p. (i::int) mod numeral p ^ n < numeral p ^ n" by simp show \a mod b div b = 0\ for a b :: \'a word\ apply transfer apply (simp add: take_bit_eq_mod mod_eq_0_iff_dvd dvd_def) by (metis (no_types, opaque_lifting) "\
" Euclidean_Division.pos_mod_bound Euclidean_Division.pos_mod_sign le_less_trans mult_eq_0_iff take_bit_eq_mod take_bit_nonnegative zdiv_eq_0_iff zmod_le_nonneg_dividend) show \(1 + a) div 2 = a div 2\ if \even a\ for a :: \'a word\ using that by transfer (auto dest: le_Suc_ex simp add: take_bit_Suc elim!: evenE) show \(2 :: 'a word) ^ m div 2 ^ n = of_bool ((2 :: 'a word) ^ m \ 0 \ n \ m) * 2 ^ (m - n)\ for m n :: nat by transfer (simp, simp add: exp_div_exp_eq) show "a div 2 ^ m div 2 ^ n = a div 2 ^ (m + n)" for a :: "'a word" and m n :: nat apply transfer apply (auto simp add: not_less take_bit_drop_bit ac_simps simp flip: drop_bit_eq_div) apply (simp add: drop_bit_take_bit) done show "a mod 2 ^ m mod 2 ^ n = a mod 2 ^ min m n" for a :: "'a word" and m n :: nat by transfer (auto simp flip: take_bit_eq_mod simp add: ac_simps) show \a * 2 ^ m mod 2 ^ n = a mod 2 ^ (n - m) * 2 ^ m\ if \m \ n\ for a :: "'a word" and m n :: nat using that apply transfer apply (auto simp flip: take_bit_eq_mod) apply (auto simp flip: push_bit_eq_mult simp add: push_bit_take_bit split: split_min_lin) done show \a div 2 ^ n mod 2 ^ m = a mod (2 ^ (n + m)) div 2 ^ n\ for a :: "'a word" and m n :: nat by transfer (auto simp add: not_less take_bit_drop_bit ac_simps simp flip: take_bit_eq_mod drop_bit_eq_div split: split_min_lin) show \even ((2 ^ m - 1) div (2::'a word) ^ n) \ 2 ^ n = (0::'a word) \ m \ n\ for m n :: nat by transfer (auto simp add: take_bit_of_mask even_mask_div_iff) show \even (a * 2 ^ m div 2 ^ n) \ n < m \ (2::'a word) ^ n = 0 \ m \ n \ even (a div 2 ^ (n - m))\ for a :: \'a word\ and m n :: nat proof transfer show \even (take_bit LENGTH('a) (k * 2 ^ m) div take_bit LENGTH('a) (2 ^ n)) \ n < m \ take_bit LENGTH('a) ((2::int) ^ n) = take_bit LENGTH('a) 0 \ (m \ n \ even (take_bit LENGTH('a) k div take_bit LENGTH('a) (2 ^ (n - m))))\ for m n :: nat and k l :: int by (auto simp flip: take_bit_eq_mod drop_bit_eq_div push_bit_eq_mult simp add: div_push_bit_of_1_eq_drop_bit 
drop_bit_take_bit drop_bit_push_bit_int [of n m]) qed qed end lemma bit_word_eqI: \a = b\ if \\n. n < LENGTH('a) \ bit a n \ bit b n\ for a b :: \'a::len word\ using that by transfer (auto simp add: nat_less_le bit_eq_iff bit_take_bit_iff) lemma bit_imp_le_length: \n < LENGTH('a)\ if \bit w n\ for w :: \'a::len word\ using that by transfer simp lemma not_bit_length [simp]: \\ bit w LENGTH('a)\ for w :: \'a::len word\ by transfer simp lemma finite_bit_word [simp]: \finite {n. bit w n}\ for w :: \'a::len word\ proof - have \{n. bit w n} \ {0..LENGTH('a)}\ by (auto dest: bit_imp_le_length) moreover have \finite {0..LENGTH('a)}\ by simp ultimately show ?thesis by (rule finite_subset) qed lemma bit_numeral_word_iff [simp]: \bit (numeral w :: 'a::len word) n \ n < LENGTH('a) \ bit (numeral w :: int) n\ by transfer simp lemma bit_neg_numeral_word_iff [simp]: \bit (- numeral w :: 'a::len word) n \ n < LENGTH('a) \ bit (- numeral w :: int) n\ by transfer simp instantiation word :: (len) semiring_bit_shifts begin lift_definition push_bit_word :: \nat \ 'a word \ 'a word\ is push_bit proof - show \take_bit LENGTH('a) (push_bit n k) = take_bit LENGTH('a) (push_bit n l)\ if \take_bit LENGTH('a) k = take_bit LENGTH('a) l\ for k l :: int and n :: nat proof - from that have \take_bit (LENGTH('a) - n) (take_bit LENGTH('a) k) = take_bit (LENGTH('a) - n) (take_bit LENGTH('a) l)\ by simp moreover have \min (LENGTH('a) - n) LENGTH('a) = LENGTH('a) - n\ by simp ultimately show ?thesis by (simp add: take_bit_push_bit) qed qed lift_definition drop_bit_word :: \nat \ 'a word \ 'a word\ is \\n. drop_bit n \ take_bit LENGTH('a)\ by (simp add: take_bit_eq_mod) lift_definition take_bit_word :: \nat \ 'a word \ 'a word\ is \\n. 
take_bit (min LENGTH('a) n)\ by (simp add: ac_simps) (simp only: flip: take_bit_take_bit) instance proof show \push_bit n a = a * 2 ^ n\ for n :: nat and a :: \'a word\ by transfer (simp add: push_bit_eq_mult) show \drop_bit n a = a div 2 ^ n\ for n :: nat and a :: \'a word\ by transfer (simp flip: drop_bit_eq_div add: drop_bit_take_bit) show \take_bit n a = a mod 2 ^ n\ for n :: nat and a :: \'a word\ by transfer (auto simp flip: take_bit_eq_mod) qed end lemma [code]: \push_bit n w = w * 2 ^ n\ for w :: \'a::len word\ by (fact push_bit_eq_mult) lemma [code]: \Word.the_int (drop_bit n w) = drop_bit n (Word.the_int w)\ by transfer (simp add: drop_bit_take_bit min_def le_less less_diff_conv) lemma [code]: \Word.the_int (take_bit n w) = (if n < LENGTH('a::len) then take_bit n (Word.the_int w) else Word.the_int w)\ for w :: \'a::len word\ by transfer (simp add: not_le not_less ac_simps min_absorb2) instantiation word :: (len) ring_bit_operations begin lift_definition not_word :: \'a word \ 'a word\ is not by (simp add: take_bit_not_iff) lift_definition and_word :: \'a word \ 'a word \ 'a word\ is \and\ by simp lift_definition or_word :: \'a word \ 'a word \ 'a word\ is or by simp lift_definition xor_word :: \'a word \ 'a word \ 'a word\ is xor by simp lift_definition mask_word :: \nat \ 'a word\ is mask . 
lift_definition set_bit_word :: \nat \ 'a word \ 'a word\ is set_bit by (simp add: set_bit_def) lift_definition unset_bit_word :: \nat \ 'a word \ 'a word\ is unset_bit by (simp add: unset_bit_def) lift_definition flip_bit_word :: \nat \ 'a word \ 'a word\ is flip_bit by (simp add: flip_bit_def) instance by (standard; transfer) (auto simp add: minus_eq_not_minus_1 mask_eq_exp_minus_1 bit_simps set_bit_def flip_bit_def) end lemma [code_abbrev]: \push_bit n 1 = (2 :: 'a::len word) ^ n\ by (fact push_bit_of_1) lemma [code]: \NOT w = Word.of_int (NOT (Word.the_int w))\ for w :: \'a::len word\ by transfer (simp add: take_bit_not_take_bit) context includes bit_operations_syntax begin lemma [code]: \Word.the_int (v AND w) = Word.the_int v AND Word.the_int w\ by transfer simp lemma [code]: \Word.the_int (v OR w) = Word.the_int v OR Word.the_int w\ by transfer simp lemma [code]: \Word.the_int (v XOR w) = Word.the_int v XOR Word.the_int w\ by transfer simp lemma [code]: \Word.the_int (mask n :: 'a::len word) = mask (min LENGTH('a) n)\ by transfer simp lemma [code]: \set_bit n w = w OR push_bit n 1\ for w :: \'a::len word\ by (fact set_bit_eq_or) lemma [code]: \unset_bit n w = w AND NOT (push_bit n 1)\ for w :: \'a::len word\ by (fact unset_bit_eq_and_not) lemma [code]: \flip_bit n w = w XOR push_bit n 1\ for w :: \'a::len word\ by (fact flip_bit_eq_xor) context includes lifting_syntax begin lemma set_bit_word_transfer [transfer_rule]: \((=) ===> pcr_word ===> pcr_word) set_bit set_bit\ by (unfold set_bit_def) transfer_prover lemma unset_bit_word_transfer [transfer_rule]: \((=) ===> pcr_word ===> pcr_word) unset_bit unset_bit\ by (unfold unset_bit_def) transfer_prover lemma flip_bit_word_transfer [transfer_rule]: \((=) ===> pcr_word ===> pcr_word) flip_bit flip_bit\ by (unfold flip_bit_def) transfer_prover lemma signed_take_bit_word_transfer [transfer_rule]: \((=) ===> pcr_word ===> pcr_word) (\n k. 
signed_take_bit n (take_bit LENGTH('a::len) k)) (signed_take_bit :: nat \ 'a word \ 'a word)\ proof - let ?K = \\n (k :: int). take_bit (min LENGTH('a) n) k OR of_bool (n < LENGTH('a) \ bit k n) * NOT (mask n)\ let ?W = \\n (w :: 'a word). take_bit n w OR of_bool (bit w n) * NOT (mask n)\ have \((=) ===> pcr_word ===> pcr_word) ?K ?W\ by transfer_prover also have \?K = (\n k. signed_take_bit n (take_bit LENGTH('a::len) k))\ by (simp add: fun_eq_iff signed_take_bit_def bit_take_bit_iff ac_simps) also have \?W = signed_take_bit\ by (simp add: fun_eq_iff signed_take_bit_def) finally show ?thesis . qed end end subsection \Conversions including casts\ subsubsection \Generic unsigned conversion\ context semiring_bits begin lemma bit_unsigned_iff [bit_simps]: \bit (unsigned w) n \ 2 ^ n \ 0 \ bit w n\ for w :: \'b::len word\ by (transfer fixing: bit) (simp add: bit_of_nat_iff bit_nat_iff bit_take_bit_iff) end context semiring_bit_shifts begin lemma unsigned_push_bit_eq: \unsigned (push_bit n w) = take_bit LENGTH('b) (push_bit n (unsigned w))\ for w :: \'b::len word\ proof (rule bit_eqI) fix m assume \2 ^ m \ 0\ show \bit (unsigned (push_bit n w)) m = bit (take_bit LENGTH('b) (push_bit n (unsigned w))) m\ proof (cases \n \ m\) case True with \2 ^ m \ 0\ have \2 ^ (m - n) \ 0\ by (metis (full_types) diff_add exp_add_not_zero_imp) with True show ?thesis - by (simp add: bit_unsigned_iff bit_push_bit_iff Parity.bit_push_bit_iff bit_take_bit_iff not_le exp_eq_zero_iff ac_simps) + by (simp add: bit_unsigned_iff bit_push_bit_iff Bit_Operations.bit_push_bit_iff bit_take_bit_iff not_le ac_simps) next case False then show ?thesis - by (simp add: not_le bit_unsigned_iff bit_push_bit_iff Parity.bit_push_bit_iff bit_take_bit_iff) + by (simp add: not_le bit_unsigned_iff bit_push_bit_iff Bit_Operations.bit_push_bit_iff bit_take_bit_iff) qed qed lemma unsigned_take_bit_eq: \unsigned (take_bit n w) = take_bit n (unsigned w)\ for w :: \'b::len word\ - by (rule bit_eqI) (simp add: 
bit_unsigned_iff bit_take_bit_iff Parity.bit_take_bit_iff) + by (rule bit_eqI) (simp add: bit_unsigned_iff bit_take_bit_iff Bit_Operations.bit_take_bit_iff) end context unique_euclidean_semiring_with_bit_shifts begin lemma unsigned_drop_bit_eq: \unsigned (drop_bit n w) = drop_bit n (take_bit LENGTH('b) (unsigned w))\ for w :: \'b::len word\ - by (rule bit_eqI) (auto simp add: bit_unsigned_iff bit_take_bit_iff bit_drop_bit_eq Parity.bit_drop_bit_eq dest: bit_imp_le_length) + by (rule bit_eqI) (auto simp add: bit_unsigned_iff bit_take_bit_iff bit_drop_bit_eq Bit_Operations.bit_drop_bit_eq dest: bit_imp_le_length) end lemma ucast_drop_bit_eq: \ucast (drop_bit n w) = drop_bit n (ucast w :: 'b::len word)\ if \LENGTH('a) \ LENGTH('b)\ for w :: \'a::len word\ by (rule bit_word_eqI) (use that in \auto simp add: bit_unsigned_iff bit_drop_bit_eq dest: bit_imp_le_length\) context semiring_bit_operations begin context includes bit_operations_syntax begin lemma unsigned_and_eq: \unsigned (v AND w) = unsigned v AND unsigned w\ for v w :: \'b::len word\ by (rule bit_eqI) (simp add: bit_unsigned_iff bit_and_iff Bit_Operations.bit_and_iff) lemma unsigned_or_eq: \unsigned (v OR w) = unsigned v OR unsigned w\ for v w :: \'b::len word\ by (rule bit_eqI) (simp add: bit_unsigned_iff bit_or_iff Bit_Operations.bit_or_iff) lemma unsigned_xor_eq: \unsigned (v XOR w) = unsigned v XOR unsigned w\ for v w :: \'b::len word\ by (rule bit_eqI) (simp add: bit_unsigned_iff bit_xor_iff Bit_Operations.bit_xor_iff) end end context ring_bit_operations begin context includes bit_operations_syntax begin lemma unsigned_not_eq: \unsigned (NOT w) = take_bit LENGTH('b) (NOT (unsigned w))\ for w :: \'b::len word\ by (rule bit_eqI) - (simp add: bit_unsigned_iff bit_take_bit_iff bit_not_iff Bit_Operations.bit_not_iff exp_eq_zero_iff not_le) + (simp add: bit_unsigned_iff bit_take_bit_iff bit_not_iff Bit_Operations.bit_not_iff not_le) end end context unique_euclidean_semiring_numeral begin lemma 
unsigned_greater_eq [simp]: \0 \ unsigned w\ for w :: \'b::len word\ by (transfer fixing: less_eq) simp lemma unsigned_less [simp]: \unsigned w < 2 ^ LENGTH('b)\ for w :: \'b::len word\ by (transfer fixing: less) simp end context linordered_semidom begin lemma word_less_eq_iff_unsigned: "a \ b \ unsigned a \ unsigned b" by (transfer fixing: less_eq) (simp add: nat_le_eq_zle) lemma word_less_iff_unsigned: "a < b \ unsigned a < unsigned b" by (transfer fixing: less) (auto dest: preorder_class.le_less_trans [OF take_bit_nonnegative]) end subsubsection \Generic signed conversion\ context ring_bit_operations begin lemma bit_signed_iff [bit_simps]: \bit (signed w) n \ 2 ^ n \ 0 \ bit w (min (LENGTH('b) - Suc 0) n)\ for w :: \'b::len word\ by (transfer fixing: bit) (auto simp add: bit_of_int_iff Bit_Operations.bit_signed_take_bit_iff min_def) lemma signed_push_bit_eq: \signed (push_bit n w) = signed_take_bit (LENGTH('b) - Suc 0) (push_bit n (signed w :: 'a))\ for w :: \'b::len word\ proof (rule bit_eqI) fix m assume \2 ^ m \ 0\ define q where \q = LENGTH('b) - Suc 0\ then have *: \LENGTH('b) = Suc q\ by simp show \bit (signed (push_bit n w)) m \ bit (signed_take_bit (LENGTH('b) - Suc 0) (push_bit n (signed w :: 'a))) m\ proof (cases \q \ m\) case True moreover define r where \r = m - q\ ultimately have \m = q + r\ by simp moreover from \m = q + r\ \2 ^ m \ 0\ have \2 ^ q \ 0\ \2 ^ r \ 0\ using exp_add_not_zero_imp_left [of q r] exp_add_not_zero_imp_right [of q r] by simp_all moreover from \2 ^ q \ 0\ have \2 ^ (q - n) \ 0\ by (rule exp_not_zero_imp_exp_diff_not_zero) ultimately show ?thesis - by (auto simp add: bit_signed_iff bit_signed_take_bit_iff bit_push_bit_iff Parity.bit_push_bit_iff - min_def * exp_eq_zero_iff le_diff_conv2) + by (auto simp add: bit_signed_iff bit_signed_take_bit_iff bit_push_bit_iff Bit_Operations.bit_push_bit_iff + min_def * le_diff_conv2) next case False then show ?thesis using exp_not_zero_imp_exp_diff_not_zero [of m n] - by (auto simp add: 
bit_signed_iff bit_signed_take_bit_iff bit_push_bit_iff Parity.bit_push_bit_iff - min_def not_le not_less * le_diff_conv2 less_diff_conv2 Parity.exp_eq_0_imp_not_bit exp_eq_0_imp_not_bit - exp_eq_zero_iff) + by (auto simp add: bit_signed_iff bit_signed_take_bit_iff bit_push_bit_iff Bit_Operations.bit_push_bit_iff + min_def not_le not_less * le_diff_conv2 less_diff_conv2 Bit_Operations.exp_eq_0_imp_not_bit exp_eq_0_imp_not_bit) qed qed lemma signed_take_bit_eq: \signed (take_bit n w) = (if n < LENGTH('b) then take_bit n (signed w) else signed w)\ for w :: \'b::len word\ by (transfer fixing: take_bit; cases \LENGTH('b)\) (auto simp add: Bit_Operations.signed_take_bit_take_bit Bit_Operations.take_bit_signed_take_bit take_bit_of_int min_def less_Suc_eq) lemma signed_not_eq: \signed (NOT w) = signed_take_bit LENGTH('b) (NOT (signed w))\ for w :: \'b::len word\ proof (rule bit_eqI) fix n assume \2 ^ n \ 0\ define q where \q = LENGTH('b) - Suc 0\ then have *: \LENGTH('b) = Suc q\ by simp show \bit (signed (NOT w)) n \ bit (signed_take_bit LENGTH('b) (NOT (signed w))) n\ proof (cases \q < n\) case True moreover define r where \r = n - Suc q\ ultimately have \n = r + Suc q\ by simp moreover from \2 ^ n \ 0\ \n = r + Suc q\ have \2 ^ Suc q \ 0\ using exp_add_not_zero_imp_right by blast ultimately show ?thesis - by (simp add: * bit_signed_iff bit_not_iff bit_signed_take_bit_iff Bit_Operations.bit_not_iff min_def - exp_eq_zero_iff) + by (simp add: * bit_signed_iff bit_not_iff bit_signed_take_bit_iff Bit_Operations.bit_not_iff min_def) next case False then show ?thesis - by (auto simp add: * bit_signed_iff bit_not_iff bit_signed_take_bit_iff Bit_Operations.bit_not_iff min_def - exp_eq_zero_iff) + by (auto simp add: * bit_signed_iff bit_not_iff bit_signed_take_bit_iff Bit_Operations.bit_not_iff min_def) qed qed context includes bit_operations_syntax begin lemma signed_and_eq: \signed (v AND w) = signed v AND signed w\ for v w :: \'b::len word\ by (rule bit_eqI) (simp add: 
bit_signed_iff bit_and_iff Bit_Operations.bit_and_iff) lemma signed_or_eq: \signed (v OR w) = signed v OR signed w\ for v w :: \'b::len word\ by (rule bit_eqI) (simp add: bit_signed_iff bit_or_iff Bit_Operations.bit_or_iff) lemma signed_xor_eq: \signed (v XOR w) = signed v XOR signed w\ for v w :: \'b::len word\ by (rule bit_eqI) (simp add: bit_signed_iff bit_xor_iff Bit_Operations.bit_xor_iff) end end subsubsection \More\ lemma sint_greater_eq: \- (2 ^ (LENGTH('a) - Suc 0)) \ sint w\ for w :: \'a::len word\ proof (cases \bit w (LENGTH('a) - Suc 0)\) case True then show ?thesis by transfer (simp add: signed_take_bit_eq_if_negative minus_exp_eq_not_mask or_greater_eq ac_simps) next have *: \- (2 ^ (LENGTH('a) - Suc 0)) \ (0::int)\ by simp case False then show ?thesis by transfer (auto simp add: signed_take_bit_eq intro: order_trans *) qed lemma sint_less: \sint w < 2 ^ (LENGTH('a) - Suc 0)\ for w :: \'a::len word\ by (cases \bit w (LENGTH('a) - Suc 0)\; transfer) (simp_all add: signed_take_bit_eq signed_take_bit_def not_eq_complement mask_eq_exp_minus_1 OR_upper) lemma unat_div_distrib: \unat (v div w) = unat v div unat w\ proof transfer fix k l have \nat (take_bit LENGTH('a) k) div nat (take_bit LENGTH('a) l) \ nat (take_bit LENGTH('a) k)\ by (rule div_le_dividend) also have \nat (take_bit LENGTH('a) k) < 2 ^ LENGTH('a)\ by (simp add: nat_less_iff) finally show \(nat \ take_bit LENGTH('a)) (take_bit LENGTH('a) k div take_bit LENGTH('a) l) = (nat \ take_bit LENGTH('a)) k div (nat \ take_bit LENGTH('a)) l\ by (simp add: nat_take_bit_eq div_int_pos_iff nat_div_distrib take_bit_nat_eq_self_iff) qed lemma unat_mod_distrib: \unat (v mod w) = unat v mod unat w\ proof transfer fix k l have \nat (take_bit LENGTH('a) k) mod nat (take_bit LENGTH('a) l) \ nat (take_bit LENGTH('a) k)\ by (rule mod_less_eq_dividend) also have \nat (take_bit LENGTH('a) k) < 2 ^ LENGTH('a)\ by (simp add: nat_less_iff) finally show \(nat \ take_bit LENGTH('a)) (take_bit LENGTH('a) k mod take_bit 
LENGTH('a) l) = (nat \ take_bit LENGTH('a)) k mod (nat \ take_bit LENGTH('a)) l\ by (simp add: nat_take_bit_eq mod_int_pos_iff less_le nat_mod_distrib take_bit_nat_eq_self_iff) qed lemma uint_div_distrib: \uint (v div w) = uint v div uint w\ proof - have \int (unat (v div w)) = int (unat v div unat w)\ by (simp add: unat_div_distrib) then show ?thesis by (simp add: of_nat_div) qed lemma unat_drop_bit_eq: \unat (drop_bit n w) = drop_bit n (unat w)\ by (rule bit_eqI) (simp add: bit_unsigned_iff bit_drop_bit_eq) lemma uint_mod_distrib: \uint (v mod w) = uint v mod uint w\ proof - have \int (unat (v mod w)) = int (unat v mod unat w)\ by (simp add: unat_mod_distrib) then show ?thesis by (simp add: of_nat_mod) qed context semiring_bit_shifts begin lemma unsigned_ucast_eq: \unsigned (ucast w :: 'c::len word) = take_bit LENGTH('c) (unsigned w)\ for w :: \'b::len word\ - by (rule bit_eqI) (simp add: bit_unsigned_iff Word.bit_unsigned_iff bit_take_bit_iff exp_eq_zero_iff not_le) + by (rule bit_eqI) (simp add: bit_unsigned_iff Word.bit_unsigned_iff bit_take_bit_iff not_le) end context ring_bit_operations begin lemma signed_ucast_eq: \signed (ucast w :: 'c::len word) = signed_take_bit (LENGTH('c) - Suc 0) (unsigned w)\ for w :: \'b::len word\ proof (rule bit_eqI) fix n assume \2 ^ n \ 0\ then have \2 ^ (min (LENGTH('c) - Suc 0) n) \ 0\ by (simp add: min_def) (metis (mono_tags) diff_diff_cancel exp_not_zero_imp_exp_diff_not_zero) then show \bit (signed (ucast w :: 'c::len word)) n \ bit (signed_take_bit (LENGTH('c) - Suc 0) (unsigned w)) n\ - by (simp add: bit_signed_iff bit_unsigned_iff Word.bit_unsigned_iff bit_signed_take_bit_iff exp_eq_zero_iff not_le) + by (simp add: bit_signed_iff bit_unsigned_iff Word.bit_unsigned_iff bit_signed_take_bit_iff not_le) qed lemma signed_scast_eq: \signed (scast w :: 'c::len word) = signed_take_bit (LENGTH('c) - Suc 0) (signed w)\ for w :: \'b::len word\ proof (rule bit_eqI) fix n assume \2 ^ n \ 0\ then have \2 ^ (min (LENGTH('c) - Suc 0) n) 
\ 0\ by (simp add: min_def) (metis (mono_tags) diff_diff_cancel exp_not_zero_imp_exp_diff_not_zero) then show \bit (signed (scast w :: 'c::len word)) n \ bit (signed_take_bit (LENGTH('c) - Suc 0) (signed w)) n\ - by (simp add: bit_signed_iff bit_unsigned_iff Word.bit_signed_iff bit_signed_take_bit_iff exp_eq_zero_iff not_le) + by (simp add: bit_signed_iff bit_unsigned_iff Word.bit_signed_iff bit_signed_take_bit_iff not_le) qed end lemma uint_nonnegative: "0 \ uint w" by (fact unsigned_greater_eq) lemma uint_bounded: "uint w < 2 ^ LENGTH('a)" for w :: "'a::len word" by (fact unsigned_less) lemma uint_idem: "uint w mod 2 ^ LENGTH('a) = uint w" for w :: "'a::len word" by transfer (simp add: take_bit_eq_mod) lemma word_uint_eqI: "uint a = uint b \ a = b" by (fact unsigned_word_eqI) lemma word_uint_eq_iff: "a = b \ uint a = uint b" by (fact word_eq_iff_unsigned) lemma uint_word_of_int_eq: \uint (word_of_int k :: 'a::len word) = take_bit LENGTH('a) k\ by transfer rule lemma uint_word_of_int: "uint (word_of_int k :: 'a::len word) = k mod 2 ^ LENGTH('a)" by (simp add: uint_word_of_int_eq take_bit_eq_mod) lemma word_of_int_uint: "word_of_int (uint w) = w" by transfer simp lemma word_div_def [code]: "a div b = word_of_int (uint a div uint b)" by transfer rule lemma word_mod_def [code]: "a mod b = word_of_int (uint a mod uint b)" by transfer rule lemma split_word_all: "(\x::'a::len word. PROP P x) \ (\x. PROP P (word_of_int x))" proof fix x :: "'a word" assume "\x. PROP P (word_of_int x)" then have "PROP P (word_of_int (uint x))" . 
then show "PROP P x" by (simp only: word_of_int_uint) qed lemma sint_uint: \sint w = signed_take_bit (LENGTH('a) - Suc 0) (uint w)\ for w :: \'a::len word\ by (cases \LENGTH('a)\; transfer) (simp_all add: signed_take_bit_take_bit) lemma unat_eq_nat_uint: \unat w = nat (uint w)\ by simp lemma ucast_eq: \ucast w = word_of_int (uint w)\ by transfer simp lemma scast_eq: \scast w = word_of_int (sint w)\ by transfer simp lemma uint_0_eq: \uint 0 = 0\ by (fact unsigned_0) lemma uint_1_eq: \uint 1 = 1\ by (fact unsigned_1) lemma word_m1_wi: "- 1 = word_of_int (- 1)" by simp lemma uint_0_iff: "uint x = 0 \ x = 0" by (auto simp add: unsigned_word_eqI) lemma unat_0_iff: "unat x = 0 \ x = 0" by (auto simp add: unsigned_word_eqI) lemma unat_0: "unat 0 = 0" by (fact unsigned_0) lemma unat_gt_0: "0 < unat x \ x \ 0" by (auto simp: unat_0_iff [symmetric]) lemma ucast_0: "ucast 0 = 0" by (fact unsigned_0) lemma sint_0: "sint 0 = 0" by (fact signed_0) lemma scast_0: "scast 0 = 0" by (fact signed_0) lemma sint_n1: "sint (- 1) = - 1" by (fact signed_minus_1) lemma scast_n1: "scast (- 1) = - 1" by (fact signed_minus_1) lemma uint_1: "uint (1::'a::len word) = 1" by (fact uint_1_eq) lemma unat_1: "unat (1::'a::len word) = 1" by (fact unsigned_1) lemma ucast_1: "ucast (1::'a::len word) = 1" by (fact unsigned_1) instantiation word :: (len) size begin lift_definition size_word :: \'a word \ nat\ is \\_. LENGTH('a)\ .. instance .. end lemma word_size [code]: \size w = LENGTH('a)\ for w :: \'a::len word\ by (fact size_word.rep_eq) lemma word_size_gt_0 [iff]: "0 < size w" for w :: "'a::len word" by (simp add: word_size) lemmas lens_gt_0 = word_size_gt_0 len_gt_0 lemma lens_not_0 [iff]: \size w \ 0\ for w :: \'a::len word\ by auto lift_definition source_size :: \('a::len word \ 'b) \ nat\ is \\_. LENGTH('a)\ . lift_definition target_size :: \('a \ 'b::len word) \ nat\ is \\_. LENGTH('b)\ .. lift_definition is_up :: \('a::len word \ 'b::len word) \ bool\ is \\_. LENGTH('a) \ LENGTH('b)\ .. 
lift_definition is_down :: \('a::len word \ 'b::len word) \ bool\ is \\_. LENGTH('a) \ LENGTH('b)\ .. lemma is_up_eq: \is_up f \ source_size f \ target_size f\ for f :: \'a::len word \ 'b::len word\ by (simp add: source_size.rep_eq target_size.rep_eq is_up.rep_eq) lemma is_down_eq: \is_down f \ target_size f \ source_size f\ for f :: \'a::len word \ 'b::len word\ by (simp add: source_size.rep_eq target_size.rep_eq is_down.rep_eq) lift_definition word_int_case :: \(int \ 'b) \ 'a::len word \ 'b\ is \\f. f \ take_bit LENGTH('a)\ by simp lemma word_int_case_eq_uint [code]: \word_int_case f w = f (uint w)\ by transfer simp translations "case x of XCONST of_int y \ b" \ "CONST word_int_case (\y. b) x" "case x of (XCONST of_int :: 'a) y \ b" \ "CONST word_int_case (\y. b) x" subsection \Arithmetic operations\ text \Legacy theorems:\ lemma word_add_def [code]: "a + b = word_of_int (uint a + uint b)" by transfer (simp add: take_bit_add) lemma word_sub_wi [code]: "a - b = word_of_int (uint a - uint b)" by transfer (simp add: take_bit_diff) lemma word_mult_def [code]: "a * b = word_of_int (uint a * uint b)" by transfer (simp add: take_bit_eq_mod mod_simps) lemma word_minus_def [code]: "- a = word_of_int (- uint a)" by transfer (simp add: take_bit_minus) lemma word_0_wi: "0 = word_of_int 0" by transfer simp lemma word_1_wi: "1 = word_of_int 1" by transfer simp lift_definition word_succ :: "'a::len word \ 'a word" is "\x. x + 1" by (auto simp add: take_bit_eq_mod intro: mod_add_cong) lift_definition word_pred :: "'a::len word \ 'a word" is "\x. 
x - 1" by (auto simp add: take_bit_eq_mod intro: mod_diff_cong) lemma word_succ_alt [code]: "word_succ a = word_of_int (uint a + 1)" by transfer (simp add: take_bit_eq_mod mod_simps) lemma word_pred_alt [code]: "word_pred a = word_of_int (uint a - 1)" by transfer (simp add: take_bit_eq_mod mod_simps) lemmas word_arith_wis = word_add_def word_sub_wi word_mult_def word_minus_def word_succ_alt word_pred_alt word_0_wi word_1_wi lemma wi_homs: shows wi_hom_add: "word_of_int a + word_of_int b = word_of_int (a + b)" and wi_hom_sub: "word_of_int a - word_of_int b = word_of_int (a - b)" and wi_hom_mult: "word_of_int a * word_of_int b = word_of_int (a * b)" and wi_hom_neg: "- word_of_int a = word_of_int (- a)" and wi_hom_succ: "word_succ (word_of_int a) = word_of_int (a + 1)" and wi_hom_pred: "word_pred (word_of_int a) = word_of_int (a - 1)" by (transfer, simp)+ lemmas wi_hom_syms = wi_homs [symmetric] lemmas word_of_int_homs = wi_homs word_0_wi word_1_wi lemmas word_of_int_hom_syms = word_of_int_homs [symmetric] lemma double_eq_zero_iff: \2 * a = 0 \ a = 0 \ a = 2 ^ (LENGTH('a) - Suc 0)\ for a :: \'a::len word\ proof - define n where \n = LENGTH('a) - Suc 0\ then have *: \LENGTH('a) = Suc n\ by simp have \a = 0\ if \2 * a = 0\ and \a \ 2 ^ (LENGTH('a) - Suc 0)\ using that by transfer (auto simp add: take_bit_eq_0_iff take_bit_eq_mod *) moreover have \2 ^ LENGTH('a) = (0 :: 'a word)\ by transfer simp then have \2 * 2 ^ (LENGTH('a) - Suc 0) = (0 :: 'a word)\ by (simp add: *) ultimately show ?thesis by auto qed subsection \Ordering\ lift_definition word_sle :: \'a::len word \ 'a word \ bool\ is \\k l. signed_take_bit (LENGTH('a) - Suc 0) k \ signed_take_bit (LENGTH('a) - Suc 0) l\ by (simp flip: signed_take_bit_decr_length_iff) lift_definition word_sless :: \'a::len word \ 'a word \ bool\ is \\k l. 
signed_take_bit (LENGTH('a) - Suc 0) k < signed_take_bit (LENGTH('a) - Suc 0) l\ by (simp flip: signed_take_bit_decr_length_iff) notation word_sle ("'(\s')") and word_sle ("(_/ \s _)" [51, 51] 50) and word_sless ("'(<s')") and word_sless ("(_/ <s _)" [51, 51] 50) lemma word_sle_eq [code]: \a <=s b \ sint a \ sint b\ by transfer simp lemma [code]: \a <s b \ sint a < sint b\ by transfer simp lemma signed_ordering: \ordering word_sle word_sless\ apply (standard; transfer) using signed_take_bit_decr_length_iff by force+ lemma signed_linorder: \class.linorder word_sle word_sless\ by (standard; transfer) (auto simp add: signed_take_bit_decr_length_iff) interpretation signed: linorder word_sle word_sless by (fact signed_linorder) lemma word_sless_eq: \x <s y \ x <=s y \ x \ y\ by (fact signed.less_le) lemma word_less_alt: "a < b \ uint a < uint b" by (fact word_less_def) lemma word_zero_le [simp]: "0 \ y" for y :: "'a::len word" by (fact word_coorder.extremum) lemma word_m1_ge [simp]: "word_pred 0 \ y" (* FIXME: delete *) by transfer (simp add: take_bit_minus_one_eq_mask mask_eq_exp_minus_1) lemma word_n1_ge [simp]: "y \ -1" for y :: "'a::len word" by (fact word_order.extremum) lemmas word_not_simps [simp] = word_zero_le [THEN leD] word_m1_ge [THEN leD] word_n1_ge [THEN leD] lemma word_gt_0: "0 < y \ 0 \ y" for y :: "'a::len word" by (simp add: less_le) lemmas word_gt_0_no [simp] = word_gt_0 [of "numeral y"] for y lemma word_sless_alt: "a <s b \ sint a < sint b" by transfer simp lemma word_le_nat_alt: "a \ b \ unat a \ unat b" by transfer (simp add: nat_le_eq_zle) lemma word_less_nat_alt: "a < b \ unat a < unat b" by transfer (auto simp add: less_le [of 0]) lemmas unat_mono = word_less_nat_alt [THEN iffD1] instance word :: (len) wellorder proof fix P :: "'a word \ bool" and a assume *: "(\b. (\a. a < b \ P a) \ P b)" have "wf (measure unat)" .. moreover have "{(a, b :: ('a::len) word). a < b} \ measure unat" by (auto simp add: word_less_nat_alt) ultimately have "wf {(a, b :: ('a::len) word). 
a < b}" by (rule wf_subset) then show "P a" using * by induction blast qed lemma wi_less: "(word_of_int n < (word_of_int m :: 'a::len word)) = (n mod 2 ^ LENGTH('a) < m mod 2 ^ LENGTH('a))" by transfer (simp add: take_bit_eq_mod) lemma wi_le: "(word_of_int n \ (word_of_int m :: 'a::len word)) = (n mod 2 ^ LENGTH('a) \ m mod 2 ^ LENGTH('a))" by transfer (simp add: take_bit_eq_mod) subsection \Bit-wise operations\ context includes bit_operations_syntax begin lemma uint_take_bit_eq: \uint (take_bit n w) = take_bit n (uint w)\ by transfer (simp add: ac_simps) lemma take_bit_word_eq_self: \take_bit n w = w\ if \LENGTH('a) \ n\ for w :: \'a::len word\ using that by transfer simp lemma take_bit_length_eq [simp]: \take_bit LENGTH('a) w = w\ for w :: \'a::len word\ by (rule take_bit_word_eq_self) simp lemma bit_word_of_int_iff: \bit (word_of_int k :: 'a::len word) n \ n < LENGTH('a) \ bit k n\ by transfer rule lemma bit_uint_iff: \bit (uint w) n \ n < LENGTH('a) \ bit w n\ for w :: \'a::len word\ by transfer (simp add: bit_take_bit_iff) lemma bit_sint_iff: \bit (sint w) n \ n \ LENGTH('a) \ bit w (LENGTH('a) - 1) \ bit w n\ for w :: \'a::len word\ by transfer (auto simp add: bit_signed_take_bit_iff min_def le_less not_less) lemma bit_word_ucast_iff: \bit (ucast w :: 'b::len word) n \ n < LENGTH('a) \ n < LENGTH('b) \ bit w n\ for w :: \'a::len word\ by transfer (simp add: bit_take_bit_iff ac_simps) lemma bit_word_scast_iff: \bit (scast w :: 'b::len word) n \ n < LENGTH('b) \ (bit w n \ LENGTH('a) \ n \ bit w (LENGTH('a) - Suc 0))\ for w :: \'a::len word\ by transfer (auto simp add: bit_signed_take_bit_iff le_less min_def) lemma bit_word_iff_drop_bit_and [code]: \bit a n \ drop_bit n a AND 1 = 1\ for a :: \'a::len word\ by (simp add: bit_iff_odd_drop_bit odd_iff_mod_2_eq_one and_one_eq) lemma word_not_def: "NOT (a::'a::len word) = word_of_int (NOT (uint a))" and word_and_def: "(a::'a word) AND b = word_of_int (uint a AND uint b)" and word_or_def: "(a::'a word) OR b = 
word_of_int (uint a OR uint b)" and word_xor_def: "(a::'a word) XOR b = word_of_int (uint a XOR uint b)" by (transfer, simp add: take_bit_not_take_bit)+ definition even_word :: \'a::len word \ bool\ where [code_abbrev]: \even_word = even\ lemma even_word_iff [code]: \even_word a \ a AND 1 = 0\ by (simp add: and_one_eq even_iff_mod_2_eq_zero even_word_def) lemma map_bit_range_eq_if_take_bit_eq: \map (bit k) [0.. if \take_bit n k = take_bit n l\ for k l :: int using that proof (induction n arbitrary: k l) case 0 then show ?case by simp next case (Suc n) from Suc.prems have \take_bit n (k div 2) = take_bit n (l div 2)\ by (simp add: take_bit_Suc) then have \map (bit (k div 2)) [0.. by (rule Suc.IH) moreover have \bit (r div 2) = bit r \ Suc\ for r :: int by (simp add: fun_eq_iff bit_Suc) moreover from Suc.prems have \even k \ even l\ by (auto simp add: take_bit_Suc elim!: evenE oddE) arith+ ultimately show ?case by (simp only: map_Suc_upt upt_conv_Cons flip: list.map_comp) simp qed lemma take_bit_word_Bit0_eq [simp]: \take_bit (numeral n) (numeral (num.Bit0 m) :: 'a::len word) = 2 * take_bit (pred_numeral n) (numeral m)\ (is ?P) and take_bit_word_Bit1_eq [simp]: \take_bit (numeral n) (numeral (num.Bit1 m) :: 'a::len word) = 1 + 2 * take_bit (pred_numeral n) (numeral m)\ (is ?Q) and take_bit_word_minus_Bit0_eq [simp]: \take_bit (numeral n) (- numeral (num.Bit0 m) :: 'a::len word) = 2 * take_bit (pred_numeral n) (- numeral m)\ (is ?R) and take_bit_word_minus_Bit1_eq [simp]: \take_bit (numeral n) (- numeral (num.Bit1 m) :: 'a::len word) = 1 + 2 * take_bit (pred_numeral n) (- numeral (Num.inc m))\ (is ?S) proof - define w :: \'a::len word\ where \w = numeral m\ moreover define q :: nat where \q = pred_numeral n\ ultimately have num: \numeral m = w\ \numeral (num.Bit0 m) = 2 * w\ \numeral (num.Bit1 m) = 1 + 2 * w\ \numeral (Num.inc m) = 1 + w\ \pred_numeral n = q\ \numeral n = Suc q\ by (simp_all only: w_def q_def numeral_Bit0 [of m] numeral_Bit1 [of m] ac_simps 
numeral_inc numeral_eq_Suc flip: mult_2) have even: \take_bit (Suc q) (2 * w) = 2 * take_bit q w\ for w :: \'a::len word\ by (rule bit_word_eqI) (auto simp add: bit_take_bit_iff bit_double_iff) have odd: \take_bit (Suc q) (1 + 2 * w) = 1 + 2 * take_bit q w\ for w :: \'a::len word\ by (rule bit_eqI) (auto simp add: bit_take_bit_iff bit_double_iff even_bit_succ_iff) show ?P using even [of w] by (simp add: num) show ?Q using odd [of w] by (simp add: num) show ?R using even [of \- w\] by (simp add: num) show ?S using odd [of \- (1 + w)\] by (simp add: num) qed subsection \More shift operations\ lift_definition signed_drop_bit :: \nat \ 'a word \ 'a::len word\ is \\n. drop_bit n \ signed_take_bit (LENGTH('a) - Suc 0)\ using signed_take_bit_decr_length_iff by (simp add: take_bit_drop_bit) force lemma bit_signed_drop_bit_iff [bit_simps]: \bit (signed_drop_bit m w) n \ bit w (if LENGTH('a) - m \ n \ n < LENGTH('a) then LENGTH('a) - 1 else m + n)\ for w :: \'a::len word\ apply transfer apply (auto simp add: bit_drop_bit_eq bit_signed_take_bit_iff not_le min_def) apply (metis add.commute le_antisym less_diff_conv less_eq_decr_length_iff) apply (metis le_antisym less_eq_decr_length_iff) done lemma [code]: \Word.the_int (signed_drop_bit n w) = take_bit LENGTH('a) (drop_bit n (Word.the_signed_int w))\ for w :: \'a::len word\ by transfer simp lemma signed_drop_bit_of_0 [simp]: \signed_drop_bit n 0 = 0\ by transfer simp lemma signed_drop_bit_of_minus_1 [simp]: \signed_drop_bit n (- 1) = - 1\ by transfer simp lemma signed_drop_bit_signed_drop_bit [simp]: \signed_drop_bit m (signed_drop_bit n w) = signed_drop_bit (m + n) w\ for w :: \'a::len word\ proof (cases \LENGTH('a)\) case 0 then show ?thesis using len_not_eq_0 by blast next case (Suc n) then show ?thesis by (force simp add: bit_signed_drop_bit_iff not_le less_diff_conv ac_simps intro!: bit_word_eqI) qed lemma signed_drop_bit_0 [simp]: \signed_drop_bit 0 w = w\ by transfer (simp add: take_bit_signed_take_bit) lemma 
sint_signed_drop_bit_eq: \sint (signed_drop_bit n w) = drop_bit n (sint w)\ proof (cases \LENGTH('a) = 0 \ n=0\) case False then show ?thesis apply simp apply (rule bit_eqI) by (auto simp add: bit_sint_iff bit_drop_bit_eq bit_signed_drop_bit_iff dest: bit_imp_le_length) qed auto subsection \Rotation\ lift_definition word_rotr :: \nat \ 'a::len word \ 'a::len word\ is \\n k. concat_bit (LENGTH('a) - n mod LENGTH('a)) (drop_bit (n mod LENGTH('a)) (take_bit LENGTH('a) k)) (take_bit (n mod LENGTH('a)) k)\ subgoal for n k l by (simp add: concat_bit_def nat_le_iff less_imp_le take_bit_tightened [of \LENGTH('a)\ k l \n mod LENGTH('a::len)\]) done lift_definition word_rotl :: \nat \ 'a::len word \ 'a::len word\ is \\n k. concat_bit (n mod LENGTH('a)) (drop_bit (LENGTH('a) - n mod LENGTH('a)) (take_bit LENGTH('a) k)) (take_bit (LENGTH('a) - n mod LENGTH('a)) k)\ subgoal for n k l by (simp add: concat_bit_def nat_le_iff less_imp_le take_bit_tightened [of \LENGTH('a)\ k l \LENGTH('a) - n mod LENGTH('a::len)\]) done lift_definition word_roti :: \int \ 'a::len word \ 'a::len word\ is \\r k. 
concat_bit (LENGTH('a) - nat (r mod int LENGTH('a))) (drop_bit (nat (r mod int LENGTH('a))) (take_bit LENGTH('a) k)) (take_bit (nat (r mod int LENGTH('a))) k)\ subgoal for r k l by (simp add: concat_bit_def nat_le_iff less_imp_le take_bit_tightened [of \LENGTH('a)\ k l \nat (r mod int LENGTH('a::len))\]) done lemma word_rotl_eq_word_rotr [code]: \word_rotl n = (word_rotr (LENGTH('a) - n mod LENGTH('a)) :: 'a::len word \ 'a word)\ by (rule ext, cases \n mod LENGTH('a) = 0\; transfer) simp_all lemma word_roti_eq_word_rotr_word_rotl [code]: \word_roti i w = (if i \ 0 then word_rotr (nat i) w else word_rotl (nat (- i)) w)\ proof (cases \i \ 0\) case True moreover define n where \n = nat i\ ultimately have \i = int n\ by simp moreover have \word_roti (int n) = (word_rotr n :: _ \ 'a word)\ by (rule ext, transfer) (simp add: nat_mod_distrib) ultimately show ?thesis by simp next case False moreover define n where \n = nat (- i)\ ultimately have \i = - int n\ \n > 0\ by simp_all moreover have \word_roti (- int n) = (word_rotl n :: _ \ 'a word)\ by (rule ext, transfer) (simp add: zmod_zminus1_eq_if flip: of_nat_mod of_nat_diff) ultimately show ?thesis by simp qed lemma bit_word_rotr_iff [bit_simps]: \bit (word_rotr m w) n \ n < LENGTH('a) \ bit w ((n + m) mod LENGTH('a))\ for w :: \'a::len word\ proof transfer fix k :: int and m n :: nat define q where \q = m mod LENGTH('a)\ have \q < LENGTH('a)\ by (simp add: q_def) then have \q \ LENGTH('a)\ by simp have \m mod LENGTH('a) = q\ by (simp add: q_def) moreover have \(n + m) mod LENGTH('a) = (n + q) mod LENGTH('a)\ by (subst mod_add_right_eq [symmetric]) (simp add: \m mod LENGTH('a) = q\) moreover have \n < LENGTH('a) \ bit (concat_bit (LENGTH('a) - q) (drop_bit q (take_bit LENGTH('a) k)) (take_bit q k)) n \ n < LENGTH('a) \ bit k ((n + q) mod LENGTH('a))\ using \q < LENGTH('a)\ by (cases \q + n \ LENGTH('a)\) (auto simp add: bit_concat_bit_iff bit_drop_bit_eq bit_take_bit_iff le_mod_geq ac_simps) ultimately show \n < 
LENGTH('a) \ bit (concat_bit (LENGTH('a) - m mod LENGTH('a)) (drop_bit (m mod LENGTH('a)) (take_bit LENGTH('a) k)) (take_bit (m mod LENGTH('a)) k)) n \ n < LENGTH('a) \ (n + m) mod LENGTH('a) < LENGTH('a) \ bit k ((n + m) mod LENGTH('a))\ by simp qed lemma bit_word_rotl_iff [bit_simps]: \bit (word_rotl m w) n \ n < LENGTH('a) \ bit w ((n + (LENGTH('a) - m mod LENGTH('a))) mod LENGTH('a))\ for w :: \'a::len word\ by (simp add: word_rotl_eq_word_rotr bit_word_rotr_iff) lemma bit_word_roti_iff [bit_simps]: \bit (word_roti k w) n \ n < LENGTH('a) \ bit w (nat ((int n + k) mod int LENGTH('a)))\ for w :: \'a::len word\ proof transfer fix k l :: int and n :: nat define m where \m = nat (k mod int LENGTH('a))\ have \m < LENGTH('a)\ by (simp add: nat_less_iff m_def) then have \m \ LENGTH('a)\ by simp have \k mod int LENGTH('a) = int m\ by (simp add: nat_less_iff m_def) moreover have \(int n + k) mod int LENGTH('a) = int ((n + m) mod LENGTH('a))\ by (subst mod_add_right_eq [symmetric]) (simp add: of_nat_mod \k mod int LENGTH('a) = int m\) moreover have \n < LENGTH('a) \ bit (concat_bit (LENGTH('a) - m) (drop_bit m (take_bit LENGTH('a) l)) (take_bit m l)) n \ n < LENGTH('a) \ bit l ((n + m) mod LENGTH('a))\ using \m < LENGTH('a)\ by (cases \m + n \ LENGTH('a)\) (auto simp add: bit_concat_bit_iff bit_drop_bit_eq bit_take_bit_iff nat_less_iff not_le not_less ac_simps le_diff_conv le_mod_geq) ultimately show \n < LENGTH('a) \ bit (concat_bit (LENGTH('a) - nat (k mod int LENGTH('a))) (drop_bit (nat (k mod int LENGTH('a))) (take_bit LENGTH('a) l)) (take_bit (nat (k mod int LENGTH('a))) l)) n \ n < LENGTH('a) \ nat ((int n + k) mod int LENGTH('a)) < LENGTH('a) \ bit l (nat ((int n + k) mod int LENGTH('a)))\ by simp qed lemma uint_word_rotr_eq: \uint (word_rotr n w) = concat_bit (LENGTH('a) - n mod LENGTH('a)) (drop_bit (n mod LENGTH('a)) (uint w)) (uint (take_bit (n mod LENGTH('a)) w))\ for w :: \'a::len word\ - apply transfer - by (simp add: min.absorb2 take_bit_concat_bit_eq) + 
by transfer (simp add: take_bit_concat_bit_eq) lemma [code]: \Word.the_int (word_rotr n w) = concat_bit (LENGTH('a) - n mod LENGTH('a)) (drop_bit (n mod LENGTH('a)) (Word.the_int w)) (Word.the_int (take_bit (n mod LENGTH('a)) w))\ for w :: \'a::len word\ using uint_word_rotr_eq [of n w] by simp subsection \Split and cat operations\ lift_definition word_cat :: \'a::len word \ 'b::len word \ 'c::len word\ is \\k l. concat_bit LENGTH('b) l (take_bit LENGTH('a) k)\ by (simp add: bit_eq_iff bit_concat_bit_iff bit_take_bit_iff) lemma word_cat_eq: \(word_cat v w :: 'c::len word) = push_bit LENGTH('b) (ucast v) + ucast w\ for v :: \'a::len word\ and w :: \'b::len word\ by transfer (simp add: concat_bit_eq ac_simps) lemma word_cat_eq' [code]: \word_cat a b = word_of_int (concat_bit LENGTH('b) (uint b) (uint a))\ for a :: \'a::len word\ and b :: \'b::len word\ by transfer (simp add: concat_bit_take_bit_eq) lemma bit_word_cat_iff [bit_simps]: \bit (word_cat v w :: 'c::len word) n \ n < LENGTH('c) \ (if n < LENGTH('b) then bit w n else bit v (n - LENGTH('b)))\ for v :: \'a::len word\ and w :: \'b::len word\ by transfer (simp add: bit_concat_bit_iff bit_take_bit_iff) definition word_split :: \'a::len word \ 'b::len word \ 'c::len word\ where \word_split w = (ucast (drop_bit LENGTH('c) w) :: 'b::len word, ucast w :: 'c::len word)\ definition word_rcat :: \'a::len word list \ 'b::len word\ where \word_rcat = word_of_int \ horner_sum uint (2 ^ LENGTH('a)) \ rev\ subsection \More on conversions\ lemma int_word_sint: \sint (word_of_int x :: 'a::len word) = (x + 2 ^ (LENGTH('a) - 1)) mod 2 ^ LENGTH('a) - 2 ^ (LENGTH('a) - 1)\ by transfer (simp flip: take_bit_eq_mod add: signed_take_bit_eq_take_bit_shift) lemma sint_sbintrunc': "sint (word_of_int bin :: 'a word) = signed_take_bit (LENGTH('a::len) - 1) bin" by simp lemma uint_sint: "uint w = take_bit LENGTH('a) (sint w)" for w :: "'a::len word" by transfer (simp add: take_bit_signed_take_bit) lemma bintr_uint: "LENGTH('a) \ n \ 
take_bit n (uint w) = uint w" for w :: "'a::len word" by transfer (simp add: min_def) lemma wi_bintr: "LENGTH('a::len) \ n \ word_of_int (take_bit n w) = (word_of_int w :: 'a word)" by transfer simp lemma word_numeral_alt: "numeral b = word_of_int (numeral b)" by (induct b, simp_all only: numeral.simps word_of_int_homs) declare word_numeral_alt [symmetric, code_abbrev] lemma word_neg_numeral_alt: "- numeral b = word_of_int (- numeral b)" by (simp only: word_numeral_alt wi_hom_neg) declare word_neg_numeral_alt [symmetric, code_abbrev] lemma uint_bintrunc [simp]: "uint (numeral bin :: 'a word) = take_bit (LENGTH('a::len)) (numeral bin)" by transfer rule lemma uint_bintrunc_neg [simp]: "uint (- numeral bin :: 'a word) = take_bit (LENGTH('a::len)) (- numeral bin)" by transfer rule lemma sint_sbintrunc [simp]: "sint (numeral bin :: 'a word) = signed_take_bit (LENGTH('a::len) - 1) (numeral bin)" by transfer simp lemma sint_sbintrunc_neg [simp]: "sint (- numeral bin :: 'a word) = signed_take_bit (LENGTH('a::len) - 1) (- numeral bin)" by transfer simp lemma unat_bintrunc [simp]: "unat (numeral bin :: 'a::len word) = nat (take_bit (LENGTH('a)) (numeral bin))" by transfer simp lemma unat_bintrunc_neg [simp]: "unat (- numeral bin :: 'a::len word) = nat (take_bit (LENGTH('a)) (- numeral bin))" by transfer simp lemma size_0_eq: "size w = 0 \ v = w" for v w :: "'a::len word" by transfer simp lemma uint_ge_0 [iff]: "0 \ uint x" by (fact unsigned_greater_eq) lemma uint_lt2p [iff]: "uint x < 2 ^ LENGTH('a)" for x :: "'a::len word" by (fact unsigned_less) lemma sint_ge: "- (2 ^ (LENGTH('a) - 1)) \ sint x" for x :: "'a::len word" using sint_greater_eq [of x] by simp lemma sint_lt: "sint x < 2 ^ (LENGTH('a) - 1)" for x :: "'a::len word" using sint_less [of x] by simp lemma uint_m2p_neg: "uint x - 2 ^ LENGTH('a) < 0" for x :: "'a::len word" by (simp only: diff_less_0_iff_less uint_lt2p) lemma uint_m2p_not_non_neg: "\ 0 \ uint x - 2 ^ LENGTH('a)" for x :: "'a::len word" by (simp only: 
not_le uint_m2p_neg) lemma lt2p_lem: "LENGTH('a) \ n \ uint w < 2 ^ n" for w :: "'a::len word" using uint_bounded [of w] by (rule less_le_trans) simp lemma uint_le_0_iff [simp]: "uint x \ 0 \ uint x = 0" by (fact uint_ge_0 [THEN leD, THEN antisym_conv1]) lemma uint_nat: "uint w = int (unat w)" by transfer simp lemma uint_numeral: "uint (numeral b :: 'a::len word) = numeral b mod 2 ^ LENGTH('a)" by (simp flip: take_bit_eq_mod add: of_nat_take_bit) lemma uint_neg_numeral: "uint (- numeral b :: 'a::len word) = - numeral b mod 2 ^ LENGTH('a)" by (simp flip: take_bit_eq_mod add: of_nat_take_bit) lemma unat_numeral: "unat (numeral b :: 'a::len word) = numeral b mod 2 ^ LENGTH('a)" by transfer (simp add: take_bit_eq_mod nat_mod_distrib nat_power_eq) lemma sint_numeral: "sint (numeral b :: 'a::len word) = (numeral b + 2 ^ (LENGTH('a) - 1)) mod 2 ^ LENGTH('a) - 2 ^ (LENGTH('a) - 1)" by (metis int_word_sint word_numeral_alt) lemma word_of_int_0 [simp, code_post]: "word_of_int 0 = 0" by (fact of_int_0) lemma word_of_int_1 [simp, code_post]: "word_of_int 1 = 1" by (fact of_int_1) lemma word_of_int_neg_1 [simp]: "word_of_int (- 1) = - 1" by (simp add: wi_hom_syms) lemma word_of_int_numeral [simp] : "(word_of_int (numeral bin) :: 'a::len word) = numeral bin" by (fact of_int_numeral) lemma word_of_int_neg_numeral [simp]: "(word_of_int (- numeral bin) :: 'a::len word) = - numeral bin" by (fact of_int_neg_numeral) lemma word_int_case_wi: "word_int_case f (word_of_int i :: 'b word) = f (i mod 2 ^ LENGTH('b::len))" by transfer (simp add: take_bit_eq_mod) lemma word_int_split: "P (word_int_case f x) = (\i. x = (word_of_int i :: 'b::len word) \ 0 \ i \ i < 2 ^ LENGTH('b) \ P (f i))" by transfer (auto simp add: take_bit_eq_mod) lemma word_int_split_asm: "P (word_int_case f x) = (\n. 
x = (word_of_int n :: 'b::len word) \ 0 \ n \ n < 2 ^ LENGTH('b::len) \ \ P (f n))" by transfer (auto simp add: take_bit_eq_mod) lemma uint_range_size: "0 \ uint w \ uint w < 2 ^ size w" by transfer simp lemma sint_range_size: "- (2 ^ (size w - Suc 0)) \ sint w \ sint w < 2 ^ (size w - Suc 0)" by (simp add: word_size sint_greater_eq sint_less) lemma sint_above_size: "2 ^ (size w - 1) \ x \ sint w < x" for w :: "'a::len word" unfolding word_size by (rule less_le_trans [OF sint_lt]) lemma sint_below_size: "x \ - (2 ^ (size w - 1)) \ x \ sint w" for w :: "'a::len word" unfolding word_size by (rule order_trans [OF _ sint_ge]) subsection \Testing bits\ lemma bin_nth_uint_imp: "bit (uint w) n \ n < LENGTH('a)" for w :: "'a::len word" by transfer (simp add: bit_take_bit_iff) lemma bin_nth_sint: "LENGTH('a) \ n \ bit (sint w) n = bit (sint w) (LENGTH('a) - 1)" for w :: "'a::len word" by (transfer fixing: n) (simp add: bit_signed_take_bit_iff le_diff_conv min_def) lemma num_of_bintr': "take_bit (LENGTH('a::len)) (numeral a :: int) = (numeral b) \ numeral a = (numeral b :: 'a word)" proof (transfer fixing: a b) assume \take_bit LENGTH('a) (numeral a :: int) = numeral b\ then have \take_bit LENGTH('a) (take_bit LENGTH('a) (numeral a :: int)) = take_bit LENGTH('a) (numeral b)\ by simp then show \take_bit LENGTH('a) (numeral a :: int) = take_bit LENGTH('a) (numeral b)\ by simp qed lemma num_of_sbintr': "signed_take_bit (LENGTH('a::len) - 1) (numeral a :: int) = (numeral b) \ numeral a = (numeral b :: 'a word)" proof (transfer fixing: a b) assume \signed_take_bit (LENGTH('a) - 1) (numeral a :: int) = numeral b\ then have \take_bit LENGTH('a) (signed_take_bit (LENGTH('a) - 1) (numeral a :: int)) = take_bit LENGTH('a) (numeral b)\ by simp then show \take_bit LENGTH('a) (numeral a :: int) = take_bit LENGTH('a) (numeral b)\ by (simp add: take_bit_signed_take_bit) qed lemma num_abs_bintr: "(numeral x :: 'a word) = word_of_int (take_bit (LENGTH('a::len)) (numeral x))" by transfer simp 
lemma num_abs_sbintr: "(numeral x :: 'a word) = word_of_int (signed_take_bit (LENGTH('a::len) - 1) (numeral x))" by transfer (simp add: take_bit_signed_take_bit) text \ \cast\ -- note that there is no argument for the new length, since it is determined by the type of the result; thus in \cast w = w\, the type means cast to the length of \w\! \ lemma bit_ucast_iff: - \bit (ucast a :: 'a::len word) n \ n < LENGTH('a::len) \ Parity.bit a n\ + \bit (ucast a :: 'a::len word) n \ n < LENGTH('a::len) \ bit a n\ by transfer (simp add: bit_take_bit_iff) lemma ucast_id [simp]: "ucast w = w" by transfer simp lemma scast_id [simp]: "scast w = w" by transfer (simp add: take_bit_signed_take_bit) lemma ucast_mask_eq: \ucast (mask n :: 'b word) = mask (min LENGTH('b::len) n)\ - by (simp add: bit_eq_iff) (auto simp add: bit_mask_iff bit_ucast_iff exp_eq_zero_iff) + by (simp add: bit_eq_iff) (auto simp add: bit_mask_iff bit_ucast_iff) \ \literal u(s)cast\ lemma ucast_bintr [simp]: "ucast (numeral w :: 'a::len word) = word_of_int (take_bit (LENGTH('a)) (numeral w))" by transfer simp (* TODO: neg_numeral *) lemma scast_sbintr [simp]: "scast (numeral w ::'a::len word) = word_of_int (signed_take_bit (LENGTH('a) - Suc 0) (numeral w))" by transfer simp lemma source_size: "source_size (c::'a::len word \ _) = LENGTH('a)" by transfer simp lemma target_size: "target_size (c::_ \ 'b::len word) = LENGTH('b)" by transfer simp lemma is_down: "is_down c \ LENGTH('b) \ LENGTH('a)" for c :: "'a::len word \ 'b::len word" by transfer simp lemma is_up: "is_up c \ LENGTH('a) \ LENGTH('b)" for c :: "'a::len word \ 'b::len word" by transfer simp lemma is_up_down: \is_up c \ is_down d\ for c :: \'a::len word \ 'b::len word\ and d :: \'b::len word \ 'a::len word\ by transfer simp context fixes dummy_types :: \'a::len \ 'b::len\ begin private abbreviation (input) UCAST :: \'a::len word \ 'b::len word\ where \UCAST == ucast\ private abbreviation (input) SCAST :: \'a::len word \ 'b::len word\ where \SCAST == scast\ lemma down_cast_same: \UCAST = 
scast\ if \is_down UCAST\ by (rule ext, use that in transfer) (simp add: take_bit_signed_take_bit) lemma sint_up_scast: \sint (SCAST w) = sint w\ if \is_up SCAST\ using that by transfer (simp add: min_def Suc_leI le_diff_iff) lemma uint_up_ucast: \uint (UCAST w) = uint w\ if \is_up UCAST\ using that by transfer (simp add: min_def) lemma ucast_up_ucast: \ucast (UCAST w) = ucast w\ if \is_up UCAST\ using that by transfer (simp add: ac_simps) lemma ucast_up_ucast_id: \ucast (UCAST w) = w\ if \is_up UCAST\ using that by (simp add: ucast_up_ucast) lemma scast_up_scast: \scast (SCAST w) = scast w\ if \is_up SCAST\ using that by transfer (simp add: ac_simps) lemma scast_up_scast_id: \scast (SCAST w) = w\ if \is_up SCAST\ using that by (simp add: scast_up_scast) lemma isduu: \is_up UCAST\ if \is_down d\ for d :: \'b word \ 'a word\ using that is_up_down [of UCAST d] by simp lemma isdus: \is_up SCAST\ if \is_down d\ for d :: \'b word \ 'a word\ using that is_up_down [of SCAST d] by simp lemmas ucast_down_ucast_id = isduu [THEN ucast_up_ucast_id] lemmas scast_down_scast_id = isdus [THEN scast_up_scast_id] lemma up_ucast_surj: \surj (ucast :: 'b word \ 'a word)\ if \is_up UCAST\ by (rule surjI) (use that in \rule ucast_up_ucast_id\) lemma up_scast_surj: \surj (scast :: 'b word \ 'a word)\ if \is_up SCAST\ by (rule surjI) (use that in \rule scast_up_scast_id\) lemma down_ucast_inj: \inj_on UCAST A\ if \is_down (ucast :: 'b word \ 'a word)\ by (rule inj_on_inverseI) (use that in \rule ucast_down_ucast_id\) lemma down_scast_inj: \inj_on SCAST A\ if \is_down (scast :: 'b word \ 'a word)\ by (rule inj_on_inverseI) (use that in \rule scast_down_scast_id\) lemma ucast_down_wi: \UCAST (word_of_int x) = word_of_int x\ if \is_down UCAST\ using that by transfer simp lemma ucast_down_no: \UCAST (numeral bin) = numeral bin\ if \is_down UCAST\ using that by transfer simp end lemmas word_log_defs = word_and_def word_or_def word_xor_def word_not_def lemma bit_last_iff: \bit w (LENGTH('a) - 
Suc 0) \ sint w < 0\ (is \?P \ ?Q\) for w :: \'a::len word\ proof - have \?P \ bit (uint w) (LENGTH('a) - Suc 0)\ by (simp add: bit_uint_iff) also have \\ \ ?Q\ by (simp add: sint_uint) finally show ?thesis . qed lemma drop_bit_eq_zero_iff_not_bit_last: \drop_bit (LENGTH('a) - Suc 0) w = 0 \ \ bit w (LENGTH('a) - Suc 0)\ for w :: "'a::len word" proof (cases \LENGTH('a)\) case (Suc n) then show ?thesis apply transfer apply (simp add: take_bit_drop_bit) - by (simp add: bit_iff_odd_drop_bit drop_bit_take_bit min.absorb2 odd_iff_mod_2_eq_one) + by (simp add: bit_iff_odd_drop_bit drop_bit_take_bit odd_iff_mod_2_eq_one) qed auto subsection \Word Arithmetic\ lemmas word_div_no [simp] = word_div_def [of "numeral a" "numeral b"] for a b lemmas word_mod_no [simp] = word_mod_def [of "numeral a" "numeral b"] for a b lemmas word_less_no [simp] = word_less_def [of "numeral a" "numeral b"] for a b lemmas word_le_no [simp] = word_le_def [of "numeral a" "numeral b"] for a b lemmas word_sless_no [simp] = word_sless_eq [of "numeral a" "numeral b"] for a b lemmas word_sle_no [simp] = word_sle_eq [of "numeral a" "numeral b"] for a b lemma size_0_same': "size w = 0 \ w = v" for v w :: "'a::len word" by (unfold word_size) simp lemmas size_0_same = size_0_same' [unfolded word_size] lemmas unat_eq_0 = unat_0_iff lemmas unat_eq_zero = unat_0_iff subsection \Transferring goals from words to ints\ lemma word_ths: shows word_succ_p1: "word_succ a = a + 1" and word_pred_m1: "word_pred a = a - 1" and word_pred_succ: "word_pred (word_succ a) = a" and word_succ_pred: "word_succ (word_pred a) = a" and word_mult_succ: "word_succ a * b = b + a * b" by (transfer, simp add: algebra_simps)+ lemma uint_cong: "x = y \ uint x = uint y" by simp lemma uint_word_ariths: fixes a b :: "'a::len word" shows "uint (a + b) = (uint a + uint b) mod 2 ^ LENGTH('a::len)" and "uint (a - b) = (uint a - uint b) mod 2 ^ LENGTH('a)" and "uint (a * b) = uint a * uint b mod 2 ^ LENGTH('a)" and "uint (- a) = - uint a mod 2 ^ 
LENGTH('a)" and "uint (word_succ a) = (uint a + 1) mod 2 ^ LENGTH('a)" and "uint (word_pred a) = (uint a - 1) mod 2 ^ LENGTH('a)" and "uint (0 :: 'a word) = 0 mod 2 ^ LENGTH('a)" and "uint (1 :: 'a word) = 1 mod 2 ^ LENGTH('a)" by (simp_all only: word_arith_wis uint_word_of_int_eq flip: take_bit_eq_mod) lemma uint_word_arith_bintrs: fixes a b :: "'a::len word" shows "uint (a + b) = take_bit (LENGTH('a)) (uint a + uint b)" and "uint (a - b) = take_bit (LENGTH('a)) (uint a - uint b)" and "uint (a * b) = take_bit (LENGTH('a)) (uint a * uint b)" and "uint (- a) = take_bit (LENGTH('a)) (- uint a)" and "uint (word_succ a) = take_bit (LENGTH('a)) (uint a + 1)" and "uint (word_pred a) = take_bit (LENGTH('a)) (uint a - 1)" and "uint (0 :: 'a word) = take_bit (LENGTH('a)) 0" and "uint (1 :: 'a word) = take_bit (LENGTH('a)) 1" by (simp_all add: uint_word_ariths take_bit_eq_mod) lemma sint_word_ariths: fixes a b :: "'a::len word" shows "sint (a + b) = signed_take_bit (LENGTH('a) - 1) (sint a + sint b)" and "sint (a - b) = signed_take_bit (LENGTH('a) - 1) (sint a - sint b)" and "sint (a * b) = signed_take_bit (LENGTH('a) - 1) (sint a * sint b)" and "sint (- a) = signed_take_bit (LENGTH('a) - 1) (- sint a)" and "sint (word_succ a) = signed_take_bit (LENGTH('a) - 1) (sint a + 1)" and "sint (word_pred a) = signed_take_bit (LENGTH('a) - 1) (sint a - 1)" and "sint (0 :: 'a word) = signed_take_bit (LENGTH('a) - 1) 0" and "sint (1 :: 'a word) = signed_take_bit (LENGTH('a) - 1) 1" subgoal by transfer (simp add: signed_take_bit_add) subgoal by transfer (simp add: signed_take_bit_diff) subgoal by transfer (simp add: signed_take_bit_mult) subgoal by transfer (simp add: signed_take_bit_minus) apply (metis of_int_sint scast_id sint_sbintrunc' wi_hom_succ) apply (metis of_int_sint scast_id sint_sbintrunc' wi_hom_pred) apply (simp_all add: sint_uint) done lemma word_pred_0_n1: "word_pred 0 = word_of_int (- 1)" unfolding word_pred_m1 by simp lemma succ_pred_no [simp]: "word_succ (numeral w) = 
numeral w + 1" "word_pred (numeral w) = numeral w - 1" "word_succ (- numeral w) = - numeral w + 1" "word_pred (- numeral w) = - numeral w - 1" by (simp_all add: word_succ_p1 word_pred_m1) lemma word_sp_01 [simp]: "word_succ (- 1) = 0 \ word_succ 0 = 1 \ word_pred 0 = - 1 \ word_pred 1 = 0" by (simp_all add: word_succ_p1 word_pred_m1) \ \alternative approach to lifting arithmetic equalities\ lemma word_of_int_Ex: "\y. x = word_of_int y" by (rule_tac x="uint x" in exI) simp subsection \Order on fixed-length words\ lift_definition udvd :: \'a::len word \ 'a::len word \ bool\ (infixl \udvd\ 50) is \\k l. take_bit LENGTH('a) k dvd take_bit LENGTH('a) l\ by simp lemma udvd_iff_dvd: \x udvd y \ unat x dvd unat y\ by transfer (simp add: nat_dvd_iff) lemma udvd_iff_dvd_int: \v udvd w \ uint v dvd uint w\ by transfer rule lemma udvdI [intro]: \v udvd w\ if \unat w = unat v * unat u\ proof - from that have \unat v dvd unat w\ .. then show ?thesis by (simp add: udvd_iff_dvd) qed lemma udvdE [elim]: fixes v w :: \'a::len word\ assumes \v udvd w\ obtains u :: \'a word\ where \unat w = unat v * unat u\ proof (cases \v = 0\) case True moreover from True \v udvd w\ have \w = 0\ by transfer simp ultimately show thesis using that by simp next case False then have \unat v > 0\ by (simp add: unat_gt_0) from \v udvd w\ have \unat v dvd unat w\ by (simp add: udvd_iff_dvd) then obtain n where \unat w = unat v * n\ .. moreover have \n < 2 ^ LENGTH('a)\ proof (rule ccontr) assume \\ n < 2 ^ LENGTH('a)\ then have \n \ 2 ^ LENGTH('a)\ by (simp add: not_le) then have \unat v * n \ 2 ^ LENGTH('a)\ using \unat v > 0\ mult_le_mono [of 1 \unat v\ \2 ^ LENGTH('a)\ n] by simp with \unat w = unat v * n\ have \unat w \ 2 ^ LENGTH('a)\ by simp with unsigned_less [of w, where ?'a = nat] show False by linarith qed ultimately have \unat w = unat v * unat (word_of_nat n :: 'a word)\ by (auto simp add: take_bit_nat_eq_self_iff intro: sym) with that show thesis . 
qed lemma udvd_imp_mod_eq_0: \w mod v = 0\ if \v udvd w\ using that by transfer simp lemma mod_eq_0_imp_udvd [intro?]: \v udvd w\ if \w mod v = 0\ proof - from that have \unat (w mod v) = unat 0\ by simp then have \unat w mod unat v = 0\ by (simp add: unat_mod_distrib) then have \unat v dvd unat w\ .. then show ?thesis by (simp add: udvd_iff_dvd) qed lemma udvd_imp_dvd: \v dvd w\ if \v udvd w\ for v w :: \'a::len word\ proof - from that obtain u :: \'a word\ where \unat w = unat v * unat u\ .. then have \(word_of_nat (unat w) :: 'a word) = word_of_nat (unat v * unat u)\ by simp then have \w = v * u\ by simp then show \v dvd w\ .. qed lemma exp_dvd_iff_exp_udvd: \2 ^ n dvd w \ 2 ^ n udvd w\ for v w :: \'a::len word\ proof assume \2 ^ n udvd w\ then show \2 ^ n dvd w\ by (rule udvd_imp_dvd) next assume \2 ^ n dvd w\ then obtain u :: \'a word\ where \w = 2 ^ n * u\ .. then have \w = push_bit n u\ by (simp add: push_bit_eq_mult) then show \2 ^ n udvd w\ by transfer (simp add: take_bit_push_bit dvd_eq_mod_eq_0 flip: take_bit_eq_mod) qed lemma udvd_nat_alt: \a udvd b \ (\n. unat b = n * unat a)\ by (auto simp add: udvd_iff_dvd) lemma udvd_unfold_int: \a udvd b \ (\n\0. 
uint b = n * uint a)\ unfolding udvd_iff_dvd_int by (metis dvd_div_mult_self dvd_triv_right uint_div_distrib uint_ge_0) lemma unat_minus_one: \unat (w - 1) = unat w - 1\ if \w \ 0\ proof - have "0 \ uint w" by (fact uint_nonnegative) moreover from that have "0 \ uint w" by (simp add: uint_0_iff) ultimately have "1 \ uint w" by arith from uint_lt2p [of w] have "uint w - 1 < 2 ^ LENGTH('a)" by arith with \1 \ uint w\ have "(uint w - 1) mod 2 ^ LENGTH('a) = uint w - 1" by (auto intro: mod_pos_pos_trivial) with \1 \ uint w\ have "nat ((uint w - 1) mod 2 ^ LENGTH('a)) = nat (uint w) - 1" by (auto simp del: nat_uint_eq) then show ?thesis by (simp only: unat_eq_nat_uint word_arith_wis mod_diff_right_eq) (metis of_int_1 uint_word_of_int unsigned_1) qed lemma measure_unat: "p \ 0 \ unat (p - 1) < unat p" by (simp add: unat_minus_one) (simp add: unat_0_iff [symmetric]) lemmas uint_add_ge0 [simp] = add_nonneg_nonneg [OF uint_ge_0 uint_ge_0] lemmas uint_mult_ge0 [simp] = mult_nonneg_nonneg [OF uint_ge_0 uint_ge_0] lemma uint_sub_lt2p [simp]: "uint x - uint y < 2 ^ LENGTH('a)" for x :: "'a::len word" and y :: "'b::len word" using uint_ge_0 [of y] uint_lt2p [of x] by arith subsection \Conditions for the addition (etc) of two words to overflow\ lemma uint_add_lem: "(uint x + uint y < 2 ^ LENGTH('a)) = (uint (x + y) = uint x + uint y)" for x y :: "'a::len word" by (metis add.right_neutral add_mono_thms_linordered_semiring(1) mod_pos_pos_trivial of_nat_0_le_iff uint_lt2p uint_nat uint_word_ariths(1)) lemma uint_mult_lem: "(uint x * uint y < 2 ^ LENGTH('a)) = (uint (x * y) = uint x * uint y)" for x y :: "'a::len word" by (metis mod_pos_pos_trivial uint_lt2p uint_mult_ge0 uint_word_ariths(3)) lemma uint_sub_lem: "uint x \ uint y \ uint (x - y) = uint x - uint y" by (metis diff_ge_0_iff_ge of_nat_0_le_iff uint_nat uint_sub_lt2p uint_word_of_int unique_euclidean_semiring_numeral_class.mod_less word_sub_wi) lemma uint_add_le: "uint (x + y) \ uint x + uint y" unfolding uint_word_ariths 
by (simp add: zmod_le_nonneg_dividend) lemma uint_sub_ge: "uint (x - y) \ uint x - uint y" unfolding uint_word_ariths by (simp flip: take_bit_eq_mod add: take_bit_int_greater_eq_self_iff) lemma int_mod_ge: \a \ a mod n\ if \a < n\ \0 < n\ for a n :: int proof (cases \a < 0\) case True with \0 < n\ show ?thesis by (metis less_trans not_less pos_mod_conj) next case False with \a < n\ show ?thesis by simp qed lemma mod_add_if_z: "\x < z; y < z; 0 \ y; 0 \ x; 0 \ z\ \ (x + y) mod z = (if x + y < z then x + y else x + y - z)" for x y z :: int apply (simp add: not_less) by (metis (no_types) add_strict_mono diff_ge_0_iff_ge diff_less_eq minus_mod_self2 mod_pos_pos_trivial) lemma uint_plus_if': "uint (a + b) = (if uint a + uint b < 2 ^ LENGTH('a) then uint a + uint b else uint a + uint b - 2 ^ LENGTH('a))" for a b :: "'a::len word" using mod_add_if_z [of "uint a" _ "uint b"] by (simp add: uint_word_ariths) lemma mod_sub_if_z: "\x < z; y < z; 0 \ y; 0 \ x; 0 \ z\ \ (x - y) mod z = (if y \ x then x - y else x - y + z)" for x y z :: int using mod_pos_pos_trivial [of "x - y + z" z] by (auto simp add: not_le) lemma uint_sub_if': "uint (a - b) = (if uint b \ uint a then uint a - uint b else uint a - uint b + 2 ^ LENGTH('a))" for a b :: "'a::len word" using mod_sub_if_z [of "uint a" _ "uint b"] by (simp add: uint_word_ariths) subsection \Definition of \uint_arith\\ lemma word_of_int_inverse: "word_of_int r = a \ 0 \ r \ r < 2 ^ LENGTH('a) \ uint a = r" for a :: "'a::len word" by transfer (simp add: take_bit_int_eq_self) lemma uint_split: "P (uint x) = (\i. word_of_int i = x \ 0 \ i \ i < 2^LENGTH('a) \ P i)" for x :: "'a::len word" by transfer (auto simp add: take_bit_eq_mod) lemma uint_split_asm: "P (uint x) = (\i. 
word_of_int i = x \ 0 \ i \ i < 2^LENGTH('a) \ \ P i)" for x :: "'a::len word" by auto (metis take_bit_int_eq_self_iff)

lemmas uint_splits = uint_split uint_split_asm

lemmas uint_arith_simps = word_le_def word_less_alt word_uint_eq_iff uint_sub_if' uint_plus_if'

\ \use this to stop, e.g. \2 ^ LENGTH(32)\ being simplified\
lemma power_False_cong: "False \ a ^ b = c ^ d" by auto

\ \\uint_arith_tac\: reduce to arithmetic on int, try to solve by arith\
ML \
val uint_arith_simpset =
  @{context}
  |> fold Simplifier.add_simp @{thms uint_arith_simps}
  |> fold Splitter.add_split @{thms if_split_asm}
  |> fold Simplifier.add_cong @{thms power_False_cong}
  |> simpset_of;

fun uint_arith_tacs ctxt =
  let
    fun arith_tac' n t = Arith_Data.arith_tac ctxt n t handle Cooper.COOPER _ => Seq.empty;
  in
    [ clarify_tac ctxt 1,
      full_simp_tac (put_simpset uint_arith_simpset ctxt) 1,
      ALLGOALS (full_simp_tac (put_simpset HOL_ss ctxt
        |> fold Splitter.add_split @{thms uint_splits}
        |> fold Simplifier.add_cong @{thms power_False_cong})),
      rewrite_goals_tac ctxt @{thms word_size},
      ALLGOALS (fn n =>
        REPEAT (resolve_tac ctxt [allI, impI] n) THEN
        REPEAT (eresolve_tac ctxt [conjE] n) THEN
        REPEAT (dresolve_tac ctxt @{thms word_of_int_inverse} n
          THEN assume_tac ctxt n THEN assume_tac ctxt n)),
      TRYALL arith_tac' ]
  end

fun uint_arith_tac ctxt = SELECT_GOAL (EVERY (uint_arith_tacs ctxt))
\

method_setup uint_arith = \Scan.succeed (SIMPLE_METHOD' o uint_arith_tac)\
  "solving word arithmetic via integers and arith"

subsection \More on overflows and monotonicity\

lemma no_plus_overflow_uint_size: "x \ x + y \ uint x + uint y < 2 ^ size x" for x y :: "'a::len word" unfolding word_size by uint_arith

lemmas no_olen_add = no_plus_overflow_uint_size [unfolded word_size]

lemma no_ulen_sub: "x \ x - y \ uint y \ uint x" for x y :: "'a::len word" by uint_arith

lemma no_olen_add': "x \ y + x \ uint y + uint x < 2 ^ LENGTH('a)" for x y :: "'a::len word" by (simp add: ac_simps no_olen_add)

lemmas olen_add_eqv = trans [OF 
no_olen_add no_olen_add' [symmetric]] lemmas uint_plus_simple_iff = trans [OF no_olen_add uint_add_lem] lemmas uint_plus_simple = uint_plus_simple_iff [THEN iffD1] lemmas uint_minus_simple_iff = trans [OF no_ulen_sub uint_sub_lem] lemmas uint_minus_simple_alt = uint_sub_lem [folded word_le_def] lemmas word_sub_le_iff = no_ulen_sub [folded word_le_def] lemmas word_sub_le = word_sub_le_iff [THEN iffD2] lemma word_less_sub1: "x \ 0 \ 1 < x \ 0 < x - 1" for x :: "'a::len word" by uint_arith lemma word_le_sub1: "x \ 0 \ 1 \ x \ 0 \ x - 1" for x :: "'a::len word" by uint_arith lemma sub_wrap_lt: "x < x - z \ x < z" for x z :: "'a::len word" by uint_arith lemma sub_wrap: "x \ x - z \ z = 0 \ x < z" for x z :: "'a::len word" by uint_arith lemma plus_minus_not_NULL_ab: "x \ ab - c \ c \ ab \ c \ 0 \ x + c \ 0" for x ab c :: "'a::len word" by uint_arith lemma plus_minus_no_overflow_ab: "x \ ab - c \ c \ ab \ x \ x + c" for x ab c :: "'a::len word" by uint_arith lemma le_minus': "a + c \ b \ a \ a + c \ c \ b - a" for a b c :: "'a::len word" by uint_arith lemma le_plus': "a \ b \ c \ b - a \ a + c \ b" for a b c :: "'a::len word" by uint_arith lemmas le_plus = le_plus' [rotated] lemmas le_minus = leD [THEN thin_rl, THEN le_minus'] (* FIXME *) lemma word_plus_mono_right: "y \ z \ x \ x + z \ x + y \ x + z" for x y z :: "'a::len word" by uint_arith lemma word_less_minus_cancel: "y - x < z - x \ x \ z \ y < z" for x y z :: "'a::len word" by uint_arith lemma word_less_minus_mono_left: "y < z \ x \ y \ y - x < z - x" for x y z :: "'a::len word" by uint_arith lemma word_less_minus_mono: "a < c \ d < b \ a - b < a \ c - d < c \ a - b < c - d" for a b c d :: "'a::len word" by uint_arith lemma word_le_minus_cancel: "y - x \ z - x \ x \ z \ y \ z" for x y z :: "'a::len word" by uint_arith lemma word_le_minus_mono_left: "y \ z \ x \ y \ y - x \ z - x" for x y z :: "'a::len word" by uint_arith lemma word_le_minus_mono: "a \ c \ d \ b \ a - b \ a \ c - d \ c \ a - b \ c - d" for a b c d 
:: "'a::len word" by uint_arith lemma plus_le_left_cancel_wrap: "x + y' < x \ x + y < x \ x + y' < x + y \ y' < y" for x y y' :: "'a::len word" by uint_arith lemma plus_le_left_cancel_nowrap: "x \ x + y' \ x \ x + y \ x + y' < x + y \ y' < y" for x y y' :: "'a::len word" by uint_arith lemma word_plus_mono_right2: "a \ a + b \ c \ b \ a \ a + c" for a b c :: "'a::len word" by uint_arith lemma word_less_add_right: "x < y - z \ z \ y \ x + z < y" for x y z :: "'a::len word" by uint_arith lemma word_less_sub_right: "x < y + z \ y \ x \ x - y < z" for x y z :: "'a::len word" by uint_arith lemma word_le_plus_either: "x \ y \ x \ z \ y \ y + z \ x \ y + z" for x y z :: "'a::len word" by uint_arith lemma word_less_nowrapI: "x < z - k \ k \ z \ 0 < k \ x < x + k" for x z k :: "'a::len word" by uint_arith lemma inc_le: "i < m \ i + 1 \ m" for i m :: "'a::len word" by uint_arith lemma inc_i: "1 \ i \ i < m \ 1 \ i + 1 \ i + 1 \ m" for i m :: "'a::len word" by uint_arith lemma udvd_incr_lem: "up < uq \ up = ua + n * uint K \ uq = ua + n' * uint K \ up + uint K \ uq" by auto (metis int_distrib(1) linorder_not_less mult.left_neutral mult_right_mono uint_nonnegative zless_imp_add1_zle) lemma udvd_incr': "p < q \ uint p = ua + n * uint K \ uint q = ua + n' * uint K \ p + K \ q" unfolding word_less_alt word_le_def by (metis (full_types) order_trans udvd_incr_lem uint_add_le) lemma udvd_decr': assumes "p < q" "uint p = ua + n * uint K" "uint q = ua + n' * uint K" shows "uint q = ua + n' * uint K \ p \ q - K" proof - have "\w wa. 
uint (w::'a word) \ uint wa + uint (w - wa)" by (metis (no_types) add_diff_cancel_left' diff_add_cancel uint_add_le) moreover have "uint K + uint p \ uint q" using assms by (metis (no_types) add_diff_cancel_left' diff_add_cancel udvd_incr_lem word_less_def) ultimately show ?thesis by (meson add_le_cancel_left order_trans word_less_eq_iff_unsigned) qed

lemmas udvd_incr_lem0 = udvd_incr_lem [where ua=0, unfolded add_0_left]
lemmas udvd_incr0 = udvd_incr' [where ua=0, unfolded add_0_left]
lemmas udvd_decr0 = udvd_decr' [where ua=0, unfolded add_0_left]

lemma udvd_minus_le': "xy < k \ z udvd xy \ z udvd k \ xy \ k - z" unfolding udvd_unfold_int by (meson udvd_decr0)

lemma udvd_incr2_K: "p < a + s \ a \ a + s \ K udvd s \ K udvd p - a \ a \ p \ 0 < K \ p \ p + K \ p + K \ a + s" unfolding udvd_unfold_int apply (simp add: uint_arith_simps split: if_split_asm) apply (metis (no_types, opaque_lifting) le_add_diff_inverse le_less_trans udvd_incr_lem) using uint_lt2p [of s] by simp

subsection \Arithmetic type class instantiations\

lemmas word_le_0_iff [simp] = word_zero_le [THEN leD, THEN antisym_conv1]

lemma word_of_int_nat: "0 \ x \ word_of_int x = of_nat (nat x)" by simp

text \ note that \iszero_def\ is only for class \comm_semiring_1_cancel\, which requires word length \\ 1\, i.e. \'a::len word\ \

lemma iszero_word_no [simp]: "iszero (numeral bin :: 'a::len word) = iszero (take_bit LENGTH('a) (numeral bin :: int))" by (metis iszero_def uint_0_iff uint_bintrunc)

text \Use \iszero\ to simplify equalities between word numerals.\

lemmas word_eq_numeral_iff_iszero [simp] = eq_numeral_iff_iszero [where 'a="'a::len word"]

subsection \Word and nat\

lemma word_nchotomy: "\w :: 'a::len word. \n. w = of_nat n \ n < 2 ^ LENGTH('a)" by (metis of_nat_unat ucast_id unsigned_less)

lemma of_nat_eq: "of_nat n = w \ (\q. 
n = unat w + q * 2 ^ LENGTH('a))" for w :: "'a::len word" using mod_div_mult_eq [of n "2 ^ LENGTH('a)", symmetric] by (auto simp flip: take_bit_eq_mod) lemma of_nat_eq_size: "of_nat n = w \ (\q. n = unat w + q * 2 ^ size w)" unfolding word_size by (rule of_nat_eq) lemma of_nat_0: "of_nat m = (0::'a::len word) \ (\q. m = q * 2 ^ LENGTH('a))" by (simp add: of_nat_eq) lemma of_nat_2p [simp]: "of_nat (2 ^ LENGTH('a)) = (0::'a::len word)" by (fact mult_1 [symmetric, THEN iffD2 [OF of_nat_0 exI]]) lemma of_nat_gt_0: "of_nat k \ 0 \ 0 < k" by (cases k) auto lemma of_nat_neq_0: "0 < k \ k < 2 ^ LENGTH('a::len) \ of_nat k \ (0 :: 'a word)" by (auto simp add : of_nat_0) lemma Abs_fnat_hom_add: "of_nat a + of_nat b = of_nat (a + b)" by simp lemma Abs_fnat_hom_mult: "of_nat a * of_nat b = (of_nat (a * b) :: 'a::len word)" by (simp add: wi_hom_mult) lemma Abs_fnat_hom_Suc: "word_succ (of_nat a) = of_nat (Suc a)" by transfer (simp add: ac_simps) lemma Abs_fnat_hom_0: "(0::'a::len word) = of_nat 0" by simp lemma Abs_fnat_hom_1: "(1::'a::len word) = of_nat (Suc 0)" by simp lemmas Abs_fnat_homs = Abs_fnat_hom_add Abs_fnat_hom_mult Abs_fnat_hom_Suc Abs_fnat_hom_0 Abs_fnat_hom_1 lemma word_arith_nat_add: "a + b = of_nat (unat a + unat b)" by simp lemma word_arith_nat_mult: "a * b = of_nat (unat a * unat b)" by simp lemma word_arith_nat_Suc: "word_succ a = of_nat (Suc (unat a))" by (subst Abs_fnat_hom_Suc [symmetric]) simp lemma word_arith_nat_div: "a div b = of_nat (unat a div unat b)" by (metis of_int_of_nat_eq of_nat_unat of_nat_div word_div_def) lemma word_arith_nat_mod: "a mod b = of_nat (unat a mod unat b)" by (metis of_int_of_nat_eq of_nat_mod of_nat_unat word_mod_def) lemmas word_arith_nat_defs = word_arith_nat_add word_arith_nat_mult word_arith_nat_Suc Abs_fnat_hom_0 Abs_fnat_hom_1 word_arith_nat_div word_arith_nat_mod lemma unat_cong: "x = y \ unat x = unat y" by (fact arg_cong) lemma unat_of_nat: \unat (word_of_nat x :: 'a::len word) = x mod 2 ^ LENGTH('a)\ by transfer 
(simp flip: take_bit_eq_mod add: nat_take_bit_eq) lemmas unat_word_ariths = word_arith_nat_defs [THEN trans [OF unat_cong unat_of_nat]] lemmas word_sub_less_iff = word_sub_le_iff [unfolded linorder_not_less [symmetric] Not_eq_iff] lemma unat_add_lem: "unat x + unat y < 2 ^ LENGTH('a) \ unat (x + y) = unat x + unat y" for x y :: "'a::len word" by (metis mod_less unat_word_ariths(1) unsigned_less) lemma unat_mult_lem: "unat x * unat y < 2 ^ LENGTH('a) \ unat (x * y) = unat x * unat y" for x y :: "'a::len word" by (metis mod_less unat_word_ariths(2) unsigned_less) lemma unat_plus_if': \unat (a + b) = (if unat a + unat b < 2 ^ LENGTH('a) then unat a + unat b else unat a + unat b - 2 ^ LENGTH('a))\ for a b :: \'a::len word\ apply (auto simp: unat_word_ariths not_less le_iff_add) by (metis add.commute add_less_cancel_right add_strict_mono mod_less unsigned_less) lemma le_no_overflow: "x \ b \ a \ a + b \ x \ a + b" for a b x :: "'a::len word" using word_le_plus_either by blast lemmas un_ui_le = trans [OF word_le_nat_alt [symmetric] word_le_def] lemma unat_sub_if_size: "unat (x - y) = (if unat y \ unat x then unat x - unat y else unat x + 2 ^ size x - unat y)" proof - { assume xy: "\ uint y \ uint x" have "nat (uint x - uint y + 2 ^ LENGTH('a)) = nat (uint x + 2 ^ LENGTH('a) - uint y)" by simp also have "... = nat (uint x + 2 ^ LENGTH('a)) - nat (uint y)" by (simp add: nat_diff_distrib') also have "... = nat (uint x) + 2 ^ LENGTH('a) - nat (uint y)" by (metis nat_add_distrib nat_eq_numeral_power_cancel_iff order_less_imp_le unsigned_0 unsigned_greater_eq unsigned_less) finally have "nat (uint x - uint y + 2 ^ LENGTH('a)) = nat (uint x) + 2 ^ LENGTH('a) - nat (uint y)" . 
} then show ?thesis unfolding word_size by (metis nat_diff_distrib' uint_sub_if' un_ui_le unat_eq_nat_uint unsigned_greater_eq) qed lemmas unat_sub_if' = unat_sub_if_size [unfolded word_size] lemma uint_div: \uint (x div y) = uint x div uint y\ by (fact uint_div_distrib) lemma unat_div: \unat (x div y) = unat x div unat y\ by (fact unat_div_distrib) lemma uint_mod: \uint (x mod y) = uint x mod uint y\ by (fact uint_mod_distrib) lemma unat_mod: \unat (x mod y) = unat x mod unat y\ by (fact unat_mod_distrib) text \Definition of \unat_arith\ tactic\ lemma unat_split: "P (unat x) \ (\n. of_nat n = x \ n < 2^LENGTH('a) \ P n)" for x :: "'a::len word" by auto (metis take_bit_nat_eq_self_iff) lemma unat_split_asm: "P (unat x) \ (\n. of_nat n = x \ n < 2^LENGTH('a) \ \ P n)" for x :: "'a::len word" by auto (metis take_bit_nat_eq_self_iff) lemma of_nat_inverse: \word_of_nat r = a \ r < 2 ^ LENGTH('a) \ unat a = r\ for a :: \'a::len word\ by (metis mod_if unat_of_nat) lemma word_unat_eq_iff: \v = w \ unat v = unat w\ for v w :: \'a::len word\ by (fact word_eq_iff_unsigned) lemmas unat_splits = unat_split unat_split_asm lemmas unat_arith_simps = word_le_nat_alt word_less_nat_alt word_unat_eq_iff unat_sub_if' unat_plus_if' unat_div unat_mod \ \\unat_arith_tac\: tactic to reduce word arithmetic to \nat\, try to solve via \arith\\ ML \ val unat_arith_simpset = @{context} (* TODO: completely explicitly determined simpset *) |> fold Simplifier.del_simp @{thms unsigned_of_nat unsigned_of_int} |> fold Simplifier.add_simp @{thms unat_arith_simps} |> fold Splitter.add_split @{thms if_split_asm} |> fold Simplifier.add_cong @{thms power_False_cong} |> simpset_of fun unat_arith_tacs ctxt = let fun arith_tac' n t = Arith_Data.arith_tac ctxt n t handle Cooper.COOPER _ => Seq.empty; in [ clarify_tac ctxt 1, full_simp_tac (put_simpset unat_arith_simpset ctxt) 1, ALLGOALS (full_simp_tac (put_simpset HOL_ss ctxt |> fold Splitter.add_split @{thms unat_splits} |> fold Simplifier.add_cong @{thms 
power_False_cong})), rewrite_goals_tac ctxt @{thms word_size}, ALLGOALS (fn n => REPEAT (resolve_tac ctxt [allI, impI] n) THEN REPEAT (eresolve_tac ctxt [conjE] n) THEN REPEAT (dresolve_tac ctxt @{thms of_nat_inverse} n THEN assume_tac ctxt n)), TRYALL arith_tac' ] end fun unat_arith_tac ctxt = SELECT_GOAL (EVERY (unat_arith_tacs ctxt)) \ method_setup unat_arith = \Scan.succeed (SIMPLE_METHOD' o unat_arith_tac)\ "solving word arithmetic via natural numbers and arith" lemma no_plus_overflow_unat_size: "x \ x + y \ unat x + unat y < 2 ^ size x" for x y :: "'a::len word" unfolding word_size by unat_arith lemmas no_olen_add_nat = no_plus_overflow_unat_size [unfolded word_size] lemmas unat_plus_simple = trans [OF no_olen_add_nat unat_add_lem] lemma word_div_mult: "\0 < y; unat x * unat y < 2 ^ LENGTH('a)\ \ x * y div y = x" for x y :: "'a::len word" by (simp add: unat_eq_zero unat_mult_lem word_arith_nat_div) lemma div_lt': "i \ k div x \ unat i * unat x < 2 ^ LENGTH('a)" for i k x :: "'a::len word" by unat_arith (meson le_less_trans less_mult_imp_div_less not_le unsigned_less) lemmas div_lt'' = order_less_imp_le [THEN div_lt'] lemma div_lt_mult: "\i < k div x; 0 < x\ \ i * x < k" for i k x :: "'a::len word" by (metis div_le_mono div_lt'' not_le unat_div word_div_mult word_less_iff_unsigned) lemma div_le_mult: "\i \ k div x; 0 < x\ \ i * x \ k" for i k x :: "'a::len word" by (metis div_lt' less_mult_imp_div_less not_less unat_arith_simps(2) unat_div unat_mult_lem) lemma div_lt_uint': "i \ k div x \ uint i * uint x < 2 ^ LENGTH('a)" for i k x :: "'a::len word" unfolding uint_nat by (metis div_lt' int_ops(7) of_nat_unat uint_mult_lem unat_mult_lem) lemmas div_lt_uint'' = order_less_imp_le [THEN div_lt_uint'] lemma word_le_exists': "x \ y \ \z. 
y = x + z \ uint x + uint z < 2 ^ LENGTH('a)" for x y z :: "'a::len word" by (metis add.commute diff_add_cancel no_olen_add) lemmas plus_minus_not_NULL = order_less_imp_le [THEN plus_minus_not_NULL_ab] lemmas plus_minus_no_overflow = order_less_imp_le [THEN plus_minus_no_overflow_ab] lemmas mcs = word_less_minus_cancel word_less_minus_mono_left word_le_minus_cancel word_le_minus_mono_left lemmas word_l_diffs = mcs [where y = "w + x", unfolded add_diff_cancel] for w x lemmas word_diff_ls = mcs [where z = "w + x", unfolded add_diff_cancel] for w x lemmas word_plus_mcs = word_diff_ls [where y = "v + x", unfolded add_diff_cancel] for v x lemma le_unat_uoi: \y \ unat z \ unat (word_of_nat y :: 'a word) = y\ for z :: \'a::len word\ by transfer (simp add: nat_take_bit_eq take_bit_nat_eq_self_iff le_less_trans) lemmas thd = times_div_less_eq_dividend lemmas uno_simps [THEN le_unat_uoi] = mod_le_divisor div_le_dividend lemma word_mod_div_equality: "(n div b) * b + (n mod b) = n" for n b :: "'a::len word" by (fact div_mult_mod_eq) lemma word_div_mult_le: "a div b * b \ a" for a b :: "'a::len word" by (metis div_le_mult mult_not_zero order.not_eq_order_implies_strict order_refl word_zero_le) lemma word_mod_less_divisor: "0 < n \ m mod n < n" for m n :: "'a::len word" by (simp add: unat_arith_simps) lemma word_of_int_power_hom: "word_of_int a ^ n = (word_of_int (a ^ n) :: 'a::len word)" by (induct n) (simp_all add: wi_hom_mult [symmetric]) lemma word_arith_power_alt: "a ^ n = (word_of_int (uint a ^ n) :: 'a::len word)" by (simp add : word_of_int_power_hom [symmetric]) lemma unatSuc: "1 + n \ 0 \ unat (1 + n) = Suc (unat n)" for n :: "'a::len word" by unat_arith subsection \Cardinality, finiteness of set of words\ lemma inj_on_word_of_int: \inj_on (word_of_int :: int \ 'a word) {0..<2 ^ LENGTH('a::len)}\ unfolding inj_on_def by (metis atLeastLessThan_iff word_of_int_inverse) lemma range_uint: \range (uint :: 'a word \ int) = {0..<2 ^ LENGTH('a::len)}\ apply transfer apply (auto 
simp add: image_iff) apply (metis take_bit_int_eq_self_iff) done lemma UNIV_eq: \(UNIV :: 'a word set) = word_of_int ` {0..<2 ^ LENGTH('a::len)}\ by (auto simp add: image_iff) (metis atLeastLessThan_iff linorder_not_le uint_split) lemma card_word: "CARD('a word) = 2 ^ LENGTH('a::len)" by (simp add: UNIV_eq card_image inj_on_word_of_int) lemma card_word_size: "CARD('a word) = 2 ^ size x" for x :: "'a::len word" unfolding word_size by (rule card_word) end instance word :: (len) finite by standard (simp add: UNIV_eq) subsection \Bitwise Operations on Words\ context includes bit_operations_syntax begin lemma word_wi_log_defs: "NOT (word_of_int a) = word_of_int (NOT a)" "word_of_int a AND word_of_int b = word_of_int (a AND b)" "word_of_int a OR word_of_int b = word_of_int (a OR b)" "word_of_int a XOR word_of_int b = word_of_int (a XOR b)" by (transfer, rule refl)+ lemma word_no_log_defs [simp]: "NOT (numeral a) = word_of_int (NOT (numeral a))" "NOT (- numeral a) = word_of_int (NOT (- numeral a))" "numeral a AND numeral b = word_of_int (numeral a AND numeral b)" "numeral a AND - numeral b = word_of_int (numeral a AND - numeral b)" "- numeral a AND numeral b = word_of_int (- numeral a AND numeral b)" "- numeral a AND - numeral b = word_of_int (- numeral a AND - numeral b)" "numeral a OR numeral b = word_of_int (numeral a OR numeral b)" "numeral a OR - numeral b = word_of_int (numeral a OR - numeral b)" "- numeral a OR numeral b = word_of_int (- numeral a OR numeral b)" "- numeral a OR - numeral b = word_of_int (- numeral a OR - numeral b)" "numeral a XOR numeral b = word_of_int (numeral a XOR numeral b)" "numeral a XOR - numeral b = word_of_int (numeral a XOR - numeral b)" "- numeral a XOR numeral b = word_of_int (- numeral a XOR numeral b)" "- numeral a XOR - numeral b = word_of_int (- numeral a XOR - numeral b)" by (transfer, rule refl)+ text \Special cases for when one of the arguments equals 1.\ lemma word_bitwise_1_simps [simp]: "NOT (1::'a::len word) = -2" "1 AND 
numeral b = word_of_int (1 AND numeral b)" "1 AND - numeral b = word_of_int (1 AND - numeral b)" "numeral a AND 1 = word_of_int (numeral a AND 1)" "- numeral a AND 1 = word_of_int (- numeral a AND 1)" "1 OR numeral b = word_of_int (1 OR numeral b)" "1 OR - numeral b = word_of_int (1 OR - numeral b)" "numeral a OR 1 = word_of_int (numeral a OR 1)" "- numeral a OR 1 = word_of_int (- numeral a OR 1)" "1 XOR numeral b = word_of_int (1 XOR numeral b)" "1 XOR - numeral b = word_of_int (1 XOR - numeral b)" "numeral a XOR 1 = word_of_int (numeral a XOR 1)" "- numeral a XOR 1 = word_of_int (- numeral a XOR 1)" by (transfer, simp)+ text \Special cases for when one of the arguments equals -1.\ lemma word_bitwise_m1_simps [simp]: "NOT (-1::'a::len word) = 0" "(-1::'a::len word) AND x = x" "x AND (-1::'a::len word) = x" "(-1::'a::len word) OR x = -1" "x OR (-1::'a::len word) = -1" " (-1::'a::len word) XOR x = NOT x" "x XOR (-1::'a::len word) = NOT x" by (transfer, simp)+ lemma uint_and: \uint (x AND y) = uint x AND uint y\ by transfer simp lemma uint_or: \uint (x OR y) = uint x OR uint y\ by transfer simp lemma uint_xor: \uint (x XOR y) = uint x XOR uint y\ by transfer simp \ \get from commutativity, associativity etc of \int_and\ etc to same for \word_and etc\\ lemmas bwsimps = wi_hom_add word_wi_log_defs lemma word_bw_assocs: "(x AND y) AND z = x AND y AND z" "(x OR y) OR z = x OR y OR z" "(x XOR y) XOR z = x XOR y XOR z" for x :: "'a::len word" by (fact ac_simps)+ lemma word_bw_comms: "x AND y = y AND x" "x OR y = y OR x" "x XOR y = y XOR x" for x :: "'a::len word" by (fact ac_simps)+ lemma word_bw_lcs: "y AND x AND z = x AND y AND z" "y OR x OR z = x OR y OR z" "y XOR x XOR z = x XOR y XOR z" for x :: "'a::len word" by (fact ac_simps)+ lemma word_log_esimps: "x AND 0 = 0" "x AND -1 = x" "x OR 0 = x" "x OR -1 = -1" "x XOR 0 = x" "x XOR -1 = NOT x" "0 AND x = 0" "-1 AND x = x" "0 OR x = x" "-1 OR x = -1" "0 XOR x = x" "-1 XOR x = NOT x" for x :: "'a::len word" by simp_all 
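The simplification rules above for the extreme words 0 and -1 (word_log_esimps, word_bitwise_m1_simps) and the uint_and/uint_or/uint_xor homomorphisms have a direct model on mask-reduced Python integers. The sketch below uses that encoding; the helper names (word, word_not, MASK) are our own, not part of the theory.

```python
# Hypothetical model: an 'a::len word with LENGTH('a) = 8 is an int
# reduced by the all-ones mask; the word -1 is the mask itself.

LEN = 8
MASK = (1 << LEN) - 1  # uint of the word -1

def word(k: int) -> int:
    """Reduce an integer to its unsigned 8-bit word value."""
    return k & MASK

def word_not(x: int) -> int:
    """NOT within LEN bits (complement against the mask)."""
    return (~x) & MASK

x, y = word(0b11001010), word(0b10100110)

# uint_and / uint_or / uint_xor: uint commutes with the bitwise operations,
# so AND/OR/XOR of reduced values need no further reduction
assert word(x & y) == (x & y) and word(x | y) == (x | y) and word(x ^ y) == (x ^ y)

# word_bitwise_m1_simps: -1 AND x = x, -1 OR x = -1, -1 XOR x = NOT x
assert (MASK & x) == x
assert (MASK | x) == MASK
assert (MASK ^ x) == word_not(x)

# word_log_esimps: x AND 0 = 0, x OR 0 = x, x XOR 0 = x
assert (x & 0) == 0 and (x | 0) == x and (x ^ 0) == x
```

Each assertion instantiates one of the preceding lemmas at width 8; all of them hold because reduction by MASK commutes with `&`, `|`, and `^`.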
lemma word_not_dist: "NOT (x OR y) = NOT x AND NOT y" "NOT (x AND y) = NOT x OR NOT y" for x :: "'a::len word" by simp_all lemma word_bw_same: "x AND x = x" "x OR x = x" "x XOR x = 0" for x :: "'a::len word" by simp_all lemma word_ao_absorbs [simp]: "x AND (y OR x) = x" "x OR y AND x = x" "x AND (x OR y) = x" "y AND x OR x = x" "(y OR x) AND x = x" "x OR x AND y = x" "(x OR y) AND x = x" "x AND y OR x = x" for x :: "'a::len word" by (auto intro: bit_eqI simp add: bit_and_iff bit_or_iff) lemma word_not_not [simp]: "NOT (NOT x) = x" for x :: "'a::len word" by (fact bit.double_compl) lemma word_ao_dist: "(x OR y) AND z = x AND z OR y AND z" for x :: "'a::len word" by (fact bit.conj_disj_distrib2) lemma word_oa_dist: "x AND y OR z = (x OR z) AND (y OR z)" for x :: "'a::len word" by (fact bit.disj_conj_distrib2) lemma word_add_not [simp]: "x + NOT x = -1" for x :: "'a::len word" by (simp add: not_eq_complement) lemma word_plus_and_or [simp]: "(x AND y) + (x OR y) = x + y" for x :: "'a::len word" by transfer (simp add: plus_and_or) lemma leoa: "w = x OR y \ y = w AND y" for x :: "'a::len word" by auto lemma leao: "w' = x' AND y' \ x' = x' OR w'" for x' :: "'a::len word" by auto lemma word_ao_equiv: "w = w OR w' \ w' = w AND w'" for w w' :: "'a::len word" by (auto intro: leoa leao) lemma le_word_or2: "x \ x OR y" for x y :: "'a::len word" by (simp add: or_greater_eq uint_or word_le_def) lemmas le_word_or1 = xtrans(3) [OF word_bw_comms (2) le_word_or2] lemmas word_and_le1 = xtrans(3) [OF word_ao_absorbs (4) [symmetric] le_word_or2] lemmas word_and_le2 = xtrans(3) [OF word_ao_absorbs (8) [symmetric] le_word_or2] lemma bit_horner_sum_bit_word_iff [bit_simps]: \bit (horner_sum of_bool (2 :: 'a::len word) bs) n \ n < min LENGTH('a) (length bs) \ bs ! n\ by transfer (simp add: bit_horner_sum_bit_iff) definition word_reverse :: \'a::len word \ 'a word\ where \word_reverse w = horner_sum of_bool 2 (rev (map (bit w) [0.. 
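word_reverse places bit i of w at position LENGTH('a) - 1 - i. Its characteristic bit lemma and the involution property can be mirrored in a small Python model (LEN = 8 is an arbitrary illustrative width):

```python
LEN = 8

def word_reverse(w):
    # horner_sum of_bool 2 (rev (map (bit w) [0..<LEN])):
    # bit i of w ends up at position LEN - 1 - i
    return sum(((w >> i) & 1) << (LEN - 1 - i) for i in range(LEN))

v = 0b10110010

# bit_word_reverse_iff: bit (word_reverse w) n = bit w (LEN - Suc n)
for n in range(LEN):
    assert (word_reverse(v) >> n) & 1 == (v >> (LEN - 1 - n)) & 1

# word_rev_rev: reversal is an involution
assert word_reverse(word_reverse(v)) == v
print("ok")
```

The same involution argument gives the Galois-style lemmas word_rev_gal and word_rev_gal': word_reverse w = u holds exactly when word_reverse u = w.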
lemma bit_word_reverse_iff [bit_simps]: \bit (word_reverse w) n \ n < LENGTH('a) \ bit w (LENGTH('a) - Suc n)\ for w :: \'a::len word\ by (cases \n < LENGTH('a)\) (simp_all add: word_reverse_def bit_horner_sum_bit_word_iff rev_nth) lemma word_rev_rev [simp] : "word_reverse (word_reverse w) = w" by (rule bit_word_eqI) (auto simp add: bit_word_reverse_iff bit_imp_le_length Suc_diff_Suc) lemma word_rev_gal: "word_reverse w = u \ word_reverse u = w" by (metis word_rev_rev) lemma word_rev_gal': "u = word_reverse w \ w = word_reverse u" by simp lemma uint_2p: "(0::'a::len word) < 2 ^ n \ uint (2 ^ n::'a::len word) = 2 ^ n" by (cases \n < LENGTH('a)\; transfer; force) lemma word_of_int_2p: "(word_of_int (2 ^ n) :: 'a::len word) = 2 ^ n" by (induct n) (simp_all add: wi_hom_syms) subsubsection \shift functions in terms of lists of bools\ text \TODO: rules for \<^term>\- (numeral n)\\ lemma drop_bit_word_numeral [simp]: \drop_bit (numeral n) (numeral k) = (word_of_int (drop_bit (numeral n) (take_bit LENGTH('a) (numeral k))) :: 'a::len word)\ by transfer simp lemma signed_drop_bit_word_numeral [simp]: \signed_drop_bit (numeral n) (numeral k) = (word_of_int (drop_bit (numeral n) (signed_take_bit (LENGTH('a) - 1) (numeral k))) :: 'a::len word)\ by transfer simp lemma False_map2_or: "\set xs \ {False}; length ys = length xs\ \ map2 (\) xs ys = ys" by (induction xs arbitrary: ys) (auto simp: length_Suc_conv) lemma align_lem_or: assumes "length xs = n + m" "length ys = n + m" and "drop m xs = replicate n False" "take m ys = replicate m False" shows "map2 (\) xs ys = take m xs @ drop m ys" using assms proof (induction xs arbitrary: ys m) case (Cons a xs) then show ?case by (cases m) (auto simp: length_Suc_conv False_map2_or) qed auto lemma False_map2_and: "\set xs \ {False}; length ys = length xs\ \ map2 (\) xs ys = xs" by (induction xs arbitrary: ys) (auto simp: length_Suc_conv) lemma align_lem_and: assumes "length xs = n + m" "length ys = n + m" and "drop m xs = replicate n 
False" "take m ys = replicate m False" shows "map2 (\) xs ys = replicate (n + m) False" using assms proof (induction xs arbitrary: ys m) case (Cons a xs) then show ?case by (cases m) (auto simp: length_Suc_conv set_replicate_conv_if False_map2_and) qed auto subsubsection \Mask\ lemma minus_1_eq_mask: \- 1 = (mask LENGTH('a) :: 'a::len word)\ - by (rule bit_eqI) (simp add: bit_exp_iff bit_mask_iff exp_eq_zero_iff) + by (rule bit_eqI) (simp add: bit_exp_iff bit_mask_iff) lemma mask_eq_decr_exp: \mask n = 2 ^ n - (1 :: 'a::len word)\ by (fact mask_eq_exp_minus_1) lemma mask_Suc_rec: \mask (Suc n) = 2 * mask n + (1 :: 'a::len word)\ by (simp add: mask_eq_exp_minus_1) context begin qualified lemma bit_mask_iff [bit_simps]: \bit (mask m :: 'a::len word) n \ n < min LENGTH('a) m\ - by (simp add: bit_mask_iff exp_eq_zero_iff not_le) + by (simp add: bit_mask_iff not_le) end lemma mask_bin: "mask n = word_of_int (take_bit n (- 1))" by transfer (simp add: take_bit_minus_one_eq_mask) lemma and_mask_bintr: "w AND mask n = word_of_int (take_bit n (uint w))" by transfer (simp add: ac_simps take_bit_eq_mask) lemma and_mask_wi: "word_of_int i AND mask n = word_of_int (take_bit n i)" by (auto simp add: and_mask_bintr min_def not_le wi_bintr) lemma and_mask_wi': "word_of_int i AND mask n = (word_of_int (take_bit (min LENGTH('a) n) i) :: 'a::len word)" by (auto simp add: and_mask_wi min_def wi_bintr) lemma and_mask_no: "numeral i AND mask n = word_of_int (take_bit n (numeral i))" unfolding word_numeral_alt by (rule and_mask_wi) lemma and_mask_mod_2p: "w AND mask n = word_of_int (uint w mod 2 ^ n)" by (simp only: and_mask_bintr take_bit_eq_mod) lemma uint_mask_eq: \uint (mask n :: 'a::len word) = mask (min LENGTH('a) n)\ by transfer simp lemma and_mask_lt_2p: "uint (w AND mask n) < 2 ^ n" by (metis take_bit_eq_mask take_bit_int_less_exp unsigned_take_bit_eq) lemma mask_eq_iff: "w AND mask n = w \ uint w < 2 ^ n" apply (auto simp flip: take_bit_eq_mask) apply (metis 
take_bit_int_eq_self_iff uint_take_bit_eq) apply (simp add: take_bit_int_eq_self unsigned_take_bit_eq word_uint_eqI) done lemma and_mask_dvd: "2 ^ n dvd uint w \ w AND mask n = 0" by (simp flip: take_bit_eq_mask take_bit_eq_mod unsigned_take_bit_eq add: dvd_eq_mod_eq_0 uint_0_iff) lemma and_mask_dvd_nat: "2 ^ n dvd unat w \ w AND mask n = 0" by (simp flip: take_bit_eq_mask take_bit_eq_mod unsigned_take_bit_eq add: dvd_eq_mod_eq_0 unat_0_iff uint_0_iff) lemma word_2p_lem: "n < size w \ w < 2 ^ n = (uint w < 2 ^ n)" for w :: "'a::len word" by transfer simp lemma less_mask_eq: fixes x :: "'a::len word" assumes "x < 2 ^ n" shows "x AND mask n = x" by (metis (no_types) assms lt2p_lem mask_eq_iff not_less word_2p_lem word_size) lemmas mask_eq_iff_w2p = trans [OF mask_eq_iff word_2p_lem [symmetric]] lemmas and_mask_less' = iffD2 [OF word_2p_lem and_mask_lt_2p, simplified word_size] lemma and_mask_less_size: "n < size x \ x AND mask n < 2 ^ n" for x :: \'a::len word\ unfolding word_size by (erule and_mask_less') lemma word_mod_2p_is_mask [OF refl]: "c = 2 ^ n \ c > 0 \ x mod c = x AND mask n" for c x :: "'a::len word" by (auto simp: word_mod_def uint_2p and_mask_mod_2p) lemma mask_eqs: "(a AND mask n) + b AND mask n = a + b AND mask n" "a + (b AND mask n) AND mask n = a + b AND mask n" "(a AND mask n) - b AND mask n = a - b AND mask n" "a - (b AND mask n) AND mask n = a - b AND mask n" "a * (b AND mask n) AND mask n = a * b AND mask n" "(b AND mask n) * a AND mask n = b * a AND mask n" "(a AND mask n) + (b AND mask n) AND mask n = a + b AND mask n" "(a AND mask n) - (b AND mask n) AND mask n = a - b AND mask n" "(a AND mask n) * (b AND mask n) AND mask n = a * b AND mask n" "- (a AND mask n) AND mask n = - a AND mask n" "word_succ (a AND mask n) AND mask n = word_succ a AND mask n" "word_pred (a AND mask n) AND mask n = word_pred a AND mask n" using word_of_int_Ex [where x=a] word_of_int_Ex [where x=b] unfolding take_bit_eq_mask [symmetric] by (transfer; simp add: 
take_bit_eq_mod mod_simps)+ lemma mask_power_eq: "(x AND mask n) ^ k AND mask n = x ^ k AND mask n" for x :: \'a::len word\ using word_of_int_Ex [where x=x] unfolding take_bit_eq_mask [symmetric] by (transfer; simp add: take_bit_eq_mod mod_simps)+ lemma mask_full [simp]: "mask LENGTH('a) = (- 1 :: 'a::len word)" by transfer (simp add: take_bit_minus_one_eq_mask) subsubsection \Slices\ definition slice1 :: \nat \ 'a::len word \ 'b::len word\ where \slice1 n w = (if n < LENGTH('a) then ucast (drop_bit (LENGTH('a) - n) w) else push_bit (n - LENGTH('a)) (ucast w))\ lemma bit_slice1_iff [bit_simps]: \bit (slice1 m w :: 'b::len word) n \ m - LENGTH('a) \ n \ n < min LENGTH('b) m \ bit w (n + (LENGTH('a) - m) - (m - LENGTH('a)))\ for w :: \'a::len word\ by (auto simp add: slice1_def bit_ucast_iff bit_drop_bit_eq bit_push_bit_iff not_less not_le ac_simps dest: bit_imp_le_length) definition slice :: \nat \ 'a::len word \ 'b::len word\ where \slice n = slice1 (LENGTH('a) - n)\ lemma bit_slice_iff [bit_simps]: \bit (slice m w :: 'b::len word) n \ n < min LENGTH('b) (LENGTH('a) - m) \ bit w (n + LENGTH('a) - (LENGTH('a) - m))\ for w :: \'a::len word\ by (simp add: slice_def word_size bit_slice1_iff) lemma slice1_0 [simp] : "slice1 n 0 = 0" unfolding slice1_def by simp lemma slice_0 [simp] : "slice n 0 = 0" unfolding slice_def by auto lemma ucast_slice1: "ucast w = slice1 (size w) w" unfolding slice1_def by (simp add: size_word.rep_eq) lemma ucast_slice: "ucast w = slice 0 w" by (simp add: slice_def slice1_def) lemma slice_id: "slice 0 t = t" by (simp only: ucast_slice [symmetric] ucast_id) lemma rev_slice1: \slice1 n (word_reverse w :: 'b::len word) = word_reverse (slice1 k w :: 'a::len word)\ if \n + k = LENGTH('a) + LENGTH('b)\ proof (rule bit_word_eqI) fix m assume *: \m < LENGTH('a)\ from that have **: \LENGTH('b) = n + k - LENGTH('a)\ by simp show \bit (slice1 n (word_reverse w :: 'b word) :: 'a word) m \ bit (word_reverse (slice1 k w :: 'a word)) m\ unfolding 
bit_slice1_iff bit_word_reverse_iff using * ** by (cases \n \ LENGTH('a)\; cases \k \ LENGTH('a)\) auto qed lemma rev_slice: "n + k + LENGTH('a::len) = LENGTH('b::len) \ slice n (word_reverse (w::'b word)) = word_reverse (slice k w :: 'a word)" unfolding slice_def word_size by (simp add: rev_slice1) subsubsection \Revcast\ definition revcast :: \'a::len word \ 'b::len word\ where \revcast = slice1 LENGTH('b)\ lemma bit_revcast_iff [bit_simps]: \bit (revcast w :: 'b::len word) n \ LENGTH('b) - LENGTH('a) \ n \ n < LENGTH('b) \ bit w (n + (LENGTH('a) - LENGTH('b)) - (LENGTH('b) - LENGTH('a)))\ for w :: \'a::len word\ by (simp add: revcast_def bit_slice1_iff) lemma revcast_slice1 [OF refl]: "rc = revcast w \ slice1 (size rc) w = rc" by (simp add: revcast_def word_size) lemma revcast_rev_ucast [OF refl refl refl]: "cs = [rc, uc] \ rc = revcast (word_reverse w) \ uc = ucast w \ rc = word_reverse uc" by (metis rev_slice1 revcast_slice1 ucast_slice1 word_size) lemma revcast_ucast: "revcast w = word_reverse (ucast (word_reverse w))" using revcast_rev_ucast [of "word_reverse w"] by simp lemma ucast_revcast: "ucast w = word_reverse (revcast (word_reverse w))" by (fact revcast_rev_ucast [THEN word_rev_gal']) lemma ucast_rev_revcast: "ucast (word_reverse w) = word_reverse (revcast w)" by (fact revcast_ucast [THEN word_rev_gal']) text "linking revcast and cast via shift" lemmas wsst_TYs = source_size target_size word_size lemmas sym_notr = not_iff [THEN iffD2, THEN not_sym, THEN not_iff [THEN iffD1]] subsection \Split and cat\ lemmas word_split_bin' = word_split_def lemmas word_cat_bin' = word_cat_eq \ \this odd result is analogous to \ucast_id\, result to the length given by the result type\ lemma word_cat_id: "word_cat a b = b" by transfer (simp add: take_bit_concat_bit_eq) lemma word_cat_split_alt: "\size w \ size u + size v; word_split w = (u,v)\ \ word_cat u v = w" unfolding word_split_def by (rule bit_word_eqI) (auto simp add: bit_word_cat_iff not_less word_size 
bit_ucast_iff bit_drop_bit_eq) lemmas word_cat_split_size = sym [THEN [2] word_cat_split_alt [symmetric]] subsubsection \Split and slice\ lemma split_slices: assumes "word_split w = (u, v)" shows "u = slice (size v) w \ v = slice 0 w" unfolding word_size proof (intro conjI) have \
: "\n. \ucast (drop_bit LENGTH('b) w) = u; LENGTH('c) < LENGTH('b)\ \ \ bit u n" by (metis bit_take_bit_iff bit_word_of_int_iff diff_is_0_eq' drop_bit_take_bit less_imp_le less_nat_zero_code of_int_uint unsigned_drop_bit_eq) show "u = slice LENGTH('b) w" proof (rule bit_word_eqI) show "bit u n = bit ((slice LENGTH('b) w)::'a word) n" if "n < LENGTH('a)" for n using assms bit_imp_le_length unfolding word_split_def bit_slice_iff by (fastforce simp add: \
ac_simps word_size bit_ucast_iff bit_drop_bit_eq) qed show "v = slice 0 w" by (metis Pair_inject assms ucast_slice word_split_bin') qed lemma slice_cat1 [OF refl]: "\wc = word_cat a b; size a + size b \ size wc\ \ slice (size b) wc = a" by (rule bit_word_eqI) (auto simp add: bit_slice_iff bit_word_cat_iff word_size) lemmas slice_cat2 = trans [OF slice_id word_cat_id] lemma cat_slices: "\a = slice n c; b = slice 0 c; n = size b; size c \ size a + size b\ \ word_cat a b = c" by (rule bit_word_eqI) (auto simp add: bit_slice_iff bit_word_cat_iff word_size) lemma word_split_cat_alt: assumes "w = word_cat u v" and size: "size u + size v \ size w" shows "word_split w = (u,v)" proof - have "ucast ((drop_bit LENGTH('c) (word_cat u v))::'a word) = u" "ucast ((word_cat u v)::'a word) = v" using assms by (auto simp add: word_size bit_ucast_iff bit_drop_bit_eq bit_word_cat_iff intro: bit_eqI) then show ?thesis by (simp add: assms(1) word_split_bin') qed lemma horner_sum_uint_exp_Cons_eq: \horner_sum uint (2 ^ LENGTH('a)) (w # ws) = concat_bit LENGTH('a) (uint w) (horner_sum uint (2 ^ LENGTH('a)) ws)\ for ws :: \'a::len word list\ by (simp add: bintr_uint concat_bit_eq push_bit_eq_mult) lemma bit_horner_sum_uint_exp_iff: \bit (horner_sum uint (2 ^ LENGTH('a)) ws) n \ n div LENGTH('a) < length ws \ bit (ws ! 
(n div LENGTH('a))) (n mod LENGTH('a))\ for ws :: \'a::len word list\ proof (induction ws arbitrary: n) case Nil then show ?case by simp next case (Cons w ws) then show ?case by (cases \n \ LENGTH('a)\) (simp_all only: horner_sum_uint_exp_Cons_eq, simp_all add: bit_concat_bit_iff le_div_geq le_mod_geq bit_uint_iff Cons) qed subsection \Rotation\ lemma word_rotr_word_rotr_eq: \word_rotr m (word_rotr n w) = word_rotr (m + n) w\ by (rule bit_word_eqI) (simp add: bit_word_rotr_iff ac_simps mod_add_right_eq) lemma word_rot_lem: "\l + k = d + k mod l; n < l\ \ ((d + n) mod l) = n" for l::nat by (metis (no_types, lifting) add.commute add.right_neutral add_diff_cancel_left' mod_if mod_mult_div_eq mod_mult_self2 mod_self) lemma word_rot_rl [simp]: \word_rotl k (word_rotr k v) = v\ proof (rule bit_word_eqI) show "bit (word_rotl k (word_rotr k v)) n = bit v n" if "n < LENGTH('a)" for n using that by (auto simp: word_rot_lem word_rotl_eq_word_rotr word_rotr_word_rotr_eq bit_word_rotr_iff algebra_simps split: nat_diff_split) qed lemma word_rot_lr [simp]: \word_rotr k (word_rotl k v) = v\ proof (rule bit_word_eqI) show "bit (word_rotr k (word_rotl k v)) n = bit v n" if "n < LENGTH('a)" for n using that by (auto simp add: word_rot_lem word_rotl_eq_word_rotr word_rotr_word_rotr_eq bit_word_rotr_iff algebra_simps split: nat_diff_split) qed lemma word_rot_gal: \word_rotr n v = w \ word_rotl n w = v\ by auto lemma word_rot_gal': \w = word_rotr n v \ v = word_rotl n w\ by auto lemma word_rotr_rev: \word_rotr n w = word_reverse (word_rotl n (word_reverse w))\ proof (rule bit_word_eqI) fix m assume \m < LENGTH('a)\ moreover have \1 + ((int m + int n mod int LENGTH('a)) mod int LENGTH('a) + ((int LENGTH('a) * 2) mod int LENGTH('a) - (1 + (int m + int n mod int LENGTH('a)))) mod int LENGTH('a)) = int LENGTH('a)\ apply (cases \(1 + (int m + int n mod int LENGTH('a))) mod int LENGTH('a) = 0\) using zmod_zminus1_eq_if [of \1 + (int m + int n mod int LENGTH('a))\ \int LENGTH('a)\] apply 
simp_all apply (auto simp add: algebra_simps) apply (metis (mono_tags, opaque_lifting) Abs_fnat_hom_add mod_Suc mod_mult_self2_is_0 of_nat_Suc of_nat_mod semiring_char_0_class.of_nat_neq_0) apply (metis (no_types, opaque_lifting) Abs_fnat_hom_add less_not_refl mod_Suc of_nat_Suc of_nat_gt_0 of_nat_mod) done then have \int ((m + n) mod LENGTH('a)) = int (LENGTH('a) - Suc ((LENGTH('a) - Suc m + LENGTH('a) - n mod LENGTH('a)) mod LENGTH('a)))\ using \m < LENGTH('a)\ by (simp only: of_nat_mod mod_simps) (simp add: of_nat_diff of_nat_mod Suc_le_eq add_less_mono algebra_simps mod_simps) then have \(m + n) mod LENGTH('a) = LENGTH('a) - Suc ((LENGTH('a) - Suc m + LENGTH('a) - n mod LENGTH('a)) mod LENGTH('a))\ by simp ultimately show \bit (word_rotr n w) m \ bit (word_reverse (word_rotl n (word_reverse w))) m\ by (simp add: word_rotl_eq_word_rotr bit_word_rotr_iff bit_word_reverse_iff) qed lemma word_roti_0 [simp]: "word_roti 0 w = w" by transfer simp lemma word_roti_add: "word_roti (m + n) w = word_roti m (word_roti n w)" by (rule bit_word_eqI) (simp add: bit_word_roti_iff nat_less_iff mod_simps ac_simps) lemma word_roti_conv_mod': "word_roti n w = word_roti (n mod int (size w)) w" by transfer simp lemmas word_roti_conv_mod = word_roti_conv_mod' [unfolded word_size] end subsubsection \"Word rotation commutes with bit-wise operations\ \ \using locale to not pollute lemma namespace\ locale word_rotate begin context includes bit_operations_syntax begin lemma word_rot_logs: "word_rotl n (NOT v) = NOT (word_rotl n v)" "word_rotr n (NOT v) = NOT (word_rotr n v)" "word_rotl n (x AND y) = word_rotl n x AND word_rotl n y" "word_rotr n (x AND y) = word_rotr n x AND word_rotr n y" "word_rotl n (x OR y) = word_rotl n x OR word_rotl n y" "word_rotr n (x OR y) = word_rotr n x OR word_rotr n y" "word_rotl n (x XOR y) = word_rotl n x XOR word_rotl n y" "word_rotr n (x XOR y) = word_rotr n x XOR word_rotr n y" by (rule bit_word_eqI, auto simp add: bit_word_rotl_iff bit_word_rotr_iff 
bit_and_iff bit_or_iff bit_xor_iff bit_not_iff algebra_simps not_le)+ end end lemmas word_rot_logs = word_rotate.word_rot_logs lemma word_rotx_0 [simp] : "word_rotr i 0 = 0 \ word_rotl i 0 = 0" by transfer simp_all lemma word_roti_0' [simp] : "word_roti n 0 = 0" by transfer simp declare word_roti_eq_word_rotr_word_rotl [simp] subsection \Maximum machine word\ context includes bit_operations_syntax begin lemma word_int_cases: fixes x :: "'a::len word" obtains n where "x = word_of_int n" and "0 \ n" and "n < 2^LENGTH('a)" by (rule that [of \uint x\]) simp_all lemma word_nat_cases [cases type: word]: fixes x :: "'a::len word" obtains n where "x = of_nat n" and "n < 2^LENGTH('a)" by (rule that [of \unat x\]) simp_all lemma max_word_max [intro!]: \n \ - 1\ for n :: \'a::len word\ by (fact word_order.extremum) lemma word_of_int_2p_len: "word_of_int (2 ^ LENGTH('a)) = (0::'a::len word)" by simp lemma word_pow_0: "(2::'a::len word) ^ LENGTH('a) = 0" by (fact word_exp_length_eq_0) lemma max_word_wrap: \x + 1 = 0 \ x = - 1\ for x :: \'a::len word\ by (simp add: eq_neg_iff_add_eq_0) lemma word_and_max: \x AND - 1 = x\ for x :: \'a::len word\ by (fact word_log_esimps) lemma word_or_max: \x OR - 1 = - 1\ for x :: \'a::len word\ by (fact word_log_esimps) lemma word_ao_dist2: "x AND (y OR z) = x AND y OR x AND z" for x y z :: "'a::len word" by (fact bit.conj_disj_distrib) lemma word_oa_dist2: "x OR y AND z = (x OR y) AND (x OR z)" for x y z :: "'a::len word" by (fact bit.disj_conj_distrib) lemma word_and_not [simp]: "x AND NOT x = 0" for x :: "'a::len word" by (fact bit.conj_cancel_right) lemma word_or_not [simp]: \x OR NOT x = - 1\ for x :: \'a::len word\ by (fact bit.disj_cancel_right) lemma word_xor_and_or: "x XOR y = x AND NOT y OR NOT x AND y" for x y :: "'a::len word" by (fact bit.xor_def) lemma uint_lt_0 [simp]: "uint x < 0 = False" by (simp add: linorder_not_less) lemma word_less_1 [simp]: "x < 1 \ x = 0" for x :: "'a::len word" by (simp add: word_less_nat_alt unat_0_iff) 
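The complement and distributivity laws in this subsection (word_and_not, word_or_not, word_xor_and_or, word_ao_dist2, word_oa_dist2) are the usual Boolean-algebra identities, with -1 playing the role of the top element. A minimal Python sketch on 8-bit words (the width and helper w_not are illustrative assumptions):

```python
LEN = 8
M = (1 << LEN) - 1          # the word -1 (all bits set)

def w_not(x):
    return x ^ M            # NOT on an LEN-bit word

x, y, z = 0xA5, 0x3C, 0x0F

# word_and_not / word_or_not: complement laws, with -1 modelled as M
assert x & w_not(x) == 0
assert x | w_not(x) == M

# word_xor_and_or: x XOR y = x AND NOT y OR NOT x AND y
assert x ^ y == (x & w_not(y)) | (w_not(x) & y)

# word_ao_dist2 / word_oa_dist2: both distributive laws hold
assert x & (y | z) == (x & y) | (x & z)
assert x | (y & z) == (x | y) & (x | z)
print("ok")
```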
lemma uint_plus_if_size: "uint (x + y) = (if uint x + uint y < 2^size x then uint x + uint y else uint x + uint y - 2^size x)" by (simp add: take_bit_eq_mod word_size uint_word_of_int_eq uint_plus_if') lemma unat_plus_if_size: "unat (x + y) = (if unat x + unat y < 2^size x then unat x + unat y else unat x + unat y - 2^size x)" for x y :: "'a::len word" by (simp add: size_word.rep_eq unat_arith_simps) lemma word_neq_0_conv: "w \ 0 \ 0 < w" for w :: "'a::len word" by (fact word_coorder.not_eq_extremum) lemma max_lt: "unat (max a b div c) = unat (max a b) div unat c" for c :: "'a::len word" by (fact unat_div) lemma uint_sub_if_size: "uint (x - y) = (if uint y \ uint x then uint x - uint y else uint x - uint y + 2^size x)" by (simp add: size_word.rep_eq uint_sub_if') lemma unat_sub: \unat (a - b) = unat a - unat b\ if \b \ a\ by (meson that unat_sub_if_size word_le_nat_alt) lemmas word_less_sub1_numberof [simp] = word_less_sub1 [of "numeral w"] for w lemmas word_le_sub1_numberof [simp] = word_le_sub1 [of "numeral w"] for w lemma word_of_int_minus: "word_of_int (2^LENGTH('a) - i) = (word_of_int (-i)::'a::len word)" by simp lemma word_of_int_inj: \(word_of_int x :: 'a::len word) = word_of_int y \ x = y\ if \0 \ x \ x < 2 ^ LENGTH('a)\ \0 \ y \ y < 2 ^ LENGTH('a)\ using that by (transfer fixing: x y) (simp add: take_bit_int_eq_self) lemma word_le_less_eq: "x \ y \ x = y \ x < y" for x y :: "'z::len word" by (auto simp add: order_class.le_less) lemma mod_plus_cong: fixes b b' :: int assumes 1: "b = b'" and 2: "x mod b' = x' mod b'" and 3: "y mod b' = y' mod b'" and 4: "x' + y' = z'" shows "(x + y) mod b = z' mod b'" proof - from 1 2[symmetric] 3[symmetric] have "(x + y) mod b = (x' mod b' + y' mod b') mod b'" by (simp add: mod_add_eq) also have "\ = (x' + y') mod b'" by (simp add: mod_add_eq) finally show ?thesis by (simp add: 4) qed lemma mod_minus_cong: fixes b b' :: int assumes "b = b'" and "x mod b' = x' mod b'" and "y mod b' = y' mod b'" and "x' - y' = z'" shows "(x - 
y) mod b = z' mod b'" using assms [symmetric] by (auto intro: mod_diff_cong) lemma word_induct_less [case_names zero less]: \P m\ if zero: \P 0\ and less: \\n. n < m \ P n \ P (1 + n)\ for m :: \'a::len word\ proof - define q where \q = unat m\ with less have \\n. n < word_of_nat q \ P n \ P (1 + n)\ by simp then have \P (word_of_nat q :: 'a word)\ proof (induction q) case 0 show ?case by (simp add: zero) next case (Suc q) show ?case proof (cases \1 + word_of_nat q = (0 :: 'a word)\) case True then show ?thesis by (simp add: zero) next case False then have *: \word_of_nat q < (word_of_nat (Suc q) :: 'a word)\ by (simp add: unatSuc word_less_nat_alt) then have **: \n < (1 + word_of_nat q :: 'a word) \ n \ (word_of_nat q :: 'a word)\ for n by (metis (no_types, lifting) add.commute inc_le le_less_trans not_less of_nat_Suc) have \P (word_of_nat q)\ by (simp add: "**" Suc.IH Suc.prems) with * have \P (1 + word_of_nat q)\ by (rule Suc.prems) then show ?thesis by simp qed qed with \q = unat m\ show ?thesis by simp qed lemma word_induct: "P 0 \ (\n. P n \ P (1 + n)) \ P m" for P :: "'a::len word \ bool" by (rule word_induct_less) lemma word_induct2 [case_names zero suc, induct type]: "P 0 \ (\n. 1 + n \ 0 \ P n \ P (1 + n)) \ P n" for P :: "'b::len word \ bool" by (induction rule: word_induct_less; force) subsection \Recursion combinator for words\ definition word_rec :: "'a \ ('b::len word \ 'a \ 'a) \ 'b word \ 'a" where "word_rec forZero forSuc n = rec_nat forZero (forSuc \ of_nat) (unat n)" lemma word_rec_0 [simp]: "word_rec z s 0 = z" by (simp add: word_rec_def) lemma word_rec_Suc [simp]: "1 + n \ 0 \ word_rec z s (1 + n) = s n (word_rec z s n)" for n :: "'a::len word" by (simp add: unatSuc word_rec_def) lemma word_rec_Pred: "n \ 0 \ word_rec z s n = s (n - 1) (word_rec z s (n - 1))" by (metis add.commute diff_add_cancel word_rec_Suc) lemma word_rec_in: "f (word_rec z (\_. f) n) = word_rec (f z) (\_. 
f) n" - by (induct n) (simp_all add: word_rec_Suc) + by (induct n) simp_all lemma word_rec_in2: "f n (word_rec z f n) = word_rec (f 0 z) (f \ (+) 1) n" - by (induct n) (simp_all add: word_rec_Suc) + by (induct n) simp_all lemma word_rec_twice: "m \ n \ word_rec z f n = word_rec (word_rec z f (n - m)) (f \ (+) (n - m)) m" proof (induction n arbitrary: z f) case zero then show ?case by (metis diff_0_right word_le_0_iff word_rec_0) next case (suc n z f) show ?case proof (cases "1 + (n - m) = 0") case True then show ?thesis by (simp add: add_diff_eq) next case False then have eq: "1 + n - m = 1 + (n - m)" by simp with False have "m \ n" by (metis "suc.prems" add.commute dual_order.antisym eq_iff_diff_eq_0 inc_le leI) with False "suc.hyps" show ?thesis using suc.IH [of "f 0 z" "f \ (+) 1"] by (simp add: word_rec_in2 eq add.assoc o_def) qed qed lemma word_rec_id: "word_rec z (\_. id) n = z" by (induct n) auto lemma word_rec_id_eq: "(\m. m < n \ f m = id) \ word_rec z f n = z" by (induction n) (auto simp add: unatSuc unat_arith_simps(2)) lemma word_rec_max: assumes "\m\n. m \ - 1 \ f m = id" shows "word_rec z f (- 1) = word_rec z f n" proof - have \
: "\m. \m < - 1 - n\ \ (f \ (+) n) m = id" using assms by (metis (mono_tags, lifting) add.commute add_diff_cancel_left' comp_apply less_le olen_add_eqv plus_minus_no_overflow word_n1_ge) have "word_rec z f (- 1) = word_rec (word_rec z f (- 1 - (- 1 - n))) (f \ (+) (- 1 - (- 1 - n))) (- 1 - n)" by (meson word_n1_ge word_rec_twice) also have "... = word_rec z f n" by (metis (no_types, lifting) \
diff_add_cancel minus_diff_eq uminus_add_conv_diff word_rec_id_eq) finally show ?thesis . qed subsection \More\ lemma mask_1: "mask 1 = 1" by simp lemma mask_Suc_0: "mask (Suc 0) = 1" by simp lemma bin_last_bintrunc: "odd (take_bit l n) \ l > 0 \ odd n" by simp lemma push_bit_word_beyond [simp]: \push_bit n w = 0\ if \LENGTH('a) \ n\ for w :: \'a::len word\ using that by (transfer fixing: n) (simp add: take_bit_push_bit) lemma drop_bit_word_beyond [simp]: \drop_bit n w = 0\ if \LENGTH('a) \ n\ for w :: \'a::len word\ using that by (transfer fixing: n) (simp add: drop_bit_take_bit) lemma signed_drop_bit_beyond: \signed_drop_bit n w = (if bit w (LENGTH('a) - Suc 0) then - 1 else 0)\ if \LENGTH('a) \ n\ for w :: \'a::len word\ by (rule bit_word_eqI) (simp add: bit_signed_drop_bit_iff that) end subsection \SMT support\ ML_file \Tools/smt_word.ML\ end diff --git a/src/HOL/Library/Z2.thy b/src/HOL/Library/Z2.thy --- a/src/HOL/Library/Z2.thy +++ b/src/HOL/Library/Z2.thy @@ -1,252 +1,252 @@ (* Title: HOL/Library/Z2.thy Author: Brian Huffman *) section \The Field of Integers mod 2\ theory Z2 -imports Main "HOL-Library.Bit_Operations" +imports Main begin text \ - Note that in most cases \<^typ>\bool\ is appropriate hen a binary type is needed; the + Note that in most cases \<^typ>\bool\ is appropriate when a binary type is needed; the type provided here, for historical reasons named \<^text>\bit\, is only needed if proper field operations are required. \ typedef bit = \UNIV :: bool set\ .. 
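The bit type introduced here is the two-element field Z/2Z; in particular addition coincides with exclusive-or and multiplication with conjunction (the later lemmas add_bit_eq_xor and mult_bit_eq_and). A Python model over {0, 1} makes this concrete:

```python
# Arithmetic in the field Z/2Z, modelling the 'bit' type over {0, 1}
for a in (0, 1):
    for b in (0, 1):
        assert (a + b) % 2 == a ^ b      # add_bit_eq_xor: (+) = xor
        assert (a * b) % 2 == a & b      # mult_bit_eq_and: (*) = and
    assert (a + a) % 2 == 0              # uminus = id: -a = a
    assert (a * a) % 2 == a              # multiplication is idempotent
print("ok")
```

Because every element is its own additive inverse, minus coincides with plus (minus_bit above), and since the only nonzero element is 1, division and inverse collapse to multiplication and the identity respectively.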
instantiation bit :: zero_neq_one begin definition zero_bit :: bit where \0 = Abs_bit False\ definition one_bit :: bit where \1 = Abs_bit True\ instance by standard (simp add: zero_bit_def one_bit_def Abs_bit_inject) end free_constructors case_bit for \0::bit\ | \1::bit\ proof - fix P :: bool fix a :: bit assume \a = 0 \ P\ and \a = 1 \ P\ then show P by (cases a) (auto simp add: zero_bit_def one_bit_def Abs_bit_inject) qed simp lemma bit_not_zero_iff [simp]: \a \ 0 \ a = 1\ for a :: bit by (cases a) simp_all lemma bit_not_one_imp [simp]: \a \ 1 \ a = 0\ for a :: bit by (cases a) simp_all instantiation bit :: semidom_modulo begin definition plus_bit :: \bit \ bit \ bit\ where \a + b = Abs_bit (Rep_bit a \ Rep_bit b)\ definition minus_bit :: \bit \ bit \ bit\ where [simp]: \minus_bit = plus\ definition times_bit :: \bit \ bit \ bit\ where \a * b = Abs_bit (Rep_bit a \ Rep_bit b)\ definition divide_bit :: \bit \ bit \ bit\ where [simp]: \divide_bit = times\ definition modulo_bit :: \bit \ bit \ bit\ where \a mod b = Abs_bit (Rep_bit a \ \ Rep_bit b)\ instance by standard (auto simp flip: Rep_bit_inject simp add: zero_bit_def one_bit_def plus_bit_def times_bit_def modulo_bit_def Abs_bit_inverse Rep_bit_inverse) end lemma bit_2_eq_0 [simp]: \2 = (0::bit)\ by (simp flip: one_add_one add: zero_bit_def plus_bit_def) instance bit :: semiring_parity apply standard apply (auto simp flip: Rep_bit_inject simp add: modulo_bit_def Abs_bit_inverse Rep_bit_inverse) apply (auto simp add: zero_bit_def one_bit_def Abs_bit_inverse Rep_bit_inverse) done lemma Abs_bit_eq_of_bool [code_abbrev]: \Abs_bit = of_bool\ by (simp add: fun_eq_iff zero_bit_def one_bit_def) lemma Rep_bit_eq_odd: \Rep_bit = odd\ proof - have \\ Rep_bit 0\ by (simp only: zero_bit_def) (subst Abs_bit_inverse, auto) then show ?thesis by (auto simp flip: Rep_bit_inject simp add: fun_eq_iff) qed lemma Rep_bit_iff_odd [code_abbrev]: \Rep_bit b \ odd b\ by (simp add: Rep_bit_eq_odd) lemma Not_Rep_bit_iff_even 
[code_abbrev]: \\ Rep_bit b \ even b\ by (simp add: Rep_bit_eq_odd) lemma Not_Not_Rep_bit [code_unfold]: \\ \ Rep_bit b \ Rep_bit b\ by simp code_datatype \0::bit\ \1::bit\ lemma Abs_bit_code [code]: \Abs_bit False = 0\ \Abs_bit True = 1\ by (simp_all add: Abs_bit_eq_of_bool) lemma Rep_bit_code [code]: \Rep_bit 0 \ False\ \Rep_bit 1 \ True\ by (simp_all add: Rep_bit_eq_odd) context zero_neq_one begin abbreviation of_bit :: \bit \ 'a\ where \of_bit b \ of_bool (odd b)\ end context begin qualified lemma bit_eq_iff: \a = b \ (even a \ even b)\ for a b :: bit by (cases a; cases b) simp_all end lemma modulo_bit_unfold [simp, code]: \a mod b = of_bool (odd a \ even b)\ for a b :: bit by (simp add: modulo_bit_def Abs_bit_eq_of_bool Rep_bit_eq_odd) lemma power_bit_unfold [simp, code]: \a ^ n = of_bool (odd a \ n = 0)\ for a :: bit by (cases a) simp_all instantiation bit :: semiring_bits begin definition bit_bit :: \bit \ nat \ bool\ where [simp]: \bit_bit b n \ odd b \ n = 0\ instance apply standard apply auto apply (metis bit.exhaust of_bool_eq(2)) done end instantiation bit :: semiring_bit_shifts begin definition push_bit_bit :: \nat \ bit \ bit\ where [simp]: \push_bit n b = of_bool (odd b \ n = 0)\ for b :: bit definition drop_bit_bit :: \nat \ bit \ bit\ where [simp]: \drop_bit_bit = push_bit\ definition take_bit_bit :: \nat \ bit \ bit\ where [simp]: \take_bit n b = of_bool (odd b \ n > 0)\ for b :: bit instance by standard simp_all end instantiation bit :: semiring_bit_operations begin context includes bit_operations_syntax begin definition and_bit :: \bit \ bit \ bit\ where [simp]: \b AND c = of_bool (odd b \ odd c)\ for b c :: bit definition or_bit :: \bit \ bit \ bit\ where [simp]: \b OR c = of_bool (odd b \ odd c)\ for b c :: bit definition xor_bit :: \bit \ bit \ bit\ where [simp]: \b XOR c = of_bool (odd b \ odd c)\ for b c :: bit definition mask_bit :: \nat \ bit\ where [simp]: \mask n = (of_bool (n > 0) :: bit)\ definition set_bit_bit :: \nat \ bit \ bit\ 
where [simp]: \set_bit n b = of_bool (n = 0 \ odd b)\ for b :: bit definition unset_bit_bit :: \nat \ bit \ bit\ where [simp]: \unset_bit n b = of_bool (n > 0 \ odd b)\ for b :: bit definition flip_bit_bit :: \nat \ bit \ bit\ where [simp]: \flip_bit n b = of_bool ((n = 0) \ odd b)\ for b :: bit end instance by standard auto end lemma add_bit_eq_xor [simp, code]: \(+) = (Bit_Operations.xor :: bit \ _)\ by (auto simp add: fun_eq_iff) lemma mult_bit_eq_and [simp, code]: \(*) = (Bit_Operations.and :: bit \ _)\ by (simp add: fun_eq_iff) instantiation bit :: field begin definition uminus_bit :: \bit \ bit\ where [simp]: \uminus_bit = id\ definition inverse_bit :: \bit \ bit\ where [simp]: \inverse_bit = id\ instance by standard simp_all end instantiation bit :: ring_bit_operations begin definition not_bit :: \bit \ bit\ where [simp]: \NOT b = of_bool (even b)\ for b :: bit instance by standard auto end lemma bit_numeral_even [simp]: "numeral (Num.Bit0 w) = (0 :: bit)" by (simp only: Z2.bit_eq_iff even_numeral) simp lemma bit_numeral_odd [simp]: "numeral (Num.Bit1 w) = (1 :: bit)" by (simp only: Z2.bit_eq_iff odd_numeral) simp end diff --git a/src/HOL/List.thy b/src/HOL/List.thy --- a/src/HOL/List.thy +++ b/src/HOL/List.thy @@ -1,8327 +1,8327 @@ (* Title: HOL/List.thy Author: Tobias Nipkow; proofs tidied by LCP *) section \The datatype of finite lists\ theory List -imports Sledgehammer Code_Numeral Lifting_Set +imports Sledgehammer Lifting_Set begin datatype (set: 'a) list = Nil ("[]") | Cons (hd: 'a) (tl: "'a list") (infixr "#" 65) for map: map rel: list_all2 pred: list_all where "tl [] = []" datatype_compat list lemma [case_names Nil Cons, cases type: list]: \ \for backward compatibility -- names of variables differ\ "(y = [] \ P) \ (\a list. y = a # list \ P) \ P" by (rule list.exhaust) lemma [case_names Nil Cons, induct type: list]: \ \for backward compatibility -- names of variables differ\ "P [] \ (\a list. 
P list \ P (a # list)) \ P list" by (rule list.induct) text \Compatibility:\ setup \Sign.mandatory_path "list"\ lemmas inducts = list.induct lemmas recs = list.rec lemmas cases = list.case setup \Sign.parent_path\ lemmas set_simps = list.set (* legacy *) syntax \ \list Enumeration\ "_list" :: "args => 'a list" ("[(_)]") translations "[x, xs]" == "x#[xs]" "[x]" == "x#[]" subsection \Basic list processing functions\ primrec (nonexhaustive) last :: "'a list \ 'a" where "last (x # xs) = (if xs = [] then x else last xs)" primrec butlast :: "'a list \ 'a list" where "butlast [] = []" | "butlast (x # xs) = (if xs = [] then [] else x # butlast xs)" lemma set_rec: "set xs = rec_list {} (\x _. insert x) xs" by (induct xs) auto definition coset :: "'a list \ 'a set" where [simp]: "coset xs = - set xs" primrec append :: "'a list \ 'a list \ 'a list" (infixr "@" 65) where append_Nil: "[] @ ys = ys" | append_Cons: "(x#xs) @ ys = x # xs @ ys" primrec rev :: "'a list \ 'a list" where "rev [] = []" | "rev (x # xs) = rev xs @ [x]" primrec filter:: "('a \ bool) \ 'a list \ 'a list" where "filter P [] = []" | "filter P (x # xs) = (if P x then x # filter P xs else filter P xs)" text \Special input syntax for filter:\ syntax (ASCII) "_filter" :: "[pttrn, 'a list, bool] => 'a list" ("(1[_<-_./ _])") syntax "_filter" :: "[pttrn, 'a list, bool] => 'a list" ("(1[_\_ ./ _])") translations "[x<-xs . P]" \ "CONST filter (\x. 
P) xs" primrec fold :: "('a \<Rightarrow> 'b \<Rightarrow> 'b) \<Rightarrow> 'a list \<Rightarrow> 'b \<Rightarrow> 'b" where fold_Nil: "fold f [] = id" | fold_Cons: "fold f (x # xs) = fold f xs \<circ> f x" primrec foldr :: "('a \<Rightarrow> 'b \<Rightarrow> 'b) \<Rightarrow> 'a list \<Rightarrow> 'b \<Rightarrow> 'b" where foldr_Nil: "foldr f [] = id" | foldr_Cons: "foldr f (x # xs) = f x \<circ> foldr f xs" primrec foldl :: "('b \<Rightarrow> 'a \<Rightarrow> 'b) \<Rightarrow> 'b \<Rightarrow> 'a list \<Rightarrow> 'b" where foldl_Nil: "foldl f a [] = a" | foldl_Cons: "foldl f a (x # xs) = foldl f (f a x) xs" primrec concat:: "'a list list \<Rightarrow> 'a list" where "concat [] = []" | "concat (x # xs) = x @ concat xs" primrec drop:: "nat \<Rightarrow> 'a list \<Rightarrow> 'a list" where drop_Nil: "drop n [] = []" | drop_Cons: "drop n (x # xs) = (case n of 0 \<Rightarrow> x # xs | Suc m \<Rightarrow> drop m xs)" \<comment> \<open>Warning: simpset does not contain this definition, but separate theorems for \<open>n = 0\<close> and \<open>n = Suc k\<close>\<close> primrec take:: "nat \<Rightarrow> 'a list \<Rightarrow> 'a list" where take_Nil:"take n [] = []" | take_Cons: "take n (x # xs) = (case n of 0 \<Rightarrow> [] | Suc m \<Rightarrow> x # take m xs)" \<comment> \<open>Warning: simpset does not contain this definition, but separate theorems for \<open>n = 0\<close> and \<open>n = Suc k\<close>\<close> primrec (nonexhaustive) nth :: "'a list => nat => 'a" (infixl "!" 100) where nth_Cons: "(x # xs) ! n = (case n of 0 \<Rightarrow> x | Suc k \<Rightarrow> xs !
k)" \ \Warning: simpset does not contain this definition, but separate theorems for \n = 0\ and \n = Suc k\\ primrec list_update :: "'a list \ nat \ 'a \ 'a list" where "list_update [] i v = []" | "list_update (x # xs) i v = (case i of 0 \ v # xs | Suc j \ x # list_update xs j v)" nonterminal lupdbinds and lupdbind syntax "_lupdbind":: "['a, 'a] => lupdbind" ("(2_ :=/ _)") "" :: "lupdbind => lupdbinds" ("_") "_lupdbinds" :: "[lupdbind, lupdbinds] => lupdbinds" ("_,/ _") "_LUpdate" :: "['a, lupdbinds] => 'a" ("_/[(_)]" [1000,0] 900) translations "_LUpdate xs (_lupdbinds b bs)" == "_LUpdate (_LUpdate xs b) bs" "xs[i:=x]" == "CONST list_update xs i x" primrec takeWhile :: "('a \ bool) \ 'a list \ 'a list" where "takeWhile P [] = []" | "takeWhile P (x # xs) = (if P x then x # takeWhile P xs else [])" primrec dropWhile :: "('a \ bool) \ 'a list \ 'a list" where "dropWhile P [] = []" | "dropWhile P (x # xs) = (if P x then dropWhile P xs else x # xs)" primrec zip :: "'a list \ 'b list \ ('a \ 'b) list" where "zip xs [] = []" | zip_Cons: "zip xs (y # ys) = (case xs of [] \ [] | z # zs \ (z, y) # zip zs ys)" \ \Warning: simpset does not contain this definition, but separate theorems for \xs = []\ and \xs = z # zs\\ abbreviation map2 :: "('a \ 'b \ 'c) \ 'a list \ 'b list \ 'c list" where "map2 f xs ys \ map (\(x,y). f x y) (zip xs ys)" primrec product :: "'a list \ 'b list \ ('a \ 'b) list" where "product [] _ = []" | "product (x#xs) ys = map (Pair x) ys @ product xs ys" hide_const (open) product primrec product_lists :: "'a list list \ 'a list list" where "product_lists [] = [[]]" | "product_lists (xs # xss) = concat (map (\x. map (Cons x) (product_lists xss)) xs)" primrec upt :: "nat \ nat \ nat list" ("(1[_.. j then [i.. 
'a list \<Rightarrow> 'a list" where "insert x xs = (if x \<in> set xs then xs else x # xs)" definition union :: "'a list \<Rightarrow> 'a list \<Rightarrow> 'a list" where "union = fold insert" hide_const (open) insert union hide_fact (open) insert_def union_def primrec find :: "('a \<Rightarrow> bool) \<Rightarrow> 'a list \<Rightarrow> 'a option" where "find _ [] = None" | "find P (x#xs) = (if P x then Some x else find P xs)" text \<open>In the context of multisets, \<open>count_list\<close> is equivalent to \<^term>\<open>count \<circ> mset\<close> and it is advisable to use the latter.\<close> primrec count_list :: "'a list \<Rightarrow> 'a \<Rightarrow> nat" where "count_list [] y = 0" | "count_list (x#xs) y = (if x=y then count_list xs y + 1 else count_list xs y)" definition "extract" :: "('a \<Rightarrow> bool) \<Rightarrow> 'a list \<Rightarrow> ('a list * 'a * 'a list) option" where "extract P xs = (case dropWhile (Not \<circ> P) xs of [] \<Rightarrow> None | y#ys \<Rightarrow> Some(takeWhile (Not \<circ> P) xs, y, ys))" hide_const (open) "extract" primrec those :: "'a option list \<Rightarrow> 'a list option" where "those [] = Some []" | "those (x # xs) = (case x of None \<Rightarrow> None | Some y \<Rightarrow> map_option (Cons y) (those xs))" primrec remove1 :: "'a \<Rightarrow> 'a list \<Rightarrow> 'a list" where "remove1 x [] = []" | "remove1 x (y # xs) = (if x = y then xs else y # remove1 x xs)" primrec removeAll :: "'a \<Rightarrow> 'a list \<Rightarrow> 'a list" where "removeAll x [] = []" | "removeAll x (y # xs) = (if x = y then removeAll x xs else y # removeAll x xs)" primrec distinct :: "'a list \<Rightarrow> bool" where "distinct [] \<longleftrightarrow> True" | "distinct (x # xs) \<longleftrightarrow> x \<notin> set xs \<and> distinct xs" fun successively :: "('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow> 'a list \<Rightarrow> bool" where "successively P [] = True" | "successively P [x] = True" | "successively P (x # y # xs) = (P x y \<and> successively P (y#xs))" definition distinct_adj where "distinct_adj = successively (\<noteq>)" primrec remdups :: "'a list \<Rightarrow> 'a list" where "remdups [] = []" | "remdups (x # xs) = (if x \<in> set xs then remdups xs else x # remdups xs)" fun remdups_adj :: "'a list \<Rightarrow> 'a list" where "remdups_adj [] = []" | "remdups_adj [x] = [x]" | "remdups_adj (x # y # xs) = (if x = y then remdups_adj (x # xs) else x # remdups_adj (y #
xs))" primrec replicate :: "nat \ 'a \ 'a list" where replicate_0: "replicate 0 x = []" | replicate_Suc: "replicate (Suc n) x = x # replicate n x" text \ Function \size\ is overloaded for all datatypes. Users may refer to the list version as \length\.\ abbreviation length :: "'a list \ nat" where "length \ size" definition enumerate :: "nat \ 'a list \ (nat \ 'a) list" where enumerate_eq_zip: "enumerate n xs = zip [n.. 'a list" where "rotate1 [] = []" | "rotate1 (x # xs) = xs @ [x]" definition rotate :: "nat \ 'a list \ 'a list" where "rotate n = rotate1 ^^ n" definition nths :: "'a list => nat set => 'a list" where "nths xs A = map fst (filter (\p. snd p \ A) (zip xs [0.. 'a list list" where "subseqs [] = [[]]" | "subseqs (x#xs) = (let xss = subseqs xs in map (Cons x) xss @ xss)" primrec n_lists :: "nat \ 'a list \ 'a list list" where "n_lists 0 xs = [[]]" | "n_lists (Suc n) xs = concat (map (\ys. map (\y. y # ys) xs) (n_lists n xs))" hide_const (open) n_lists function splice :: "'a list \ 'a list \ 'a list" where "splice [] ys = ys" | "splice (x#xs) ys = x # splice ys xs" by pat_completeness auto termination by(relation "measure(\(xs,ys). 
size xs + size ys)") auto function shuffles where "shuffles [] ys = {ys}" | "shuffles xs [] = {xs}" | "shuffles (x # xs) (y # ys) = (#) x ` shuffles xs (y # ys) \ (#) y ` shuffles (x # xs) ys" by pat_completeness simp_all termination by lexicographic_order text\Use only if you cannot use \<^const>\Min\ instead:\ fun min_list :: "'a::ord list \ 'a" where "min_list (x # xs) = (case xs of [] \ x | _ \ min x (min_list xs))" text\Returns first minimum:\ fun arg_min_list :: "('a \ ('b::linorder)) \ 'a list \ 'a" where "arg_min_list f [x] = x" | "arg_min_list f (x#y#zs) = (let m = arg_min_list f (y#zs) in if f x \ f m then x else m)" text\ \begin{figure}[htbp] \fbox{ \begin{tabular}{l} @{lemma "[a,b]@[c,d] = [a,b,c,d]" by simp}\\ @{lemma "length [a,b,c] = 3" by simp}\\ @{lemma "set [a,b,c] = {a,b,c}" by simp}\\ @{lemma "map f [a,b,c] = [f a, f b, f c]" by simp}\\ @{lemma "rev [a,b,c] = [c,b,a]" by simp}\\ @{lemma "hd [a,b,c,d] = a" by simp}\\ @{lemma "tl [a,b,c,d] = [b,c,d]" by simp}\\ @{lemma "last [a,b,c,d] = d" by simp}\\ @{lemma "butlast [a,b,c,d] = [a,b,c]" by simp}\\ @{lemma[source] "filter (\n::nat. 
n<2) [0,2,1] = [0,1]" by simp}\\ @{lemma "concat [[a,b],[c,d,e],[],[f]] = [a,b,c,d,e,f]" by simp}\\ @{lemma "fold f [a,b,c] x = f c (f b (f a x))" by simp}\\ @{lemma "foldr f [a,b,c] x = f a (f b (f c x))" by simp}\\ @{lemma "foldl f x [a,b,c] = f (f (f x a) b) c" by simp}\\ @{lemma "successively (\) [True,False,True,False]" by simp}\\ @{lemma "zip [a,b,c] [x,y,z] = [(a,x),(b,y),(c,z)]" by simp}\\ @{lemma "zip [a,b] [x,y,z] = [(a,x),(b,y)]" by simp}\\ @{lemma "enumerate 3 [a,b,c] = [(3,a),(4,b),(5,c)]" by normalization}\\ @{lemma "List.product [a,b] [c,d] = [(a, c), (a, d), (b, c), (b, d)]" by simp}\\ @{lemma "product_lists [[a,b], [c], [d,e]] = [[a,c,d], [a,c,e], [b,c,d], [b,c,e]]" by simp}\\ @{lemma "splice [a,b,c] [x,y,z] = [a,x,b,y,c,z]" by simp}\\ @{lemma "splice [a,b,c,d] [x,y] = [a,x,b,y,c,d]" by simp}\\ @{lemma "shuffles [a,b] [c,d] = {[a,b,c,d],[a,c,b,d],[a,c,d,b],[c,a,b,d],[c,a,d,b],[c,d,a,b]}" by (simp add: insert_commute)}\\ @{lemma "take 2 [a,b,c,d] = [a,b]" by simp}\\ @{lemma "take 6 [a,b,c,d] = [a,b,c,d]" by simp}\\ @{lemma "drop 2 [a,b,c,d] = [c,d]" by simp}\\ @{lemma "drop 6 [a,b,c,d] = []" by simp}\\ @{lemma "takeWhile (%n::nat. n<3) [1,2,3,0] = [1,2]" by simp}\\ @{lemma "dropWhile (%n::nat. n<3) [1,2,3,0] = [3,0]" by simp}\\ @{lemma "distinct [2,0,1::nat]" by simp}\\ @{lemma "remdups [2,0,2,1::nat,2] = [0,1,2]" by simp}\\ @{lemma "remdups_adj [2,2,3,1,1::nat,2,1] = [2,3,1,2,1]" by simp}\\ @{lemma "List.insert 2 [0::nat,1,2] = [0,1,2]" by (simp add: List.insert_def)}\\ @{lemma "List.insert 3 [0::nat,1,2] = [3,0,1,2]" by (simp add: List.insert_def)}\\ @{lemma "List.union [2,3,4] [0::int,1,2] = [4,3,0,1,2]" by (simp add: List.insert_def List.union_def)}\\ @{lemma "List.find (%i::int. i>0) [0,0] = None" by simp}\\ @{lemma "List.find (%i::int. i>0) [0,1,0,2] = Some 1" by simp}\\ @{lemma "count_list [0,1,0,2::int] 0 = 2" by (simp)}\\ @{lemma "List.extract (%i::int. i>0) [0,0] = None" by(simp add: extract_def)}\\ @{lemma "List.extract (%i::int. 
i>0) [0,1,0,2] = Some([0], 1, [0,2])" by(simp add: extract_def)}\\ @{lemma "remove1 2 [2,0,2,1::nat,2] = [0,2,1,2]" by simp}\\ @{lemma "removeAll 2 [2,0,2,1::nat,2] = [0,1]" by simp}\\ @{lemma "nth [a,b,c,d] 2 = c" by simp}\\ @{lemma "[a,b,c,d][2 := x] = [a,b,x,d]" by simp}\\ @{lemma "nths [a,b,c,d,e] {0,2,3} = [a,c,d]" by (simp add:nths_def)}\\ @{lemma "subseqs [a,b] = [[a, b], [a], [b], []]" by simp}\\ @{lemma "List.n_lists 2 [a,b,c] = [[a, a], [b, a], [c, a], [a, b], [b, b], [c, b], [a, c], [b, c], [c, c]]" by (simp add: eval_nat_numeral)}\\ @{lemma "rotate1 [a,b,c,d] = [b,c,d,a]" by simp}\\ @{lemma "rotate 3 [a,b,c,d] = [d,a,b,c]" by (simp add:rotate_def eval_nat_numeral)}\\ @{lemma "replicate 4 a = [a,a,a,a]" by (simp add:eval_nat_numeral)}\\ @{lemma "[2..<5] = [2,3,4]" by (simp add:eval_nat_numeral)}\\ @{lemma "min_list [3,1,-2::int] = -2" by (simp)}\\ @{lemma "arg_min_list (\i. i*i) [3,-1,1,-2::int] = -1" by (simp)} \end{tabular}} \caption{Characteristic examples} \label{fig:Characteristic} \end{figure} Figure~\ref{fig:Characteristic} shows characteristic examples that should give an intuitive understanding of the above functions. \ text\The following simple sort(ed) functions are intended for proofs, not for efficient implementations.\ text \A sorted predicate w.r.t. a relation:\ fun sorted_wrt :: "('a \ 'a \ bool) \ 'a list \ bool" where "sorted_wrt P [] = True" | "sorted_wrt P (x # ys) = ((\y \ set ys. P x y) \ sorted_wrt P ys)" text \A class-based sorted predicate:\ context linorder begin abbreviation sorted :: "'a list \ bool" where "sorted \ sorted_wrt (\)" lemma sorted_simps: "sorted [] = True" "sorted (x # ys) = ((\y \ set ys. x\y) \ sorted ys)" by auto lemma strict_sorted_simps: "sorted_wrt (<) [] = True" "sorted_wrt (<) (x # ys) = ((\y \ set ys. 
x < y) \<and> sorted_wrt (<) ys)" by auto primrec insort_key :: "('b \<Rightarrow> 'a) \<Rightarrow> 'b \<Rightarrow> 'b list \<Rightarrow> 'b list" where "insort_key f x [] = [x]" | "insort_key f x (y#ys) = (if f x \<le> f y then (x#y#ys) else y#(insort_key f x ys))" definition sort_key :: "('b \<Rightarrow> 'a) \<Rightarrow> 'b list \<Rightarrow> 'b list" where "sort_key f xs = foldr (insort_key f) xs []" definition insort_insert_key :: "('b \<Rightarrow> 'a) \<Rightarrow> 'b \<Rightarrow> 'b list \<Rightarrow> 'b list" where "insort_insert_key f x xs = (if f x \<in> f ` set xs then xs else insort_key f x xs)" abbreviation "sort \<equiv> sort_key (\<lambda>x. x)" abbreviation "insort \<equiv> insort_key (\<lambda>x. x)" abbreviation "insort_insert \<equiv> insort_insert_key (\<lambda>x. x)" definition stable_sort_key :: "(('b \<Rightarrow> 'a) \<Rightarrow> 'b list \<Rightarrow> 'b list) \<Rightarrow> bool" where "stable_sort_key sk = (\<forall>f xs k. filter (\<lambda>y. f y = k) (sk f xs) = filter (\<lambda>y. f y = k) xs)" lemma strict_sorted_iff: "sorted_wrt (<) l \<longleftrightarrow> sorted l \<and> distinct l" by (induction l) (auto iff: antisym_conv1) lemma strict_sorted_imp_sorted: "sorted_wrt (<) xs \<Longrightarrow> sorted xs" by (auto simp: strict_sorted_iff) end subsubsection \<open>List comprehension\<close> text\<open>Input syntax for Haskell-like list comprehension notation. Typical example: \<open>[(x,y). x \<leftarrow> xs, y \<leftarrow> ys, x \<noteq> y]\<close>, the list of all pairs of distinct elements from \<open>xs\<close> and \<open>ys\<close>. The syntax is as in Haskell, except that \<open>|\<close> becomes a dot (like in Isabelle's set comprehension): \<open>[e. x \<leftarrow> xs, \<dots>]\<close> rather than \verb![e| x <- xs, ...]!. The qualifiers after the dot are \begin{description} \item[generators] \<open>p \<leftarrow> xs\<close>, where \<open>p\<close> is a pattern and \<open>xs\<close> an expression of list type, or \item[guards] \<open>b\<close>, where \<open>b\<close> is a boolean expression. %\item[local bindings] @{text"let x = e"}. \end{description} Just like in Haskell, list comprehension is just a shorthand. To avoid misunderstandings, the translation into desugared form is not reversed upon output. Note that the translation of \<open>[e. x \<leftarrow> xs]\<close> is optimized to \<^term>\<open>map (%x. e) xs\<close>. It is easy to write short list comprehensions which stand for complex expressions. During proofs, they may become unreadable (and mangled).
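The desugaring described above can also be illustrated executably. The following is an illustrative Python analogue (hypothetical helper names, not part of List.thy): generators become \<open>map\<close>, guards become an if-then-else returning \<open>[]\<close> on failure, and the nesting is flattened with \<open>concat\<close>.

```python
# Illustrative Python analogue (hypothetical, not part of List.thy): the
# comprehension [(x,y). x <- xs, y <- ys, x < y] desugars into the term
# concat (map (%x. concat (map (%y. if x < y then [(x,y)] else []) ys)) xs).
def concat(xss):
    """Flatten one level of nesting, like List.concat."""
    return [x for xs in xss for x in xs]

def desugared(xs, ys):
    # Generators become map; the guard becomes if-then-else with [] as failure.
    return concat(list(map(
        lambda x: concat(list(map(
            lambda y: [(x, y)] if x < y else [],
            ys))),
        xs)))

print(desugared([1, 2, 3], [2, 3]))  # [(1, 2), (1, 3), (2, 3)]
```

As in the Isabelle translation, the guard failing for a given binding simply contributes the empty list, so no pair with \<open>x \<ge> y\<close> survives the flattening.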
In such cases it can be advisable to introduce separate definitions for the list comprehensions in question.\ nonterminal lc_qual and lc_quals syntax "_listcompr" :: "'a \ lc_qual \ lc_quals \ 'a list" ("[_ . __") "_lc_gen" :: "'a \ 'a list \ lc_qual" ("_ \ _") "_lc_test" :: "bool \ lc_qual" ("_") (*"_lc_let" :: "letbinds => lc_qual" ("let _")*) "_lc_end" :: "lc_quals" ("]") "_lc_quals" :: "lc_qual \ lc_quals \ lc_quals" (", __") syntax (ASCII) "_lc_gen" :: "'a \ 'a list \ lc_qual" ("_ <- _") parse_translation \ let val NilC = Syntax.const \<^const_syntax>\Nil\; val ConsC = Syntax.const \<^const_syntax>\Cons\; val mapC = Syntax.const \<^const_syntax>\map\; val concatC = Syntax.const \<^const_syntax>\concat\; val IfC = Syntax.const \<^const_syntax>\If\; val dummyC = Syntax.const \<^const_syntax>\Pure.dummy_pattern\ fun single x = ConsC $ x $ NilC; fun pat_tr ctxt p e opti = (* %x. case x of p => e | _ => [] *) let (* FIXME proper name context!? *) val x = Free (singleton (Name.variant_list (fold Term.add_free_names [p, e] [])) "x", dummyT); val e = if opti then single e else e; val case1 = Syntax.const \<^syntax_const>\_case1\ $ p $ e; val case2 = Syntax.const \<^syntax_const>\_case1\ $ dummyC $ NilC; val cs = Syntax.const \<^syntax_const>\_case2\ $ case1 $ case2; in Syntax_Trans.abs_tr [x, Case_Translation.case_tr false ctxt [x, cs]] end; fun pair_pat_tr (x as Free _) e = Syntax_Trans.abs_tr [x, e] | pair_pat_tr (_ $ p1 $ p2) e = Syntax.const \<^const_syntax>\case_prod\ $ pair_pat_tr p1 (pair_pat_tr p2 e) | pair_pat_tr dummy e = Syntax_Trans.abs_tr [Syntax.const "_idtdummy", e] fun pair_pat ctxt (Const (\<^const_syntax>\Pair\,_) $ s $ t) = pair_pat ctxt s andalso pair_pat ctxt t | pair_pat ctxt (Free (s,_)) = let val thy = Proof_Context.theory_of ctxt; val s' = Proof_Context.intern_const ctxt s; in not (Sign.declared_const thy s') end | pair_pat _ t = (t = dummyC); fun abs_tr ctxt p e opti = let val p = Term_Position.strip_positions p in if pair_pat ctxt p then 
(pair_pat_tr p e, true) else (pat_tr ctxt p e opti, false) end fun lc_tr ctxt [e, Const (\<^syntax_const>\_lc_test\, _) $ b, qs] = let val res = (case qs of Const (\<^syntax_const>\_lc_end\, _) => single e | Const (\<^syntax_const>\_lc_quals\, _) $ q $ qs => lc_tr ctxt [e, q, qs]); in IfC $ b $ res $ NilC end | lc_tr ctxt [e, Const (\<^syntax_const>\_lc_gen\, _) $ p $ es, Const(\<^syntax_const>\_lc_end\, _)] = (case abs_tr ctxt p e true of (f, true) => mapC $ f $ es | (f, false) => concatC $ (mapC $ f $ es)) | lc_tr ctxt [e, Const (\<^syntax_const>\_lc_gen\, _) $ p $ es, Const (\<^syntax_const>\_lc_quals\, _) $ q $ qs] = let val e' = lc_tr ctxt [e, q, qs]; in concatC $ (mapC $ (fst (abs_tr ctxt p e' false)) $ es) end; in [(\<^syntax_const>\_listcompr\, lc_tr)] end \ ML_val \ let val read = Syntax.read_term \<^context> o Syntax.implode_input; fun check s1 s2 = read s1 aconv read s2 orelse error ("Check failed: " ^ quote (#1 (Input.source_content s1)) ^ Position.here_list [Input.pos_of s1, Input.pos_of s2]); in check \[(x,y,z). b]\ \if b then [(x, y, z)] else []\; check \[(x,y,z). (x,_,y)\xs]\ \map (\(x,_,y). (x, y, z)) xs\; check \[e x y. (x,_)\xs, y\ys]\ \concat (map (\(x,_). map (\y. e x y) ys) xs)\; check \[(x,y,z). xb]\ \if x < a then if b < x then [(x, y, z)] else [] else []\; check \[(x,y,z). x\xs, x>b]\ \concat (map (\x. if b < x then [(x, y, z)] else []) xs)\; check \[(x,y,z). xxs]\ \if x < a then map (\x. (x, y, z)) xs else []\; check \[(x,y). Cons True x \ xs]\ \concat (map (\xa. case xa of [] \ [] | True # x \ [(x, y)] | False # x \ []) xs)\; check \[(x,y,z). Cons x [] \ xs]\ \concat (map (\xa. case xa of [] \ [] | [x] \ [(x, y, z)] | x # aa # lista \ []) xs)\; check \[(x,y,z). xb, x=d]\ \if x < a then if b < x then if x = d then [(x, y, z)] else [] else [] else []\; check \[(x,y,z). xb, y\ys]\ \if x < a then if b < x then map (\y. (x, y, z)) ys else [] else []\; check \[(x,y,z). xxs,y>b]\ \if x < a then concat (map (\(_,x). 
if b < y then [(x, y, z)] else []) xs) else []\; check \[(x,y,z). xxs, y\ys]\ \if x < a then concat (map (\x. map (\y. (x, y, z)) ys) xs) else []\; check \[(x,y,z). x\xs, x>b, y \concat (map (\x. if b < x then if y < a then [(x, y, z)] else [] else []) xs)\; check \[(x,y,z). x\xs, x>b, y\ys]\ \concat (map (\x. if b < x then map (\y. (x, y, z)) ys else []) xs)\; check \[(x,y,z). x\xs, (y,_)\ys,y>x]\ \concat (map (\x. concat (map (\(y,_). if x < y then [(x, y, z)] else []) ys)) xs)\; check \[(x,y,z). x\xs, y\ys,z\zs]\ \concat (map (\x. concat (map (\y. map (\z. (x, y, z)) zs) ys)) xs)\ end; \ ML \ (* Simproc for rewriting list comprehensions applied to List.set to set comprehension. *) signature LIST_TO_SET_COMPREHENSION = sig val simproc : Proof.context -> cterm -> thm option end structure List_to_Set_Comprehension : LIST_TO_SET_COMPREHENSION = struct (* conversion *) fun all_exists_conv cv ctxt ct = (case Thm.term_of ct of Const (\<^const_name>\Ex\, _) $ Abs _ => Conv.arg_conv (Conv.abs_conv (all_exists_conv cv o #2) ctxt) ct | _ => cv ctxt ct) fun all_but_last_exists_conv cv ctxt ct = (case Thm.term_of ct of Const (\<^const_name>\Ex\, _) $ Abs (_, _, Const (\<^const_name>\Ex\, _) $ _) => Conv.arg_conv (Conv.abs_conv (all_but_last_exists_conv cv o #2) ctxt) ct | _ => cv ctxt ct) fun Collect_conv cv ctxt ct = (case Thm.term_of ct of Const (\<^const_name>\Collect\, _) $ Abs _ => Conv.arg_conv (Conv.abs_conv cv ctxt) ct | _ => raise CTERM ("Collect_conv", [ct])) fun rewr_conv' th = Conv.rewr_conv (mk_meta_eq th) fun conjunct_assoc_conv ct = Conv.try_conv (rewr_conv' @{thm conj_assoc} then_conv HOLogic.conj_conv Conv.all_conv conjunct_assoc_conv) ct fun right_hand_set_comprehension_conv conv ctxt = HOLogic.Trueprop_conv (HOLogic.eq_conv Conv.all_conv (Collect_conv (all_exists_conv conv o #2) ctxt)) (* term abstraction of list comprehension patterns *) datatype termlets = If | Case of typ * int local val set_Nil_I = @{lemma "set [] = {x. 
False}" by (simp add: empty_def [symmetric])} val set_singleton = @{lemma "set [a] = {x. x = a}" by simp} val inst_Collect_mem_eq = @{lemma "set A = {x. x \ set A}" by simp} val del_refl_eq = @{lemma "(t = t \ P) \ P" by simp} fun mk_set T = Const (\<^const_name>\set\, HOLogic.listT T --> HOLogic.mk_setT T) fun dest_set (Const (\<^const_name>\set\, _) $ xs) = xs fun dest_singleton_list (Const (\<^const_name>\Cons\, _) $ t $ (Const (\<^const_name>\Nil\, _))) = t | dest_singleton_list t = raise TERM ("dest_singleton_list", [t]) (*We check that one case returns a singleton list and all other cases return [], and return the index of the one singleton list case.*) fun possible_index_of_singleton_case cases = let fun check (i, case_t) s = (case strip_abs_body case_t of (Const (\<^const_name>\Nil\, _)) => s | _ => (case s of SOME NONE => SOME (SOME i) | _ => NONE)) in fold_index check cases (SOME NONE) |> the_default NONE end (*returns condition continuing term option*) fun dest_if (Const (\<^const_name>\If\, _) $ cond $ then_t $ Const (\<^const_name>\Nil\, _)) = SOME (cond, then_t) | dest_if _ = NONE (*returns (case_expr type index chosen_case constr_name) option*) fun dest_case ctxt case_term = let val (case_const, args) = strip_comb case_term in (case try dest_Const case_const of SOME (c, T) => (case Ctr_Sugar.ctr_sugar_of_case ctxt c of SOME {ctrs, ...} => (case possible_index_of_singleton_case (fst (split_last args)) of SOME i => let val constr_names = map (fst o dest_Const) ctrs val (Ts, _) = strip_type T val T' = List.last Ts in SOME (List.last args, T', i, nth args i, nth constr_names i) end | NONE => NONE) | NONE => NONE) | NONE => NONE) end fun tac ctxt [] = resolve_tac ctxt [set_singleton] 1 ORELSE resolve_tac ctxt [inst_Collect_mem_eq] 1 | tac ctxt (If :: cont) = Splitter.split_tac ctxt @{thms if_split} 1 THEN resolve_tac ctxt @{thms conjI} 1 THEN resolve_tac ctxt @{thms impI} 1 THEN Subgoal.FOCUS (fn {prems, context = ctxt', ...} => CONVERSION 
(right_hand_set_comprehension_conv (K (HOLogic.conj_conv (Conv.rewr_conv (List.last prems RS @{thm Eq_TrueI})) Conv.all_conv then_conv rewr_conv' @{lemma "(True \ P) = P" by simp})) ctxt') 1) ctxt 1 THEN tac ctxt cont THEN resolve_tac ctxt @{thms impI} 1 THEN Subgoal.FOCUS (fn {prems, context = ctxt', ...} => CONVERSION (right_hand_set_comprehension_conv (K (HOLogic.conj_conv (Conv.rewr_conv (List.last prems RS @{thm Eq_FalseI})) Conv.all_conv then_conv rewr_conv' @{lemma "(False \ P) = False" by simp})) ctxt') 1) ctxt 1 THEN resolve_tac ctxt [set_Nil_I] 1 | tac ctxt (Case (T, i) :: cont) = let val SOME {injects, distincts, case_thms, split, ...} = Ctr_Sugar.ctr_sugar_of ctxt (fst (dest_Type T)) in (* do case distinction *) Splitter.split_tac ctxt [split] 1 THEN EVERY (map_index (fn (i', _) => (if i' < length case_thms - 1 then resolve_tac ctxt @{thms conjI} 1 else all_tac) THEN REPEAT_DETERM (resolve_tac ctxt @{thms allI} 1) THEN resolve_tac ctxt @{thms impI} 1 THEN (if i' = i then (* continue recursively *) Subgoal.FOCUS (fn {prems, context = ctxt', ...} => CONVERSION (Thm.eta_conversion then_conv right_hand_set_comprehension_conv (K ((HOLogic.conj_conv (HOLogic.eq_conv Conv.all_conv (rewr_conv' (List.last prems)) then_conv (Conv.try_conv (Conv.rewrs_conv (map mk_meta_eq injects)))) Conv.all_conv) then_conv (Conv.try_conv (Conv.rewr_conv del_refl_eq)) then_conv conjunct_assoc_conv)) ctxt' then_conv (HOLogic.Trueprop_conv (HOLogic.eq_conv Conv.all_conv (Collect_conv (fn (_, ctxt'') => Conv.repeat_conv (all_but_last_exists_conv (K (rewr_conv' @{lemma "(\x. 
x = t \ P x) = P t" by simp})) ctxt'')) ctxt')))) 1) ctxt 1 THEN tac ctxt cont else Subgoal.FOCUS (fn {prems, context = ctxt', ...} => CONVERSION (right_hand_set_comprehension_conv (K (HOLogic.conj_conv ((HOLogic.eq_conv Conv.all_conv (rewr_conv' (List.last prems))) then_conv (Conv.rewrs_conv (map (fn th => th RS @{thm Eq_FalseI}) distincts))) Conv.all_conv then_conv (rewr_conv' @{lemma "(False \ P) = False" by simp}))) ctxt' then_conv HOLogic.Trueprop_conv (HOLogic.eq_conv Conv.all_conv (Collect_conv (fn (_, ctxt'') => Conv.repeat_conv (Conv.bottom_conv (K (rewr_conv' @{lemma "(\x. P) = P" by simp})) ctxt'')) ctxt'))) 1) ctxt 1 THEN resolve_tac ctxt [set_Nil_I] 1)) case_thms) end in fun simproc ctxt redex = let fun make_inner_eqs bound_vs Tis eqs t = (case dest_case ctxt t of SOME (x, T, i, cont, constr_name) => let val (vs, body) = strip_abs (Envir.eta_long (map snd bound_vs) cont) val x' = incr_boundvars (length vs) x val eqs' = map (incr_boundvars (length vs)) eqs val constr_t = list_comb (Const (constr_name, map snd vs ---> T), map Bound (((length vs) - 1) downto 0)) val constr_eq = Const (\<^const_name>\HOL.eq\, T --> T --> \<^typ>\bool\) $ constr_t $ x' in make_inner_eqs (rev vs @ bound_vs) (Case (T, i) :: Tis) (constr_eq :: eqs') body end | NONE => (case dest_if t of SOME (condition, cont) => make_inner_eqs bound_vs (If :: Tis) (condition :: eqs) cont | NONE => if null eqs then NONE (*no rewriting, nothing to be done*) else let val Type (\<^type_name>\list\, [rT]) = fastype_of1 (map snd bound_vs, t) val pat_eq = (case try dest_singleton_list t of SOME t' => Const (\<^const_name>\HOL.eq\, rT --> rT --> \<^typ>\bool\) $ Bound (length bound_vs) $ t' | NONE => Const (\<^const_name>\Set.member\, rT --> HOLogic.mk_setT rT --> \<^typ>\bool\) $ Bound (length bound_vs) $ (mk_set rT $ t)) val reverse_bounds = curry subst_bounds ((map Bound ((length bound_vs - 1) downto 0)) @ [Bound (length bound_vs)]) val eqs' = map reverse_bounds eqs val pat_eq' = reverse_bounds 
pat_eq val inner_t = fold (fn (_, T) => fn t => HOLogic.exists_const T $ absdummy T t) (rev bound_vs) (fold (curry HOLogic.mk_conj) eqs' pat_eq') val lhs = Thm.term_of redex val rhs = HOLogic.mk_Collect ("x", rT, inner_t) val rewrite_rule_t = HOLogic.mk_Trueprop (HOLogic.mk_eq (lhs, rhs)) in SOME ((Goal.prove ctxt [] [] rewrite_rule_t (fn {context = ctxt', ...} => tac ctxt' (rev Tis))) RS @{thm eq_reflection}) end)) in make_inner_eqs [] [] [] (dest_set (Thm.term_of redex)) end end end \ simproc_setup list_to_set_comprehension ("set xs") = \K List_to_Set_Comprehension.simproc\ code_datatype set coset hide_const (open) coset subsubsection \\<^const>\Nil\ and \<^const>\Cons\\ lemma not_Cons_self [simp]: "xs \ x # xs" by (induct xs) auto lemma not_Cons_self2 [simp]: "x # xs \ xs" by (rule not_Cons_self [symmetric]) lemma neq_Nil_conv: "(xs \ []) = (\y ys. xs = y # ys)" by (induct xs) auto lemma tl_Nil: "tl xs = [] \ xs = [] \ (\x. xs = [x])" by (cases xs) auto lemma Nil_tl: "[] = tl xs \ xs = [] \ (\x. xs = [x])" by (cases xs) auto lemma length_induct: "(\xs. \ys. length ys < length xs \ P ys \ P xs) \ P xs" by (fact measure_induct) lemma induct_list012: "\P []; \x. P [x]; \x y zs. \ P zs; P (y # zs) \ \ P (x # y # zs)\ \ P xs" by induction_schema (pat_completeness, lexicographic_order) lemma list_nonempty_induct [consumes 1, case_names single cons]: "\ xs \ []; \x. P [x]; \x xs. xs \ [] \ P xs \ P (x # xs)\ \ P xs" by(induction xs rule: induct_list012) auto lemma inj_split_Cons: "inj_on (\(xs, n). n#xs) X" by (auto intro!: inj_onI) lemma inj_on_Cons1 [simp]: "inj_on ((#) x) A" by(simp add: inj_on_def) subsubsection \\<^const>\length\\ text \ Needs to come before \@\ because of theorem \append_eq_append_conv\. 
\<close> lemma length_append [simp]: "length (xs @ ys) = length xs + length ys" by (induct xs) auto lemma length_map [simp]: "length (map f xs) = length xs" by (induct xs) auto lemma length_rev [simp]: "length (rev xs) = length xs" by (induct xs) auto lemma length_tl [simp]: "length (tl xs) = length xs - 1" by (cases xs) auto lemma length_0_conv [iff]: "(length xs = 0) = (xs = [])" by (induct xs) auto lemma length_greater_0_conv [iff]: "(0 < length xs) = (xs \<noteq> [])" by (induct xs) auto lemma length_pos_if_in_set: "x \<in> set xs \<Longrightarrow> length xs > 0" by auto lemma length_Suc_conv: "(length xs = Suc n) = (\<exists>y ys. xs = y # ys \<and> length ys = n)" by (induct xs) auto lemma Suc_length_conv: "(Suc n = length xs) = (\<exists>y ys. xs = y # ys \<and> length ys = n)" by (induct xs; simp; blast) lemma Suc_le_length_iff: "(Suc n \<le> length xs) = (\<exists>x ys. xs = x # ys \<and> n \<le> length ys)" by (metis Suc_le_D[of n] Suc_le_mono[of n] Suc_length_conv[of _ xs]) lemma impossible_Cons: "length xs \<le> length ys \<Longrightarrow> xs = x # ys = False" by (induct xs) auto lemma list_induct2 [consumes 1, case_names Nil Cons]: "length xs = length ys \<Longrightarrow> P [] [] \<Longrightarrow> (\<And>x xs y ys. length xs = length ys \<Longrightarrow> P xs ys \<Longrightarrow> P (x#xs) (y#ys)) \<Longrightarrow> P xs ys" proof (induct xs arbitrary: ys) case (Cons x xs ys) then show ?case by (cases ys) simp_all qed simp lemma list_induct3 [consumes 2, case_names Nil Cons]: "length xs = length ys \<Longrightarrow> length ys = length zs \<Longrightarrow> P [] [] [] \<Longrightarrow> (\<And>x xs y ys z zs. length xs = length ys \<Longrightarrow> length ys = length zs \<Longrightarrow> P xs ys zs \<Longrightarrow> P (x#xs) (y#ys) (z#zs)) \<Longrightarrow> P xs ys zs" proof (induct xs arbitrary: ys zs) case Nil then show ?case by simp next case (Cons x xs ys zs) then show ?case by (cases ys, simp_all) (cases zs, simp_all) qed lemma list_induct4 [consumes 3, case_names Nil Cons]: "length xs = length ys \<Longrightarrow> length ys = length zs \<Longrightarrow> length zs = length ws \<Longrightarrow> P [] [] [] [] \<Longrightarrow> (\<And>x xs y ys z zs w ws.
length xs = length ys \ length ys = length zs \ length zs = length ws \ P xs ys zs ws \ P (x#xs) (y#ys) (z#zs) (w#ws)) \ P xs ys zs ws" proof (induct xs arbitrary: ys zs ws) case Nil then show ?case by simp next case (Cons x xs ys zs ws) then show ?case by ((cases ys, simp_all), (cases zs,simp_all)) (cases ws, simp_all) qed lemma list_induct2': "\ P [] []; \x xs. P (x#xs) []; \y ys. P [] (y#ys); \x xs y ys. P xs ys \ P (x#xs) (y#ys) \ \ P xs ys" by (induct xs arbitrary: ys) (case_tac x, auto)+ lemma list_all2_iff: "list_all2 P xs ys \ length xs = length ys \ (\(x, y) \ set (zip xs ys). P x y)" by (induct xs ys rule: list_induct2') auto lemma neq_if_length_neq: "length xs \ length ys \ (xs = ys) == False" by (rule Eq_FalseI) auto subsubsection \\@\ -- append\ global_interpretation append: monoid append Nil proof fix xs ys zs :: "'a list" show "(xs @ ys) @ zs = xs @ (ys @ zs)" by (induct xs) simp_all show "xs @ [] = xs" by (induct xs) simp_all qed simp lemma append_assoc [simp]: "(xs @ ys) @ zs = xs @ (ys @ zs)" by (fact append.assoc) lemma append_Nil2: "xs @ [] = xs" by (fact append.right_neutral) lemma append_is_Nil_conv [iff]: "(xs @ ys = []) = (xs = [] \ ys = [])" by (induct xs) auto lemma Nil_is_append_conv [iff]: "([] = xs @ ys) = (xs = [] \ ys = [])" by (induct xs) auto lemma append_self_conv [iff]: "(xs @ ys = xs) = (ys = [])" by (induct xs) auto lemma self_append_conv [iff]: "(xs = xs @ ys) = (ys = [])" by (induct xs) auto lemma append_eq_append_conv [simp]: "length xs = length ys \ length us = length vs \ (xs@us = ys@vs) = (xs=ys \ us=vs)" by (induct xs arbitrary: ys; case_tac ys; force) lemma append_eq_append_conv2: "(xs @ ys = zs @ ts) = (\us. 
xs = zs @ us \ us @ ys = ts \ xs @ us = zs \ ys = us @ ts)" proof (induct xs arbitrary: ys zs ts) case (Cons x xs) then show ?case by (cases zs) auto qed fastforce lemma same_append_eq [iff, induct_simp]: "(xs @ ys = xs @ zs) = (ys = zs)" by simp lemma append1_eq_conv [iff]: "(xs @ [x] = ys @ [y]) = (xs = ys \ x = y)" by simp lemma append_same_eq [iff, induct_simp]: "(ys @ xs = zs @ xs) = (ys = zs)" by simp lemma append_self_conv2 [iff]: "(xs @ ys = ys) = (xs = [])" using append_same_eq [of _ _ "[]"] by auto lemma self_append_conv2 [iff]: "(ys = xs @ ys) = (xs = [])" using append_same_eq [of "[]"] by auto lemma hd_Cons_tl: "xs \ [] \ hd xs # tl xs = xs" by (fact list.collapse) lemma hd_append: "hd (xs @ ys) = (if xs = [] then hd ys else hd xs)" by (induct xs) auto lemma hd_append2 [simp]: "xs \ [] \ hd (xs @ ys) = hd xs" by (simp add: hd_append split: list.split) lemma tl_append: "tl (xs @ ys) = (case xs of [] \ tl ys | z#zs \ zs @ ys)" by (simp split: list.split) lemma tl_append2 [simp]: "xs \ [] \ tl (xs @ ys) = tl xs @ ys" by (simp add: tl_append split: list.split) lemma Cons_eq_append_conv: "x#xs = ys@zs = (ys = [] \ x#xs = zs \ (\ys'. x#ys' = ys \ xs = ys'@zs))" by(cases ys) auto lemma append_eq_Cons_conv: "(ys@zs = x#xs) = (ys = [] \ zs = x#xs \ (\ys'. ys = x#ys' \ ys'@zs = xs))" by(cases ys) auto lemma longest_common_prefix: "\ps xs' ys'. xs = ps @ xs' \ ys = ps @ ys' \ (xs' = [] \ ys' = [] \ hd xs' \ hd ys')" by (induct xs ys rule: list_induct2') (blast, blast, blast, metis (no_types, opaque_lifting) append_Cons append_Nil list.sel(1)) text \Trivial rules for solving \@\-equations automatically.\ lemma eq_Nil_appendI: "xs = ys \ xs = [] @ ys" by simp lemma Cons_eq_appendI: "\x # xs1 = ys; xs = xs1 @ zs\ \ x # xs = ys @ zs" by auto lemma append_eq_appendI: "\xs @ xs1 = zs; ys = xs1 @ us\ \ xs @ ys = zs @ us" by auto text \ Simplification procedure for all list equalities. 
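The decision this procedure makes before rewriting can be sketched executably. The following is an illustrative Python sketch (hypothetical representation, not the ML implementation): each side of the equation is viewed as an @-chain of chunks, and only the last chunk of each side is inspected.

```python
# Illustrative sketch (hypothetical helper names): choose which rewrite rule
# the list-equality procedure would apply to an equation between two
# @-chains, each represented here as a list of chunks (plain Python lists).
def rearrange_rule(lhs_chunks, rhs_chunks):
    last_l, last_r = lhs_chunks[-1], rhs_chunks[-1]
    if len(last_l) == 1 and len(last_r) == 1:
        # xs @ [x] = ys @ [y] is reduced via append1_eq_conv to xs = ys and x = y
        return "append1_eq_conv"
    if last_l == last_r:
        # ys @ xs = zs @ xs is reduced via append_same_eq to ys = zs
        return "append_same_eq"
    return None  # no rearrangement applies

print(rearrange_rule([[1, 2], [3]], [[4], [5]]))        # append1_eq_conv
print(rearrange_rule([[1], [2, 3]], [[4, 5], [2, 3]]))  # append_same_eq
```

As in the ML code, the singleton-last-chunk case is tried first; when neither shape matches, the equation is left alone.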
Currently only tries to rearrange \@\ to see if - both lists end in a singleton list, - or both lists end in the same list. \ simproc_setup list_eq ("(xs::'a list) = ys") = \ let fun last (cons as Const (\<^const_name>\Cons\, _) $ _ $ xs) = (case xs of Const (\<^const_name>\Nil\, _) => cons | _ => last xs) | last (Const(\<^const_name>\append\,_) $ _ $ ys) = last ys | last t = t; fun list1 (Const(\<^const_name>\Cons\,_) $ _ $ Const(\<^const_name>\Nil\,_)) = true | list1 _ = false; fun butlast ((cons as Const(\<^const_name>\Cons\,_) $ x) $ xs) = (case xs of Const (\<^const_name>\Nil\, _) => xs | _ => cons $ butlast xs) | butlast ((app as Const (\<^const_name>\append\, _) $ xs) $ ys) = app $ butlast ys | butlast xs = Const(\<^const_name>\Nil\, fastype_of xs); val rearr_ss = simpset_of (put_simpset HOL_basic_ss \<^context> addsimps [@{thm append_assoc}, @{thm append_Nil}, @{thm append_Cons}]); fun list_eq ctxt (F as (eq as Const(_,eqT)) $ lhs $ rhs) = let val lastl = last lhs and lastr = last rhs; fun rearr conv = let val lhs1 = butlast lhs and rhs1 = butlast rhs; val Type(_,listT::_) = eqT val appT = [listT,listT] ---> listT val app = Const(\<^const_name>\append\,appT) val F2 = eq $ (app$lhs1$lastl) $ (app$rhs1$lastr) val eq = HOLogic.mk_Trueprop (HOLogic.mk_eq (F,F2)); val thm = Goal.prove ctxt [] [] eq (K (simp_tac (put_simpset rearr_ss ctxt) 1)); in SOME ((conv RS (thm RS trans)) RS eq_reflection) end; in if list1 lastl andalso list1 lastr then rearr @{thm append1_eq_conv} else if lastl aconv lastr then rearr @{thm append_same_eq} else NONE end; in fn _ => fn ctxt => fn ct => list_eq ctxt (Thm.term_of ct) end \ subsubsection \\<^const>\map\\ lemma hd_map: "xs \ [] \ hd (map f xs) = f (hd xs)" by (cases xs) simp_all lemma map_tl: "map f (tl xs) = tl (map f xs)" by (cases xs) simp_all lemma map_ext: "(\x. x \ set xs \ f x = g x) \ map f xs = map g xs" by (induct xs) simp_all lemma map_ident [simp]: "map (\x. x) = (\xs. 
xs)" by (rule ext, induct_tac xs) auto lemma map_append [simp]: "map f (xs @ ys) = map f xs @ map f ys" by (induct xs) auto lemma map_map [simp]: "map f (map g xs) = map (f \ g) xs" by (induct xs) auto lemma map_comp_map[simp]: "((map f) \ (map g)) = map(f \ g)" by (rule ext) simp lemma rev_map: "rev (map f xs) = map f (rev xs)" by (induct xs) auto lemma map_eq_conv[simp]: "(map f xs = map g xs) = (\x \ set xs. f x = g x)" by (induct xs) auto lemma map_cong [fundef_cong]: "xs = ys \ (\x. x \ set ys \ f x = g x) \ map f xs = map g ys" by simp lemma map_is_Nil_conv [iff]: "(map f xs = []) = (xs = [])" by (cases xs) auto lemma Nil_is_map_conv [iff]: "([] = map f xs) = (xs = [])" by (cases xs) auto lemma map_eq_Cons_conv: "(map f xs = y#ys) = (\z zs. xs = z#zs \ f z = y \ map f zs = ys)" by (cases xs) auto lemma Cons_eq_map_conv: "(x#xs = map f ys) = (\z zs. ys = z#zs \ x = f z \ xs = map f zs)" by (cases ys) auto lemmas map_eq_Cons_D = map_eq_Cons_conv [THEN iffD1] lemmas Cons_eq_map_D = Cons_eq_map_conv [THEN iffD1] declare map_eq_Cons_D [dest!] Cons_eq_map_D [dest!] lemma ex_map_conv: "(\xs. ys = map f xs) = (\y \ set ys. \x. 
y = f x)" by(induct ys, auto simp add: Cons_eq_map_conv) lemma map_eq_imp_length_eq: assumes "map f xs = map g ys" shows "length xs = length ys" using assms proof (induct ys arbitrary: xs) case Nil then show ?case by simp next case (Cons y ys) then obtain z zs where xs: "xs = z # zs" by auto from Cons xs have "map f zs = map g ys" by simp with Cons have "length zs = length ys" by blast with xs show ?case by simp qed lemma map_inj_on: assumes map: "map f xs = map f ys" and inj: "inj_on f (set xs Un set ys)" shows "xs = ys" using map_eq_imp_length_eq [OF map] assms proof (induct rule: list_induct2) case (Cons x xs y ys) then show ?case by (auto intro: sym) qed auto lemma inj_on_map_eq_map: "inj_on f (set xs Un set ys) \ (map f xs = map f ys) = (xs = ys)" by(blast dest:map_inj_on) lemma map_injective: "map f xs = map f ys \ inj f \ xs = ys" by (induct ys arbitrary: xs) (auto dest!:injD) lemma inj_map_eq_map[simp]: "inj f \ (map f xs = map f ys) = (xs = ys)" by(blast dest:map_injective) lemma inj_mapI: "inj f \ inj (map f)" by (iprover dest: map_injective injD intro: inj_onI) lemma inj_mapD: "inj (map f) \ inj f" by (metis (no_types, opaque_lifting) injI list.inject list.simps(9) the_inv_f_f) lemma inj_map[iff]: "inj (map f) = inj f" by (blast dest: inj_mapD intro: inj_mapI) lemma inj_on_mapI: "inj_on f (\(set ` A)) \ inj_on (map f) A" by (blast intro:inj_onI dest:inj_onD map_inj_on) lemma map_idI: "(\x. 
x \ set xs \ f x = x) \ map f xs = xs" by (induct xs, auto) lemma map_fun_upd [simp]: "y \ set xs \ map (f(y:=v)) xs = map f xs" by (induct xs) auto lemma map_fst_zip[simp]: "length xs = length ys \ map fst (zip xs ys) = xs" by (induct rule:list_induct2, simp_all) lemma map_snd_zip[simp]: "length xs = length ys \ map snd (zip xs ys) = ys" by (induct rule:list_induct2, simp_all) lemma map_fst_zip_take: "map fst (zip xs ys) = take (min (length xs) (length ys)) xs" by (induct xs ys rule: list_induct2') simp_all lemma map_snd_zip_take: "map snd (zip xs ys) = take (min (length xs) (length ys)) ys" by (induct xs ys rule: list_induct2') simp_all lemma map2_map_map: "map2 h (map f xs) (map g xs) = map (\x. h (f x) (g x)) xs" by (induction xs) (auto) functor map: map by (simp_all add: id_def) declare map.id [simp] subsubsection \\<^const>\rev\\ lemma rev_append [simp]: "rev (xs @ ys) = rev ys @ rev xs" by (induct xs) auto lemma rev_rev_ident [simp]: "rev (rev xs) = xs" by (induct xs) auto lemma rev_swap: "(rev xs = ys) = (xs = rev ys)" by auto lemma rev_is_Nil_conv [iff]: "(rev xs = []) = (xs = [])" by (induct xs) auto lemma Nil_is_rev_conv [iff]: "([] = rev xs) = (xs = [])" by (induct xs) auto lemma rev_singleton_conv [simp]: "(rev xs = [x]) = (xs = [x])" by (cases xs) auto lemma singleton_rev_conv [simp]: "([x] = rev xs) = (xs = [x])" by (cases xs) auto lemma rev_is_rev_conv [iff]: "(rev xs = rev ys) = (xs = ys)" proof (induct xs arbitrary: ys) case Nil then show ?case by force next case Cons then show ?case by (cases ys) auto qed lemma inj_on_rev[iff]: "inj_on rev A" by(simp add:inj_on_def) lemma rev_induct [case_names Nil snoc]: assumes "P []" and "\x xs. P xs \ P (xs @ [x])" shows "P xs" proof - have "P (rev (rev xs))" by (rule_tac list = "rev xs" in list.induct, simp_all add: assms) then show ?thesis by simp qed lemma rev_exhaust [case_names Nil snoc]: "(xs = [] \ P) \(\ys y. 
xs = ys @ [y] \ P) \ P" by (induct xs rule: rev_induct) auto lemmas rev_cases = rev_exhaust lemma rev_nonempty_induct [consumes 1, case_names single snoc]: assumes "xs \ []" and single: "\x. P [x]" and snoc': "\x xs. xs \ [] \ P xs \ P (xs@[x])" shows "P xs" using \xs \ []\ proof (induct xs rule: rev_induct) case (snoc x xs) then show ?case proof (cases xs) case Nil thus ?thesis by (simp add: single) next case Cons with snoc show ?thesis by (fastforce intro!: snoc') qed qed simp lemma rev_eq_Cons_iff[iff]: "(rev xs = y#ys) = (xs = rev ys @ [y])" by(rule rev_cases[of xs]) auto subsubsection \\<^const>\set\\ declare list.set[code_post] \ \pretty output\ lemma finite_set [iff]: "finite (set xs)" by (induct xs) auto lemma set_append [simp]: "set (xs @ ys) = (set xs \ set ys)" by (induct xs) auto lemma hd_in_set[simp]: "xs \ [] \ hd xs \ set xs" by(cases xs) auto lemma set_subset_Cons: "set xs \ set (x # xs)" by auto lemma set_ConsD: "y \ set (x # xs) \ y=x \ y \ set xs" by auto lemma set_empty [iff]: "(set xs = {}) = (xs = [])" by (induct xs) auto lemma set_empty2[iff]: "({} = set xs) = (xs = [])" by(induct xs) auto lemma set_rev [simp]: "set (rev xs) = set xs" by (induct xs) auto lemma set_map [simp]: "set (map f xs) = f`(set xs)" by (induct xs) auto lemma set_filter [simp]: "set (filter P xs) = {x. x \ set xs \ P x}" by (induct xs) auto lemma set_upt [simp]: "set[i.. set xs \ \ys zs. xs = ys @ x # zs" proof (induct xs) case Nil thus ?case by simp next case Cons thus ?case by (auto intro: Cons_eq_appendI) qed lemma in_set_conv_decomp: "x \ set xs \ (\ys zs. xs = ys @ x # zs)" by (auto elim: split_list) lemma split_list_first: "x \ set xs \ \ys zs. xs = ys @ x # zs \ x \ set ys" proof (induct xs) case Nil thus ?case by simp next case (Cons a xs) show ?case proof cases assume "x = a" thus ?case using Cons by fastforce next assume "x \ a" thus ?case using Cons by(fastforce intro!: Cons_eq_appendI) qed qed lemma in_set_conv_decomp_first: "(x \ set xs) = (\ys zs. 
xs = ys @ x # zs \ x \ set ys)" by (auto dest!: split_list_first) lemma split_list_last: "x \ set xs \ \ys zs. xs = ys @ x # zs \ x \ set zs" proof (induct xs rule: rev_induct) case Nil thus ?case by simp next case (snoc a xs) show ?case proof cases assume "x = a" thus ?case using snoc by (auto intro!: exI) next assume "x \ a" thus ?case using snoc by fastforce qed qed lemma in_set_conv_decomp_last: "(x \ set xs) = (\ys zs. xs = ys @ x # zs \ x \ set zs)" by (auto dest!: split_list_last) lemma split_list_prop: "\x \ set xs. P x \ \ys x zs. xs = ys @ x # zs \ P x" proof (induct xs) case Nil thus ?case by simp next case Cons thus ?case by(simp add:Bex_def)(metis append_Cons append.simps(1)) qed lemma split_list_propE: assumes "\x \ set xs. P x" obtains ys x zs where "xs = ys @ x # zs" and "P x" using split_list_prop [OF assms] by blast lemma split_list_first_prop: "\x \ set xs. P x \ \ys x zs. xs = ys@x#zs \ P x \ (\y \ set ys. \ P y)" proof (induct xs) case Nil thus ?case by simp next case (Cons x xs) show ?case proof cases assume "P x" hence "x # xs = [] @ x # xs \ P x \ (\y\set []. \ P y)" by simp thus ?thesis by fast next assume "\ P x" hence "\x\set xs. P x" using Cons(2) by simp thus ?thesis using \\ P x\ Cons(1) by (metis append_Cons set_ConsD) qed qed lemma split_list_first_propE: assumes "\x \ set xs. P x" obtains ys x zs where "xs = ys @ x # zs" and "P x" and "\y \ set ys. \ P y" using split_list_first_prop [OF assms] by blast lemma split_list_first_prop_iff: "(\x \ set xs. P x) \ (\ys x zs. xs = ys@x#zs \ P x \ (\y \ set ys. \ P y))" by (rule, erule split_list_first_prop) auto lemma split_list_last_prop: "\x \ set xs. P x \ \ys x zs. xs = ys@x#zs \ P x \ (\z \ set zs. \ P z)" proof(induct xs rule:rev_induct) case Nil thus ?case by simp next case (snoc x xs) show ?case proof cases assume "P x" thus ?thesis by (auto intro!: exI) next assume "\ P x" hence "\x\set xs. 
P x" using snoc(2) by simp thus ?thesis using \\ P x\ snoc(1) by fastforce qed qed lemma split_list_last_propE: assumes "\x \ set xs. P x" obtains ys x zs where "xs = ys @ x # zs" and "P x" and "\z \ set zs. \ P z" using split_list_last_prop [OF assms] by blast lemma split_list_last_prop_iff: "(\x \ set xs. P x) \ (\ys x zs. xs = ys@x#zs \ P x \ (\z \ set zs. \ P z))" by rule (erule split_list_last_prop, auto) lemma finite_list: "finite A \ \xs. set xs = A" by (erule finite_induct) (auto simp add: list.set(2)[symmetric] simp del: list.set(2)) lemma card_length: "card (set xs) \ length xs" by (induct xs) (auto simp add: card_insert_if) lemma set_minus_filter_out: "set xs - {y} = set (filter (\x. \ (x = y)) xs)" by (induct xs) auto lemma append_Cons_eq_iff: "\ x \ set xs; x \ set ys \ \ xs @ x # ys = xs' @ x # ys' \ (xs = xs' \ ys = ys')" by(auto simp: append_eq_Cons_conv Cons_eq_append_conv append_eq_append_conv2) subsubsection \\<^const>\concat\\ lemma concat_append [simp]: "concat (xs @ ys) = concat xs @ concat ys" by (induct xs) auto lemma concat_eq_Nil_conv [simp]: "(concat xss = []) = (\xs \ set xss. xs = [])" by (induct xss) auto lemma Nil_eq_concat_conv [simp]: "([] = concat xss) = (\xs \ set xss. xs = [])" by (induct xss) auto lemma set_concat [simp]: "set (concat xs) = (\x\set xs. set x)" by (induct xs) auto lemma concat_map_singleton[simp]: "concat(map (%x. [f x]) xs) = map f xs" by (induct xs) auto lemma map_concat: "map f (concat xs) = concat (map (map f) xs)" by (induct xs) auto lemma rev_concat: "rev (concat xs) = concat (map rev (rev xs))" by (induct xs) auto lemma length_concat_rev[simp]: "length (concat (rev xs)) = length (concat xs)" by (induction xs) auto lemma concat_eq_concat_iff: "\(x, y) \ set (zip xs ys). 
length x = length y \ length xs = length ys \ (concat xs = concat ys) = (xs = ys)" proof (induct xs arbitrary: ys) case (Cons x xs ys) thus ?case by (cases ys) auto qed (auto) lemma concat_injective: "concat xs = concat ys \ length xs = length ys \ \(x, y) \ set (zip xs ys). length x = length y \ xs = ys" by (simp add: concat_eq_concat_iff) lemma concat_eq_appendD: assumes "concat xss = ys @ zs" "xss \ []" shows "\xss1 xs xs' xss2. xss = xss1 @ (xs @ xs') # xss2 \ ys = concat xss1 @ xs \ zs = xs' @ concat xss2" using assms proof(induction xss arbitrary: ys) case (Cons xs xss) from Cons.prems consider us where "xs @ us = ys" "concat xss = us @ zs" | us where "xs = ys @ us" "us @ concat xss = zs" by(auto simp add: append_eq_append_conv2) then show ?case proof cases case 1 then show ?thesis using Cons.IH[OF 1(2)] by(cases xss)(auto intro: exI[where x="[]"], metis append.assoc append_Cons concat.simps(2)) qed(auto intro: exI[where x="[]"]) qed simp lemma concat_eq_append_conv: "concat xss = ys @ zs \ (if xss = [] then ys = [] \ zs = [] else \xss1 xs xs' xss2. xss = xss1 @ (xs @ xs') # xss2 \ ys = concat xss1 @ xs \ zs = xs' @ concat xss2)" by(auto dest: concat_eq_appendD) lemma hd_concat: "\xs \ []; hd xs \ []\ \ hd (concat xs) = hd (hd xs)" by (metis concat.simps(2) hd_Cons_tl hd_append2) simproc_setup list_neq ("(xs::'a list) = ys") = \ (* Reduces xs=ys to False if xs and ys cannot be of the same length. This is the case if the atomic sublists of one are a submultiset of those of the other list and there are fewer Cons's in one than the other. 
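For example (informal): in the goal "a # xs = xs" the left-hand side
has one Cons more than the right, while the atomic sublists ([xs] on
both sides) agree, so the equation is reduced to False via
neq_if_length_neq.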
*) let fun len (Const(\<^const_name>\Nil\,_)) acc = acc | len (Const(\<^const_name>\Cons\,_) $ _ $ xs) (ts,n) = len xs (ts,n+1) | len (Const(\<^const_name>\append\,_) $ xs $ ys) acc = len xs (len ys acc) | len (Const(\<^const_name>\rev\,_) $ xs) acc = len xs acc | len (Const(\<^const_name>\map\,_) $ _ $ xs) acc = len xs acc | len (Const(\<^const_name>\concat\,T) $ (Const(\<^const_name>\rev\,_) $ xss)) acc = len (Const(\<^const_name>\concat\,T) $ xss) acc | len t (ts,n) = (t::ts,n); val ss = simpset_of \<^context>; fun list_neq ctxt ct = let val (Const(_,eqT) $ lhs $ rhs) = Thm.term_of ct; val (ls,m) = len lhs ([],0) and (rs,n) = len rhs ([],0); fun prove_neq() = let val Type(_,listT::_) = eqT; val size = HOLogic.size_const listT; val eq_len = HOLogic.mk_eq (size $ lhs, size $ rhs); val neq_len = HOLogic.mk_Trueprop (HOLogic.Not $ eq_len); val thm = Goal.prove ctxt [] [] neq_len (K (simp_tac (put_simpset ss ctxt) 1)); in SOME (thm RS @{thm neq_if_length_neq}) end in if m < n andalso submultiset (op aconv) (ls,rs) orelse n < m andalso submultiset (op aconv) (rs,ls) then prove_neq() else NONE end; in K list_neq end \ subsubsection \\<^const>\filter\\ lemma filter_append [simp]: "filter P (xs @ ys) = filter P xs @ filter P ys" by (induct xs) auto lemma rev_filter: "rev (filter P xs) = filter P (rev xs)" by (induct xs) simp_all lemma filter_filter [simp]: "filter P (filter Q xs) = filter (\x. Q x \ P x) xs" by (induct xs) auto lemma filter_concat: "filter p (concat xs) = concat (map (filter p) xs)" by (induct xs) auto lemma length_filter_le [simp]: "length (filter P xs) \ length xs" by (induct xs) (auto simp add: le_SucI) lemma sum_length_filter_compl: "length(filter P xs) + length(filter (\x. \P x) xs) = length xs" by(induct xs) simp_all lemma filter_True [simp]: "\x \ set xs. P x \ filter P xs = xs" by (induct xs) auto lemma filter_False [simp]: "\x \ set xs. \ P x \ filter P xs = []" by (induct xs) auto lemma filter_empty_conv: "(filter P xs = []) = (\x\set xs. 
\ P x)" by (induct xs) simp_all lemma filter_id_conv: "(filter P xs = xs) = (\x\set xs. P x)" proof (induct xs) case (Cons x xs) then show ?case using length_filter_le by (simp add: impossible_Cons) qed auto lemma filter_map: "filter P (map f xs) = map f (filter (P \ f) xs)" by (induct xs) simp_all lemma length_filter_map[simp]: "length (filter P (map f xs)) = length(filter (P \ f) xs)" by (simp add:filter_map) lemma filter_is_subset [simp]: "set (filter P xs) \ set xs" by auto lemma length_filter_less: "\ x \ set xs; \ P x \ \ length(filter P xs) < length xs" proof (induct xs) case Nil thus ?case by simp next case (Cons x xs) thus ?case using Suc_le_eq by fastforce qed lemma length_filter_conv_card: "length(filter p xs) = card{i. i < length xs \ p(xs!i)}" proof (induct xs) case Nil thus ?case by simp next case (Cons x xs) let ?S = "{i. i < length xs \ p(xs!i)}" have fin: "finite ?S" by(fast intro: bounded_nat_set_is_finite) show ?case (is "?l = card ?S'") proof (cases) assume "p x" hence eq: "?S' = insert 0 (Suc ` ?S)" by(auto simp: image_def split:nat.split dest:gr0_implies_Suc) have "length (filter p (x # xs)) = Suc(card ?S)" using Cons \p x\ by simp also have "\ = Suc(card(Suc ` ?S))" using fin by (simp add: card_image) also have "\ = card ?S'" using eq fin by (simp add:card_insert_if) finally show ?thesis . next assume "\ p x" hence eq: "?S' = Suc ` ?S" by(auto simp add: image_def split:nat.split elim:lessE) have "length (filter p (x # xs)) = card ?S" using Cons \\ p x\ by simp also have "\ = card(Suc ` ?S)" using fin by (simp add: card_image) also have "\ = card ?S'" using eq fin by (simp add:card_insert_if) finally show ?thesis . qed qed lemma Cons_eq_filterD: "x#xs = filter P ys \ \us vs. ys = us @ x # vs \ (\u\set us. \ P u) \ P x \ xs = filter P vs" (is "_ \ \us vs. ?P ys us vs") proof(induct ys) case Nil thus ?case by simp next case (Cons y ys) show ?case (is "\x. 
?Q x") proof cases assume Py: "P y" show ?thesis proof cases assume "x = y" with Py Cons.prems have "?Q []" by simp then show ?thesis .. next assume "x \ y" with Py Cons.prems show ?thesis by simp qed next assume "\ P y" with Cons obtain us vs where "?P (y#ys) (y#us) vs" by fastforce then have "?Q (y#us)" by simp then show ?thesis .. qed qed lemma filter_eq_ConsD: "filter P ys = x#xs \ \us vs. ys = us @ x # vs \ (\u\set us. \ P u) \ P x \ xs = filter P vs" by(rule Cons_eq_filterD) simp lemma filter_eq_Cons_iff: "(filter P ys = x#xs) = (\us vs. ys = us @ x # vs \ (\u\set us. \ P u) \ P x \ xs = filter P vs)" by(auto dest:filter_eq_ConsD) lemma Cons_eq_filter_iff: "(x#xs = filter P ys) = (\us vs. ys = us @ x # vs \ (\u\set us. \ P u) \ P x \ xs = filter P vs)" by(auto dest:Cons_eq_filterD) lemma inj_on_filter_key_eq: assumes "inj_on f (insert y (set xs))" shows "filter (\x. f y = f x) xs = filter (HOL.eq y) xs" using assms by (induct xs) auto lemma filter_cong[fundef_cong]: "xs = ys \ (\x. x \ set ys \ P x = Q x) \ filter P xs = filter Q ys" by (induct ys arbitrary: xs) auto subsubsection \List partitioning\ primrec partition :: "('a \ bool) \'a list \ 'a list \ 'a list" where "partition P [] = ([], [])" | "partition P (x # xs) = (let (yes, no) = partition P xs in if P x then (x # yes, no) else (yes, x # no))" lemma partition_filter1: "fst (partition P xs) = filter P xs" by (induct xs) (auto simp add: Let_def split_def) lemma partition_filter2: "snd (partition P xs) = filter (Not \ P) xs" by (induct xs) (auto simp add: Let_def split_def) lemma partition_P: assumes "partition P xs = (yes, no)" shows "(\p \ set yes. P p) \ (\p \ set no. 
\ P p)" proof - from assms have "yes = fst (partition P xs)" and "no = snd (partition P xs)" by simp_all then show ?thesis by (simp_all add: partition_filter1 partition_filter2) qed lemma partition_set: assumes "partition P xs = (yes, no)" shows "set yes \ set no = set xs" proof - from assms have "yes = fst (partition P xs)" and "no = snd (partition P xs)" by simp_all then show ?thesis by (auto simp add: partition_filter1 partition_filter2) qed lemma partition_filter_conv[simp]: "partition f xs = (filter f xs,filter (Not \ f) xs)" unfolding partition_filter2[symmetric] unfolding partition_filter1[symmetric] by simp declare partition.simps[simp del] subsubsection \\<^const>\nth\\ lemma nth_Cons_0 [simp, code]: "(x # xs)!0 = x" by auto lemma nth_Cons_Suc [simp, code]: "(x # xs)!(Suc n) = xs!n" by auto declare nth.simps [simp del] lemma nth_Cons_pos[simp]: "0 < n \ (x#xs) ! n = xs ! (n - 1)" by(auto simp: Nat.gr0_conv_Suc) lemma nth_append: "(xs @ ys)!n = (if n < length xs then xs!n else ys!(n - length xs))" proof (induct xs arbitrary: n) case (Cons x xs) then show ?case using less_Suc_eq_0_disj by auto qed simp lemma nth_append_length [simp]: "(xs @ x # ys) ! length xs = x" by (induct xs) auto lemma nth_append_length_plus[simp]: "(xs @ ys) ! (length xs + n) = ys ! n" by (induct xs) auto lemma nth_map [simp]: "n < length xs \ (map f xs)!n = f(xs!n)" proof (induct xs arbitrary: n) case (Cons x xs) then show ?case using less_Suc_eq_0_disj by auto qed simp lemma nth_tl: "n < length (tl xs) \ tl xs ! n = xs ! Suc n" by (induction xs) auto lemma hd_conv_nth: "xs \ [] \ hd xs = xs!0" by(cases xs) simp_all lemma list_eq_iff_nth_eq: "(xs = ys) = (length xs = length ys \ (\i ?R" by force show "?R \ ?L" using less_Suc_eq_0_disj by auto qed with Cons show ?case by simp qed simp lemma in_set_conv_nth: "(x \ set xs) = (\i < length xs. xs!i = x)" by(auto simp:set_conv_nth) lemma nth_equal_first_eq: assumes "x \ set xs" assumes "n \ length xs" shows "(x # xs) ! 
n = x \ n = 0" (is "?lhs \ ?rhs") proof assume ?lhs show ?rhs proof (rule ccontr) assume "n \ 0" then have "n > 0" by simp with \?lhs\ have "xs ! (n - 1) = x" by simp moreover from \n > 0\ \n \ length xs\ have "n - 1 < length xs" by simp ultimately have "\ix \ set xs\ in_set_conv_nth [of x xs] show False by simp qed next assume ?rhs then show ?lhs by simp qed lemma nth_non_equal_first_eq: assumes "x \ y" shows "(x # xs) ! n = y \ xs ! (n - 1) = y \ n > 0" (is "?lhs \ ?rhs") proof assume "?lhs" with assms have "n > 0" by (cases n) simp_all with \?lhs\ show ?rhs by simp next assume "?rhs" then show "?lhs" by simp qed lemma list_ball_nth: "\n < length xs; \x \ set xs. P x\ \ P(xs!n)" by (auto simp add: set_conv_nth) lemma nth_mem [simp]: "n < length xs \ xs!n \ set xs" by (auto simp add: set_conv_nth) lemma all_nth_imp_all_set: "\\i < length xs. P(xs!i); x \ set xs\ \ P x" by (auto simp add: set_conv_nth) lemma all_set_conv_all_nth: "(\x \ set xs. P x) = (\i. i < length xs \ P (xs ! i))" by (auto simp add: set_conv_nth) lemma rev_nth: "n < size xs \ rev xs ! n = xs ! (length xs - Suc n)" proof (induct xs arbitrary: n) case Nil thus ?case by simp next case (Cons x xs) hence n: "n < Suc (length xs)" by simp moreover { assume "n < length xs" with n obtain n' where n': "length xs - n = Suc n'" by (cases "length xs - n", auto) moreover from n' have "length xs - Suc n = n'" by simp ultimately have "xs ! (length xs - Suc n) = (x # xs) ! (length xs - n)" by simp } ultimately show ?case by (clarsimp simp add: Cons nth_append) qed lemma Skolem_list_nth: "(\ix. P i x) = (\xs. size xs = k \ (\ixs. ?P k xs)") proof(induct k) case 0 show ?case by simp next case (Suc k) show ?case (is "?L = ?R" is "_ = (\xs. ?P' xs)") proof assume "?R" thus "?L" using Suc by auto next assume "?L" with Suc obtain x xs where "?P k xs \ P k x" by (metis less_Suc_eq) hence "?P'(xs@[x])" by(simp add:nth_append less_Suc_eq) thus "?R" .. 
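(* Informal reading of Skolem_list_nth (a sketch, not a proof step):
   a pointwise choice "for every i < k there is an x with P i x" can
   be realised by a single witness list xs with size xs = k such that
   xs ! i is a witness for each i; e.g. for P i x = (x = Suc i) and
   k = 2, the list xs = [1, 2] works. *)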
qed qed subsubsection \\<^const>\list_update\\ lemma length_list_update [simp]: "length(xs[i:=x]) = length xs" by (induct xs arbitrary: i) (auto split: nat.split) lemma nth_list_update: "i < length xs\ (xs[i:=x])!j = (if i = j then x else xs!j)" by (induct xs arbitrary: i j) (auto simp add: nth_Cons split: nat.split) lemma nth_list_update_eq [simp]: "i < length xs \ (xs[i:=x])!i = x" by (simp add: nth_list_update) lemma nth_list_update_neq [simp]: "i \ j \ xs[i:=x]!j = xs!j" by (induct xs arbitrary: i j) (auto simp add: nth_Cons split: nat.split) lemma list_update_id[simp]: "xs[i := xs!i] = xs" by (induct xs arbitrary: i) (simp_all split:nat.splits) lemma list_update_beyond[simp]: "length xs \ i \ xs[i:=x] = xs" proof (induct xs arbitrary: i) case (Cons x xs i) then show ?case by (metis leD length_list_update list_eq_iff_nth_eq nth_list_update_neq) qed simp lemma list_update_nonempty[simp]: "xs[k:=x] = [] \ xs=[]" by (simp only: length_0_conv[symmetric] length_list_update) lemma list_update_same_conv: "i < length xs \ (xs[i := x] = xs) = (xs!i = x)" by (induct xs arbitrary: i) (auto split: nat.split) lemma list_update_append1: "i < size xs \ (xs @ ys)[i:=x] = xs[i:=x] @ ys" by (induct xs arbitrary: i)(auto split:nat.split) lemma list_update_append: "(xs @ ys) [n:= x] = (if n < length xs then xs[n:= x] @ ys else xs @ (ys [n-length xs:= x]))" by (induct xs arbitrary: n) (auto split:nat.splits) lemma list_update_length [simp]: "(xs @ x # ys)[length xs := y] = (xs @ y # ys)" by (induct xs, auto) lemma map_update: "map f (xs[k:= y]) = (map f xs)[k := f y]" by(induct xs arbitrary: k)(auto split:nat.splits) lemma rev_update: "k < length xs \ rev (xs[k:= y]) = (rev xs)[length xs - k - 1 := y]" by (induct xs arbitrary: k) (auto simp: list_update_append split:nat.splits) lemma update_zip: "(zip xs ys)[i:=xy] = zip (xs[i:=fst xy]) (ys[i:=snd xy])" by (induct ys arbitrary: i xy xs) (auto, case_tac xs, auto split: nat.split) lemma set_update_subset_insert: "set(xs[i:=x]) \ 
insert x (set xs)" by (induct xs arbitrary: i) (auto split: nat.split) lemma set_update_subsetI: "\set xs \ A; x \ A\ \ set(xs[i := x]) \ A" by (blast dest!: set_update_subset_insert [THEN subsetD]) lemma set_update_memI: "n < length xs \ x \ set (xs[n := x])" by (induct xs arbitrary: n) (auto split:nat.splits) lemma list_update_overwrite[simp]: "xs [i := x, i := y] = xs [i := y]" by (induct xs arbitrary: i) (simp_all split: nat.split) lemma list_update_swap: "i \ i' \ xs [i := x, i' := x'] = xs [i' := x', i := x]" by (induct xs arbitrary: i i') (simp_all split: nat.split) lemma list_update_code [code]: "[][i := y] = []" "(x # xs)[0 := y] = y # xs" "(x # xs)[Suc i := y] = x # xs[i := y]" by simp_all subsubsection \\<^const>\last\ and \<^const>\butlast\\ lemma hd_Nil_eq_last: "hd Nil = last Nil" unfolding hd_def last_def by simp lemma last_snoc [simp]: "last (xs @ [x]) = x" by (induct xs) auto lemma butlast_snoc [simp]: "butlast (xs @ [x]) = xs" by (induct xs) auto lemma last_ConsL: "xs = [] \ last(x#xs) = x" by simp lemma last_ConsR: "xs \ [] \ last(x#xs) = last xs" by simp lemma last_append: "last(xs @ ys) = (if ys = [] then last xs else last ys)" by (induct xs) (auto) lemma last_appendL[simp]: "ys = [] \ last(xs @ ys) = last xs" by(simp add:last_append) lemma last_appendR[simp]: "ys \ [] \ last(xs @ ys) = last ys" by(simp add:last_append) lemma last_tl: "xs = [] \ tl xs \ [] \last (tl xs) = last xs" by (induct xs) simp_all lemma butlast_tl: "butlast (tl xs) = tl (butlast xs)" by (induct xs) simp_all lemma hd_rev: "hd(rev xs) = last xs" by (metis hd_Cons_tl hd_Nil_eq_last last_snoc rev_eq_Cons_iff rev_is_Nil_conv) lemma last_rev: "last(rev xs) = hd xs" by (metis hd_rev rev_swap) lemma last_in_set[simp]: "as \ [] \ last as \ set as" by (induct as) auto lemma length_butlast [simp]: "length (butlast xs) = length xs - 1" by (induct xs rule: rev_induct) auto lemma butlast_append: "butlast (xs @ ys) = (if ys = [] then butlast xs else xs @ butlast ys)" by (induct xs 
arbitrary: ys) auto lemma append_butlast_last_id [simp]: "xs \ [] \ butlast xs @ [last xs] = xs" by (induct xs) auto lemma in_set_butlastD: "x \ set (butlast xs) \ x \ set xs" by (induct xs) (auto split: if_split_asm) lemma in_set_butlast_appendI: "x \ set (butlast xs) \ x \ set (butlast ys) \ x \ set (butlast (xs @ ys))" by (auto dest: in_set_butlastD simp add: butlast_append) lemma last_drop[simp]: "n < length xs \ last (drop n xs) = last xs" by (induct xs arbitrary: n)(auto split:nat.split) lemma nth_butlast: assumes "n < length (butlast xs)" shows "butlast xs ! n = xs ! n" proof (cases xs) case (Cons y ys) moreover from assms have "butlast xs ! n = (butlast xs @ [last xs]) ! n" by (simp add: nth_append) ultimately show ?thesis using append_butlast_last_id by simp qed simp lemma last_conv_nth: "xs\[] \ last xs = xs!(length xs - 1)" by(induct xs)(auto simp:neq_Nil_conv) lemma butlast_conv_take: "butlast xs = take (length xs - 1) xs" by (induction xs rule: induct_list012) simp_all lemma last_list_update: "xs \ [] \ last(xs[k:=x]) = (if k = size xs - 1 then x else last xs)" by (auto simp: last_conv_nth) lemma butlast_list_update: "butlast(xs[k:=x]) = (if k = size xs - 1 then butlast xs else (butlast xs)[k:=x])" by(cases xs rule:rev_cases)(auto simp: list_update_append split: nat.splits) lemma last_map: "xs \ [] \ last (map f xs) = f (last xs)" by (cases xs rule: rev_cases) simp_all lemma map_butlast: "map f (butlast xs) = butlast (map f xs)" by (induct xs) simp_all lemma snoc_eq_iff_butlast: "xs @ [x] = ys \ (ys \ [] \ butlast ys = xs \ last ys = x)" by fastforce corollary longest_common_suffix: "\ss xs' ys'. 
xs = xs' @ ss \ ys = ys' @ ss \ (xs' = [] \ ys' = [] \ last xs' \ last ys')" using longest_common_prefix[of "rev xs" "rev ys"] unfolding rev_swap rev_append by (metis last_rev rev_is_Nil_conv) lemma butlast_rev [simp]: "butlast (rev xs) = rev (tl xs)" by (cases xs) simp_all subsubsection \\<^const>\take\ and \<^const>\drop\\ lemma take_0: "take 0 xs = []" by (induct xs) auto lemma drop_0: "drop 0 xs = xs" by (induct xs) auto lemma take0[simp]: "take 0 = (\xs. [])" by(rule ext) (rule take_0) lemma drop0[simp]: "drop 0 = (\x. x)" by(rule ext) (rule drop_0) lemma take_Suc_Cons [simp]: "take (Suc n) (x # xs) = x # take n xs" by simp lemma drop_Suc_Cons [simp]: "drop (Suc n) (x # xs) = drop n xs" by simp declare take_Cons [simp del] and drop_Cons [simp del] lemma take_Suc: "xs \ [] \ take (Suc n) xs = hd xs # take n (tl xs)" by(clarsimp simp add:neq_Nil_conv) lemma drop_Suc: "drop (Suc n) xs = drop n (tl xs)" by(cases xs, simp_all) lemma hd_take[simp]: "j > 0 \ hd (take j xs) = hd xs" by (metis gr0_conv_Suc list.sel(1) take.simps(1) take_Suc) lemma take_tl: "take n (tl xs) = tl (take (Suc n) xs)" by (induct xs arbitrary: n) simp_all lemma drop_tl: "drop n (tl xs) = tl(drop n xs)" by(induct xs arbitrary: n, simp_all add:drop_Cons drop_Suc split:nat.split) lemma tl_take: "tl (take n xs) = take (n - 1) (tl xs)" by (cases n, simp, cases xs, auto) lemma tl_drop: "tl (drop n xs) = drop n (tl xs)" by (simp only: drop_tl) lemma nth_via_drop: "drop n xs = y#ys \ xs!n = y" by (induct xs arbitrary: n, simp)(auto simp: drop_Cons nth_Cons split: nat.splits) lemma take_Suc_conv_app_nth: "i < length xs \ take (Suc i) xs = take i xs @ [xs!i]" proof (induct xs arbitrary: i) case Nil then show ?case by simp next case Cons then show ?case by (cases i) auto qed lemma Cons_nth_drop_Suc: "i < length xs \ (xs!i) # (drop (Suc i) xs) = drop i xs" proof (induct xs arbitrary: i) case Nil then show ?case by simp next case Cons then show ?case by (cases i) auto qed lemma length_take [simp]: "length 
(take n xs) = min (length xs) n" by (induct n arbitrary: xs) (auto, case_tac xs, auto) lemma length_drop [simp]: "length (drop n xs) = (length xs - n)" by (induct n arbitrary: xs) (auto, case_tac xs, auto) lemma take_all [simp]: "length xs \ n \ take n xs = xs" by (induct n arbitrary: xs) (auto, case_tac xs, auto) lemma drop_all [simp]: "length xs \ n \ drop n xs = []" by (induct n arbitrary: xs) (auto, case_tac xs, auto) lemma take_all_iff [simp]: "take n xs = xs \ length xs \ n" by (metis length_take min.order_iff take_all) lemma drop_all_iff [simp]: "drop n xs = [] \ length xs \ n" by (metis diff_is_0_eq drop_all length_drop list.size(3)) lemma take_append [simp]: "take n (xs @ ys) = (take n xs @ take (n - length xs) ys)" by (induct n arbitrary: xs) (auto, case_tac xs, auto) lemma drop_append [simp]: "drop n (xs @ ys) = drop n xs @ drop (n - length xs) ys" by (induct n arbitrary: xs) (auto, case_tac xs, auto) lemma take_take [simp]: "take n (take m xs) = take (min n m) xs" proof (induct m arbitrary: xs n) case 0 then show ?case by simp next case Suc then show ?case by (cases xs; cases n) simp_all qed lemma drop_drop [simp]: "drop n (drop m xs) = drop (n + m) xs" proof (induct m arbitrary: xs) case 0 then show ?case by simp next case Suc then show ?case by (cases xs) simp_all qed lemma take_drop: "take n (drop m xs) = drop m (take (n + m) xs)" proof (induct m arbitrary: xs n) case 0 then show ?case by simp next case Suc then show ?case by (cases xs; cases n) simp_all qed lemma drop_take: "drop n (take m xs) = take (m-n) (drop n xs)" by(induct xs arbitrary: m n)(auto simp: take_Cons drop_Cons split: nat.split) lemma append_take_drop_id [simp]: "take n xs @ drop n xs = xs" proof (induct n arbitrary: xs) case 0 then show ?case by simp next case Suc then show ?case by (cases xs) simp_all qed lemma take_eq_Nil[simp]: "(take n xs = []) = (n = 0 \ xs = [])" by(induct xs arbitrary: n)(auto simp: take_Cons split:nat.split) lemma drop_eq_Nil[simp]: "(drop n xs = []) = 
(length xs \ n)" by (induct xs arbitrary: n) (auto simp: drop_Cons split:nat.split) lemma take_map: "take n (map f xs) = map f (take n xs)" proof (induct n arbitrary: xs) case 0 then show ?case by simp next case Suc then show ?case by (cases xs) simp_all qed lemma drop_map: "drop n (map f xs) = map f (drop n xs)" proof (induct n arbitrary: xs) case 0 then show ?case by simp next case Suc then show ?case by (cases xs) simp_all qed lemma rev_take: "rev (take i xs) = drop (length xs - i) (rev xs)" proof (induct xs arbitrary: i) case Nil then show ?case by simp next case Cons then show ?case by (cases i) auto qed lemma rev_drop: "rev (drop i xs) = take (length xs - i) (rev xs)" proof (induct xs arbitrary: i) case Nil then show ?case by simp next case Cons then show ?case by (cases i) auto qed lemma drop_rev: "drop n (rev xs) = rev (take (length xs - n) xs)" by (cases "length xs < n") (auto simp: rev_take) lemma take_rev: "take n (rev xs) = rev (drop (length xs - n) xs)" by (cases "length xs < n") (auto simp: rev_drop) lemma nth_take [simp]: "i < n \ (take n xs)!i = xs!i" proof (induct xs arbitrary: i n) case Nil then show ?case by simp next case Cons then show ?case by (cases n; cases i) simp_all qed lemma nth_drop [simp]: "n \ length xs \ (drop n xs)!i = xs!(n + i)" proof (induct n arbitrary: xs) case 0 then show ?case by simp next case Suc then show ?case by (cases xs) simp_all qed lemma butlast_take: "n \ length xs \ butlast (take n xs) = take (n - 1) xs" by (simp add: butlast_conv_take min.absorb1 min.absorb2) lemma butlast_drop: "butlast (drop n xs) = drop n (butlast xs)" by (simp add: butlast_conv_take drop_take ac_simps) lemma take_butlast: "n < length xs \ take n (butlast xs) = take n xs" by (simp add: butlast_conv_take min.absorb1) lemma drop_butlast: "drop n (butlast xs) = butlast (drop n xs)" by (simp add: butlast_conv_take drop_take ac_simps) lemma butlast_power: "(butlast ^^ n) xs = take (length xs - n) xs" by (induct n) (auto simp: butlast_take) lemma 
hd_drop_conv_nth: "n < length xs \ hd(drop n xs) = xs!n" by(simp add: hd_conv_nth) lemma set_take_subset_set_take: "m \ n \ set(take m xs) \ set(take n xs)" proof (induct xs arbitrary: m n) case (Cons x xs m n) then show ?case by (cases n) (auto simp: take_Cons) qed simp lemma set_take_subset: "set(take n xs) \ set xs" by(induct xs arbitrary: n)(auto simp:take_Cons split:nat.split) lemma set_drop_subset: "set(drop n xs) \ set xs" by(induct xs arbitrary: n)(auto simp:drop_Cons split:nat.split) lemma set_drop_subset_set_drop: "m \ n \ set(drop m xs) \ set(drop n xs)" proof (induct xs arbitrary: m n) case (Cons x xs m n) then show ?case by (clarsimp simp: drop_Cons split: nat.split) (metis set_drop_subset subset_iff) qed simp lemma in_set_takeD: "x \ set(take n xs) \ x \ set xs" using set_take_subset by fast lemma in_set_dropD: "x \ set(drop n xs) \ x \ set xs" using set_drop_subset by fast lemma append_eq_conv_conj: "(xs @ ys = zs) = (xs = take (length xs) zs \ ys = drop (length xs) zs)" proof (induct xs arbitrary: zs) case (Cons x xs zs) then show ?case by (cases zs, auto) qed auto lemma map_eq_append_conv: "map f xs = ys @ zs \ (\us vs. xs = us @ vs \ ys = map f us \ zs = map f vs)" proof - have "map f xs \ ys @ zs \ map f xs \ ys @ zs \ map f xs \ ys @ zs \ map f xs = ys @ zs \ (\bs bsa. xs = bs @ bsa \ ys = map f bs \ zs = map f bsa)" by (metis append_eq_conv_conj append_take_drop_id drop_map take_map) then show ?thesis using map_append by blast qed lemma append_eq_map_conv: "ys @ zs = map f xs \ (\us vs. 
xs = us @ vs \ ys = map f us \ zs = map f vs)" by (metis map_eq_append_conv) lemma take_add: "take (i+j) xs = take i xs @ take j (drop i xs)" proof (induct xs arbitrary: i) case (Cons x xs i) then show ?case by (cases i, auto) qed auto lemma append_eq_append_conv_if: "(xs\<^sub>1 @ xs\<^sub>2 = ys\<^sub>1 @ ys\<^sub>2) = (if size xs\<^sub>1 \ size ys\<^sub>1 then xs\<^sub>1 = take (size xs\<^sub>1) ys\<^sub>1 \ xs\<^sub>2 = drop (size xs\<^sub>1) ys\<^sub>1 @ ys\<^sub>2 else take (size ys\<^sub>1) xs\<^sub>1 = ys\<^sub>1 \ drop (size ys\<^sub>1) xs\<^sub>1 @ xs\<^sub>2 = ys\<^sub>2)" proof (induct xs\<^sub>1 arbitrary: ys\<^sub>1) case (Cons a xs\<^sub>1 ys\<^sub>1) then show ?case by (cases ys\<^sub>1, auto) qed auto lemma take_hd_drop: "n < length xs \ take n xs @ [hd (drop n xs)] = take (Suc n) xs" by (induct xs arbitrary: n) (simp_all add:drop_Cons split:nat.split) lemma id_take_nth_drop: "i < length xs \ xs = take i xs @ xs!i # drop (Suc i) xs" proof - assume si: "i < length xs" hence "xs = take (Suc i) xs @ drop (Suc i) xs" by auto moreover from si have "take (Suc i) xs = take i xs @ [xs!i]" using take_Suc_conv_app_nth by blast ultimately show ?thesis by auto qed lemma take_update_cancel[simp]: "n \ m \ take n (xs[m := y]) = take n xs" by(simp add: list_eq_iff_nth_eq) lemma drop_update_cancel[simp]: "n < m \ drop m (xs[n := x]) = drop m xs" by(simp add: list_eq_iff_nth_eq) lemma upd_conv_take_nth_drop: "i < length xs \ xs[i:=a] = take i xs @ a # drop (Suc i) xs" proof - assume i: "i < length xs" have "xs[i:=a] = (take i xs @ xs!i # drop (Suc i) xs)[i:=a]" by(rule arg_cong[OF id_take_nth_drop[OF i]]) also have "\ = take i xs @ a # drop (Suc i) xs" using i by (simp add: list_update_append) finally show ?thesis . 
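(* The decomposition lemmas above (id_take_nth_drop, upd_conv_take_nth_drop, take_add) have direct counterparts on Python lists, reading take n xs as xs[:n] and drop n xs as xs[n:]. The following sketch is illustrative only and not part of the theory; the helper names take/drop are our own. *)

```python
# Illustrative Python analogues of the take/drop decomposition lemmas.
# take n xs ~ xs[:n], drop n xs ~ xs[n:]; helper names are ours.

def take(n, xs):
    return xs[:n]

def drop(n, xs):
    return xs[n:]

xs = [10, 20, 30, 40]
i = 2

# id_take_nth_drop: xs = take i xs @ xs!i # drop (Suc i) xs  (for i < length xs)
assert xs == take(i, xs) + [xs[i]] + drop(i + 1, xs)

# upd_conv_take_nth_drop: xs[i := a] = take i xs @ a # drop (Suc i) xs
a = 99
updated = list(xs)
updated[i] = a
assert updated == take(i, xs) + [a] + drop(i + 1, xs)

# take_add: take (i+j) xs = take i xs @ take j (drop i xs)
j = 1
assert take(i + j, xs) == take(i, xs) + take(j, drop(i, xs))
```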
qed lemma take_update_swap: "take m (xs[n := x]) = (take m xs)[n := x]" proof (cases "n \ length xs") case False then show ?thesis by (simp add: upd_conv_take_nth_drop take_Cons drop_take min_def diff_Suc split: nat.split) qed auto lemma drop_update_swap: assumes "m \ n" shows "drop m (xs[n := x]) = (drop m xs)[n-m := x]" proof (cases "n \ length xs") case False with assms show ?thesis by (simp add: upd_conv_take_nth_drop drop_take) qed auto lemma nth_image: "l \ size xs \ nth xs ` {0..\<^const>\takeWhile\ and \<^const>\dropWhile\\ lemma length_takeWhile_le: "length (takeWhile P xs) \ length xs" by (induct xs) auto lemma takeWhile_dropWhile_id [simp]: "takeWhile P xs @ dropWhile P xs = xs" by (induct xs) auto lemma takeWhile_append1 [simp]: "\x \ set xs; \P(x)\ \ takeWhile P (xs @ ys) = takeWhile P xs" by (induct xs) auto lemma takeWhile_append2 [simp]: "(\x. x \ set xs \ P x) \ takeWhile P (xs @ ys) = xs @ takeWhile P ys" by (induct xs) auto lemma takeWhile_append: "takeWhile P (xs @ ys) = (if \x\set xs. P x then xs @ takeWhile P ys else takeWhile P xs)" using takeWhile_append1[of _ xs P ys] takeWhile_append2[of xs P ys] by auto lemma takeWhile_tail: "\ P x \ takeWhile P (xs @ (x#l)) = takeWhile P xs" by (induct xs) auto lemma takeWhile_eq_Nil_iff: "takeWhile P xs = [] \ xs = [] \ \P (hd xs)" by (cases xs) auto lemma takeWhile_nth: "j < length (takeWhile P xs) \ takeWhile P xs ! j = xs ! j" by (metis nth_append takeWhile_dropWhile_id) lemma dropWhile_nth: "j < length (dropWhile P xs) \ dropWhile P xs ! j = xs ! (j + length (takeWhile P xs))" by (metis add.commute nth_append_length_plus takeWhile_dropWhile_id) lemma length_dropWhile_le: "length (dropWhile P xs) \ length xs" by (induct xs) auto lemma dropWhile_append1 [simp]: "\x \ set xs; \P(x)\ \ dropWhile P (xs @ ys) = (dropWhile P xs)@ys" by (induct xs) auto lemma dropWhile_append2 [simp]: "(\x. 
x \ set xs \ P(x)) \ dropWhile P (xs @ ys) = dropWhile P ys" by (induct xs) auto lemma dropWhile_append3: "\ P y \dropWhile P (xs @ y # ys) = dropWhile P xs @ y # ys" by (induct xs) auto lemma dropWhile_append: "dropWhile P (xs @ ys) = (if \x\set xs. P x then dropWhile P ys else dropWhile P xs @ ys)" using dropWhile_append1[of _ xs P ys] dropWhile_append2[of xs P ys] by auto lemma dropWhile_last: "x \ set xs \ \ P x \ last (dropWhile P xs) = last xs" by (auto simp add: dropWhile_append3 in_set_conv_decomp) lemma set_dropWhileD: "x \ set (dropWhile P xs) \ x \ set xs" by (induct xs) (auto split: if_split_asm) lemma set_takeWhileD: "x \ set (takeWhile P xs) \ x \ set xs \ P x" by (induct xs) (auto split: if_split_asm) lemma takeWhile_eq_all_conv[simp]: "(takeWhile P xs = xs) = (\x \ set xs. P x)" by(induct xs, auto) lemma dropWhile_eq_Nil_conv[simp]: "(dropWhile P xs = []) = (\x \ set xs. P x)" by(induct xs, auto) lemma dropWhile_eq_Cons_conv: "(dropWhile P xs = y#ys) = (xs = takeWhile P xs @ y # ys \ \ P y)" by(induct xs, auto) lemma dropWhile_eq_self_iff: "dropWhile P xs = xs \ xs = [] \ \P (hd xs)" by (cases xs) (auto simp: dropWhile_eq_Cons_conv) lemma distinct_takeWhile[simp]: "distinct xs \ distinct (takeWhile P xs)" by (induct xs) (auto dest: set_takeWhileD) lemma distinct_dropWhile[simp]: "distinct xs \ distinct (dropWhile P xs)" by (induct xs) auto lemma takeWhile_map: "takeWhile P (map f xs) = map f (takeWhile (P \ f) xs)" by (induct xs) auto lemma dropWhile_map: "dropWhile P (map f xs) = map f (dropWhile (P \ f) xs)" by (induct xs) auto lemma takeWhile_eq_take: "takeWhile P xs = take (length (takeWhile P xs)) xs" by (induct xs) auto lemma dropWhile_eq_drop: "dropWhile P xs = drop (length (takeWhile P xs)) xs" by (induct xs) auto lemma hd_dropWhile: "dropWhile P xs \ [] \ \ P (hd (dropWhile P xs))" by (induct xs) auto lemma takeWhile_eq_filter: assumes "\ x. 
x \ set (dropWhile P xs) \ \ P x" shows "takeWhile P xs = filter P xs" proof - have A: "filter P xs = filter P (takeWhile P xs @ dropWhile P xs)" by simp have B: "filter P (dropWhile P xs) = []" unfolding filter_empty_conv using assms by blast have "filter P xs = takeWhile P xs" unfolding A filter_append B by (auto simp add: filter_id_conv dest: set_takeWhileD) thus ?thesis .. qed lemma takeWhile_eq_take_P_nth: "\ \ i. \ i < n ; i < length xs \ \ P (xs ! i) ; n < length xs \ \ P (xs ! n) \ \ takeWhile P xs = take n xs" proof (induct xs arbitrary: n) case Nil thus ?case by simp next case (Cons x xs) show ?case proof (cases n) case 0 with Cons show ?thesis by simp next case [simp]: (Suc n') have "P x" using Cons.prems(1)[of 0] by simp moreover have "takeWhile P xs = take n' xs" proof (rule Cons.hyps) fix i assume "i < n'" "i < length xs" thus "P (xs ! i)" using Cons.prems(1)[of "Suc i"] by simp next assume "n' < length xs" thus "\ P (xs ! n')" using Cons by auto qed ultimately show ?thesis by simp qed qed lemma nth_length_takeWhile: "length (takeWhile P xs) < length xs \ \ P (xs ! length (takeWhile P xs))" by (induct xs) auto lemma length_takeWhile_less_P_nth: assumes all: "\ i. i < j \ P (xs ! i)" and "j \ length xs" shows "j \ length (takeWhile P xs)" proof (rule classical) assume "\ ?thesis" hence "length (takeWhile P xs) < length xs" using assms by simp thus ?thesis using all \\ ?thesis\ nth_length_takeWhile[of P xs] by auto qed lemma takeWhile_neq_rev: "\distinct xs; x \ set xs\ \ takeWhile (\y. y \ x) (rev xs) = rev (tl (dropWhile (\y. y \ x) xs))" by(induct xs) (auto simp: takeWhile_tail[where l="[]"]) lemma dropWhile_neq_rev: "\distinct xs; x \ set xs\ \ dropWhile (\y. y \ x) (rev xs) = x # rev (takeWhile (\y. y \ x) xs)" proof (induct xs) case (Cons a xs) then show ?case by(auto, subst dropWhile_append2, auto) qed simp lemma takeWhile_not_last: "distinct xs \ takeWhile (\y. 
y \ last xs) xs = butlast xs" by(induction xs rule: induct_list012) auto lemma takeWhile_cong [fundef_cong]: "\l = k; \x. x \ set l \ P x = Q x\ \ takeWhile P l = takeWhile Q k" by (induct k arbitrary: l) (simp_all) lemma dropWhile_cong [fundef_cong]: "\l = k; \x. x \ set l \ P x = Q x\ \ dropWhile P l = dropWhile Q k" by (induct k arbitrary: l, simp_all) lemma takeWhile_idem [simp]: "takeWhile P (takeWhile P xs) = takeWhile P xs" by (induct xs) auto lemma dropWhile_idem [simp]: "dropWhile P (dropWhile P xs) = dropWhile P xs" by (induct xs) auto subsubsection \\<^const>\zip\\ lemma zip_Nil [simp]: "zip [] ys = []" by (induct ys) auto lemma zip_Cons_Cons [simp]: "zip (x # xs) (y # ys) = (x, y) # zip xs ys" by simp declare zip_Cons [simp del] lemma [code]: "zip [] ys = []" "zip xs [] = []" "zip (x # xs) (y # ys) = (x, y) # zip xs ys" by (fact zip_Nil zip.simps(1) zip_Cons_Cons)+ lemma zip_Cons1: "zip (x#xs) ys = (case ys of [] \ [] | y#ys \ (x,y)#zip xs ys)" by(auto split:list.split) lemma length_zip [simp]: "length (zip xs ys) = min (length xs) (length ys)" by (induct xs ys rule:list_induct2') auto lemma zip_obtain_same_length: assumes "\zs ws n. 
length zs = length ws \ n = min (length xs) (length ys) \ zs = take n xs \ ws = take n ys \ P (zip zs ws)" shows "P (zip xs ys)" proof - let ?n = "min (length xs) (length ys)" have "P (zip (take ?n xs) (take ?n ys))" by (rule assms) simp_all moreover have "zip xs ys = zip (take ?n xs) (take ?n ys)" proof (induct xs arbitrary: ys) case Nil then show ?case by simp next case (Cons x xs) then show ?case by (cases ys) simp_all qed ultimately show ?thesis by simp qed lemma zip_append1: "zip (xs @ ys) zs = zip xs (take (length xs) zs) @ zip ys (drop (length xs) zs)" by (induct xs zs rule:list_induct2') auto lemma zip_append2: "zip xs (ys @ zs) = zip (take (length ys) xs) ys @ zip (drop (length ys) xs) zs" by (induct xs ys rule:list_induct2') auto lemma zip_append [simp]: "\length xs = length us\ \ zip (xs@ys) (us@vs) = zip xs us @ zip ys vs" by (simp add: zip_append1) lemma zip_rev: "length xs = length ys \ zip (rev xs) (rev ys) = rev (zip xs ys)" by (induct rule:list_induct2, simp_all) lemma zip_map_map: "zip (map f xs) (map g ys) = map (\ (x, y). (f x, g y)) (zip xs ys)" proof (induct xs arbitrary: ys) case (Cons x xs) note Cons_x_xs = Cons.hyps show ?case proof (cases ys) case (Cons y ys') show ?thesis unfolding Cons using Cons_x_xs by simp qed simp qed simp lemma zip_map1: "zip (map f xs) ys = map (\(x, y). (f x, y)) (zip xs ys)" using zip_map_map[of f xs "\x. x" ys] by simp lemma zip_map2: "zip xs (map f ys) = map (\(x, y). (x, f y)) (zip xs ys)" using zip_map_map[of "\x. x" xs f ys] by simp lemma map_zip_map: "map f (zip (map g xs) ys) = map (%(x,y). f(g x, y)) (zip xs ys)" by (auto simp: zip_map1) lemma map_zip_map2: "map f (zip xs (map g ys)) = map (%(x,y). f(x, g y)) (zip xs ys)" by (auto simp: zip_map2) text\Courtesy of Andreas Lochbihler:\ lemma zip_same_conv_map: "zip xs xs = map (\x. 
(x, x)) xs" by(induct xs) auto lemma nth_zip [simp]: "\i < length xs; i < length ys\ \ (zip xs ys)!i = (xs!i, ys!i)" proof (induct ys arbitrary: i xs) case (Cons y ys) then show ?case by (cases xs) (simp_all add: nth.simps split: nat.split) qed auto lemma set_zip: "set (zip xs ys) = {(xs!i, ys!i) | i. i < min (length xs) (length ys)}" by(simp add: set_conv_nth cong: rev_conj_cong) lemma zip_same: "((a,b) \ set (zip xs xs)) = (a \ set xs \ a = b)" by(induct xs) auto lemma zip_update: "zip (xs[i:=x]) (ys[i:=y]) = (zip xs ys)[i:=(x,y)]" by (simp add: update_zip) lemma zip_replicate [simp]: "zip (replicate i x) (replicate j y) = replicate (min i j) (x,y)" proof (induct i arbitrary: j) case (Suc i) then show ?case by (cases j, auto) qed auto lemma zip_replicate1: "zip (replicate n x) ys = map (Pair x) (take n ys)" by(induction ys arbitrary: n)(case_tac [2] n, simp_all) lemma take_zip: "take n (zip xs ys) = zip (take n xs) (take n ys)" proof (induct n arbitrary: xs ys) case 0 then show ?case by simp next case Suc then show ?case by (cases xs; cases ys) simp_all qed lemma drop_zip: "drop n (zip xs ys) = zip (drop n xs) (drop n ys)" proof (induct n arbitrary: xs ys) case 0 then show ?case by simp next case Suc then show ?case by (cases xs; cases ys) simp_all qed lemma zip_takeWhile_fst: "zip (takeWhile P xs) ys = takeWhile (P \ fst) (zip xs ys)" proof (induct xs arbitrary: ys) case Nil then show ?case by simp next case Cons then show ?case by (cases ys) auto qed lemma zip_takeWhile_snd: "zip xs (takeWhile P ys) = takeWhile (P \ snd) (zip xs ys)" proof (induct xs arbitrary: ys) case Nil then show ?case by simp next case Cons then show ?case by (cases ys) auto qed lemma set_zip_leftD: "(x,y)\ set (zip xs ys) \ x \ set xs" by (induct xs ys rule:list_induct2') auto lemma set_zip_rightD: "(x,y)\ set (zip xs ys) \ y \ set ys" by (induct xs ys rule:list_induct2') auto lemma in_set_zipE: "(x,y) \ set(zip xs ys) \ (\ x \ set xs; y \ set ys \ \ R) \ R" by(blast dest: set_zip_leftD 
set_zip_rightD) lemma zip_map_fst_snd: "zip (map fst zs) (map snd zs) = zs" by (induct zs) simp_all lemma zip_eq_conv: "length xs = length ys \ zip xs ys = zs \ map fst zs = xs \ map snd zs = ys" by (auto simp add: zip_map_fst_snd) lemma in_set_zip: "p \ set (zip xs ys) \ (\n. xs ! n = fst p \ ys ! n = snd p \ n < length xs \ n < length ys)" by (cases p) (auto simp add: set_zip) lemma in_set_impl_in_set_zip1: assumes "length xs = length ys" assumes "x \ set xs" obtains y where "(x, y) \ set (zip xs ys)" proof - from assms have "x \ set (map fst (zip xs ys))" by simp from this that show ?thesis by fastforce qed lemma in_set_impl_in_set_zip2: assumes "length xs = length ys" assumes "y \ set ys" obtains x where "(x, y) \ set (zip xs ys)" proof - from assms have "y \ set (map snd (zip xs ys))" by simp from this that show ?thesis by fastforce qed lemma zip_eq_Nil_iff: "zip xs ys = [] \ xs = [] \ ys = []" by (cases xs; cases ys) simp_all lemma zip_eq_ConsE: assumes "zip xs ys = xy # xys" obtains x xs' y ys' where "xs = x # xs'" and "ys = y # ys'" and "xy = (x, y)" and "xys = zip xs' ys'" proof - from assms have "xs \ []" and "ys \ []" using zip_eq_Nil_iff [of xs ys] by simp_all then obtain x xs' y ys' where xs: "xs = x # xs'" and ys: "ys = y # ys'" by (cases xs; cases ys) auto with assms have "xy = (x, y)" and "xys = zip xs' ys'" by simp_all with xs ys show ?thesis .. qed lemma semilattice_map2: "semilattice (map2 (\<^bold>*))" if "semilattice (\<^bold>*)" for f (infixl "\<^bold>*" 70) proof - from that interpret semilattice f . 
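(* Several zip lemmas above (length_zip, zip_map_fst_snd, take_zip) also hold verbatim for Python's built-in zip, which likewise truncates to the shorter argument. A small illustrative sketch, not part of the theory: *)

```python
# Illustrative Python analogues of the zip lemmas; not part of the theory.

xs = [1, 2, 3]
ys = ['a', 'b', 'c', 'd']

# length_zip: zip truncates to the shorter list
assert len(list(zip(xs, ys))) == min(len(xs), len(ys))

# zip_map_fst_snd: zipping the two projections reassembles the pair list
zs = list(zip(xs, ys))
fsts = [p[0] for p in zs]
snds = [p[1] for p in zs]
assert list(zip(fsts, snds)) == zs

# take_zip: take distributes over zip
n = 2
assert list(zip(xs, ys))[:n] == list(zip(xs[:n], ys[:n]))
```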
show ?thesis proof show "map2 (\<^bold>*) (map2 (\<^bold>*) xs ys) zs = map2 (\<^bold>*) xs (map2 (\<^bold>*) ys zs)" for xs ys zs :: "'a list" proof (induction "zip xs (zip ys zs)" arbitrary: xs ys zs) case Nil from Nil [symmetric] show ?case by (auto simp add: zip_eq_Nil_iff) next case (Cons xyz xyzs) from Cons.hyps(2) [symmetric] show ?case by (rule zip_eq_ConsE) (erule zip_eq_ConsE, auto intro: Cons.hyps(1) simp add: ac_simps) qed show "map2 (\<^bold>*) xs ys = map2 (\<^bold>*) ys xs" for xs ys :: "'a list" proof (induction "zip xs ys" arbitrary: xs ys) case Nil then show ?case by (auto simp add: zip_eq_Nil_iff dest: sym) next case (Cons xy xys) from Cons.hyps(2) [symmetric] show ?case by (rule zip_eq_ConsE) (auto intro: Cons.hyps(1) simp add: ac_simps) qed show "map2 (\<^bold>*) xs xs = xs" for xs :: "'a list" by (induction xs) simp_all qed qed lemma pair_list_eqI: assumes "map fst xs = map fst ys" and "map snd xs = map snd ys" shows "xs = ys" proof - from assms(1) have "length xs = length ys" by (rule map_eq_imp_length_eq) from this assms show ?thesis by (induct xs ys rule: list_induct2) (simp_all add: prod_eqI) qed lemma hd_zip: \hd (zip xs ys) = (hd xs, hd ys)\ if \xs \ []\ and \ys \ []\ using that by (cases xs; cases ys) simp_all lemma last_zip: \last (zip xs ys) = (last xs, last ys)\ if \xs \ []\ and \ys \ []\ and \length xs = length ys\ using that by (cases xs rule: rev_cases; cases ys rule: rev_cases) simp_all subsubsection \\<^const>\list_all2\\ lemma list_all2_lengthD [intro?]: "list_all2 P xs ys \ length xs = length ys" by (simp add: list_all2_iff) lemma list_all2_Nil [iff, code]: "list_all2 P [] ys = (ys = [])" by (simp add: list_all2_iff) lemma list_all2_Nil2 [iff, code]: "list_all2 P xs [] = (xs = [])" by (simp add: list_all2_iff) lemma list_all2_Cons [iff, code]: "list_all2 P (x # xs) (y # ys) = (P x y \ list_all2 P xs ys)" by (auto simp add: list_all2_iff) lemma list_all2_Cons1: "list_all2 P (x # xs) ys = (\z zs. 
ys = z # zs \ P x z \ list_all2 P xs zs)" by (cases ys) auto lemma list_all2_Cons2: "list_all2 P xs (y # ys) = (\z zs. xs = z # zs \ P z y \ list_all2 P zs ys)" by (cases xs) auto lemma list_all2_induct [consumes 1, case_names Nil Cons, induct set: list_all2]: assumes P: "list_all2 P xs ys" assumes Nil: "R [] []" assumes Cons: "\x xs y ys. \P x y; list_all2 P xs ys; R xs ys\ \ R (x # xs) (y # ys)" shows "R xs ys" using P by (induct xs arbitrary: ys) (auto simp add: list_all2_Cons1 Nil Cons) lemma list_all2_rev [iff]: "list_all2 P (rev xs) (rev ys) = list_all2 P xs ys" by (simp add: list_all2_iff zip_rev cong: conj_cong) lemma list_all2_rev1: "list_all2 P (rev xs) ys = list_all2 P xs (rev ys)" by (subst list_all2_rev [symmetric]) simp lemma list_all2_append1: "list_all2 P (xs @ ys) zs = (\us vs. zs = us @ vs \ length us = length xs \ length vs = length ys \ list_all2 P xs us \ list_all2 P ys vs)" (is "?lhs = ?rhs") proof assume ?lhs then show ?rhs apply (rule_tac x = "take (length xs) zs" in exI) apply (rule_tac x = "drop (length xs) zs" in exI) apply (force split: nat_diff_split simp add: list_all2_iff zip_append1) done next assume ?rhs then show ?lhs by (auto simp add: list_all2_iff) qed lemma list_all2_append2: "list_all2 P xs (ys @ zs) = (\us vs. 
xs = us @ vs \ length us = length ys \ length vs = length zs \ list_all2 P us ys \ list_all2 P vs zs)" (is "?lhs = ?rhs") proof assume ?lhs then show ?rhs apply (rule_tac x = "take (length ys) xs" in exI) apply (rule_tac x = "drop (length ys) xs" in exI) apply (force split: nat_diff_split simp add: list_all2_iff zip_append2) done next assume ?rhs then show ?lhs by (auto simp add: list_all2_iff) qed lemma list_all2_append: "length xs = length ys \ list_all2 P (xs@us) (ys@vs) = (list_all2 P xs ys \ list_all2 P us vs)" by (induct rule:list_induct2, simp_all) lemma list_all2_appendI [intro?, trans]: "\ list_all2 P a b; list_all2 P c d \ \ list_all2 P (a@c) (b@d)" by (simp add: list_all2_append list_all2_lengthD) lemma list_all2_conv_all_nth: "list_all2 P xs ys = (length xs = length ys \ (\i < length xs. P (xs!i) (ys!i)))" by (force simp add: list_all2_iff set_zip) lemma list_all2_trans: assumes tr: "!!a b c. P1 a b \ P2 b c \ P3 a c" shows "!!bs cs. list_all2 P1 as bs \ list_all2 P2 bs cs \ list_all2 P3 as cs" (is "!!bs cs. PROP ?Q as bs cs") proof (induct as) fix x xs bs assume I1: "!!bs cs. PROP ?Q xs bs cs" show "!!cs. PROP ?Q (x # xs) bs cs" proof (induct bs) fix y ys cs assume I2: "!!cs. PROP ?Q (x # xs) ys cs" show "PROP ?Q (x # xs) (y # ys) cs" by (induct cs) (auto intro: tr I1 I2) qed simp qed simp lemma list_all2_all_nthI [intro?]: "length a = length b \ (\n. n < length a \ P (a!n) (b!n)) \ list_all2 P a b" by (simp add: list_all2_conv_all_nth) lemma list_all2I: "\x \ set (zip a b). case_prod P x \ length a = length b \ list_all2 P a b" by (simp add: list_all2_iff) lemma list_all2_nthD: "\ list_all2 P xs ys; p < size xs \ \ P (xs!p) (ys!p)" by (simp add: list_all2_conv_all_nth) lemma list_all2_nthD2: "\list_all2 P xs ys; p < size ys\ \ P (xs!p) (ys!p)" by (frule list_all2_lengthD) (auto intro: list_all2_nthD) lemma list_all2_map1: "list_all2 P (map f as) bs = list_all2 (\x y. 
P (f x) y) as bs" by (simp add: list_all2_conv_all_nth) lemma list_all2_map2: "list_all2 P as (map f bs) = list_all2 (\x y. P x (f y)) as bs" by (auto simp add: list_all2_conv_all_nth) lemma list_all2_refl [intro?]: "(\x. P x x) \ list_all2 P xs xs" by (simp add: list_all2_conv_all_nth) lemma list_all2_update_cong: "\ list_all2 P xs ys; P x y \ \ list_all2 P (xs[i:=x]) (ys[i:=y])" by (cases "i < length ys") (auto simp add: list_all2_conv_all_nth nth_list_update) lemma list_all2_takeI [simp,intro?]: "list_all2 P xs ys \ list_all2 P (take n xs) (take n ys)" proof (induct xs arbitrary: n ys) case (Cons x xs) then show ?case by (cases n) (auto simp: list_all2_Cons1) qed auto lemma list_all2_dropI [simp,intro?]: "list_all2 P xs ys \ list_all2 P (drop n xs) (drop n ys)" proof (induct xs arbitrary: n ys) case (Cons x xs) then show ?case by (cases n) (auto simp: list_all2_Cons1) qed auto lemma list_all2_mono [intro?]: "list_all2 P xs ys \ (\xs ys. P xs ys \ Q xs ys) \ list_all2 Q xs ys" by (rule list.rel_mono_strong) lemma list_all2_eq: "xs = ys \ list_all2 (=) xs ys" by (induct xs ys rule: list_induct2') auto lemma list_eq_iff_zip_eq: "xs = ys \ length xs = length ys \ (\(x,y) \ set (zip xs ys). x = y)" by(auto simp add: set_zip list_all2_eq list_all2_conv_all_nth cong: conj_cong) lemma list_all2_same: "list_all2 P xs xs \ (\x\set xs. P x x)" by(auto simp add: list_all2_conv_all_nth set_conv_nth) lemma zip_assoc: "zip xs (zip ys zs) = map (\((x, y), z). (x, y, z)) (zip (zip xs ys) zs)" by(rule list_all2_all_nthI[where P="(=)", unfolded list.rel_eq]) simp_all lemma zip_commute: "zip xs ys = map (\(x, y). (y, x)) (zip ys xs)" by(rule list_all2_all_nthI[where P="(=)", unfolded list.rel_eq]) simp_all lemma zip_left_commute: "zip xs (zip ys zs) = map (\(y, (x, z)). (x, y, z)) (zip ys (zip xs zs))" by(rule list_all2_all_nthI[where P="(=)", unfolded list.rel_eq]) simp_all lemma zip_replicate2: "zip xs (replicate n y) = map (\x. 
(x, y)) (take n xs)" by(subst zip_commute)(simp add: zip_replicate1) subsubsection \\<^const>\List.product\ and \<^const>\product_lists\\ lemma product_concat_map: "List.product xs ys = concat (map (\x. map (\y. (x,y)) ys) xs)" by(induction xs) (simp)+ lemma set_product[simp]: "set (List.product xs ys) = set xs \ set ys" by (induct xs) auto lemma length_product [simp]: "length (List.product xs ys) = length xs * length ys" by (induct xs) simp_all lemma product_nth: assumes "n < length xs * length ys" shows "List.product xs ys ! n = (xs ! (n div length ys), ys ! (n mod length ys))" using assms proof (induct xs arbitrary: n) case Nil then show ?case by simp next case (Cons x xs n) then have "length ys > 0" by auto with Cons show ?case by (auto simp add: nth_append not_less le_mod_geq le_div_geq) qed lemma in_set_product_lists_length: "xs \ set (product_lists xss) \ length xs = length xss" by (induct xss arbitrary: xs) auto lemma product_lists_set: "set (product_lists xss) = {xs. list_all2 (\x ys. x \ set ys) xs xss}" (is "?L = Collect ?R") proof (intro equalityI subsetI, unfold mem_Collect_eq) fix xs assume "xs \ ?L" then have "length xs = length xss" by (rule in_set_product_lists_length) from this \xs \ ?L\ show "?R xs" by (induct xs xss rule: list_induct2) auto next fix xs assume "?R xs" then show "xs \ ?L" by induct auto qed subsubsection \\<^const>\fold\ with natural argument order\ lemma fold_simps [code]: \ \eta-expanded variant for generated code -- enables tail-recursion optimisation in Scala\ "fold f [] s = s" "fold f (x # xs) s = fold f xs (f x s)" by simp_all lemma fold_remove1_split: "\ \x y. x \ set xs \ y \ set xs \ f x \ f y = f y \ f x; x \ set xs \ \ fold f xs = fold f (remove1 x xs) \ f x" by (induct xs) (auto simp add: comp_assoc) lemma fold_cong [fundef_cong]: "a = b \ xs = ys \ (\x. x \ set xs \ f x = g x) \ fold f xs a = fold g ys b" by (induct ys arbitrary: a b xs) simp_all lemma fold_id: "(\x. 
x \ set xs \ f x = id) \ fold f xs = id" by (induct xs) simp_all lemma fold_commute: "(\x. x \ set xs \ h \ g x = f x \ h) \ h \ fold g xs = fold f xs \ h" by (induct xs) (simp_all add: fun_eq_iff) lemma fold_commute_apply: assumes "\x. x \ set xs \ h \ g x = f x \ h" shows "h (fold g xs s) = fold f xs (h s)" proof - from assms have "h \ fold g xs = fold f xs \ h" by (rule fold_commute) then show ?thesis by (simp add: fun_eq_iff) qed lemma fold_invariant: "\ \x. x \ set xs \ Q x; P s; \x s. Q x \ P s \ P (f x s) \ \ P (fold f xs s)" by (induct xs arbitrary: s) simp_all lemma fold_append [simp]: "fold f (xs @ ys) = fold f ys \ fold f xs" by (induct xs) simp_all lemma fold_map [code_unfold]: "fold g (map f xs) = fold (g \ f) xs" by (induct xs) simp_all lemma fold_filter: "fold f (filter P xs) = fold (\x. if P x then f x else id) xs" by (induct xs) simp_all lemma fold_rev: "(\x y. x \ set xs \ y \ set xs \ f y \ f x = f x \ f y) \ fold f (rev xs) = fold f xs" by (induct xs) (simp_all add: fold_commute_apply fun_eq_iff) lemma fold_Cons_rev: "fold Cons xs = append (rev xs)" by (induct xs) simp_all lemma rev_conv_fold [code]: "rev xs = fold Cons xs []" by (simp add: fold_Cons_rev) lemma fold_append_concat_rev: "fold append xss = append (concat (rev xss))" by (induct xss) simp_all text \\<^const>\Finite_Set.fold\ and \<^const>\fold\\ lemma (in comp_fun_commute_on) fold_set_fold_remdups: assumes "set xs \ S" shows "Finite_Set.fold f y (set xs) = fold f (remdups xs) y" by (rule sym, use assms in \induct xs arbitrary: y\) (simp_all add: insert_absorb fold_fun_left_comm) lemma (in comp_fun_idem_on) fold_set_fold: assumes "set xs \ S" shows "Finite_Set.fold f y (set xs) = fold f xs y" by (rule sym, use assms in \induct xs arbitrary: y\) (simp_all add: fold_fun_left_comm) lemma union_set_fold [code]: "set xs \ A = fold Set.insert xs A" proof - interpret comp_fun_idem Set.insert by (fact comp_fun_idem_insert) show ?thesis by (simp add: union_fold_insert fold_set_fold) qed lemma 
union_coset_filter [code]: "List.coset xs \ A = List.coset (List.filter (\x. x \ A) xs)" by auto lemma minus_set_fold [code]: "A - set xs = fold Set.remove xs A" proof - interpret comp_fun_idem Set.remove by (fact comp_fun_idem_remove) show ?thesis by (simp add: minus_fold_remove [of _ A] fold_set_fold) qed lemma minus_coset_filter [code]: "A - List.coset xs = set (List.filter (\x. x \ A) xs)" by auto lemma inter_set_filter [code]: "A \ set xs = set (List.filter (\x. x \ A) xs)" by auto lemma inter_coset_fold [code]: "A \ List.coset xs = fold Set.remove xs A" by (simp add: Diff_eq [symmetric] minus_set_fold) lemma (in semilattice_set) set_eq_fold [code]: "F (set (x # xs)) = fold f xs x" proof - interpret comp_fun_idem f by standard (simp_all add: fun_eq_iff left_commute) show ?thesis by (simp add: eq_fold fold_set_fold) qed lemma (in complete_lattice) Inf_set_fold: "Inf (set xs) = fold inf xs top" proof - interpret comp_fun_idem "inf :: 'a \ 'a \ 'a" by (fact comp_fun_idem_inf) show ?thesis by (simp add: Inf_fold_inf fold_set_fold inf_commute) qed declare Inf_set_fold [where 'a = "'a set", code] lemma (in complete_lattice) Sup_set_fold: "Sup (set xs) = fold sup xs bot" proof - interpret comp_fun_idem "sup :: 'a \ 'a \ 'a" by (fact comp_fun_idem_sup) show ?thesis by (simp add: Sup_fold_sup fold_set_fold sup_commute) qed declare Sup_set_fold [where 'a = "'a set", code] lemma (in complete_lattice) INF_set_fold: "\(f ` set xs) = fold (inf \ f) xs top" using Inf_set_fold [of "map f xs"] by (simp add: fold_map) lemma (in complete_lattice) SUP_set_fold: "\(f ` set xs) = fold (sup \ f) xs bot" using Sup_set_fold [of "map f xs"] by (simp add: fold_map) subsubsection \Fold variants: \<^const>\foldr\ and \<^const>\foldl\\ text \Correspondence\ lemma foldr_conv_fold [code_abbrev]: "foldr f xs = fold f (rev xs)" by (induct xs) simp_all lemma foldl_conv_fold: "foldl f s xs = fold (\x s. 
f s x) xs s" by (induct xs arbitrary: s) simp_all lemma foldr_conv_foldl: \ \The ``Third Duality Theorem'' in Bird \& Wadler:\ "foldr f xs a = foldl (\x y. f y x) a (rev xs)" by (simp add: foldr_conv_fold foldl_conv_fold) lemma foldl_conv_foldr: "foldl f a xs = foldr (\x y. f y x) (rev xs) a" by (simp add: foldr_conv_fold foldl_conv_fold) lemma foldr_fold: "(\x y. x \ set xs \ y \ set xs \ f y \ f x = f x \ f y) \ foldr f xs = fold f xs" unfolding foldr_conv_fold by (rule fold_rev) lemma foldr_cong [fundef_cong]: "a = b \ l = k \ (\a x. x \ set l \ f x a = g x a) \ foldr f l a = foldr g k b" by (auto simp add: foldr_conv_fold intro!: fold_cong) lemma foldl_cong [fundef_cong]: "a = b \ l = k \ (\a x. x \ set l \ f a x = g a x) \ foldl f a l = foldl g b k" by (auto simp add: foldl_conv_fold intro!: fold_cong) lemma foldr_append [simp]: "foldr f (xs @ ys) a = foldr f xs (foldr f ys a)" by (simp add: foldr_conv_fold) lemma foldl_append [simp]: "foldl f a (xs @ ys) = foldl f (foldl f a xs) ys" by (simp add: foldl_conv_fold) lemma foldr_map [code_unfold]: "foldr g (map f xs) a = foldr (g \ f) xs a" by (simp add: foldr_conv_fold fold_map rev_map) lemma foldr_filter: "foldr f (filter P xs) = foldr (\x. if P x then f x else id) xs" by (simp add: foldr_conv_fold rev_filter fold_filter) lemma foldl_map [code_unfold]: "foldl g a (map f xs) = foldl (\a x. g a (f x)) a xs" by (simp add: foldl_conv_fold fold_map comp_def) lemma concat_conv_foldr [code]: "concat xss = foldr append xss []" by (simp add: fold_append_concat_rev foldr_conv_fold) subsubsection \\<^const>\upt\\ lemma upt_rec[code]: "[i.. \simp does not terminate!\ by (induct j) auto lemmas upt_rec_numeral[simp] = upt_rec[of "numeral m" "numeral n"] for m n lemma upt_conv_Nil [simp]: "j \ i \ [i.. j \ i)" by(induct j)simp_all lemma upt_eq_Cons_conv: "([i.. i = x \ [i+1.. j \ [i..<(Suc j)] = [i.. \Only needed if \upt_Suc\ is deleted from the simpset.\ by simp lemma upt_conv_Cons: "i < j \ [i.. 
\no precondition\ "m # n # ns = [m.. n # ns = [Suc m.. [i.. \LOOPS as a simprule, since \j \ j\.\ by (induct k) auto lemma length_upt [simp]: "length [i.. [i.. hd[i.. last[i.. n \ take m [i..i. i + n) [0.. (map f [m..n. n - Suc 0) [Suc m..i. f (Suc i)) [0 ..< n]" by (induct n arbitrary: f) auto lemma nth_take_lemma: "k \ length xs \ k \ length ys \ (\i. i < k \ xs!i = ys!i) \ take k xs = take k ys" proof (induct k arbitrary: xs ys) case (Suc k) then show ?case apply (simp add: less_Suc_eq_0_disj) by (simp add: Suc.prems(3) take_Suc_conv_app_nth) qed simp lemma nth_equalityI: "\length xs = length ys; \i. i < length xs \ xs!i = ys!i\ \ xs = ys" by (frule nth_take_lemma [OF le_refl eq_imp_le]) simp_all lemma map_nth: "map (\i. xs ! i) [0.. (\x y. \P x y; Q y x\ \ x = y); list_all2 P xs ys; list_all2 Q ys xs \ \ xs = ys" by (simp add: list_all2_conv_all_nth nth_equalityI) lemma take_equalityI: "(\i. take i xs = take i ys) \ xs = ys" \ \The famous take-lemma.\ by (metis length_take min.commute order_refl take_all) lemma take_Cons': "take n (x # xs) = (if n = 0 then [] else x # take (n - 1) xs)" by (cases n) simp_all lemma drop_Cons': "drop n (x # xs) = (if n = 0 then x # xs else drop (n - 1) xs)" by (cases n) simp_all lemma nth_Cons': "(x # xs)!n = (if n = 0 then x else xs!(n - 1))" by (cases n) simp_all lemma take_Cons_numeral [simp]: "take (numeral v) (x # xs) = x # take (numeral v - 1) xs" by (simp add: take_Cons') lemma drop_Cons_numeral [simp]: "drop (numeral v) (x # xs) = drop (numeral v - 1) xs" by (simp add: drop_Cons') lemma nth_Cons_numeral [simp]: "(x # xs) ! numeral v = xs ! (numeral v - 1)" by (simp add: nth_Cons') lemma map_upt_eqI: \map f [m.. if \length xs = n - m\ \\i. i < length xs \ xs ! i = f (m + i)\ proof (rule nth_equalityI) from \length xs = n - m\ show \length (map f [m.. by simp next fix i assume \i < length (map f [m.. then have \i < n - m\ by simp with that have \xs ! i = f (m + i)\ by simp with \i < n - m\ show \map f [m.. 
by simp qed subsubsection \\upto\: interval-list on \<^typ>\int\\ function upto :: "int \ int \ int list" ("(1[_../_])") where "upto i j = (if i \ j then i # [i+1..j] else [])" by auto termination by(relation "measure(%(i::int,j). nat(j - i + 1))") auto declare upto.simps[simp del] lemmas upto_rec_numeral [simp] = upto.simps[of "numeral m" "numeral n"] upto.simps[of "numeral m" "- numeral n"] upto.simps[of "- numeral m" "numeral n"] upto.simps[of "- numeral m" "- numeral n"] for m n lemma upto_empty[simp]: "j < i \ [i..j] = []" by(simp add: upto.simps) lemma upto_single[simp]: "[i..i] = [i]" by(simp add: upto.simps) lemma upto_Nil[simp]: "[i..j] = [] \ j < i" by (simp add: upto.simps) lemma upto_Nil2[simp]: "[] = [i..j] \ j < i" by (simp add: upto.simps) lemma upto_rec1: "i \ j \ [i..j] = i#[i+1..j]" by(simp add: upto.simps) lemma upto_rec2: "i \ j \ [i..j] = [i..j - 1]@[j]" proof(induct "nat(j-i)" arbitrary: i j) case 0 thus ?case by(simp add: upto.simps) next case (Suc n) hence "n = nat (j - (i + 1))" "i < j" by linarith+ from this(2) Suc.hyps(1)[OF this(1)] Suc(2,3) upto_rec1 show ?case by simp qed lemma length_upto[simp]: "length [i..j] = nat(j - i + 1)" by(induction i j rule: upto.induct) (auto simp: upto.simps) lemma set_upto[simp]: "set[i..j] = {i..j}" proof(induct i j rule:upto.induct) case (1 i j) from this show ?case unfolding upto.simps[of i j] by auto qed lemma nth_upto[simp]: "i + int k \ j \ [i..j] ! 
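The integer interval `[i..j]` defined above is closed on both ends, and the `upto_aux` code equation replaces the naive recursion by a tail-recursive accumulator version. A small Python model of both (names mirror the Isabelle definitions; this is an illustration, not the theory itself):

```python
def upto(i, j):
    """Model of Isabelle's [i..j] on int: [i, i+1, ..., j], empty if j < i."""
    return list(range(i, j + 1))

def upto_aux(i, j, js):
    """Model of the tail-recursive code equation: upto_aux i j js = [i..j] @ js."""
    while i <= j:
        js = [j] + js
        j -= 1
    return js

assert upto(2, 5) == [2, 3, 4, 5]
assert upto(5, 2) == []                      # upto_empty
assert upto_aux(2, 5, [9]) == [2, 3, 4, 5, 9]
```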
k = i + int k" proof(induction i j arbitrary: k rule: upto.induct) case (1 i j) then show ?case by (auto simp add: upto_rec1 [of i j] nth_Cons') qed lemma upto_split1: "i \ j \ j \ k \ [i..k] = [i..j-1] @ [j..k]" proof (induction j rule: int_ge_induct) case base thus ?case by (simp add: upto_rec1) next case step thus ?case using upto_rec1 upto_rec2 by simp qed lemma upto_split2: "i \ j \ j \ k \ [i..k] = [i..j] @ [j+1..k]" using upto_rec1 upto_rec2 upto_split1 by auto lemma upto_split3: "\ i \ j; j \ k \ \ [i..k] = [i..j-1] @ j # [j+1..k]" using upto_rec1 upto_split1 by auto text\Tail recursive version for code generation:\ definition upto_aux :: "int \ int \ int list \ int list" where "upto_aux i j js = [i..j] @ js" lemma upto_aux_rec [code]: "upto_aux i j js = (if j\<^const>\successively\\ lemma successively_Cons: "successively P (x # xs) \ xs = [] \ P x (hd xs) \ successively P xs" by (cases xs) auto lemma successively_cong [cong]: assumes "\x y. x \ set xs \ y \ set xs \ P x y \ Q x y" "xs = ys" shows "successively P xs \ successively Q ys" unfolding assms(2) [symmetric] using assms(1) by (induction xs) (auto simp: successively_Cons) lemma successively_append_iff: "successively P (xs @ ys) \ successively P xs \ successively P ys \ (xs = [] \ ys = [] \ P (last xs) (hd ys))" by (induction xs) (auto simp: successively_Cons) lemma successively_if_sorted_wrt: "sorted_wrt P xs \ successively P xs" by (induction xs rule: induct_list012) auto lemma successively_iff_sorted_wrt_strong: assumes "\x y z. 
x \ set xs \ y \ set xs \ z \ set xs \ P x y \ P y z \ P x z" shows "successively P xs \ sorted_wrt P xs" proof assume "successively P xs" from this and assms show "sorted_wrt P xs" proof (induction xs rule: induct_list012) case (3 x y xs) from "3.prems" have "P x y" by auto have IH: "sorted_wrt P (y # xs)" using "3.prems" by(intro "3.IH"(2) list.set_intros(2))(simp, blast intro: list.set_intros(2)) have "P x z" if asm: "z \ set xs" for z proof - from IH and asm have "P y z" by auto with \P x y\ show "P x z" using "3.prems" asm by auto qed with IH and \P x y\ show ?case by auto qed auto qed (use successively_if_sorted_wrt in blast) lemma successively_conv_sorted_wrt: assumes "transp P" shows "successively P xs \ sorted_wrt P xs" using assms unfolding transp_def by (intro successively_iff_sorted_wrt_strong) blast lemma successively_rev [simp]: "successively P (rev xs) \ successively (\x y. P y x) xs" by (induction xs rule: remdups_adj.induct) (auto simp: successively_append_iff successively_Cons) lemma successively_map: "successively P (map f xs) \ successively (\x y. P (f x) (f y)) xs" by (induction xs rule: induct_list012) auto lemma successively_mono: assumes "successively P xs" assumes "\x y. x \ set xs \ y \ set xs \ P x y \ Q x y" shows "successively Q xs" using assms by (induction Q xs rule: successively.induct) auto lemma successively_altdef: "successively = (\P. rec_list True (\x xs b. case xs of [] \ True | y # _ \ P x y \ b))" proof (intro ext) fix P and xs :: "'a list" show "successively P xs = rec_list True (\x xs b. 
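The key fact in this subsection is that `successively P` (adjacent pairs only) and `sorted_wrt P` (all ordered pairs) coincide exactly when `P` behaves transitively on the list. An illustrative Python check, with both predicates modelled directly from their informal descriptions:

```python
def successively(P, xs):
    """P holds between every pair of adjacent elements."""
    return all(P(x, y) for x, y in zip(xs, xs[1:]))

def sorted_wrt(P, xs):
    """P holds between xs!i and xs!j for every i < j."""
    return all(P(xs[i], xs[j])
               for i in range(len(xs)) for j in range(i + 1, len(xs)))

lt = lambda x, y: x < y                 # transitive: the two notions agree
assert successively(lt, [1, 3, 7, 9]) and sorted_wrt(lt, [1, 3, 7, 9])

step = lambda x, y: y == x + 1          # not transitive: they differ
assert successively(step, [1, 2, 3]) and not sorted_wrt(step, [1, 2, 3])
```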
case xs of [] \ True | y # _ \ P x y \ b) xs" by (induction xs) (auto simp: successively_Cons split: list.splits) qed subsubsection \\<^const>\distinct\ and \<^const>\remdups\ and \<^const>\remdups_adj\\ lemma distinct_tl: "distinct xs \ distinct (tl xs)" by (cases xs) simp_all lemma distinct_append [simp]: "distinct (xs @ ys) = (distinct xs \ distinct ys \ set xs \ set ys = {})" by (induct xs) auto lemma distinct_rev[simp]: "distinct(rev xs) = distinct xs" by(induct xs) auto lemma set_remdups [simp]: "set (remdups xs) = set xs" by (induct xs) (auto simp add: insert_absorb) lemma distinct_remdups [iff]: "distinct (remdups xs)" by (induct xs) auto lemma distinct_remdups_id: "distinct xs \ remdups xs = xs" by (induct xs, auto) lemma remdups_id_iff_distinct [simp]: "remdups xs = xs \ distinct xs" by (metis distinct_remdups distinct_remdups_id) lemma finite_distinct_list: "finite A \ \xs. set xs = A \ distinct xs" by (metis distinct_remdups finite_list set_remdups) lemma remdups_eq_nil_iff [simp]: "(remdups x = []) = (x = [])" by (induct x, auto) lemma remdups_eq_nil_right_iff [simp]: "([] = remdups x) = (x = [])" by (induct x, auto) lemma length_remdups_leq[iff]: "length(remdups xs) \ length xs" by (induct xs) auto lemma length_remdups_eq[iff]: "(length (remdups xs) = length xs) = (remdups xs = xs)" proof (induct xs) case (Cons a xs) then show ?case by simp (metis Suc_n_not_le_n impossible_Cons length_remdups_leq) qed auto lemma remdups_filter: "remdups(filter P xs) = filter P (remdups xs)" by (induct xs) auto lemma distinct_map: "distinct(map f xs) = (distinct xs \ inj_on f (set xs))" by (induct xs) auto lemma distinct_map_filter: "distinct (map f xs) \ distinct (map f (filter P xs))" by (induct xs) auto lemma distinct_filter [simp]: "distinct xs \ distinct (filter P xs)" by (induct xs) auto lemma distinct_upt[simp]: "distinct[i.. 
distinct (take i xs)" proof (induct xs arbitrary: i) case (Cons a xs) then show ?case by (metis Cons.prems append_take_drop_id distinct_append) qed auto lemma distinct_drop[simp]: "distinct xs \ distinct (drop i xs)" proof (induct xs arbitrary: i) case (Cons a xs) then show ?case by (metis Cons.prems append_take_drop_id distinct_append) qed auto lemma distinct_list_update: assumes d: "distinct xs" and a: "a \ set xs - {xs!i}" shows "distinct (xs[i:=a])" proof (cases "i < length xs") case True with a have anot: "a \ set (take i xs @ xs ! i # drop (Suc i) xs) - {xs!i}" by simp (metis in_set_dropD in_set_takeD) show ?thesis proof (cases "a = xs!i") case True with d show ?thesis by auto next case False have "set (take i xs) \ set (drop (Suc i) xs) = {}" by (metis True d disjoint_insert(1) distinct_append id_take_nth_drop list.set(2)) then show ?thesis using d False anot \i < length xs\ by (simp add: upd_conv_take_nth_drop) qed next case False with d show ?thesis by auto qed lemma distinct_concat: "\ distinct xs; \ ys. ys \ set xs \ distinct ys; \ ys zs. \ ys \ set xs ; zs \ set xs ; ys \ zs \ \ set ys \ set zs = {} \ \ distinct (concat xs)" by (induct xs) auto text \An iff-version of @{thm distinct_concat} is available further down as \distinct_concat_iff\.\ text \It is best to avoid the following indexed version of distinct, but sometimes it is useful.\ lemma distinct_conv_nth: "distinct xs = (\i < size xs. \j < size xs. i \ j \ xs!i \ xs!j)" proof (induct xs) case (Cons x xs) show ?case apply (auto simp add: Cons nth_Cons split: nat.split_asm) apply (metis Suc_less_eq2 in_set_conv_nth less_not_refl zero_less_Suc)+ done qed auto lemma nth_eq_iff_index_eq: "\ distinct xs; i < length xs; j < length xs \ \ (xs!i = xs!j) = (i = j)" by(auto simp: distinct_conv_nth) lemma distinct_Ex1: "distinct xs \ x \ set xs \ (\!i. i < length xs \ xs ! i = x)" by (auto simp: in_set_conv_nth nth_eq_iff_index_eq) lemma inj_on_nth: "distinct xs \ \i \ I. 
i < length xs \ inj_on (nth xs) I" by (rule inj_onI) (simp add: nth_eq_iff_index_eq) lemma bij_betw_nth: assumes "distinct xs" "A = {.. distinct xs; n < length xs \ \ set(xs[n := x]) = insert x (set xs - {xs!n})" by(auto simp: set_eq_iff in_set_conv_nth nth_list_update nth_eq_iff_index_eq) lemma distinct_swap[simp]: "\ i < size xs; j < size xs\ \ distinct(xs[i := xs!j, j := xs!i]) = distinct xs" apply (simp add: distinct_conv_nth nth_list_update) apply (safe; metis) done lemma set_swap[simp]: "\ i < size xs; j < size xs \ \ set(xs[i := xs!j, j := xs!i]) = set xs" by(simp add: set_conv_nth nth_list_update) metis lemma distinct_card: "distinct xs \ card (set xs) = size xs" by (induct xs) auto lemma card_distinct: "card (set xs) = size xs \ distinct xs" proof (induct xs) case (Cons x xs) show ?case proof (cases "x \ set xs") case False with Cons show ?thesis by simp next case True with Cons.prems have "card (set xs) = Suc (length xs)" by (simp add: card_insert_if split: if_split_asm) moreover have "card (set xs) \ length xs" by (rule card_length) ultimately have False by simp thus ?thesis .. qed qed simp lemma distinct_length_filter: "distinct xs \ length (filter P xs) = card ({x. P x} Int set xs)" by (induct xs) (auto) lemma not_distinct_decomp: "\ distinct ws \ \xs ys zs y. ws = xs@[y]@ys@[y]@zs" proof (induct n == "length ws" arbitrary:ws) case (Suc n ws) then show ?case using length_Suc_conv [of ws n] apply (auto simp: eq_commute) apply (metis append_Nil in_set_conv_decomp_first) by (metis append_Cons) qed simp lemma not_distinct_conv_prefix: defines "dec as xs y ys \ y \ set xs \ distinct xs \ as = xs @ y # ys" shows "\distinct as \ (\xs y ys. 
dec as xs y ys)" (is "?L = ?R") proof assume "?L" then show "?R" proof (induct "length as" arbitrary: as rule: less_induct) case less obtain xs ys zs y where decomp: "as = (xs @ y # ys) @ y # zs" using not_distinct_decomp[OF less.prems] by auto show ?case proof (cases "distinct (xs @ y # ys)") case True with decomp have "dec as (xs @ y # ys) y zs" by (simp add: dec_def) then show ?thesis by blast next case False with less decomp obtain xs' y' ys' where "dec (xs @ y # ys) xs' y' ys'" by atomize_elim auto with decomp have "dec as xs' y' (ys' @ y # zs)" by (simp add: dec_def) then show ?thesis by blast qed qed qed (auto simp: dec_def) lemma distinct_product: "distinct xs \ distinct ys \ distinct (List.product xs ys)" by (induct xs) (auto intro: inj_onI simp add: distinct_map) lemma distinct_product_lists: assumes "\xs \ set xss. distinct xs" shows "distinct (product_lists xss)" using assms proof (induction xss) case (Cons xs xss) note * = this then show ?case proof (cases "product_lists xss") case Nil then show ?thesis by (induct xs) simp_all next case (Cons ps pss) with * show ?thesis by (auto intro!: inj_onI distinct_concat simp add: distinct_map) qed qed simp lemma length_remdups_concat: "length (remdups (concat xss)) = card (\xs\set xss. 
set xs)" by (simp add: distinct_card [symmetric]) lemma remdups_append2: "remdups (xs @ remdups ys) = remdups (xs @ ys)" by(induction xs) auto lemma length_remdups_card_conv: "length(remdups xs) = card(set xs)" proof - have xs: "concat[xs] = xs" by simp from length_remdups_concat[of "[xs]"] show ?thesis unfolding xs by simp qed lemma remdups_remdups: "remdups (remdups xs) = remdups xs" by (induct xs) simp_all lemma distinct_butlast: assumes "distinct xs" shows "distinct (butlast xs)" proof (cases "xs = []") case False from \xs \ []\ obtain ys y where "xs = ys @ [y]" by (cases xs rule: rev_cases) auto with \distinct xs\ show ?thesis by simp qed (auto) lemma remdups_map_remdups: "remdups (map f (remdups xs)) = remdups (map f xs)" by (induct xs) simp_all lemma distinct_zipI1: assumes "distinct xs" shows "distinct (zip xs ys)" proof (rule zip_obtain_same_length) fix xs' :: "'a list" and ys' :: "'b list" and n assume "length xs' = length ys'" assume "xs' = take n xs" with assms have "distinct xs'" by simp with \length xs' = length ys'\ show "distinct (zip xs' ys')" by (induct xs' ys' rule: list_induct2) (auto elim: in_set_zipE) qed lemma distinct_zipI2: assumes "distinct ys" shows "distinct (zip xs ys)" proof (rule zip_obtain_same_length) fix xs' :: "'b list" and ys' :: "'a list" and n assume "length xs' = length ys'" assume "ys' = take n ys" with assms have "distinct ys'" by simp with \length xs' = length ys'\ show "distinct (zip xs' ys')" by (induct xs' ys' rule: list_induct2) (auto elim: in_set_zipE) qed lemma set_take_disj_set_drop_if_distinct: "distinct vs \ i \ j \ set (take i vs) \ set (drop j vs) = {}" by (auto simp: in_set_conv_nth distinct_conv_nth) (* The next two lemmas help Sledgehammer. *) lemma distinct_singleton: "distinct [x]" by simp lemma distinct_length_2_or_more: "distinct (a # b # xs) \ (a \ b \ distinct (a # xs) \ distinct (b # xs))" by force lemma remdups_adj_altdef: "(remdups_adj xs = ys) \ (\f::nat => nat. 
mono f \ f ` {0 ..< size xs} = {0 ..< size ys} \ (\i < size xs. xs!i = ys!(f i)) \ (\i. i + 1 < size xs \ (xs!i = xs!(i+1) \ f i = f(i+1))))" (is "?L \ (\f. ?p f xs ys)") proof assume ?L then show "\f. ?p f xs ys" proof (induct xs arbitrary: ys rule: remdups_adj.induct) case (1 ys) thus ?case by (intro exI[of _ id]) (auto simp: mono_def) next case (2 x ys) thus ?case by (intro exI[of _ id]) (auto simp: mono_def) next case (3 x1 x2 xs ys) let ?xs = "x1 # x2 # xs" let ?cond = "x1 = x2" define zs where "zs = remdups_adj (x2 # xs)" from 3(1-2)[of zs] obtain f where p: "?p f (x2 # xs) zs" unfolding zs_def by (cases ?cond) auto then have f0: "f 0 = 0" by (intro mono_image_least[where f=f]) blast+ from p have mono: "mono f" and f_xs_zs: "f ` {0.. []" unfolding zs_def by (induct xs) auto let ?Succ = "if ?cond then id else Suc" let ?x1 = "if ?cond then id else Cons x1" let ?f = "\ i. if i = 0 then 0 else ?Succ (f (i - 1))" have ys: "ys = ?x1 zs" unfolding ys by (cases ?cond, auto) have mono: "mono ?f" using \mono f\ unfolding mono_def by auto show ?case unfolding ys proof (intro exI[of _ ?f] conjI allI impI) show "mono ?f" by fact next fix i assume i: "i < length ?xs" with p show "?xs ! i = ?x1 zs ! (?f i)" using zs0 by auto next fix i assume i: "i + 1 < length ?xs" with p show "(?xs ! i = ?xs ! (i + 1)) = (?f i = ?f (i + 1))" by (cases i) (auto simp: f0) next have id: "{0 ..< length (?x1 zs)} = insert 0 (?Succ ` {0 ..< length zs})" using zsne by (cases ?cond, auto) { fix i assume "i < Suc (length xs)" hence "Suc i \ {0.. Collect ((<) 0)" by auto from imageI[OF this, of "\i. ?Succ (f (i - Suc 0))"] have "?Succ (f i) \ (\i. ?Succ (f (i - Suc 0))) ` ({0.. Collect ((<) 0))" by auto } then show "?f ` {0 ..< length ?xs} = {0 ..< length (?x1 zs)}" unfolding id f_xs_zs[symmetric] by auto qed qed next assume "\ f. 
?p f xs ys" then show ?L proof (induct xs arbitrary: ys rule: remdups_adj.induct) case 1 then show ?case by auto next case (2 x) then obtain f where f_img: "f ` {0 ..< size [x]} = {0 ..< size ys}" and f_nth: "\i. i < size [x] \ [x]!i = ys!(f i)" by blast have "length ys = card (f ` {0 ..< size [x]})" using f_img by auto then have *: "length ys = 1" by auto then have "f 0 = 0" using f_img by auto with * show ?case using f_nth by (cases ys) auto next case (3 x1 x2 xs) from "3.prems" obtain f where f_mono: "mono f" and f_img: "f ` {0..i. i < length (x1 # x2 # xs) \ (x1 # x2 # xs) ! i = ys ! f i" "\i. i + 1 < length (x1 # x2 #xs) \ ((x1 # x2 # xs) ! i = (x1 # x2 # xs) ! (i + 1)) = (f i = f (i + 1))" by blast show ?case proof cases assume "x1 = x2" let ?f' = "f \ Suc" have "remdups_adj (x1 # xs) = ys" proof (intro "3.hyps" exI conjI impI allI) show "mono ?f'" using f_mono by (simp add: mono_iff_le_Suc) next have "?f' ` {0 ..< length (x1 # xs)} = f ` {Suc 0 ..< length (x1 # x2 # xs)}" using less_Suc_eq_0_disj by auto also have "\ = f ` {0 ..< length (x1 # x2 # xs)}" proof - have "f 0 = f (Suc 0)" using \x1 = x2\ f_nth[of 0] by simp then show ?thesis using less_Suc_eq_0_disj by auto qed also have "\ = {0 ..< length ys}" by fact finally show "?f' ` {0 ..< length (x1 # xs)} = {0 ..< length ys}" . qed (insert f_nth[of "Suc i" for i], auto simp: \x1 = x2\) then show ?thesis using \x1 = x2\ by simp next assume "x1 \ x2" have two: "Suc (Suc 0) \ length ys" proof - have "2 = card {f 0, f 1}" using \x1 \ x2\ f_nth[of 0] by auto also have "\ \ card (f ` {0..< length (x1 # x2 # xs)})" by (rule card_mono) auto finally show ?thesis using f_img by simp qed have "f 0 = 0" using f_mono f_img by (rule mono_image_least) simp have "f (Suc 0) = Suc 0" proof (rule ccontr) assume "f (Suc 0) \ Suc 0" then have "Suc 0 < f (Suc 0)" using f_nth[of 0] \x1 \ x2\ \f 0 = 0\ by auto then have "\i. 
Suc 0 < f (Suc i)" using f_mono by (meson Suc_le_mono le0 less_le_trans monoD) then have "Suc 0 \ f i" for i using \f 0 = 0\ by (cases i) fastforce+ then have "Suc 0 \ f ` {0 ..< length (x1 # x2 # xs)}" by auto then show False using f_img two by auto qed obtain ys' where "ys = x1 # x2 # ys'" using two f_nth[of 0] f_nth[of 1] by (auto simp: Suc_le_length_iff \f 0 = 0\ \f (Suc 0) = Suc 0\) have Suc0_le_f_Suc: "Suc 0 \ f (Suc i)" for i by (metis Suc_le_mono \f (Suc 0) = Suc 0\ f_mono le0 mono_def) define f' where "f' x = f (Suc x) - 1" for x have f_Suc: "f (Suc i) = Suc (f' i)" for i using Suc0_le_f_Suc[of i] by (auto simp: f'_def) have "remdups_adj (x2 # xs) = (x2 # ys')" proof (intro "3.hyps" exI conjI impI allI) show "mono f'" using Suc0_le_f_Suc f_mono by (auto simp: f'_def mono_iff_le_Suc le_diff_iff) next have "f' ` {0 ..< length (x2 # xs)} = (\x. f x - 1) ` {0 ..< length (x1 # x2 #xs)}" by (auto simp: f'_def \f 0 = 0\ \f (Suc 0) = Suc 0\ image_def Bex_def less_Suc_eq_0_disj) also have "\ = (\x. x - 1) ` f ` {0 ..< length (x1 # x2 #xs)}" by (auto simp: image_comp) also have "\ = (\x. x - 1) ` {0 ..< length ys}" by (simp only: f_img) also have "\ = {0 ..< length (x2 # ys')}" using \ys = _\ by (fastforce intro: rev_image_eqI) finally show "f' ` {0 ..< length (x2 # xs)} = {0 ..< length (x2 # ys')}" . qed (insert f_nth[of "Suc i" for i] \x1 \ x2\, auto simp add: f_Suc \ys = _\) then show ?case using \ys = _\ \x1 \ x2\ by simp qed qed qed lemma hd_remdups_adj[simp]: "hd (remdups_adj xs) = hd xs" by (induction xs rule: remdups_adj.induct) simp_all lemma remdups_adj_Cons: "remdups_adj (x # xs) = (case remdups_adj xs of [] \ [x] | y # xs \ if x = y then y # xs else x # y # xs)" by (induct xs arbitrary: x) (auto split: list.splits) lemma remdups_adj_append_two: "remdups_adj (xs @ [x,y]) = remdups_adj (xs @ [x]) @ (if x = y then [] else [y])" by (induct xs rule: remdups_adj.induct, simp_all) lemma remdups_adj_adjacent: "Suc i < length (remdups_adj xs) \ remdups_adj xs ! 
i \ remdups_adj xs ! Suc i" proof (induction xs arbitrary: i rule: remdups_adj.induct) case (3 x y xs i) thus ?case by (cases i, cases "x = y") (simp, auto simp: hd_conv_nth[symmetric]) qed simp_all lemma remdups_adj_rev[simp]: "remdups_adj (rev xs) = rev (remdups_adj xs)" by (induct xs rule: remdups_adj.induct, simp_all add: remdups_adj_append_two) lemma remdups_adj_length[simp]: "length (remdups_adj xs) \ length xs" by (induct xs rule: remdups_adj.induct, auto) lemma remdups_adj_length_ge1[simp]: "xs \ [] \ length (remdups_adj xs) \ Suc 0" by (induct xs rule: remdups_adj.induct, simp_all) lemma remdups_adj_Nil_iff[simp]: "remdups_adj xs = [] \ xs = []" by (induct xs rule: remdups_adj.induct, simp_all) lemma remdups_adj_set[simp]: "set (remdups_adj xs) = set xs" by (induct xs rule: remdups_adj.induct, simp_all) lemma last_remdups_adj [simp]: "last (remdups_adj xs) = last xs" by (induction xs rule: remdups_adj.induct) auto lemma remdups_adj_Cons_alt[simp]: "x # tl (remdups_adj (x # xs)) = remdups_adj (x # xs)" by (induct xs rule: remdups_adj.induct, auto) lemma remdups_adj_distinct: "distinct xs \ remdups_adj xs = xs" by (induct xs rule: remdups_adj.induct, simp_all) lemma remdups_adj_append: "remdups_adj (xs\<^sub>1 @ x # xs\<^sub>2) = remdups_adj (xs\<^sub>1 @ [x]) @ tl (remdups_adj (x # xs\<^sub>2))" by (induct xs\<^sub>1 rule: remdups_adj.induct, simp_all) lemma remdups_adj_singleton: "remdups_adj xs = [x] \ xs = replicate (length xs) x" by (induct xs rule: remdups_adj.induct, auto split: if_split_asm) lemma remdups_adj_map_injective: assumes "inj f" shows "remdups_adj (map f xs) = map f (remdups_adj xs)" by (induct xs rule: remdups_adj.induct) (auto simp add: injD[OF assms]) lemma remdups_adj_replicate: "remdups_adj (replicate n x) = (if n = 0 then [] else [x])" by (induction n) (auto simp: remdups_adj_Cons) lemma remdups_upt [simp]: "remdups [m.. 
n") case False then show ?thesis by simp next case True then obtain q where "n = m + q" by (auto simp add: le_iff_add) moreover have "remdups [m.. successively P (remdups_adj xs)" by (induction xs rule: remdups_adj.induct) (auto simp: successively_Cons) lemma successively_remdups_adj_iff: "(\x. x \ set xs \ P x x) \ successively P (remdups_adj xs) \ successively P xs" by (induction xs rule: remdups_adj.induct)(auto simp: successively_Cons) lemma remdups_adj_Cons': "remdups_adj (x # xs) = x # remdups_adj (dropWhile (\y. y = x) xs)" by (induction xs) auto lemma remdups_adj_singleton_iff: "length (remdups_adj xs) = Suc 0 \ xs \ [] \ xs = replicate (length xs) (hd xs)" proof safe assume *: "xs = replicate (length xs) (hd xs)" and [simp]: "xs \ []" show "length (remdups_adj xs) = Suc 0" by (subst *) (auto simp: remdups_adj_replicate) next assume "length (remdups_adj xs) = Suc 0" thus "xs = replicate (length xs) (hd xs)" by (induction xs rule: remdups_adj.induct) (auto split: if_splits) qed auto lemma tl_remdups_adj: "ys \ [] \ tl (remdups_adj ys) = remdups_adj (dropWhile (\x. x = hd ys) (tl ys))" by (cases ys) (simp_all add: remdups_adj_Cons') lemma remdups_adj_append_dropWhile: "remdups_adj (xs @ y # ys) = remdups_adj (xs @ [y]) @ remdups_adj (dropWhile (\x. 
x = y) ys)" by (subst remdups_adj_append) (simp add: tl_remdups_adj) lemma remdups_adj_append': assumes "xs = [] \ ys = [] \ last xs \ hd ys" shows "remdups_adj (xs @ ys) = remdups_adj xs @ remdups_adj ys" proof - have ?thesis if [simp]: "xs \ []" "ys \ []" and "last xs \ hd ys" proof - obtain x xs' where xs: "xs = xs' @ [x]" by (cases xs rule: rev_cases) auto have "remdups_adj (xs' @ x # ys) = remdups_adj (xs' @ [x]) @ remdups_adj ys" using \last xs \ hd ys\ unfolding xs by (metis (full_types) dropWhile_eq_self_iff last_snoc remdups_adj_append_dropWhile) thus ?thesis by (simp add: xs) qed thus ?thesis using assms by (cases "xs = []"; cases "ys = []") auto qed lemma remdups_adj_append'': "xs \ [] \ remdups_adj (xs @ ys) = remdups_adj xs @ remdups_adj (dropWhile (\y. y = last xs) ys)" by (induction xs rule: remdups_adj.induct) (auto simp: remdups_adj_Cons') subsection \@{const distinct_adj}\ lemma distinct_adj_Nil [simp]: "distinct_adj []" and distinct_adj_singleton [simp]: "distinct_adj [x]" and distinct_adj_Cons_Cons [simp]: "distinct_adj (x # y # xs) \ x \ y \ distinct_adj (y # xs)" by (auto simp: distinct_adj_def) lemma distinct_adj_Cons: "distinct_adj (x # xs) \ xs = [] \ x \ hd xs \ distinct_adj xs" by (cases xs) auto lemma distinct_adj_ConsD: "distinct_adj (x # xs) \ distinct_adj xs" by (cases xs) auto lemma distinct_adj_remdups_adj[simp]: "distinct_adj (remdups_adj xs)" by (induction xs rule: remdups_adj.induct) (auto simp: distinct_adj_Cons) lemma distinct_adj_altdef: "distinct_adj xs \ remdups_adj xs = xs" proof assume "remdups_adj xs = xs" with distinct_adj_remdups_adj[of xs] show "distinct_adj xs" by simp next assume "distinct_adj xs" thus "remdups_adj xs = xs" by (induction xs rule: induct_list012) auto qed lemma distinct_adj_rev [simp]: "distinct_adj (rev xs) \ distinct_adj xs" by (simp add: distinct_adj_def eq_commute) lemma distinct_adj_append_iff: "distinct_adj (xs @ ys) \ distinct_adj xs \ distinct_adj ys \ (xs = [] \ ys = [] \ last xs \ hd ys)" by 
(auto simp: distinct_adj_def successively_append_iff) lemma distinct_adj_appendD1 [dest]: "distinct_adj (xs @ ys) \ distinct_adj xs" and distinct_adj_appendD2 [dest]: "distinct_adj (xs @ ys) \ distinct_adj ys" by (auto simp: distinct_adj_append_iff) lemma distinct_adj_mapI: "distinct_adj xs \ inj_on f (set xs) \ distinct_adj (map f xs)" unfolding distinct_adj_def successively_map by (erule successively_mono) (auto simp: inj_on_def) lemma distinct_adj_mapD: "distinct_adj (map f xs) \ distinct_adj xs" unfolding distinct_adj_def successively_map by (erule successively_mono) auto lemma distinct_adj_map_iff: "inj_on f (set xs) \ distinct_adj (map f xs) \ distinct_adj xs" using distinct_adj_mapD distinct_adj_mapI by blast subsubsection \\<^const>\insert\\ lemma in_set_insert [simp]: "x \ set xs \ List.insert x xs = xs" by (simp add: List.insert_def) lemma not_in_set_insert [simp]: "x \ set xs \ List.insert x xs = x # xs" by (simp add: List.insert_def) lemma insert_Nil [simp]: "List.insert x [] = [x]" by simp lemma set_insert [simp]: "set (List.insert x xs) = insert x (set xs)" by (auto simp add: List.insert_def) lemma distinct_insert [simp]: "distinct (List.insert x xs) = distinct xs" by (simp add: List.insert_def) lemma insert_remdups: "List.insert x (remdups xs) = remdups (List.insert x xs)" by (simp add: List.insert_def) subsubsection \\<^const>\List.union\\ text\This is all one should need to know about union:\ lemma set_union[simp]: "set (List.union xs ys) = set xs \ set ys" unfolding List.union_def by(induct xs arbitrary: ys) simp_all lemma distinct_union[simp]: "distinct(List.union xs ys) = distinct ys" unfolding List.union_def by(induct xs arbitrary: ys) simp_all subsubsection \\<^const>\List.find\\ lemma find_None_iff: "List.find P xs = None \ \ (\x. 
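The characterization `distinct_adj_altdef` (a list has no equal adjacent elements iff `remdups_adj` leaves it unchanged) can be spot-checked with a small Python model of both functions:

```python
def distinct_adj(xs):
    """Model of distinct_adj: no two adjacent elements are equal."""
    return all(x != y for x, y in zip(xs, xs[1:]))

def remdups_adj(xs):
    """Model of remdups_adj: collapse runs of equal adjacent elements."""
    out = []
    for x in xs:
        if not out or out[-1] != x:
            out.append(x)
    return out

# distinct_adj_altdef: distinct_adj xs <-> remdups_adj xs = xs
for xs in ([1, 2, 1], [1, 1, 2], [], [7]):
    assert distinct_adj(xs) == (remdups_adj(xs) == xs)
```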
x \ set xs \ P x)" proof (induction xs) case Nil thus ?case by simp next case (Cons x xs) thus ?case by (fastforce split: if_splits) qed lemma find_Some_iff: "List.find P xs = Some x \ (\i x = xs!i \ (\j P (xs!j)))" proof (induction xs) case Nil thus ?case by simp next case (Cons x xs) thus ?case apply(auto simp: nth_Cons' split: if_splits) using diff_Suc_1[unfolded One_nat_def] less_Suc_eq_0_disj by fastforce qed lemma find_cong[fundef_cong]: assumes "xs = ys" and "\x. x \ set ys \ P x = Q x" shows "List.find P xs = List.find Q ys" proof (cases "List.find P xs") case None thus ?thesis by (metis find_None_iff assms) next case (Some x) hence "List.find Q ys = Some x" using assms by (auto simp add: find_Some_iff) thus ?thesis using Some by auto qed lemma find_dropWhile: "List.find P xs = (case dropWhile (Not \ P) xs of [] \ None | x # _ \ Some x)" by (induct xs) simp_all subsubsection \\<^const>\count_list\\ lemma count_notin[simp]: "x \ set xs \ count_list xs x = 0" by (induction xs) auto lemma count_le_length: "count_list xs x \ length xs" by (induction xs) auto lemma sum_count_set: "set xs \ X \ finite X \ sum (count_list xs) X = length xs" proof (induction xs arbitrary: X) case (Cons x xs) then show ?case using sum.remove [of X x "count_list xs"] by (auto simp: sum.If_cases simp flip: diff_eq) qed simp subsubsection \\<^const>\List.extract\\ lemma extract_None_iff: "List.extract P xs = None \ \ (\ x\set xs. P x)" by(auto simp: extract_def dropWhile_eq_Cons_conv split: list.splits) (metis in_set_conv_decomp) lemma extract_SomeE: "List.extract P xs = Some (ys, y, zs) \ xs = ys @ y # zs \ P y \ \ (\ y \ set ys. P y)" by(auto simp: extract_def dropWhile_eq_Cons_conv split: list.splits) lemma extract_Some_iff: "List.extract P xs = Some (ys, y, zs) \ xs = ys @ y # zs \ P y \ \ (\ y \ set ys. 
P y)" by(auto simp: extract_def dropWhile_eq_Cons_conv dest: set_takeWhileD split: list.splits) lemma extract_Nil_code[code]: "List.extract P [] = None" by(simp add: extract_def) lemma extract_Cons_code[code]: "List.extract P (x # xs) = (if P x then Some ([], x, xs) else (case List.extract P xs of None \ None | Some (ys, y, zs) \ Some (x#ys, y, zs)))" by(auto simp add: extract_def comp_def split: list.splits) (metis dropWhile_eq_Nil_conv list.distinct(1)) subsubsection \\<^const>\remove1\\ lemma remove1_append: "remove1 x (xs @ ys) = (if x \ set xs then remove1 x xs @ ys else xs @ remove1 x ys)" by (induct xs) auto lemma remove1_commute: "remove1 x (remove1 y zs) = remove1 y (remove1 x zs)" by (induct zs) auto lemma in_set_remove1[simp]: "a \ b \ a \ set(remove1 b xs) = (a \ set xs)" by (induct xs) auto lemma set_remove1_subset: "set(remove1 x xs) \ set xs" by (induct xs) auto lemma set_remove1_eq [simp]: "distinct xs \ set(remove1 x xs) = set xs - {x}" by (induct xs) auto lemma length_remove1: "length(remove1 x xs) = (if x \ set xs then length xs - 1 else length xs)" by (induct xs) (auto dest!:length_pos_if_in_set) lemma remove1_filter_not[simp]: "\ P x \ remove1 x (filter P xs) = filter P xs" by(induct xs) auto lemma filter_remove1: "filter Q (remove1 x xs) = remove1 x (filter Q xs)" by (induct xs) auto lemma notin_set_remove1[simp]: "x \ set xs \ x \ set(remove1 y xs)" by(insert set_remove1_subset) fast lemma distinct_remove1[simp]: "distinct xs \ distinct(remove1 x xs)" by (induct xs) simp_all lemma remove1_remdups: "distinct xs \ remove1 x (remdups xs) = remdups (remove1 x xs)" by (induct xs) simp_all lemma remove1_idem: "x \ set xs \ remove1 x xs = xs" by (induct xs) simp_all lemma remove1_split: "a \ set xs \ remove1 a xs = ys \ (\ls rs. xs = ls @ a # rs \ a \ set ls \ ys = ls @ rs)" by (metis remove1.simps(2) remove1_append split_list_first) subsubsection \\<^const>\removeAll\\ lemma removeAll_filter_not_eq: "removeAll x = filter (\y. 
x \ y)" proof fix xs show "removeAll x xs = filter (\y. x \ y) xs" by (induct xs) auto qed lemma removeAll_append[simp]: "removeAll x (xs @ ys) = removeAll x xs @ removeAll x ys" by (induct xs) auto lemma set_removeAll[simp]: "set(removeAll x xs) = set xs - {x}" by (induct xs) auto lemma removeAll_id[simp]: "x \ set xs \ removeAll x xs = xs" by (induct xs) auto (* Needs count:: 'a \ 'a list \ nat lemma length_removeAll: "length(removeAll x xs) = length xs - count x xs" *) lemma removeAll_filter_not[simp]: "\ P x \ removeAll x (filter P xs) = filter P xs" by(induct xs) auto lemma distinct_removeAll: "distinct xs \ distinct (removeAll x xs)" by (simp add: removeAll_filter_not_eq) lemma distinct_remove1_removeAll: "distinct xs \ remove1 x xs = removeAll x xs" by (induct xs) simp_all lemma map_removeAll_inj_on: "inj_on f (insert x (set xs)) \ map f (removeAll x xs) = removeAll (f x) (map f xs)" by (induct xs) (simp_all add:inj_on_def) lemma map_removeAll_inj: "inj f \ map f (removeAll x xs) = removeAll (f x) (map f xs)" by (rule map_removeAll_inj_on, erule subset_inj_on, rule subset_UNIV) lemma length_removeAll_less_eq [simp]: "length (removeAll x xs) \ length xs" by (simp add: removeAll_filter_not_eq) lemma length_removeAll_less [termination_simp]: "x \ set xs \ length (removeAll x xs) < length xs" by (auto dest: length_filter_less simp add: removeAll_filter_not_eq) lemma distinct_concat_iff: "distinct (concat xs) \ distinct (removeAll [] xs) \ (\ys. ys \ set xs \ distinct ys) \ (\ys zs. ys \ set xs \ zs \ set xs \ ys \ zs \ set ys \ set zs = {})" apply (induct xs) apply(simp_all, safe, auto) by (metis Int_iff UN_I empty_iff equals0I set_empty) subsubsection \\<^const>\replicate\\ lemma length_replicate [simp]: "length (replicate n x) = n" by (induct n) auto lemma replicate_eqI: assumes "length xs = n" and "\y. 
y \ set xs \ y = x" shows "xs = replicate n x" using assms proof (induct xs arbitrary: n) case Nil then show ?case by simp next case (Cons x xs) then show ?case by (cases n) simp_all qed lemma Ex_list_of_length: "\xs. length xs = n" by (rule exI[of _ "replicate n undefined"]) simp lemma map_replicate [simp]: "map f (replicate n x) = replicate n (f x)" by (induct n) auto lemma map_replicate_const: "map (\ x. k) lst = replicate (length lst) k" by (induct lst) auto lemma replicate_app_Cons_same: "(replicate n x) @ (x # xs) = x # replicate n x @ xs" by (induct n) auto lemma rev_replicate [simp]: "rev (replicate n x) = replicate n x" by (induct n) (auto simp: replicate_app_Cons_same) lemma replicate_add: "replicate (n + m) x = replicate n x @ replicate m x" by (induct n) auto text\Courtesy of Matthias Daum:\ lemma append_replicate_commute: "replicate n x @ replicate k x = replicate k x @ replicate n x" by (metis add.commute replicate_add) text\Courtesy of Andreas Lochbihler:\ lemma filter_replicate: "filter P (replicate n x) = (if P x then replicate n x else [])" by(induct n) auto lemma hd_replicate [simp]: "n \ 0 \ hd (replicate n x) = x" by (induct n) auto lemma tl_replicate [simp]: "tl (replicate n x) = replicate (n - 1) x" by (induct n) auto lemma last_replicate [simp]: "n \ 0 \ last (replicate n x) = x" by (atomize (full), induct n) auto lemma nth_replicate[simp]: "i < n \ (replicate n x)!i = x" by (induct n arbitrary: i)(auto simp: nth_Cons split: nat.split) text\Courtesy of Matthias Daum (2 lemmas):\ lemma take_replicate[simp]: "take i (replicate k x) = replicate (min i k) x" proof (cases "k \ i") case True then show ?thesis by (simp add: min_def) next case False then have "replicate k x = replicate i x @ replicate (k - i) x" by (simp add: replicate_add [symmetric]) then show ?thesis by (simp add: min_def) qed lemma drop_replicate[simp]: "drop i (replicate k x) = replicate (k-i) x" proof (induct k arbitrary: i) case (Suc k) then show ?case by (simp add: 
drop_Cons') qed simp lemma set_replicate_Suc: "set (replicate (Suc n) x) = {x}" by (induct n) auto lemma set_replicate [simp]: "n \ 0 \ set (replicate n x) = {x}" by (fast dest!: not0_implies_Suc intro!: set_replicate_Suc) lemma set_replicate_conv_if: "set (replicate n x) = (if n = 0 then {} else {x})" by auto lemma in_set_replicate[simp]: "(x \ set (replicate n y)) = (x = y \ n \ 0)" by (simp add: set_replicate_conv_if) lemma Ball_set_replicate[simp]: "(\x \ set(replicate n a). P x) = (P a \ n=0)" by(simp add: set_replicate_conv_if) lemma Bex_set_replicate[simp]: "(\x \ set(replicate n a). P x) = (P a \ n\0)" by(simp add: set_replicate_conv_if) lemma replicate_append_same: "replicate i x @ [x] = x # replicate i x" by (induct i) simp_all lemma map_replicate_trivial: "map (\i. x) [0..<i] = replicate i x" by (induct i) (simp_all add: replicate_append_same) lemma replicate_empty[simp]: "(replicate n x = []) \ n=0" by (induct n) auto lemma empty_replicate[simp]: "([] = replicate n x) \ n=0" by (induct n) auto lemma replicate_eq_replicate[simp]: "(replicate m x = replicate n y) \ (m=n \ (m\0 \ x=y))" proof (induct m arbitrary: n) case (Suc m n) then show ?case by (induct n) auto qed simp lemma takeWhile_replicate[simp]: "takeWhile P (replicate n x) = (if P x then replicate n x else [])" using takeWhile_eq_Nil_iff by fastforce lemma dropWhile_replicate[simp]: "dropWhile P (replicate n x) = (if P x then [] else replicate n x)" using dropWhile_eq_self_iff by fastforce lemma replicate_length_filter: "replicate (length (filter (\y. x = y) xs)) x = filter (\y. x = y) xs" by (induct xs) auto lemma comm_append_are_replicate: "xs @ ys = ys @ xs \ \m n zs.
concat (replicate m zs) = xs \ concat (replicate n zs) = ys" proof (induction "length (xs @ ys) + length xs" arbitrary: xs ys rule: less_induct) case less consider (1) "length ys < length xs" | (2) "xs = []" | (3) "length xs \ length ys \ xs \ []" by linarith then show ?case proof (cases) case 1 then show ?thesis using less.hyps[OF _ less.prems[symmetric]] nat_add_left_cancel_less by auto next case 2 then have "concat (replicate 0 ys) = xs \ concat (replicate 1 ys) = ys" by simp then show ?thesis by blast next case 3 then have "length xs \ length ys" and "xs \ []" by blast+ from \length xs \ length ys\ and \xs @ ys = ys @ xs\ obtain ws where "ys = xs @ ws" by (auto simp: append_eq_append_conv2) from this and \xs \ []\ have "length ws < length ys" by simp from \xs @ ys = ys @ xs\[unfolded \ys = xs @ ws\] have "xs @ ws = ws @ xs" by simp from less.hyps[OF _ this] \length ws < length ys\ obtain m n' zs where "concat (replicate m zs) = xs" and "concat (replicate n' zs) = ws" by auto then have "concat (replicate (m+n') zs) = ys" using \ys = xs @ ws\ by (simp add: replicate_add) then show ?thesis using \concat (replicate m zs) = xs\ by blast qed qed lemma comm_append_is_replicate: fixes xs ys :: "'a list" assumes "xs \ []" "ys \ []" assumes "xs @ ys = ys @ xs" shows "\n zs. n > 1 \ concat (replicate n zs) = xs @ ys" proof - obtain m n zs where "concat (replicate m zs) = xs" and "concat (replicate n zs) = ys" using comm_append_are_replicate[OF assms(3)] by blast then have "m + n > 1" and "concat (replicate (m+n) zs) = xs @ ys" using \xs \ []\ and \ys \ []\ by (auto simp: replicate_add) then show ?thesis by blast qed lemma Cons_replicate_eq: "x # xs = replicate n y \ x = y \ n > 0 \ xs = replicate (n - 1) x" by (induct n) auto lemma replicate_length_same: "(\y\set xs. 
y = x) \ replicate (length xs) x = xs" by (induct xs) simp_all lemma foldr_replicate [simp]: "foldr f (replicate n x) = f x ^^ n" by (induct n) (simp_all) lemma fold_replicate [simp]: "fold f (replicate n x) = f x ^^ n" by (subst foldr_fold [symmetric]) simp_all subsubsection \\<^const>\enumerate\\ lemma enumerate_simps [simp, code]: "enumerate n [] = []" "enumerate n (x # xs) = (n, x) # enumerate (Suc n) xs" by (simp_all add: enumerate_eq_zip upt_rec) lemma length_enumerate [simp]: "length (enumerate n xs) = length xs" by (simp add: enumerate_eq_zip) lemma map_fst_enumerate [simp]: "map fst (enumerate n xs) = [n..<n + length xs]" by (simp add: enumerate_eq_zip) lemma map_snd_enumerate [simp]: "map snd (enumerate n xs) = xs" by (simp add: enumerate_eq_zip) lemma in_set_enumerate_eq: "p \ set (enumerate n xs) \ n \ fst p \ fst p < length xs + n \ nth xs (fst p - n) = snd p" proof - { fix m assume "n \ m" moreover assume "m < length xs + n" ultimately have "[n..<n + length xs] ! (m - n) = m \ xs ! (m - n) = xs ! (m - n) \ m - n < length xs" by auto then have "\q. [n..<n + length xs] ! q = m \ xs ! q = xs ! (m - n) \ q < length xs" .. } then show ?thesis by (cases p) (auto simp add: enumerate_eq_zip in_set_zip) qed lemma nth_enumerate_eq: "m < length xs \ enumerate n xs ! m = (n + m, xs ! m)" by (simp add: enumerate_eq_zip) lemma enumerate_replicate_eq: "enumerate n (replicate m a) = map (\q. (q, a)) [n..<n + m]" by (rule pair_list_eqI) (simp_all add: enumerate_eq_zip comp_def) lemma enumerate_map_upt: "enumerate n (map f [n..<m]) = map (\k. (k, f k)) [n..<m]" by (cases "n \
m") (simp_all add: zip_map2 zip_same_conv_map enumerate_eq_zip) subsubsection \\<^const>\rotate1\ and \<^const>\rotate\\ lemma rotate0[simp]: "rotate 0 = id" by(simp add:rotate_def) lemma rotate_Suc[simp]: "rotate (Suc n) xs = rotate1(rotate n xs)" by(simp add:rotate_def) lemma rotate_add: "rotate (m+n) = rotate m \ rotate n" by(simp add:rotate_def funpow_add) lemma rotate_rotate: "rotate m (rotate n xs) = rotate (m+n) xs" by(simp add:rotate_add) lemma rotate1_map: "rotate1 (map f xs) = map f (rotate1 xs)" by(cases xs) simp_all lemma rotate1_rotate_swap: "rotate1 (rotate n xs) = rotate n (rotate1 xs)" by(simp add:rotate_def funpow_swap1) lemma rotate1_length01[simp]: "length xs \ 1 \ rotate1 xs = xs" by(cases xs) simp_all lemma rotate_length01[simp]: "length xs \ 1 \ rotate n xs = xs" by (induct n) (simp_all add:rotate_def) lemma rotate1_hd_tl: "xs \ [] \ rotate1 xs = tl xs @ [hd xs]" by (cases xs) simp_all lemma rotate_drop_take: "rotate n xs = drop (n mod length xs) xs @ take (n mod length xs) xs" proof (induct n) case (Suc n) show ?case proof (cases "xs = []") case False then show ?thesis proof (cases "n mod length xs = 0") case True then show ?thesis by (auto simp add: mod_Suc False Suc.hyps drop_Suc rotate1_hd_tl take_Suc Suc_length_conv) next case False with \xs \ []\ Suc show ?thesis by (simp add: rotate_def mod_Suc rotate1_hd_tl drop_Suc[symmetric] drop_tl[symmetric] take_hd_drop linorder_not_le) qed qed simp qed simp lemma rotate_conv_mod: "rotate n xs = rotate (n mod length xs) xs" by(simp add:rotate_drop_take) lemma rotate_id[simp]: "n mod length xs = 0 \ rotate n xs = xs" by(simp add:rotate_drop_take) lemma length_rotate1[simp]: "length(rotate1 xs) = length xs" by (cases xs) simp_all lemma length_rotate[simp]: "length(rotate n xs) = length xs" by (induct n arbitrary: xs) (simp_all add:rotate_def) lemma distinct1_rotate[simp]: "distinct(rotate1 xs) = distinct xs" by (cases xs) auto lemma distinct_rotate[simp]: "distinct(rotate n xs) = distinct xs" by 
(induct n) (simp_all add:rotate_def) lemma rotate_map: "rotate n (map f xs) = map f (rotate n xs)" by(simp add:rotate_drop_take take_map drop_map) lemma set_rotate1[simp]: "set(rotate1 xs) = set xs" by (cases xs) auto lemma set_rotate[simp]: "set(rotate n xs) = set xs" by (induct n) (simp_all add:rotate_def) lemma rotate1_is_Nil_conv[simp]: "(rotate1 xs = []) = (xs = [])" by (cases xs) auto lemma rotate_is_Nil_conv[simp]: "(rotate n xs = []) = (xs = [])" by (induct n) (simp_all add:rotate_def) lemma rotate_rev: "rotate n (rev xs) = rev(rotate (length xs - (n mod length xs)) xs)" proof (cases "length xs = 0 \ n mod length xs = 0") case False then show ?thesis by(simp add:rotate_drop_take rev_drop rev_take) qed force lemma hd_rotate_conv_nth: assumes "xs \ []" shows "hd(rotate n xs) = xs!(n mod length xs)" proof - have "n mod length xs < length xs" using assms by simp then show ?thesis by (metis drop_eq_Nil hd_append2 hd_drop_conv_nth leD rotate_drop_take) qed lemma rotate_append: "rotate (length l) (l @ q) = q @ l" by (induct l arbitrary: q) (auto simp add: rotate1_rotate_swap) lemma nth_rotate: \rotate m xs ! n = xs ! ((m + n) mod length xs)\ if \n < length xs\ using that apply (auto simp add: rotate_drop_take nth_append not_less less_diff_conv ac_simps dest!: le_Suc_ex) apply (metis add.commute mod_add_right_eq mod_less) apply (metis (no_types, lifting) Nat.diff_diff_right add.commute add_diff_cancel_right' diff_le_self dual_order.strict_trans2 length_greater_0_conv less_nat_zero_code list.size(3) mod_add_right_eq mod_add_self2 mod_le_divisor mod_less) done lemma nth_rotate1: \rotate1 xs ! n = xs ! (Suc n mod length xs)\ if \n < length xs\ using that nth_rotate [of n xs 1] by simp subsubsection \\<^const>\nths\ --- a generalization of \<^const>\nth\ to sets\ lemma nths_empty [simp]: "nths xs {} = []" by (auto simp add: nths_def) lemma nths_nil [simp]: "nths [] A = []" by (auto simp add: nths_def) lemma nths_all: "\i < length xs. 
i \ I \ nths xs I = xs" apply (simp add: nths_def) apply (subst filter_True) apply (auto simp: in_set_zip subset_iff) done lemma length_nths: "length (nths xs I) = card{i. i < length xs \ i \ I}" by(simp add: nths_def length_filter_conv_card cong:conj_cong) lemma nths_shift_lemma_Suc: "map fst (filter (\p. P(Suc(snd p))) (zip xs is)) = map fst (filter (\p. P(snd p)) (zip xs (map Suc is)))" proof (induct xs arbitrary: "is") case (Cons x xs "is") show ?case by (cases "is") (auto simp add: Cons.hyps) qed simp lemma nths_shift_lemma: "map fst (filter (\p. snd p \ A) (zip xs [i..<i + length xs])) = map fst (filter (\p. snd p + i \ A) (zip xs [0..<length xs]))" by (induct xs rule: rev_induct) (simp_all add: add.commute) lemma nths_append: "nths (l @ l') A = nths l A @ nths l' {j. j + length l \ A}" unfolding nths_def proof (induct l' rule: rev_induct) case (snoc x xs) then show ?case by (simp add: upt_add_eq_append[of 0] nths_shift_lemma add.commute) qed auto lemma nths_Cons: "nths (x # l) A = (if 0 \ A then [x] else []) @ nths l {j. Suc j \ A}" proof (induct l rule: rev_induct) case (snoc x xs) then show ?case by (simp flip: append_Cons add: nths_append) qed (auto simp: nths_def) lemma nths_map: "nths (map f xs) I = map f (nths xs I)" by(induction xs arbitrary: I) (simp_all add: nths_Cons) lemma set_nths: "set(nths xs I) = {xs!i|i. i < length xs \ i \ I}" by (induct xs arbitrary: I) (auto simp: nths_Cons nth_Cons split:nat.split dest!: gr0_implies_Suc) lemma set_nths_subset: "set(nths xs I) \ set xs" by(auto simp add:set_nths) lemma notin_set_nthsI[simp]: "x \ set xs \ x \ set(nths xs I)" by(auto simp add:set_nths) lemma in_set_nthsD: "x \ set(nths xs I) \ x \ set xs" by(auto simp add:set_nths) lemma nths_singleton [simp]: "nths [x] A = (if 0 \ A then [x] else [])" by (simp add: nths_Cons) lemma distinct_nthsI[simp]: "distinct xs \ distinct (nths xs I)" by (induct xs arbitrary: I) (auto simp: nths_Cons) lemma nths_upt_eq_take [simp]: "nths l {..<n} = take n l" by (induct l rule: rev_induct) (simp_all split: nat_diff_split add: nths_append) lemma nths_nths: "nths (nths xs A) B = nths xs {i \ A. \j \ B. card {i' \ A. i' < i} = j}" by (induction xs arbitrary: A B) (auto simp add: nths_Cons card_less_Suc card_less_Suc2) lemma drop_eq_nths: "drop n xs = nths xs {i.
i \ n}" by (induction xs arbitrary: n) (auto simp add: nths_Cons nths_all drop_Cons' intro: arg_cong2[where f=nths, OF refl]) lemma nths_drop: "nths (drop n xs) I = nths xs ((+) n ` I)" by(force simp: drop_eq_nths nths_nths simp flip: atLeastLessThan_iff intro: arg_cong2[where f=nths, OF refl]) lemma filter_eq_nths: "filter P xs = nths xs {i. i < length xs \ P(xs!i)}" by(induction xs) (auto simp: nths_Cons) lemma filter_in_nths: "distinct xs \ filter (%x. x \ set (nths xs s)) xs = nths xs s" proof (induct xs arbitrary: s) case Nil thus ?case by simp next case (Cons a xs) then have "\x. x \ set xs \ x \ a" by auto with Cons show ?case by(simp add: nths_Cons cong:filter_cong) qed subsubsection \\<^const>\subseqs\ and \<^const>\List.n_lists\\ lemma length_subseqs: "length (subseqs xs) = 2 ^ length xs" by (induct xs) (simp_all add: Let_def) lemma subseqs_powset: "set ` set (subseqs xs) = Pow (set xs)" proof - have aux: "\x A. set ` Cons x ` A = insert x ` set ` A" by (auto simp add: image_def) have "set (map set (subseqs xs)) = Pow (set xs)" by (induct xs) (simp_all add: aux Let_def Pow_insert Un_commute comp_def del: map_map) then show ?thesis by simp qed lemma distinct_set_subseqs: assumes "distinct xs" shows "distinct (map set (subseqs xs))" proof (rule card_distinct) have "finite (set xs)" .. then have "card (Pow (set xs)) = 2 ^ card (set xs)" by (rule card_Pow) with assms distinct_card [of xs] have "card (Pow (set xs)) = 2 ^ length xs" by simp then show "card (set (map set (subseqs xs))) = length (map set (subseqs xs))" by (simp add: subseqs_powset length_subseqs) qed lemma n_lists_Nil [simp]: "List.n_lists n [] = (if n = 0 then [[]] else [])" by (induct n) simp_all lemma length_n_lists_elem: "ys \ set (List.n_lists n xs) \ length ys = n" by (induct n arbitrary: ys) auto lemma set_n_lists: "set (List.n_lists n xs) = {ys. length ys = n \ set ys \ set xs}" proof (rule set_eqI) fix ys :: "'a list" show "ys \ set (List.n_lists n xs) \ ys \ {ys.
length ys = n \ set ys \ set xs}" proof - have "ys \ set (List.n_lists n xs) \ length ys = n" by (induct n arbitrary: ys) auto moreover have "\x. ys \ set (List.n_lists n xs) \ x \ set ys \ x \ set xs" by (induct n arbitrary: ys) auto moreover have "set ys \ set xs \ ys \ set (List.n_lists (length ys) xs)" by (induct ys) auto ultimately show ?thesis by auto qed qed lemma subseqs_refl: "xs \ set (subseqs xs)" by (induct xs) (simp_all add: Let_def) lemma subset_subseqs: "X \ set xs \ X \ set ` set (subseqs xs)" unfolding subseqs_powset by simp lemma Cons_in_subseqsD: "y # ys \ set (subseqs xs) \ ys \ set (subseqs xs)" by (induct xs) (auto simp: Let_def) lemma subseqs_distinctD: "\ ys \ set (subseqs xs); distinct xs \ \ distinct ys" proof (induct xs arbitrary: ys) case (Cons x xs ys) then show ?case by (auto simp: Let_def) (metis Pow_iff contra_subsetD image_eqI subseqs_powset) qed simp subsubsection \\<^const>\splice\\ lemma splice_Nil2 [simp]: "splice xs [] = xs" by (cases xs) simp_all lemma length_splice[simp]: "length(splice xs ys) = length xs + length ys" by (induct xs ys rule: splice.induct) auto lemma split_Nil_iff[simp]: "splice xs ys = [] \ xs = [] \ ys = []" by (induct xs ys rule: splice.induct) auto lemma splice_replicate[simp]: "splice (replicate m x) (replicate n x) = replicate (m+n) x" proof (induction "replicate m x" "replicate n x" arbitrary: m n rule: splice.induct) case (2 x xs) then show ?case by (auto simp add: Cons_replicate_eq dest: gr0_implies_Suc) qed auto subsubsection \\<^const>\shuffles\\ lemma shuffles_commutes: "shuffles xs ys = shuffles ys xs" by (induction xs ys rule: shuffles.induct) (simp_all add: Un_commute) lemma Nil_in_shuffles[simp]: "[] \ shuffles xs ys \ xs = [] \ ys = []" by (induct xs ys rule: shuffles.induct) auto lemma shufflesE: "zs \ shuffles xs ys \ (zs = xs \ ys = [] \ P) \ (zs = ys \ xs = [] \ P) \ (\x xs' z zs'. xs = x # xs' \ zs = z # zs' \ x = z \ zs' \ shuffles xs' ys \ P) \ (\y ys' z zs'. 
ys = y # ys' \ zs = z # zs' \ y = z \ zs' \ shuffles xs ys' \ P) \ P" by (induct xs ys rule: shuffles.induct) auto lemma Cons_in_shuffles_iff: "z # zs \ shuffles xs ys \ (xs \ [] \ hd xs = z \ zs \ shuffles (tl xs) ys \ ys \ [] \ hd ys = z \ zs \ shuffles xs (tl ys))" by (induct xs ys rule: shuffles.induct) auto lemma splice_in_shuffles [simp, intro]: "splice xs ys \ shuffles xs ys" by (induction xs ys rule: splice.induct) (simp_all add: Cons_in_shuffles_iff shuffles_commutes) lemma Nil_in_shufflesI: "xs = [] \ ys = [] \ [] \ shuffles xs ys" by simp lemma Cons_in_shuffles_leftI: "zs \ shuffles xs ys \ z # zs \ shuffles (z # xs) ys" by (cases ys) auto lemma Cons_in_shuffles_rightI: "zs \ shuffles xs ys \ z # zs \ shuffles xs (z # ys)" by (cases xs) auto lemma finite_shuffles [simp, intro]: "finite (shuffles xs ys)" by (induction xs ys rule: shuffles.induct) simp_all lemma length_shuffles: "zs \ shuffles xs ys \ length zs = length xs + length ys" by (induction xs ys arbitrary: zs rule: shuffles.induct) auto lemma set_shuffles: "zs \ shuffles xs ys \ set zs = set xs \ set ys" by (induction xs ys arbitrary: zs rule: shuffles.induct) auto lemma distinct_disjoint_shuffles: assumes "distinct xs" "distinct ys" "set xs \ set ys = {}" "zs \ shuffles xs ys" shows "distinct zs" using assms proof (induction xs ys arbitrary: zs rule: shuffles.induct) case (3 x xs y ys) show ?case proof (cases zs) case (Cons z zs') with "3.prems" and "3.IH"[of zs'] show ?thesis by (force dest: set_shuffles) qed simp_all qed simp_all lemma Cons_shuffles_subset1: "(#) x ` shuffles xs ys \ shuffles (x # xs) ys" by (cases ys) auto lemma Cons_shuffles_subset2: "(#) y ` shuffles xs ys \ shuffles xs (y # ys)" by (cases xs) auto lemma filter_shuffles: "filter P ` shuffles xs ys = shuffles (filter P xs) (filter P ys)" proof - have *: "filter P ` (#) x ` A = (if P x then (#) x ` filter P ` A else filter P ` A)" for x A by (auto simp: image_image) show ?thesis by (induction xs ys rule: shuffles.induct) 
(simp_all split: if_splits add: image_Un * Un_absorb1 Un_absorb2 Cons_shuffles_subset1 Cons_shuffles_subset2) qed lemma filter_shuffles_disjoint1: assumes "set xs \ set ys = {}" "zs \ shuffles xs ys" shows "filter (\x. x \ set xs) zs = xs" (is "filter ?P _ = _") and "filter (\x. x \ set xs) zs = ys" (is "filter ?Q _ = _") using assms proof - from assms have "filter ?P zs \ filter ?P ` shuffles xs ys" by blast also have "filter ?P ` shuffles xs ys = shuffles (filter ?P xs) (filter ?P ys)" by (rule filter_shuffles) also have "filter ?P xs = xs" by (rule filter_True) simp_all also have "filter ?P ys = []" by (rule filter_False) (insert assms(1), auto) also have "shuffles xs [] = {xs}" by simp finally show "filter ?P zs = xs" by simp next from assms have "filter ?Q zs \ filter ?Q ` shuffles xs ys" by blast also have "filter ?Q ` shuffles xs ys = shuffles (filter ?Q xs) (filter ?Q ys)" by (rule filter_shuffles) also have "filter ?Q ys = ys" by (rule filter_True) (insert assms(1), auto) also have "filter ?Q xs = []" by (rule filter_False) (insert assms(1), auto) also have "shuffles [] ys = {ys}" by simp finally show "filter ?Q zs = ys" by simp qed lemma filter_shuffles_disjoint2: assumes "set xs \ set ys = {}" "zs \ shuffles xs ys" shows "filter (\x. x \ set ys) zs = ys" "filter (\x. x \ set ys) zs = xs" using filter_shuffles_disjoint1[of ys xs zs] assms by (simp_all add: shuffles_commutes Int_commute) lemma partition_in_shuffles: "xs \ shuffles (filter P xs) (filter (\x. \P x) xs)" proof (induction xs) case (Cons x xs) show ?case proof (cases "P x") case True hence "x # xs \ (#) x ` shuffles (filter P xs) (filter (\x. \P x) xs)" by (intro imageI Cons.IH) also have "\ \ shuffles (filter P (x # xs)) (filter (\x. \P x) (x # xs))" by (simp add: True Cons_shuffles_subset1) finally show ?thesis . next case False hence "x # xs \ (#) x ` shuffles (filter P xs) (filter (\x. \P x) xs)" by (intro imageI Cons.IH) also have "\ \ shuffles (filter P (x # xs)) (filter (\x. 
\P x) (x # xs))" by (simp add: False Cons_shuffles_subset2) finally show ?thesis . qed qed auto lemma inv_image_partition: assumes "\x. x \ set xs \ P x" "\y. y \ set ys \ \P y" shows "partition P -` {(xs, ys)} = shuffles xs ys" proof (intro equalityI subsetI) fix zs assume zs: "zs \ shuffles xs ys" hence [simp]: "set zs = set xs \ set ys" by (rule set_shuffles) from assms have "filter P zs = filter (\x. x \ set xs) zs" "filter (\x. \P x) zs = filter (\x. x \ set ys) zs" by (intro filter_cong refl; force)+ moreover from assms have "set xs \ set ys = {}" by auto ultimately show "zs \ partition P -` {(xs, ys)}" using zs by (simp add: o_def filter_shuffles_disjoint1 filter_shuffles_disjoint2) next fix zs assume "zs \ partition P -` {(xs, ys)}" thus "zs \ shuffles xs ys" using partition_in_shuffles[of zs] by (auto simp: o_def) qed subsubsection \Transpose\ function transpose where "transpose [] = []" | "transpose ([] # xss) = transpose xss" | "transpose ((x#xs) # xss) = (x # [h. (h#t) \ xss]) # transpose (xs # [t. (h#t) \ xss])" by pat_completeness auto lemma transpose_aux_filter_head: "concat (map (case_list [] (\h t. [h])) xss) = map (\xs. hd xs) (filter (\ys. ys \ []) xss)" by (induct xss) (auto split: list.split) lemma transpose_aux_filter_tail: "concat (map (case_list [] (\h t. [t])) xss) = map (\xs. tl xs) (filter (\ys. ys \ []) xss)" by (induct xss) (auto split: list.split) lemma transpose_aux_max: "max (Suc (length xs)) (foldr (\xs. max (length xs)) xss 0) = Suc (max (length xs) (foldr (\x. max (length x - Suc 0)) (filter (\ys. ys \ []) xss) 0))" (is "max _ ?foldB = Suc (max _ ?foldA)") proof (cases "(filter (\ys. ys \ []) xss) = []") case True hence "foldr (\xs. max (length xs)) xss 0 = 0" proof (induct xss) case (Cons x xs) then have "x = []" by (cases x) auto with Cons show ?case by auto qed simp thus ?thesis using True by simp next case False have foldA: "?foldA = foldr (\x. max (length x)) (filter (\ys. 
ys \ []) xss) 0 - 1" by (induct xss) auto have foldB: "?foldB = foldr (\x. max (length x)) (filter (\ys. ys \ []) xss) 0" by (induct xss) auto have "0 < ?foldB" proof - from False obtain z zs where zs: "(filter (\ys. ys \ []) xss) = z#zs" by (auto simp: neq_Nil_conv) hence "z \ set (filter (\ys. ys \ []) xss)" by auto hence "z \ []" by auto thus ?thesis unfolding foldB zs by (auto simp: max_def intro: less_le_trans) qed thus ?thesis unfolding foldA foldB max_Suc_Suc[symmetric] by simp qed termination transpose by (relation "measure (\xs. foldr (\xs. max (length xs)) xs 0 + length xs)") (auto simp: transpose_aux_filter_tail foldr_map comp_def transpose_aux_max less_Suc_eq_le) lemma transpose_empty: "(transpose xs = []) \ (\x \ set xs. x = [])" by (induct rule: transpose.induct) simp_all lemma length_transpose: fixes xs :: "'a list list" shows "length (transpose xs) = foldr (\xs. max (length xs)) xs 0" by (induct rule: transpose.induct) (auto simp: transpose_aux_filter_tail foldr_map comp_def transpose_aux_max max_Suc_Suc[symmetric] simp del: max_Suc_Suc) lemma nth_transpose: fixes xs :: "'a list list" assumes "i < length (transpose xs)" shows "transpose xs ! i = map (\xs. xs ! i) (filter (\ys. i < length ys) xs)" using assms proof (induct arbitrary: i rule: transpose.induct) case (3 x xs xss) define XS where "XS = (x # xs) # xss" hence [simp]: "XS \ []" by auto thus ?case proof (cases i) case 0 thus ?thesis by (simp add: transpose_aux_filter_head hd_conv_nth) next case (Suc j) have *: "\xss. xs # map tl xss = map tl ((x#xs)#xss)" by simp have **: "\xss. (x#xs) # filter (\ys. ys \ []) xss = filter (\ys. ys \ []) ((x#xs)#xss)" by simp { fix x have "Suc j < length x \ x \ [] \ j < length x - Suc 0" by (cases x) simp_all } note *** = this have j_less: "j < length (transpose (xs # concat (map (case_list [] (\h t. 
[t])) xss)))" using "3.prems" by (simp add: transpose_aux_filter_tail length_transpose Suc) show ?thesis unfolding transpose.simps \i = Suc j\ nth_Cons_Suc "3.hyps"[OF j_less] apply (auto simp: transpose_aux_filter_tail filter_map comp_def length_transpose * ** *** XS_def[symmetric]) by (simp add: nth_tl) qed qed simp_all lemma transpose_map_map: "transpose (map (map f) xs) = map (map f) (transpose xs)" proof (rule nth_equalityI) have [simp]: "length (transpose (map (map f) xs)) = length (transpose xs)" by (simp add: length_transpose foldr_map comp_def) show "length (transpose (map (map f) xs)) = length (map (map f) (transpose xs))" by simp fix i assume "i < length (transpose (map (map f) xs))" thus "transpose (map (map f) xs) ! i = map (map f) (transpose xs) ! i" by (simp add: nth_transpose filter_map comp_def) qed subsubsection \\<^const>\min\ and \<^const>\arg_min\\ lemma min_list_Min: "xs \ [] \ min_list xs = Min (set xs)" by (induction xs rule: induct_list012)(auto) lemma f_arg_min_list_f: "xs \ [] \ f (arg_min_list f xs) = Min (f ` (set xs))" by(induction f xs rule: arg_min_list.induct) (auto simp: min_def intro!: antisym) lemma arg_min_list_in: "xs \ [] \ arg_min_list f xs \ set xs" by(induction xs rule: induct_list012) (auto simp: Let_def) subsubsection \(In)finiteness\ lemma finite_maxlen: "finite (M::'a list set) \ \n. \s\M. size s < n" proof (induct rule: finite.induct) case emptyI show ?case by simp next case (insertI M xs) then obtain n where "\s\M. length s < n" by blast hence "\s\insert xs M. size s < max n (size xs) + 1" by auto thus ?case .. qed lemma lists_length_Suc_eq: "{xs. set xs \ A \ length xs = Suc n} = (\(xs, n). n#xs) ` ({xs. set xs \ A \ length xs = n} \ A)" by (auto simp: length_Suc_conv) lemma assumes "finite A" shows finite_lists_length_eq: "finite {xs. set xs \ A \ length xs = n}" and card_lists_length_eq: "card {xs. 
set xs \ A \ length xs = n} = (card A)^n" using \finite A\ by (induct n) (auto simp: card_image inj_split_Cons lists_length_Suc_eq cong: conj_cong) lemma finite_lists_length_le: assumes "finite A" shows "finite {xs. set xs \ A \ length xs \ n}" (is "finite ?S") proof- have "?S = (\n\{0..n}. {xs. set xs \ A \ length xs = n})" by auto thus ?thesis by (auto intro!: finite_lists_length_eq[OF \finite A\] simp only:) qed lemma card_lists_length_le: assumes "finite A" shows "card {xs. set xs \ A \ length xs \ n} = (\i\n. card A^i)" proof - have "(\i\n. card A^i) = card (\i\n. {xs. set xs \ A \ length xs = i})" using \finite A\ by (subst card_UN_disjoint) (auto simp add: card_lists_length_eq finite_lists_length_eq) also have "(\i\n. {xs. set xs \ A \ length xs = i}) = {xs. set xs \ A \ length xs \ n}" by auto finally show ?thesis by simp qed lemma finite_lists_distinct_length_eq [intro]: assumes "finite A" shows "finite {xs. length xs = n \ distinct xs \ set xs \ A}" (is "finite ?S") proof - have "finite {xs. set xs \ A \ length xs = n}" using \finite A\ by (rule finite_lists_length_eq) moreover have "?S \ {xs. set xs \ A \ length xs = n}" by auto ultimately show ?thesis using finite_subset by auto qed lemma card_lists_distinct_length_eq: assumes "finite A" "k \ card A" shows "card {xs. length xs = k \ distinct xs \ set xs \ A} = \{card A - k + 1 .. card A}" using assms proof (induct k) case 0 then have "{xs. length xs = 0 \ distinct xs \ set xs \ A} = {[]}" by auto then show ?case by simp next case (Suc k) let "?k_list" = "\k xs. length xs = k \ distinct xs \ set xs \ A" have inj_Cons: "\A. inj_on (\(xs, n). n # xs) A" by (rule inj_onI) auto from Suc have "k \ card A" by simp moreover note \finite A\ moreover have "finite {xs. ?k_list k xs}" by (rule finite_subset) (use finite_lists_length_eq[OF \finite A\, of k] in auto) moreover have "\i j. i \ j \ {i} \ (A - set i) \ {j} \ (A - set j) = {}" by auto moreover have "\i. i \ {xs. 
?k_list k xs} \ card (A - set i) = card A - k" by (simp add: card_Diff_subset distinct_card) moreover have "{xs. ?k_list (Suc k) xs} = (\(xs, n). n#xs) ` \((\xs. {xs} \ (A - set xs)) ` {xs. ?k_list k xs})" by (auto simp: length_Suc_conv) moreover have "Suc (card A - Suc k) = card A - k" using Suc.prems by simp then have "(card A - k) * \{Suc (card A - k)..card A} = \{Suc (card A - Suc k)..card A}" by (subst prod.insert[symmetric]) (simp add: atLeastAtMost_insertL)+ ultimately show ?case by (simp add: card_image inj_Cons card_UN_disjoint Suc.hyps algebra_simps) qed lemma card_lists_distinct_length_eq': assumes "k < card A" shows "card {xs. length xs = k \ distinct xs \ set xs \ A} = \{card A - k + 1 .. card A}" proof - from \k < card A\ have "finite A" and "k \ card A" using card.infinite by force+ from this show ?thesis by (rule card_lists_distinct_length_eq) qed lemma infinite_UNIV_listI: "\ finite(UNIV::'a list set)" by (metis UNIV_I finite_maxlen length_replicate less_irrefl) lemma same_length_different: assumes "xs \ ys" and "length xs = length ys" shows "\pre x xs' y ys'. 
x\y \ xs = pre @ [x] @ xs' \ ys = pre @ [y] @ ys'" using assms proof (induction xs arbitrary: ys) case Nil then show ?case by auto next case (Cons x xs) then obtain z zs where ys: "ys = Cons z zs" by (metis length_Suc_conv) show ?case proof (cases "x=z") case True then have "xs \ zs" "length xs = length zs" using Cons.prems ys by auto then obtain pre u xs' v ys' where "u\v" and xs: "xs = pre @ [u] @ xs'" and zs: "zs = pre @ [v] @ys'" using Cons.IH by meson then have "x # xs = (z#pre) @ [u] @ xs' \ ys = (z#pre) @ [v] @ ys'" by (simp add: True ys) with \u\v\ show ?thesis by blast next case False then have "x # xs = [] @ [x] @ xs \ ys = [] @ [z] @ zs" by (simp add: ys) then show ?thesis using False by blast qed qed subsection \Sorting\ subsubsection \\<^const>\sorted_wrt\\ text \Sometimes the second equation in the definition of \<^const>\sorted_wrt\ is too aggressive because it relates each list element to \emph{all} its successors. Then this equation should be removed and \sorted_wrt2_simps\ should be added instead.\ lemma sorted_wrt1: "sorted_wrt P [x] = True" by(simp) lemma sorted_wrt2: "transp P \ sorted_wrt P (x # y # zs) = (P x y \ sorted_wrt P (y # zs))" proof (induction zs arbitrary: x y) case (Cons z zs) then show ?case by simp (meson transpD)+ qed auto lemmas sorted_wrt2_simps = sorted_wrt1 sorted_wrt2 lemma sorted_wrt_true [simp]: "sorted_wrt (\_ _. True) xs" by (induction xs) simp_all lemma sorted_wrt_append: "sorted_wrt P (xs @ ys) \ sorted_wrt P xs \ sorted_wrt P ys \ (\x\set xs. \y\set ys. P x y)" by (induction xs) auto lemma sorted_wrt_map: "sorted_wrt R (map f xs) = sorted_wrt (\x y. 
R (f x) (f y)) xs" by (induction xs) simp_all lemma assumes "sorted_wrt f xs" shows sorted_wrt_take: "sorted_wrt f (take n xs)" and sorted_wrt_drop: "sorted_wrt f (drop n xs)" proof - from assms have "sorted_wrt f (take n xs @ drop n xs)" by simp thus "sorted_wrt f (take n xs)" and "sorted_wrt f (drop n xs)" unfolding sorted_wrt_append by simp_all qed lemma sorted_wrt_filter: "sorted_wrt f xs \ sorted_wrt f (filter P xs)" by (induction xs) auto lemma sorted_wrt_rev: "sorted_wrt P (rev xs) = sorted_wrt (\x y. P y x) xs" by (induction xs) (auto simp add: sorted_wrt_append) lemma sorted_wrt_mono_rel: "(\x y. \ x \ set xs; y \ set xs; P x y \ \ Q x y) \ sorted_wrt P xs \ sorted_wrt Q xs" by(induction xs)(auto) lemma sorted_wrt01: "length xs \ 1 \ sorted_wrt P xs" by(auto simp: le_Suc_eq length_Suc_conv) lemma sorted_wrt_iff_nth_less: "sorted_wrt P xs = (\i j. i < j \ j < length xs \ P (xs ! i) (xs ! j))" by (induction xs) (auto simp add: in_set_conv_nth Ball_def nth_Cons split: nat.split) lemma sorted_wrt_nth_less: "\ sorted_wrt P xs; i < j; j < length xs \ \ P (xs ! i) (xs ! j)" by(auto simp: sorted_wrt_iff_nth_less) lemma sorted_wrt_iff_nth_Suc_transp: assumes "transp P" shows "sorted_wrt P xs \ (\i. Suc i < length xs \ P (xs!i) (xs!(Suc i)))" (is "?L = ?R") proof assume ?L thus ?R by (simp add: sorted_wrt_iff_nth_less) next assume ?R have "i < j \ j < length xs \ P (xs ! i) (xs ! 
j)" for i j by(induct i j rule: less_Suc_induct)(simp add: \?R\, meson assms transpE transp_less) thus ?L by (simp add: sorted_wrt_iff_nth_less) qed lemma sorted_wrt_upt[simp]: "sorted_wrt (<) [m..Each element is greater than or equal to its index:\ lemma sorted_wrt_less_idx: "sorted_wrt (<) ns \ i < length ns \ i \ ns!i" proof (induction ns arbitrary: i rule: rev_induct) case Nil thus ?case by simp next case snoc thus ?case by (auto simp: nth_append sorted_wrt_append) (metis less_antisym not_less nth_mem) qed subsubsection \\<^const>\sorted\\ context linorder begin text \Sometimes the second equation in the definition of \<^const>\sorted\ is too aggressive because it relates each list element to \emph{all} its successors. Then this equation should be removed and \sorted2_simps\ should be added instead. Executable code is one such use case.\ lemma sorted0: "sorted [] = True" by simp lemma sorted1: "sorted [x] = True" by simp lemma sorted2: "sorted (x # y # zs) = (x \ y \ sorted (y # zs))" by(induction zs) auto lemmas sorted2_simps = sorted1 sorted2 lemmas [code] = sorted0 sorted2_simps lemma sorted_append: "sorted (xs@ys) = (sorted xs \ sorted ys \ (\x \ set xs. \y \ set ys. x\y))" by (simp add: sorted_wrt_append) lemma sorted_map: "sorted (map f xs) = sorted_wrt (\x y. f x \ f y) xs" by (simp add: sorted_wrt_map) lemma sorted01: "length xs \ 1 \ sorted xs" by (simp add: sorted_wrt01) lemma sorted_tl: "sorted xs \ sorted (tl xs)" by (cases xs) (simp_all) lemma sorted_iff_nth_mono_less: "sorted xs = (\i j. i < j \ j < length xs \ xs ! i \ xs ! j)" by (simp add: sorted_wrt_iff_nth_less) lemma sorted_iff_nth_mono: "sorted xs = (\i j. i \ j \ j < length xs \ xs ! i \ xs ! j)" by (auto simp: sorted_iff_nth_mono_less nat_less_le) lemma sorted_nth_mono: "sorted xs \ i \ j \ j < length xs \ xs!i \ xs!j" by (auto simp: sorted_iff_nth_mono) lemma sorted_iff_nth_Suc: "sorted xs \ (\i.
Suc i < length xs \ xs!i \ xs!(Suc i))" by(simp add: sorted_wrt_iff_nth_Suc_transp) lemma sorted_rev_nth_mono: "sorted (rev xs) \ i \ j \ j < length xs \ xs!j \ xs!i" using sorted_nth_mono[ of "rev xs" "length xs - j - 1" "length xs - i - 1"] rev_nth[of "length xs - i - 1" "xs"] rev_nth[of "length xs - j - 1" "xs"] by auto lemma sorted_rev_iff_nth_mono: "sorted (rev xs) \ (\ i j. i \ j \ j < length xs \ xs!j \ xs!i)" (is "?L = ?R") proof assume ?L thus ?R by (blast intro: sorted_rev_nth_mono) next assume ?R have "rev xs ! k \ rev xs ! l" if asms: "k \ l" "l < length(rev xs)" for k l proof - have "k < length xs" "l < length xs" "length xs - Suc l \ length xs - Suc k" "length xs - Suc k < length xs" using asms by auto thus "rev xs ! k \ rev xs ! l" using \?R\ \k \ l\ unfolding rev_nth[OF \k < length xs\] rev_nth[OF \l < length xs\] by blast qed thus ?L by (simp add: sorted_iff_nth_mono) qed lemma sorted_rev_iff_nth_Suc: "sorted (rev xs) \ (\i. Suc i < length xs \ xs!(Suc i) \ xs!i)" proof- interpret dual: linorder "(\x y. y \ x)" "(\x y. y < x)" using dual_linorder . show ?thesis using dual_linorder dual.sorted_iff_nth_Suc dual.sorted_iff_nth_mono unfolding sorted_rev_iff_nth_mono by simp qed lemma sorted_map_remove1: "sorted (map f xs) \ sorted (map f (remove1 x xs))" by (induct xs) (auto) lemma sorted_remove1: "sorted xs \ sorted (remove1 a xs)" using sorted_map_remove1 [of "\x. 
x"] by simp lemma sorted_butlast: assumes "xs \ []" and "sorted xs" shows "sorted (butlast xs)" proof - from \xs \ []\ obtain ys y where "xs = ys @ [y]" by (cases xs rule: rev_cases) auto with \sorted xs\ show ?thesis by (simp add: sorted_append) qed lemma sorted_replicate [simp]: "sorted(replicate n x)" by(induction n) (auto) lemma sorted_remdups[simp]: "sorted xs \ sorted (remdups xs)" by (induct xs) (auto) lemma sorted_remdups_adj[simp]: "sorted xs \ sorted (remdups_adj xs)" by (induct xs rule: remdups_adj.induct, simp_all split: if_split_asm) lemma sorted_nths: "sorted xs \ sorted (nths xs I)" by(induction xs arbitrary: I)(auto simp: nths_Cons) lemma sorted_distinct_set_unique: assumes "sorted xs" "distinct xs" "sorted ys" "distinct ys" "set xs = set ys" shows "xs = ys" proof - from assms have 1: "length xs = length ys" by (auto dest!: distinct_card) from assms show ?thesis proof(induct rule:list_induct2[OF 1]) case 1 show ?case by simp next case (2 x xs y ys) then show ?case by (cases \x = y\) (auto simp add: insert_eq_iff) qed qed lemma map_sorted_distinct_set_unique: assumes "inj_on f (set xs \ set ys)" assumes "sorted (map f xs)" "distinct (map f xs)" "sorted (map f ys)" "distinct (map f ys)" assumes "set xs = set ys" shows "xs = ys" proof - from assms have "map f xs = map f ys" by (simp add: sorted_distinct_set_unique) with \inj_on f (set xs \ set ys)\ show "xs = ys" by (blast intro: map_inj_on) qed lemma assumes "sorted xs" shows sorted_take: "sorted (take n xs)" and sorted_drop: "sorted (drop n xs)" proof - from assms have "sorted (take n xs @ drop n xs)" by simp then show "sorted (take n xs)" and "sorted (drop n xs)" unfolding sorted_append by simp_all qed lemma sorted_dropWhile: "sorted xs \ sorted (dropWhile P xs)" by (auto dest: sorted_drop simp add: dropWhile_eq_drop) lemma sorted_takeWhile: "sorted xs \ sorted (takeWhile P xs)" by (subst takeWhile_eq_take) (auto dest: sorted_take) lemma sorted_filter: "sorted (map f xs) \ sorted (map f (filter P 
xs))" by (induct xs) simp_all lemma foldr_max_sorted: assumes "sorted (rev xs)" shows "foldr max xs y = (if xs = [] then y else max (xs ! 0) y)" using assms proof (induct xs) case (Cons x xs) then have "sorted (rev xs)" using sorted_append by auto with Cons show ?case by (cases xs) (auto simp add: sorted_append max_def) qed simp lemma filter_equals_takeWhile_sorted_rev: assumes sorted: "sorted (rev (map f xs))" shows "filter (\x. t < f x) xs = takeWhile (\ x. t < f x) xs" (is "filter ?P xs = ?tW") proof (rule takeWhile_eq_filter[symmetric]) let "?dW" = "dropWhile ?P xs" fix x assume "x \ set ?dW" then obtain i where i: "i < length ?dW" and nth_i: "x = ?dW ! i" unfolding in_set_conv_nth by auto hence "length ?tW + i < length (?tW @ ?dW)" unfolding length_append by simp hence i': "length (map f ?tW) + i < length (map f xs)" by simp have "(map f ?tW @ map f ?dW) ! (length (map f ?tW) + i) \ (map f ?tW @ map f ?dW) ! (length (map f ?tW) + 0)" using sorted_rev_nth_mono[OF sorted _ i', of "length ?tW"] unfolding map_append[symmetric] by simp hence "f x \ f (?dW ! 0)" unfolding nth_append_length_plus nth_i using i preorder_class.le_less_trans[OF le0 i] by simp also have "... \ t" using hd_dropWhile[of "?P" xs] le0[THEN preorder_class.le_less_trans, OF i] using hd_conv_nth[of "?dW"] by simp finally show "\ t < f x" by simp qed lemma sorted_map_same: "sorted (map f (filter (\x. f x = g xs) xs))" proof (induct xs arbitrary: g) case Nil then show ?case by simp next case (Cons x xs) then have "sorted (map f (filter (\y. f y = (\xs. f x) xs) xs))" . moreover from Cons have "sorted (map f (filter (\y. f y = (g \ Cons x) xs) xs))" . ultimately show ?case by simp_all qed lemma sorted_same: "sorted (filter (\x. x = g xs) xs)" using sorted_map_same [of "\x. x"] by simp end lemma sorted_upt[simp]: "sorted [m..Sorting functions\ text\Currently it is not shown that \<^const>\sort\ returns a permutation of its input because the nicest proof is via multisets, which are not part of Main. 
Alternatively one could define a function that counts the number of occurrences of an element in a list and use that instead of multisets to state the correctness property.\ context linorder begin lemma set_insort_key: "set (insort_key f x xs) = insert x (set xs)" by (induct xs) auto lemma length_insort [simp]: "length (insort_key f x xs) = Suc (length xs)" by (induct xs) simp_all lemma insort_key_left_comm: assumes "f x \ f y" shows "insort_key f y (insort_key f x xs) = insort_key f x (insort_key f y xs)" by (induct xs) (auto simp add: assms dest: order.antisym) lemma insort_left_comm: "insort x (insort y xs) = insort y (insort x xs)" by (cases "x = y") (auto intro: insort_key_left_comm) lemma comp_fun_commute_insort: "comp_fun_commute insort" proof qed (simp add: insort_left_comm fun_eq_iff) lemma sort_key_simps [simp]: "sort_key f [] = []" "sort_key f (x#xs) = insort_key f x (sort_key f xs)" by (simp_all add: sort_key_def) lemma sort_key_conv_fold: assumes "inj_on f (set xs)" shows "sort_key f xs = fold (insort_key f) xs []" proof - have "fold (insort_key f) (rev xs) = fold (insort_key f) xs" proof (rule fold_rev, rule ext) fix zs fix x y assume "x \ set xs" "y \ set xs" with assms have *: "f y = f x \ y = x" by (auto dest: inj_onD) have **: "x = y \ y = x" by auto show "(insort_key f y \ insort_key f x) zs = (insort_key f x \ insort_key f y) zs" by (induct zs) (auto intro: * simp add: **) qed then show ?thesis by (simp add: sort_key_def foldr_conv_fold) qed lemma sort_conv_fold: "sort xs = fold insort xs []" by (rule sort_key_conv_fold) simp lemma length_sort[simp]: "length (sort_key f xs) = length xs" by (induct xs, auto) lemma set_sort[simp]: "set(sort_key f xs) = set xs" by (induct xs) (simp_all add: set_insort_key) lemma distinct_insort: "distinct (insort_key f x xs) = (x \ set xs \ distinct xs)" by(induct xs)(auto simp: set_insort_key) lemma distinct_insort_key: "distinct (map f (insort_key f x xs)) = (f x \ f ` set xs \ (distinct (map f xs)))" by (induct 
xs) (auto simp: set_insort_key) lemma distinct_sort[simp]: "distinct (sort_key f xs) = distinct xs" by (induct xs) (simp_all add: distinct_insort) lemma sorted_insort_key: "sorted (map f (insort_key f x xs)) = sorted (map f xs)" by (induct xs) (auto simp: set_insort_key) lemma sorted_insort: "sorted (insort x xs) = sorted xs" using sorted_insort_key [where f="\x. x"] by simp theorem sorted_sort_key [simp]: "sorted (map f (sort_key f xs))" by (induct xs) (auto simp:sorted_insort_key) theorem sorted_sort [simp]: "sorted (sort xs)" using sorted_sort_key [where f="\x. x"] by simp lemma insort_not_Nil [simp]: "insort_key f a xs \ []" by (induction xs) simp_all lemma insort_is_Cons: "\x\set xs. f a \ f x \ insort_key f a xs = a # xs" by (cases xs) auto lemma sorted_sort_id: "sorted xs \ sort xs = xs" by (induct xs) (auto simp add: insort_is_Cons) lemma insort_key_remove1: assumes "a \ set xs" and "sorted (map f xs)" and "hd (filter (\x. f a = f x) xs) = a" shows "insort_key f a (remove1 a xs) = xs" using assms proof (induct xs) case (Cons x xs) then show ?case proof (cases "x = a") case False then have "f x \ f a" using Cons.prems by auto then have "f x < f a" using Cons.prems by auto with \f x \ f a\ show ?thesis using Cons by (auto simp: insort_is_Cons) qed (auto simp: insort_is_Cons) qed simp lemma insort_remove1: assumes "a \ set xs" and "sorted xs" shows "insort a (remove1 a xs) = xs" proof (rule insort_key_remove1) define n where "n = length (filter ((=) a) xs) - 1" from \a \ set xs\ show "a \ set xs" . from \sorted xs\ show "sorted (map (\x. 
x) xs)" by simp from \a \ set xs\ have "a \ set (filter ((=) a) xs)" by auto then have "set (filter ((=) a) xs) \ {}" by auto then have "filter ((=) a) xs \ []" by (auto simp only: set_empty) then have "length (filter ((=) a) xs) > 0" by simp then have n: "Suc n = length (filter ((=) a) xs)" by (simp add: n_def) moreover have "replicate (Suc n) a = a # replicate n a" by simp ultimately show "hd (filter ((=) a) xs) = a" by (simp add: replicate_length_filter) qed lemma finite_sorted_distinct_unique: assumes "finite A" shows "\!xs. set xs = A \ sorted xs \ distinct xs" proof - obtain xs where "distinct xs" "A = set xs" using finite_distinct_list [OF assms] by metis then show ?thesis by (rule_tac a="sort xs" in ex1I) (auto simp: sorted_distinct_set_unique) qed lemma insort_insert_key_triv: "f x \ f ` set xs \ insort_insert_key f x xs = xs" by (simp add: insort_insert_key_def) lemma insort_insert_triv: "x \ set xs \ insort_insert x xs = xs" using insort_insert_key_triv [of "\x. x"] by simp lemma insort_insert_insort_key: "f x \ f ` set xs \ insort_insert_key f x xs = insort_key f x xs" by (simp add: insort_insert_key_def) lemma insort_insert_insort: "x \ set xs \ insort_insert x xs = insort x xs" using insort_insert_insort_key [of "\x. x"] by simp lemma set_insort_insert: "set (insort_insert x xs) = insert x (set xs)" by (auto simp add: insort_insert_key_def set_insort_key) lemma distinct_insort_insert: assumes "distinct xs" shows "distinct (insort_insert_key f x xs)" using assms by (induct xs) (auto simp add: insort_insert_key_def set_insort_key) lemma sorted_insort_insert_key: assumes "sorted (map f xs)" shows "sorted (map f (insort_insert_key f x xs))" using assms by (simp add: insort_insert_key_def sorted_insort_key) lemma sorted_insort_insert: assumes "sorted xs" shows "sorted (insort_insert x xs)" using assms sorted_insort_insert_key [of "\x. 
x"] by simp lemma filter_insort_triv: "\ P x \ filter P (insort_key f x xs) = filter P xs" by (induct xs) simp_all lemma filter_insort: "sorted (map f xs) \ P x \ filter P (insort_key f x xs) = insort_key f x (filter P xs)" by (induct xs) (auto, subst insort_is_Cons, auto) lemma filter_sort: "filter P (sort_key f xs) = sort_key f (filter P xs)" by (induct xs) (simp_all add: filter_insort_triv filter_insort) lemma remove1_insort_key [simp]: "remove1 x (insort_key f x xs) = xs" by (induct xs) simp_all end lemma sort_upt [simp]: "sort [m.. \x \ set xs. P x \ List.find P xs = Some (Min {x\set xs. P x})" proof (induct xs) case Nil then show ?case by simp next case (Cons x xs) show ?case proof (cases "P x") case True with Cons show ?thesis by (auto intro: Min_eqI [symmetric]) next case False then have "{y. (y = x \ y \ set xs) \ P y} = {y \ set xs. P y}" by auto with Cons False show ?thesis by (simp_all) qed qed lemma sorted_enumerate [simp]: "sorted (map fst (enumerate n xs))" by (simp add: enumerate_eq_zip) text \Stability of \<^const>\sort_key\:\ lemma sort_key_stable: "filter (\y. f y = k) (sort_key f xs) = filter (\y. f y = k) xs" by (induction xs) (auto simp: filter_insort insort_is_Cons filter_insort_triv) corollary stable_sort_key_sort_key: "stable_sort_key sort_key" by(simp add: stable_sort_key_def sort_key_stable) lemma sort_key_const: "sort_key (\x. c) xs = xs" by (metis (mono_tags) filter_True sort_key_stable) subsubsection \\<^const>\transpose\ on sorted lists\ lemma sorted_transpose[simp]: "sorted (rev (map length (transpose xs)))" by (auto simp: sorted_iff_nth_mono rev_nth nth_transpose length_filter_conv_card intro: card_mono) lemma transpose_max_length: "foldr (\xs. max (length xs)) (transpose xs) 0 = length (filter (\x. x \ []) xs)" (is "?L = ?R") proof (cases "transpose xs = []") case False have "?L = foldr max (map length (transpose xs)) 0" by (simp add: foldr_map comp_def) also have "... = length (transpose xs ! 
0)" using False sorted_transpose by (simp add: foldr_max_sorted) finally show ?thesis using False by (simp add: nth_transpose) next case True hence "filter (\x. x \ []) xs = []" by (auto intro!: filter_False simp: transpose_empty) thus ?thesis by (simp add: transpose_empty True) qed lemma length_transpose_sorted: fixes xs :: "'a list list" assumes sorted: "sorted (rev (map length xs))" shows "length (transpose xs) = (if xs = [] then 0 else length (xs ! 0))" proof (cases "xs = []") case False thus ?thesis using foldr_max_sorted[OF sorted] False unfolding length_transpose foldr_map comp_def by simp qed simp lemma nth_nth_transpose_sorted[simp]: fixes xs :: "'a list list" assumes sorted: "sorted (rev (map length xs))" and i: "i < length (transpose xs)" and j: "j < length (filter (\ys. i < length ys) xs)" shows "transpose xs ! i ! j = xs ! j ! i" using j filter_equals_takeWhile_sorted_rev[OF sorted, of i] nth_transpose[OF i] nth_map[OF j] by (simp add: takeWhile_nth) lemma transpose_column_length: fixes xs :: "'a list list" assumes sorted: "sorted (rev (map length xs))" and "i < length xs" shows "length (filter (\ys. i < length ys) (transpose xs)) = length (xs ! i)" proof - have "xs \ []" using \i < length xs\ by auto note filter_equals_takeWhile_sorted_rev[OF sorted, simp] { fix j assume "j \ i" note sorted_rev_nth_mono[OF sorted, of j i, simplified, OF this \i < length xs\] } note sortedE = this[consumes 1] have "{j. j < length (transpose xs) \ i < length (transpose xs ! j)} = {..< length (xs ! i)}" proof safe fix j assume "j < length (transpose xs)" and "i < length (transpose xs ! j)" with this(2) nth_transpose[OF this(1)] have "i < length (takeWhile (\ys. j < length ys) xs)" by simp from nth_mem[OF this] takeWhile_nth[OF this] show "j < length (xs ! i)" by (auto dest: set_takeWhileD) next fix j assume "j < length (xs ! 
i)" thus "j < length (transpose xs)" using foldr_max_sorted[OF sorted] \xs \ []\ sortedE[OF le0] by (auto simp: length_transpose comp_def foldr_map) have "Suc i \ length (takeWhile (\ys. j < length ys) xs)" using \i < length xs\ \j < length (xs ! i)\ less_Suc_eq_le by (auto intro!: length_takeWhile_less_P_nth dest!: sortedE) with nth_transpose[OF \j < length (transpose xs)\] show "i < length (transpose xs ! j)" by simp qed thus ?thesis by (simp add: length_filter_conv_card) qed lemma transpose_column: fixes xs :: "'a list list" assumes sorted: "sorted (rev (map length xs))" and "i < length xs" shows "map (\ys. ys ! i) (filter (\ys. i < length ys) (transpose xs)) = xs ! i" (is "?R = _") proof (rule nth_equalityI) show length: "length ?R = length (xs ! i)" using transpose_column_length[OF assms] by simp fix j assume j: "j < length ?R" note * = less_le_trans[OF this, unfolded length_map, OF length_filter_le] from j have j_less: "j < length (xs ! i)" using length by simp have i_less_tW: "Suc i \ length (takeWhile (\ys. Suc j \ length ys) xs)" proof (rule length_takeWhile_less_P_nth) show "Suc i \ length xs" using \i < length xs\ by simp fix k assume "k < Suc i" hence "k \ i" by auto with sorted_rev_nth_mono[OF sorted this] \i < length xs\ have "length (xs ! i) \ length (xs ! k)" by simp thus "Suc j \ length (xs ! k)" using j_less by simp qed have i_less_filter: "i < length (filter (\ys. j < length ys) xs) " unfolding filter_equals_takeWhile_sorted_rev[OF sorted, of j] using i_less_tW by (simp_all add: Suc_le_eq) from j show "?R ! j = xs ! i ! j" unfolding filter_equals_takeWhile_sorted_rev[OF sorted_transpose, of i] by (simp add: takeWhile_nth nth_nth_transpose_sorted[OF sorted * i_less_filter]) qed lemma transpose_transpose: fixes xs :: "'a list list" assumes sorted: "sorted (rev (map length xs))" shows "transpose (transpose xs) = takeWhile (\x. 
x \ []) xs" (is "?L = ?R") proof - have len: "length ?L = length ?R" unfolding length_transpose transpose_max_length using filter_equals_takeWhile_sorted_rev[OF sorted, of 0] by simp { fix i assume "i < length ?R" with less_le_trans[OF _ length_takeWhile_le[of _ xs]] have "i < length xs" by simp } note * = this show ?thesis by (rule nth_equalityI) (simp_all add: len nth_transpose transpose_column[OF sorted] * takeWhile_nth) qed theorem transpose_rectangle: assumes "xs = [] \ n = 0" assumes rect: "\ i. i < length xs \ length (xs ! i) = n" shows "transpose xs = map (\ i. map (\ j. xs ! j ! i) [0..ys. i < length ys) xs = xs" using rect by (auto simp: in_set_conv_nth intro!: filter_True) } ultimately show "\i. i < length (transpose xs) \ ?trans ! i = ?map ! i" by (auto simp: nth_transpose intro: nth_equalityI) qed subsubsection \\sorted_key_list_of_set\\ text\ This function maps (finite) linearly ordered sets to sorted lists. The linear order is obtained by a key function that maps the elements of the set to a type that is linearly ordered. Warning: in most cases it is not a good idea to convert from sets to lists but one should convert in the other direction (via \<^const>\set\). Note: this is a generalisation of the older \sorted_list_of_set\ that is obtained by setting the key function to the identity. Consequently, new theorems should be added to the locale below. They should also be aliased to more convenient names for use with \sorted_list_of_set\ as seen further below. 
\ definition (in linorder) sorted_key_list_of_set :: "('b \ 'a) \ 'b set \ 'b list" where "sorted_key_list_of_set f \ folding_on.F (insort_key f) []" locale folding_insort_key = lo?: linorder "less_eq :: 'a \ 'a \ bool" less for less_eq (infix "\" 50) and less (infix "\" 50) + fixes S fixes f :: "'b \ 'a" assumes inj_on: "inj_on f S" begin lemma insort_key_commute: "x \ S \ y \ S \ insort_key f y o insort_key f x = insort_key f x o insort_key f y" proof(rule ext, goal_cases) case (1 xs) with inj_on show ?case by (induction xs) (auto simp: inj_onD) qed sublocale fold_insort_key: folding_on S "insort_key f" "[]" rewrites "folding_on.F (insort_key f) [] = sorted_key_list_of_set f" proof - show "folding_on S (insort_key f)" by standard (simp add: insort_key_commute) qed (simp add: sorted_key_list_of_set_def) lemma idem_if_sorted_distinct: assumes "set xs \ S" and "sorted (map f xs)" "distinct xs" shows "sorted_key_list_of_set f (set xs) = xs" proof(cases "S = {}") case True then show ?thesis using \set xs \ S\ by auto next case False with assms show ?thesis proof(induction xs) case (Cons a xs) with Cons show ?case by (cases xs) auto qed simp qed lemma sorted_key_list_of_set_empty: "sorted_key_list_of_set f {} = []" by (fact fold_insort_key.empty) lemma sorted_key_list_of_set_insert: assumes "insert x A \ S" and "finite A" "x \ A" shows "sorted_key_list_of_set f (insert x A) = insort_key f x (sorted_key_list_of_set f A)" using assms by (fact fold_insort_key.insert) lemma sorted_key_list_of_set_insert_remove [simp]: assumes "insert x A \ S" and "finite A" shows "sorted_key_list_of_set f (insert x A) = insort_key f x (sorted_key_list_of_set f (A - {x}))" using assms by (fact fold_insort_key.insert_remove) lemma sorted_key_list_of_set_eq_Nil_iff [simp]: assumes "A \ S" and "finite A" shows "sorted_key_list_of_set f A = [] \ A = {}" using assms by (auto simp: fold_insort_key.remove) lemma set_sorted_key_list_of_set [simp]: assumes "A \ S" and "finite A" shows "set 
(sorted_key_list_of_set f A) = A" using assms(2,1) by (induct A rule: finite_induct) (simp_all add: set_insort_key) lemma sorted_sorted_key_list_of_set [simp]: assumes "A \ S" shows "sorted (map f (sorted_key_list_of_set f A))" proof (cases "finite A") case True thus ?thesis using \A \ S\ by (induction A) (simp_all add: sorted_insort_key) next case False thus ?thesis by simp qed lemma distinct_if_distinct_map: "distinct (map f xs) \ distinct xs" using inj_on by (simp add: distinct_map) lemma distinct_sorted_key_list_of_set [simp]: assumes "A \ S" shows "distinct (map f (sorted_key_list_of_set f A))" proof (cases "finite A") case True thus ?thesis using \A \ S\ inj_on by (induction A) (force simp: distinct_insort_key dest: inj_onD)+ next case False thus ?thesis by simp qed lemma length_sorted_key_list_of_set [simp]: assumes "A \ S" shows "length (sorted_key_list_of_set f A) = card A" proof (cases "finite A") case True with assms inj_on show ?thesis using distinct_card[symmetric, OF distinct_sorted_key_list_of_set] by (auto simp: subset_inj_on intro!: card_image) qed auto lemmas sorted_key_list_of_set = set_sorted_key_list_of_set sorted_sorted_key_list_of_set distinct_sorted_key_list_of_set lemma sorted_key_list_of_set_remove: assumes "insert x A \ S" and "finite A" shows "sorted_key_list_of_set f (A - {x}) = remove1 x (sorted_key_list_of_set f A)" proof (cases "x \ A") case False with assms have "x \ set (sorted_key_list_of_set f A)" by simp with False show ?thesis by (simp add: remove1_idem) next case True then obtain B where A: "A = insert x B" by (rule Set.set_insert) with assms show ?thesis by simp qed lemma strict_sorted_key_list_of_set [simp]: "A \ S \ sorted_wrt (\) (map f (sorted_key_list_of_set f A))" by (cases "finite A") (auto simp: strict_sorted_iff subset_inj_on[OF inj_on]) lemma finite_set_strict_sorted: assumes "A \ S" and "finite A" obtains l where "sorted_wrt (\) (map f l)" "set l = A" "length l = card A" using assms by (meson 
length_sorted_key_list_of_set set_sorted_key_list_of_set strict_sorted_key_list_of_set) lemma (in linorder) strict_sorted_equal: assumes "sorted_wrt (<) xs" and "sorted_wrt (<) ys" and "set ys = set xs" shows "ys = xs" using assms proof (induction xs arbitrary: ys) case (Cons x xs) show ?case proof (cases ys) case Nil then show ?thesis using Cons.prems by auto next case (Cons y ys') then have "xs = ys'" by (metis Cons.prems list.inject sorted_distinct_set_unique strict_sorted_iff) moreover have "x = y" using Cons.prems \xs = ys'\ local.Cons by fastforce ultimately show ?thesis using local.Cons by blast qed qed auto lemma (in linorder) strict_sorted_equal_Uniq: "\\<^sub>\\<^sub>1xs. sorted_wrt (<) xs \ set xs = A" by (simp add: Uniq_def strict_sorted_equal) lemma sorted_key_list_of_set_inject: assumes "A \ S" "B \ S" assumes "sorted_key_list_of_set f A = sorted_key_list_of_set f B" "finite A" "finite B" shows "A = B" using assms set_sorted_key_list_of_set by metis lemma sorted_key_list_of_set_unique: assumes "A \ S" and "finite A" shows "sorted_wrt (\) (map f l) \ set l = A \ length l = card A \ sorted_key_list_of_set f A = l" using assms by (auto simp: strict_sorted_iff card_distinct idem_if_sorted_distinct) end context linorder begin definition "sorted_list_of_set \ sorted_key_list_of_set (\x::'a. x)" text \ We abuse the \rewrites\ functionality of locales to remove trivial assumptions that result from instantiating the key function to the identity. \ sublocale sorted_list_of_set: folding_insort_key "(\)" "(<)" UNIV "(\x. x)" rewrites "sorted_key_list_of_set (\x. x) = sorted_list_of_set" and "\xs. map (\x. x) xs \ xs" and "\X. (X \ UNIV) \ True" and "\x. x \ UNIV \ True" and "\P. (True \ P) \ Trueprop P" and "\P Q. (True \ PROP P \ PROP Q) \ (PROP P \ True \ PROP Q)" proof - show "folding_insort_key (\) (<) UNIV (\x. 
x)" by standard simp qed (simp_all add: sorted_list_of_set_def) text \Alias theorems for backwards compatibility and ease of use.\ lemmas sorted_list_of_set = sorted_list_of_set.sorted_key_list_of_set and sorted_list_of_set_empty = sorted_list_of_set.sorted_key_list_of_set_empty and sorted_list_of_set_insert = sorted_list_of_set.sorted_key_list_of_set_insert and sorted_list_of_set_insert_remove = sorted_list_of_set.sorted_key_list_of_set_insert_remove and sorted_list_of_set_eq_Nil_iff = sorted_list_of_set.sorted_key_list_of_set_eq_Nil_iff and set_sorted_list_of_set = sorted_list_of_set.set_sorted_key_list_of_set and sorted_sorted_list_of_set = sorted_list_of_set.sorted_sorted_key_list_of_set and distinct_sorted_list_of_set = sorted_list_of_set.distinct_sorted_key_list_of_set and length_sorted_list_of_set = sorted_list_of_set.length_sorted_key_list_of_set and sorted_list_of_set_remove = sorted_list_of_set.sorted_key_list_of_set_remove and strict_sorted_list_of_set = sorted_list_of_set.strict_sorted_key_list_of_set and sorted_list_of_set_inject = sorted_list_of_set.sorted_key_list_of_set_inject and sorted_list_of_set_unique = sorted_list_of_set.sorted_key_list_of_set_unique and finite_set_strict_sorted = sorted_list_of_set.finite_set_strict_sorted lemma sorted_list_of_set_sort_remdups [code]: "sorted_list_of_set (set xs) = sort (remdups xs)" proof - interpret comp_fun_commute insort by (fact comp_fun_commute_insort) show ?thesis by (simp add: sorted_list_of_set.fold_insort_key.eq_fold sort_conv_fold fold_set_fold_remdups) qed end lemma sorted_list_of_set_range [simp]: "sorted_list_of_set {m.. {}" shows "sorted_list_of_set A = Min A # sorted_list_of_set (A - {Min A})" using assms by (auto simp: less_le simp flip: sorted_list_of_set.sorted_key_list_of_set_unique intro: Min_in) lemma sorted_list_of_set_greaterThanLessThan: assumes "Suc i < j" shows "sorted_list_of_set {i<.. 
j" shows "sorted_list_of_set {i<..j} = Suc i # sorted_list_of_set {Suc i<..j}" using sorted_list_of_set_greaterThanLessThan [of i "Suc j"] by (metis assms greaterThanAtMost_def greaterThanLessThan_eq le_imp_less_Suc lessThan_Suc_atMost) lemma nth_sorted_list_of_set_greaterThanLessThan: "n < j - Suc i \ sorted_list_of_set {i<.. sorted_list_of_set {i<..j} ! n = Suc (i+n)" using nth_sorted_list_of_set_greaterThanLessThan [of n "Suc j" i] by (simp add: greaterThanAtMost_def greaterThanLessThan_eq lessThan_Suc_atMost) subsubsection \\lists\: the list-forming operator over sets\ inductive_set lists :: "'a set => 'a list set" for A :: "'a set" where Nil [intro!, simp]: "[] \ lists A" | Cons [intro!, simp]: "\a \ A; l \ lists A\ \ a#l \ lists A" inductive_cases listsE [elim!]: "x#l \ lists A" inductive_cases listspE [elim!]: "listsp A (x # l)" inductive_simps listsp_simps[code]: "listsp A []" "listsp A (x # xs)" lemma listsp_mono [mono]: "A \ B \ listsp A \ listsp B" by (rule predicate1I, erule listsp.induct, blast+) lemmas lists_mono = listsp_mono [to_set] lemma listsp_infI: assumes l: "listsp A l" shows "listsp B l \ listsp (inf A B) l" using l by induct blast+ lemmas lists_IntI = listsp_infI [to_set] lemma listsp_inf_eq [simp]: "listsp (inf A B) = inf (listsp A) (listsp B)" proof (rule mono_inf [where f=listsp, THEN order_antisym]) show "mono listsp" by (simp add: mono_def listsp_mono) show "inf (listsp A) (listsp B) \ listsp (inf A B)" by (blast intro!: listsp_infI) qed lemmas listsp_conj_eq [simp] = listsp_inf_eq [simplified inf_fun_def inf_bool_def] lemmas lists_Int_eq [simp] = listsp_inf_eq [to_set] lemma Cons_in_lists_iff[simp]: "x#xs \ lists A \ x \ A \ xs \ lists A" by auto lemma append_in_listsp_conv [iff]: "(listsp A (xs @ ys)) = (listsp A xs \ listsp A ys)" by (induct xs) auto lemmas append_in_lists_conv [iff] = append_in_listsp_conv [to_set] lemma in_listsp_conv_set: "(listsp A xs) = (\x \ set xs. 
A x)" \ \eliminate \listsp\ in favour of \set\\ by (induct xs) auto lemmas in_lists_conv_set [code_unfold] = in_listsp_conv_set [to_set] lemma in_listspD [dest!]: "listsp A xs \ \x\set xs. A x" by (rule in_listsp_conv_set [THEN iffD1]) lemmas in_listsD [dest!] = in_listspD [to_set] lemma in_listspI [intro!]: "\x\set xs. A x \ listsp A xs" by (rule in_listsp_conv_set [THEN iffD2]) lemmas in_listsI [intro!] = in_listspI [to_set] lemma lists_eq_set: "lists A = {xs. set xs \ A}" by auto lemma lists_empty [simp]: "lists {} = {[]}" by auto lemma lists_UNIV [simp]: "lists UNIV = UNIV" by auto lemma lists_image: "lists (f`A) = map f ` lists A" proof - { fix xs have "\x\set xs. x \ f ` A \ xs \ map f ` lists A" by (induct xs) (auto simp del: list.map simp add: list.map[symmetric] intro!: imageI) } then show ?thesis by auto qed subsubsection \Inductive definition for membership\ inductive ListMem :: "'a \ 'a list \ bool" where elem: "ListMem x (x # xs)" | insert: "ListMem x xs \ ListMem x (y # xs)" lemma ListMem_iff: "(ListMem x xs) = (x \ set xs)" proof show "ListMem x xs \ x \ set xs" by (induct set: ListMem) auto show "x \ set xs \ ListMem x xs" by (induct xs) (auto intro: ListMem.intros) qed subsubsection \Lists as Cartesian products\ text\\set_Cons A Xs\: the set of lists with head drawn from \<^term>\A\ and tail drawn from \<^term>\Xs\.\ definition set_Cons :: "'a set \ 'a list set \ 'a list set" where "set_Cons A XS = {z. \x xs. z = x # xs \ x \ A \ xs \ XS}" lemma set_Cons_sing_Nil [simp]: "set_Cons A {[]} = (%x. 
[x])`A" by (auto simp add: set_Cons_def) text\Yields the set of lists, all of the same length as the argument and with elements drawn from the corresponding element of the argument.\ primrec listset :: "'a set list \ 'a list set" where "listset [] = {[]}" | "listset (A # As) = set_Cons A (listset As)" subsection \Relations on Lists\ subsubsection \Length Lexicographic Ordering\ text\These orderings preserve well-foundedness: shorter lists precede longer lists. These orderings are not used in dictionaries.\ primrec \ \The lexicographic ordering for lists of the specified length\ lexn :: "('a \ 'a) set \ nat \ ('a list \ 'a list) set" where "lexn r 0 = {}" | "lexn r (Suc n) = (map_prod (%(x, xs). x#xs) (%(x, xs). x#xs) ` (r <*lex*> lexn r n)) Int {(xs, ys). length xs = Suc n \ length ys = Suc n}" definition lex :: "('a \ 'a) set \ ('a list \ 'a list) set" where "lex r = (\n. lexn r n)" \ \Holds only between lists of the same length\ definition lenlex :: "('a \ 'a) set => ('a list \ 'a list) set" where "lenlex r = inv_image (less_than <*lex*> lex r) (\xs. (length xs, xs))" \ \Compares lists by their length and then lexicographically\ lemma wf_lexn: assumes "wf r" shows "wf (lexn r n)" proof (induct n) case (Suc n) have inj: "inj (\(x, xs). x # xs)" using assms by (auto simp: inj_on_def) have wf: "wf (map_prod (\(x, xs). x # xs) (\(x, xs). x # xs) ` (r <*lex*> lexn r n))" by (simp add: Suc.hyps assms wf_lex_prod wf_map_prod_image [OF _ inj]) then show ?case by (rule wf_subset) auto qed auto lemma lexn_length: "(xs, ys) \ lexn r n \ length xs = n \ length ys = n" by (induct n arbitrary: xs ys) auto lemma wf_lex [intro!]: assumes "wf r" shows "wf (lex r)" unfolding lex_def proof (rule wf_UN) show "wf (lexn r i)" for i by (simp add: assms wf_lexn) show "\i j. lexn r i \ lexn r j \ Domain (lexn r i) \ Range (lexn r j) = {}" by (metis DomainE Int_emptyI RangeE lexn_length) qed lemma lexn_conv: "lexn r n = {(xs,ys). length xs = n \ length ys = n \ (\xys x y xs' ys'.
xs= xys @ x#xs' \ ys= xys @ y # ys' \ (x, y) \ r)}" proof (induction n) case (Suc n) then show ?case apply (simp add: image_Collect lex_prod_def, safe, blast) apply (rule_tac x = "ab # xys" in exI, simp) apply (case_tac xys; force) done qed auto text\By Mathias Fleury:\ proposition lexn_transI: assumes "trans r" shows "trans (lexn r n)" unfolding trans_def proof (intro allI impI) fix as bs cs assume asbs: "(as, bs) \ lexn r n" and bscs: "(bs, cs) \ lexn r n" obtain abs a b as' bs' where n: "length as = n" and "length bs = n" and as: "as = abs @ a # as'" and bs: "bs = abs @ b # bs'" and abr: "(a, b) \ r" using asbs unfolding lexn_conv by blast obtain bcs b' c' cs' bs' where n': "length cs = n" and "length bs = n" and bs': "bs = bcs @ b' # bs'" and cs: "cs = bcs @ c' # cs'" and b'c'r: "(b', c') \ r" using bscs unfolding lexn_conv by blast consider (le) "length bcs < length abs" | (eq) "length bcs = length abs" | (ge) "length bcs > length abs" by linarith thus "(as, cs) \ lexn r n" proof cases let ?k = "length bcs" case le hence "as ! ?k = bs ! ?k" unfolding as bs by (simp add: nth_append) hence "(as ! ?k, cs ! ?k) \ r" using b'c'r unfolding bs' cs by auto moreover have "length bcs < length as" using le unfolding as by simp from id_take_nth_drop[OF this] have "as = take ?k as @ as ! ?k # drop (Suc ?k) as" . moreover have "length bcs < length cs" unfolding cs by simp from id_take_nth_drop[OF this] have "cs = take ?k cs @ cs ! ?k # drop (Suc ?k) cs" . moreover have "take ?k as = take ?k cs" using le arg_cong[OF bs, of "take (length bcs)"] unfolding cs as bs' by auto ultimately show ?thesis using n n' unfolding lexn_conv by auto next let ?k = "length abs" case ge hence "bs ! ?k = cs ! ?k" unfolding bs' cs by (simp add: nth_append) hence "(as ! ?k, cs ! ?k) \ r" using abr unfolding as bs by auto moreover have "length abs < length as" using ge unfolding as by simp from id_take_nth_drop[OF this] have "as = take ?k as @ as ! ?k # drop (Suc ?k) as" . 
moreover have "length abs < length cs" using n n' unfolding as by simp from id_take_nth_drop[OF this] have "cs = take ?k cs @ cs ! ?k # drop (Suc ?k) cs" . moreover have "take ?k as = take ?k cs" using ge arg_cong[OF bs', of "take (length abs)"] unfolding cs as bs by auto ultimately show ?thesis using n n' unfolding lexn_conv by auto next let ?k = "length abs" case eq hence *: "abs = bcs" "b = b'" using bs bs' by auto hence "(a, c') \ r" using abr b'c'r assms unfolding trans_def by blast with * show ?thesis using n n' unfolding lexn_conv as bs cs by auto qed qed corollary lex_transI: assumes "trans r" shows "trans (lex r)" using lexn_transI [OF assms] by (clarsimp simp add: lex_def trans_def) (metis lexn_length) lemma lex_conv: "lex r = {(xs,ys). length xs = length ys \ (\xys x y xs' ys'. xs = xys @ x # xs' \ ys = xys @ y # ys' \ (x, y) \ r)}" by (force simp add: lex_def lexn_conv) lemma wf_lenlex [intro!]: "wf r \ wf (lenlex r)" by (unfold lenlex_def) blast lemma lenlex_conv: "lenlex r = {(xs,ys). 
length xs < length ys \ length xs = length ys \ (xs, ys) \ lex r}" by (auto simp add: lenlex_def Id_on_def lex_prod_def inv_image_def) lemma total_lenlex: assumes "total r" shows "total (lenlex r)" proof - have "(xs,ys) \ lexn r (length xs) \ (ys,xs) \ lexn r (length xs)" if "xs \ ys" and len: "length xs = length ys" for xs ys proof - obtain pre x xs' y ys' where "x\y" and xs: "xs = pre @ [x] @ xs'" and ys: "ys = pre @ [y] @ys'" by (meson len \xs \ ys\ same_length_different) then consider "(x,y) \ r" | "(y,x) \ r" by (meson UNIV_I assms total_on_def) then show ?thesis by cases (use len in \(force simp add: lexn_conv xs ys)+\) qed then show ?thesis by (fastforce simp: lenlex_def total_on_def lex_def) qed lemma lenlex_transI [intro]: "trans r \ trans (lenlex r)" unfolding lenlex_def by (meson lex_transI trans_inv_image trans_less_than trans_lex_prod) lemma Nil_notin_lex [iff]: "([], ys) \ lex r" by (simp add: lex_conv) lemma Nil2_notin_lex [iff]: "(xs, []) \ lex r" by (simp add:lex_conv) lemma Cons_in_lex [simp]: "(x # xs, y # ys) \ lex r \ (x, y) \ r \ length xs = length ys \ x = y \ (xs, ys) \ lex r" (is "?lhs = ?rhs") proof assume ?lhs then show ?rhs by (simp add: lex_conv) (metis hd_append list.sel(1) list.sel(3) tl_append2) next assume ?rhs then show ?lhs by (simp add: lex_conv) (blast intro: Cons_eq_appendI) qed lemma Nil_lenlex_iff1 [simp]: "([], ns) \ lenlex r \ ns \ []" and Nil_lenlex_iff2 [simp]: "(ns,[]) \ lenlex r" by (auto simp: lenlex_def) lemma Cons_lenlex_iff: "((m # ms, n # ns) \ lenlex r) \ length ms < length ns \ length ms = length ns \ (m,n) \ r \ (m = n \ (ms,ns) \ lenlex r)" by (auto simp: lenlex_def) lemma lenlex_irreflexive: "(\x. 
(x,x) \ r) \ (xs,xs) \ lenlex r" by (induction xs) (auto simp add: Cons_lenlex_iff) lemma lenlex_trans: "\(x,y) \ lenlex r; (y,z) \ lenlex r; trans r\ \ (x,z) \ lenlex r" by (meson lenlex_transI transD) lemma lenlex_length: "(ms, ns) \ lenlex r \ length ms \ length ns" by (auto simp: lenlex_def) lemma lex_append_rightI: "(xs, ys) \ lex r \ length vs = length us \ (xs @ us, ys @ vs) \ lex r" by (fastforce simp: lex_def lexn_conv) lemma lex_append_leftI: "(ys, zs) \ lex r \ (xs @ ys, xs @ zs) \ lex r" by (induct xs) auto lemma lex_append_leftD: "\x. (x,x) \ r \ (xs @ ys, xs @ zs) \ lex r \ (ys, zs) \ lex r" by (induct xs) auto lemma lex_append_left_iff: "\x. (x,x) \ r \ (xs @ ys, xs @ zs) \ lex r \ (ys, zs) \ lex r" by(metis lex_append_leftD lex_append_leftI) lemma lex_take_index: assumes "(xs, ys) \ lex r" obtains i where "i < length xs" and "i < length ys" and "take i xs = take i ys" and "(xs ! i, ys ! i) \ r" proof - obtain n us x xs' y ys' where "(xs, ys) \ lexn r n" and "length xs = n" and "length ys = n" and "xs = us @ x # xs'" and "ys = us @ y # ys'" and "(x, y) \ r" using assms by (fastforce simp: lex_def lexn_conv) then show ?thesis by (intro that [of "length us"]) auto qed lemma irrefl_lex: "irrefl r \ irrefl (lex r)" by (meson irrefl_def lex_take_index) lemma lexl_not_refl [simp]: "irrefl r \ (x,x) \ lex r" by (meson irrefl_def lex_take_index) subsubsection \Lexicographic Ordering\ text \Classical lexicographic ordering on lists, i.e. "a" < "ab" < "b". This ordering does \emph{not} preserve well-foundedness. Author: N. Voelker, March 2005.\ definition lexord :: "('a \ 'a) set \ ('a list \ 'a list) set" where "lexord r = {(x,y). \ a v. y = x @ a # v \ (\ u a b v w. (a,b) \ r \ x = u @ (a # v) \ y = u @ (b # w))}" lemma lexord_Nil_left[simp]: "([],y) \ lexord r = (\ a x.
y = a # x)" by (unfold lexord_def, induct_tac y, auto) lemma lexord_Nil_right[simp]: "(x,[]) \ lexord r" by (unfold lexord_def, induct_tac x, auto) lemma lexord_cons_cons[simp]: "(a # x, b # y) \ lexord r \ (a,b)\ r \ (a = b \ (x,y)\ lexord r)" (is "?lhs = ?rhs") proof assume ?lhs then show ?rhs apply (simp add: lexord_def) apply (metis hd_append list.sel(1) list.sel(3) tl_append2) done qed (auto simp add: lexord_def; (blast | meson Cons_eq_appendI)) lemmas lexord_simps = lexord_Nil_left lexord_Nil_right lexord_cons_cons lemma lexord_same_pref_iff: "(xs @ ys, xs @ zs) \ lexord r \ (\x \ set xs. (x,x) \ r) \ (ys, zs) \ lexord r" by(induction xs) auto lemma lexord_same_pref_if_irrefl[simp]: "irrefl r \ (xs @ ys, xs @ zs) \ lexord r \ (ys, zs) \ lexord r" by (simp add: irrefl_def lexord_same_pref_iff) lemma lexord_append_rightI: "\ b z. y = b # z \ (x, x @ y) \ lexord r" by (metis append_Nil2 lexord_Nil_left lexord_same_pref_iff) lemma lexord_append_left_rightI: "(a,b) \ r \ (u @ a # x, u @ b # y) \ lexord r" by (simp add: lexord_same_pref_iff) lemma lexord_append_leftI: "(u,v) \ lexord r \ (x @ u, x @ v) \ lexord r" by (simp add: lexord_same_pref_iff) lemma lexord_append_leftD: "\(x @ u, x @ v) \ lexord r; (\a. (a,a) \ r) \ \ (u,v) \ lexord r" by (simp add: lexord_same_pref_iff) lemma lexord_take_index_conv: "((x,y) \ lexord r) = ((length x < length y \ take (length x) y = x) \ (\i. i < min(length x)(length y) \ take i x = take i y \ (x!i,y!i) \ r))" proof - have "(\a v. y = x @ a # v) = (length x < length y \ take (length x) y = x)" by (metis Cons_nth_drop_Suc append_eq_conv_conj drop_all list.simps(3) not_le) moreover have "(\u a b. (a, b) \ r \ (\v. x = u @ a # v) \ (\w. y = u @ b # w)) = (\i take i x = take i y \ (x ! i, y ! 
i) \ r)" apply safe using less_iff_Suc_add apply auto[1] by (metis id_take_nth_drop) ultimately show ?thesis by (auto simp: lexord_def Let_def) qed \ \lexord is extension of partial ordering List.lex\ lemma lexord_lex: "(x,y) \ lex r = ((x,y) \ lexord r \ length x = length y)" proof (induction x arbitrary: y) case (Cons a x y) then show ?case by (cases y) (force+) qed auto lemma lexord_sufI: assumes "(u,w) \ lexord r" "length w \ length u" shows "(u@v,w@z) \ lexord r" proof- from leD[OF assms(2)] assms(1)[unfolded lexord_take_index_conv[of u w r] min_absorb2[OF assms(2)]] obtain i where "take i u = take i w" and "(u!i,w!i) \ r" and "i < length w" by blast hence "((u@v)!i, (w@z)!i) \ r" unfolding nth_append using less_le_trans[OF \i < length w\ assms(2)] \(u!i,w!i) \ r\ by presburger moreover have "i < min (length (u@v)) (length (w@z))" using assms(2) \i < length w\ by simp moreover have "take i (u@v) = take i (w@z)" using assms(2) \i < length w\ \take i u = take i w\ by simp ultimately show ?thesis using lexord_take_index_conv by blast qed lemma lexord_sufE: assumes "(xs@zs,ys@qs) \ lexord r" "xs \ ys" "length xs = length ys" "length zs = length qs" shows "(xs,ys) \ lexord r" proof- obtain i where "i < length (xs@zs)" and "i < length (ys@qs)" and "take i (xs@zs) = take i (ys@qs)" and "((xs@zs) ! i, (ys@qs) ! i) \ r" using assms(1) lex_take_index[unfolded lexord_lex,of "xs @ zs" "ys @ qs" r] length_append[of xs zs, unfolded assms(3,4), folded length_append[of ys qs]] by blast have "length (take i xs) = length (take i ys)" by (simp add: assms(3)) have "i < length xs" using assms(2,3) le_less_linear take_all[of xs i] take_all[of ys i] \take i (xs @ zs) = take i (ys @ qs)\ append_eq_append_conv take_append by metis hence "(xs ! i, ys ! i) \ r" using \((xs @ zs) ! i, (ys @ qs) ! 
i) \ r\ assms(3) by (simp add: nth_append) moreover have "take i xs = take i ys" using assms(3) \take i (xs @ zs) = take i (ys @ qs)\ by auto ultimately show ?thesis unfolding lexord_take_index_conv using \i < length xs\ assms(3) by fastforce qed lemma lexord_irreflexive: "\x. (x,x) \ r \ (xs,xs) \ lexord r" by (induct xs) auto text\By Ren\'e Thiemann:\ lemma lexord_partial_trans: "(\x y z. x \ set xs \ (x,y) \ r \ (y,z) \ r \ (x,z) \ r) \ (xs,ys) \ lexord r \ (ys,zs) \ lexord r \ (xs,zs) \ lexord r" proof (induct xs arbitrary: ys zs) case Nil from Nil(3) show ?case unfolding lexord_def by (cases zs, auto) next case (Cons x xs yys zzs) from Cons(3) obtain y ys where yys: "yys = y # ys" unfolding lexord_def by (cases yys, auto) note Cons = Cons[unfolded yys] from Cons(3) have one: "(x,y) \ r \ x = y \ (xs,ys) \ lexord r" by auto from Cons(4) obtain z zs where zzs: "zzs = z # zs" unfolding lexord_def by (cases zzs, auto) note Cons = Cons[unfolded zzs] from Cons(4) have two: "(y,z) \ r \ y = z \ (ys,zs) \ lexord r" by auto { assume "(xs,ys) \ lexord r" and "(ys,zs) \ lexord r" from Cons(1)[OF _ this] Cons(2) have "(xs,zs) \ lexord r" by auto } note ind1 = this { assume "(x,y) \ r" and "(y,z) \ r" from Cons(2)[OF _ this] have "(x,z) \ r" by auto } note ind2 = this from one two ind1 ind2 have "(x,z) \ r \ x = z \ (xs,zs) \ lexord r" by blast thus ?case unfolding zzs by auto qed lemma lexord_trans: "\ (x, y) \ lexord r; (y, z) \ lexord r; trans r \ \ (x, z) \ lexord r" by(auto simp: trans_def intro:lexord_partial_trans) lemma lexord_transI: "trans r \ trans (lexord r)" by (meson lexord_trans transI) lemma total_lexord: "total r \ total (lexord r)" unfolding total_on_def proof clarsimp fix x y assume "\x y. 
x \ y \ (x, y) \ r \ (y, x) \ r" and "(x::'a list) \ y" and "(y, x) \ lexord r" then show "(x, y) \ lexord r" proof (induction x arbitrary: y) case Nil then show ?case by (metis lexord_Nil_left list.exhaust) next case (Cons a x y) then show ?case by (cases y) (force+) qed qed corollary lexord_linear: "(\a b. (a,b) \ r \ a = b \ (b,a) \ r) \ (x,y) \ lexord r \ x = y \ (y,x) \ lexord r" using total_lexord by (metis UNIV_I total_on_def) lemma lexord_irrefl: "irrefl R \ irrefl (lexord R)" by (simp add: irrefl_def lexord_irreflexive) lemma lexord_asym: assumes "asym R" shows "asym (lexord R)" proof fix xs ys assume "(xs, ys) \ lexord R" then show "(ys, xs) \ lexord R" proof (induct xs arbitrary: ys) case Nil then show ?case by simp next case (Cons x xs) then obtain z zs where ys: "ys = z # zs" by (cases ys) auto with assms Cons show ?case by (auto elim: asym.cases) qed qed lemma lexord_asymmetric: assumes "asym R" assumes hyp: "(a, b) \ lexord R" shows "(b, a) \ lexord R" proof - from \asym R\ have "asym (lexord R)" by (rule lexord_asym) then show ?thesis by (rule asym.cases) (auto simp add: hyp) qed lemma asym_lex: "asym R \ asym (lex R)" by (meson asym.simps irrefl_lex lexord_asym lexord_lex) lemma asym_lenlex: "asym R \ asym (lenlex R)" by (simp add: lenlex_def asym_inv_image asym_less_than asym_lex asym_lex_prod) lemma lenlex_append1: assumes len: "(us,xs) \ lenlex R" and eq: "length vs = length ys" shows "(us @ vs, xs @ ys) \ lenlex R" using len proof (induction us) case Nil then show ?case by (simp add: lenlex_def eq) next case (Cons u us) with lex_append_rightI show ?case by (fastforce simp add: lenlex_def eq) qed lemma lenlex_append2 [simp]: assumes "irrefl R" shows "(us @ xs, us @ ys) \ lenlex R \ (xs, ys) \ lenlex R" proof (induction us) case Nil then show ?case by (simp add: lenlex_def) next case (Cons u us) with assms show ?case by (auto simp: lenlex_def irrefl_def) qed text \ Predicate version of lexicographic order integrated with Isabelle's order type 
classes. Author: Andreas Lochbihler \ context ord begin context notes [[inductive_internals]] begin inductive lexordp :: "'a list \ 'a list \ bool" where Nil: "lexordp [] (y # ys)" | Cons: "x < y \ lexordp (x # xs) (y # ys)" | Cons_eq: "\ \ x < y; \ y < x; lexordp xs ys \ \ lexordp (x # xs) (y # ys)" end lemma lexordp_simps [simp]: "lexordp [] ys = (ys \ [])" "lexordp xs [] = False" "lexordp (x # xs) (y # ys) \ x < y \ \ y < x \ lexordp xs ys" by(subst lexordp.simps, fastforce simp add: neq_Nil_conv)+ inductive lexordp_eq :: "'a list \ 'a list \ bool" where Nil: "lexordp_eq [] ys" | Cons: "x < y \ lexordp_eq (x # xs) (y # ys)" | Cons_eq: "\ \ x < y; \ y < x; lexordp_eq xs ys \ \ lexordp_eq (x # xs) (y # ys)" lemma lexordp_eq_simps [simp]: "lexordp_eq [] ys = True" "lexordp_eq xs [] \ xs = []" "lexordp_eq (x # xs) [] = False" "lexordp_eq (x # xs) (y # ys) \ x < y \ \ y < x \ lexordp_eq xs ys" by(subst lexordp_eq.simps, fastforce)+ lemma lexordp_append_rightI: "ys \ Nil \ lexordp xs (xs @ ys)" by(induct xs)(auto simp add: neq_Nil_conv) lemma lexordp_append_left_rightI: "x < y \ lexordp (us @ x # xs) (us @ y # ys)" by(induct us) auto lemma lexordp_eq_refl: "lexordp_eq xs xs" by(induct xs) simp_all lemma lexordp_append_leftI: "lexordp us vs \ lexordp (xs @ us) (xs @ vs)" by(induct xs) auto lemma lexordp_append_leftD: "\ lexordp (xs @ us) (xs @ vs); \a. \ a < a \ \ lexordp us vs" by(induct xs) auto lemma lexordp_irreflexive: assumes irrefl: "\x. 
\ x < x" shows "\ lexordp xs xs" proof assume "lexordp xs xs" thus False by(induct xs ys\xs)(simp_all add: irrefl) qed lemma lexordp_into_lexordp_eq: "lexordp xs ys \ lexordp_eq xs ys" by (induction rule: lexordp.induct) simp_all lemma lexordp_eq_pref: "lexordp_eq u (u @ v)" by (metis append_Nil2 lexordp_append_rightI lexordp_eq_refl lexordp_into_lexordp_eq) end declare ord.lexordp_simps [simp, code] declare ord.lexordp_eq_simps [code, simp] context order begin lemma lexordp_antisym: assumes "lexordp xs ys" "lexordp ys xs" shows False using assms by induct auto lemma lexordp_irreflexive': "\ lexordp xs xs" by(rule lexordp_irreflexive) simp end context linorder begin lemma lexordp_cases [consumes 1, case_names Nil Cons Cons_eq, cases pred: lexordp]: assumes "lexordp xs ys" obtains (Nil) y ys' where "xs = []" "ys = y # ys'" | (Cons) x xs' y ys' where "xs = x # xs'" "ys = y # ys'" "x < y" | (Cons_eq) x xs' ys' where "xs = x # xs'" "ys = x # ys'" "lexordp xs' ys'" using assms by cases (fastforce simp add: not_less_iff_gr_or_eq)+ lemma lexordp_induct [consumes 1, case_names Nil Cons Cons_eq, induct pred: lexordp]: assumes major: "lexordp xs ys" and Nil: "\y ys. P [] (y # ys)" and Cons: "\x xs y ys. x < y \ P (x # xs) (y # ys)" and Cons_eq: "\x xs ys. \ lexordp xs ys; P xs ys \ \ P (x # xs) (x # ys)" shows "P xs ys" using major by induct (simp_all add: Nil Cons not_less_iff_gr_or_eq Cons_eq) lemma lexordp_iff: "lexordp xs ys \ (\x vs. ys = xs @ x # vs) \ (\us a b vs ws. a < b \ xs = us @ a # vs \ ys = us @ b # ws)" (is "?lhs = ?rhs") proof assume ?lhs thus ?rhs proof induct case Cons_eq thus ?case by simp (metis append.simps(2)) qed(fastforce intro: disjI2 del: disjCI intro: exI[where x="[]"])+ next assume ?rhs thus ?lhs by(auto intro: lexordp_append_leftI[where us="[]", simplified] lexordp_append_leftI) qed lemma lexordp_conv_lexord: "lexordp xs ys \ (xs, ys) \ lexord {(x, y). 
x < y}" by(simp add: lexordp_iff lexord_def) lemma lexordp_eq_antisym: assumes "lexordp_eq xs ys" "lexordp_eq ys xs" shows "xs = ys" using assms by induct simp_all lemma lexordp_eq_trans: assumes "lexordp_eq xs ys" and "lexordp_eq ys zs" shows "lexordp_eq xs zs" using assms by (induct arbitrary: zs) (case_tac zs; auto)+ lemma lexordp_trans: assumes "lexordp xs ys" "lexordp ys zs" shows "lexordp xs zs" using assms by (induct arbitrary: zs) (case_tac zs; auto)+ lemma lexordp_linear: "lexordp xs ys \ xs = ys \ lexordp ys xs" by(induct xs arbitrary: ys; case_tac ys; fastforce) lemma lexordp_conv_lexordp_eq: "lexordp xs ys \ lexordp_eq xs ys \ \ lexordp_eq ys xs" (is "?lhs \ ?rhs") proof assume ?lhs hence "\ lexordp_eq ys xs" by induct simp_all with \?lhs\ show ?rhs by (simp add: lexordp_into_lexordp_eq) next assume ?rhs hence "lexordp_eq xs ys" "\ lexordp_eq ys xs" by simp_all thus ?lhs by induct simp_all qed lemma lexordp_eq_conv_lexord: "lexordp_eq xs ys \ xs = ys \ lexordp xs ys" by(auto simp add: lexordp_conv_lexordp_eq lexordp_eq_refl dest: lexordp_eq_antisym) lemma lexordp_eq_linear: "lexordp_eq xs ys \ lexordp_eq ys xs" by (induct xs arbitrary: ys) (case_tac ys; auto)+ lemma lexordp_linorder: "class.linorder lexordp_eq lexordp" by unfold_locales (auto simp add: lexordp_conv_lexordp_eq lexordp_eq_refl lexordp_eq_antisym intro: lexordp_eq_trans del: disjCI intro: lexordp_eq_linear) end lemma sorted_insort_is_snoc: "sorted xs \ \x \ set xs. a \ x \ insort a xs = xs @ [a]" by (induct xs) (auto dest!: insort_is_Cons) subsubsection \Lexicographic combination of measure functions\ text \These are useful for termination proofs\ definition "measures fs = inv_image (lex less_than) (%a. map (%f. 
f a) fs)" lemma wf_measures[simp]: "wf (measures fs)" unfolding measures_def by blast lemma in_measures[simp]: "(x, y) \ measures [] = False" "(x, y) \ measures (f # fs) = (f x < f y \ (f x = f y \ (x, y) \ measures fs))" unfolding measures_def by auto lemma measures_less: "f x < f y \ (x, y) \ measures (f#fs)" by simp lemma measures_lesseq: "f x \ f y \ (x, y) \ measures fs \ (x, y) \ measures (f#fs)" by auto subsubsection \Lifting Relations to Lists: one element\ definition listrel1 :: "('a \ 'a) set \ ('a list \ 'a list) set" where "listrel1 r = {(xs,ys). \us z z' vs. xs = us @ z # vs \ (z,z') \ r \ ys = us @ z' # vs}" lemma listrel1I: "\ (x, y) \ r; xs = us @ x # vs; ys = us @ y # vs \ \ (xs, ys) \ listrel1 r" unfolding listrel1_def by auto lemma listrel1E: "\ (xs, ys) \ listrel1 r; !!x y us vs. \ (x, y) \ r; xs = us @ x # vs; ys = us @ y # vs \ \ P \ \ P" unfolding listrel1_def by auto lemma not_Nil_listrel1 [iff]: "([], xs) \ listrel1 r" unfolding listrel1_def by blast lemma not_listrel1_Nil [iff]: "(xs, []) \ listrel1 r" unfolding listrel1_def by blast lemma Cons_listrel1_Cons [iff]: "(x # xs, y # ys) \ listrel1 r \ (x,y) \ r \ xs = ys \ x = y \ (xs, ys) \ listrel1 r" by (simp add: listrel1_def Cons_eq_append_conv) (blast) lemma listrel1I1: "(x,y) \ r \ (x # xs, y # xs) \ listrel1 r" by fast lemma listrel1I2: "(xs, ys) \ listrel1 r \ (x # xs, x # ys) \ listrel1 r" by fast lemma append_listrel1I: "(xs, ys) \ listrel1 r \ us = vs \ xs = ys \ (us, vs) \ listrel1 r \ (xs @ us, ys @ vs) \ listrel1 r" unfolding listrel1_def by auto (blast intro: append_eq_appendI)+ lemma Cons_listrel1E1[elim!]: assumes "(x # xs, ys) \ listrel1 r" and "\y. ys = y # xs \ (x, y) \ r \ R" and "\zs. ys = x # zs \ (xs, zs) \ listrel1 r \ R" shows R using assms by (cases ys) blast+ lemma Cons_listrel1E2[elim!]: assumes "(xs, y # ys) \ listrel1 r" and "\x. xs = x # ys \ (x, y) \ r \ R" and "\zs. 
xs = y # zs \ (zs, ys) \ listrel1 r \ R" shows R using assms by (cases xs) blast+ lemma snoc_listrel1_snoc_iff: "(xs @ [x], ys @ [y]) \ listrel1 r \ (xs, ys) \ listrel1 r \ x = y \ xs = ys \ (x,y) \ r" (is "?L \ ?R") proof assume ?L thus ?R by (fastforce simp: listrel1_def snoc_eq_iff_butlast butlast_append) next assume ?R then show ?L unfolding listrel1_def by force qed lemma listrel1_eq_len: "(xs,ys) \ listrel1 r \ length xs = length ys" unfolding listrel1_def by auto lemma listrel1_mono: "r \ s \ listrel1 r \ listrel1 s" unfolding listrel1_def by blast lemma listrel1_converse: "listrel1 (r\) = (listrel1 r)\" unfolding listrel1_def by blast lemma in_listrel1_converse: "(x,y) \ listrel1 (r\) \ (x,y) \ (listrel1 r)\" unfolding listrel1_def by blast lemma listrel1_iff_update: "(xs,ys) \ (listrel1 r) \ (\y n. (xs ! n, y) \ r \ n < length xs \ ys = xs[n:=y])" (is "?L \ ?R") proof assume "?L" then obtain x y u v where "xs = u @ x # v" "ys = u @ y # v" "(x,y) \ r" unfolding listrel1_def by auto then have "ys = xs[length u := y]" and "length u < length xs" and "(xs ! 
length u, y) \ r" by auto then show "?R" by auto next assume "?R" then obtain x y n where "(xs!n, y) \ r" "n < size xs" "ys = xs[n:=y]" "x = xs!n" by auto then obtain u v where "xs = u @ x # v" and "ys = u @ y # v" and "(x, y) \ r" by (auto intro: upd_conv_take_nth_drop id_take_nth_drop) then show "?L" by (auto simp: listrel1_def) qed text\Accessible part and wellfoundedness:\ lemma Cons_acc_listrel1I [intro!]: "x \ Wellfounded.acc r \ xs \ Wellfounded.acc (listrel1 r) \ (x # xs) \ Wellfounded.acc (listrel1 r)" apply (induct arbitrary: xs set: Wellfounded.acc) apply (erule thin_rl) apply (erule acc_induct) apply (rule accI) apply (blast) done lemma lists_accD: "xs \ lists (Wellfounded.acc r) \ xs \ Wellfounded.acc (listrel1 r)" proof (induct set: lists) case Nil then show ?case by (meson acc.intros not_listrel1_Nil) next case (Cons a l) then show ?case by blast qed lemma lists_accI: "xs \ Wellfounded.acc (listrel1 r) \ xs \ lists (Wellfounded.acc r)" apply (induct set: Wellfounded.acc) apply clarify apply (rule accI) apply (fastforce dest!: in_set_conv_decomp[THEN iffD1] simp: listrel1_def) done lemma wf_listrel1_iff[simp]: "wf(listrel1 r) = wf r" by (auto simp: wf_acc_iff intro: lists_accD lists_accI[THEN Cons_in_lists_iff[THEN iffD1, THEN conjunct1]]) subsubsection \Lifting Relations to Lists: all elements\ inductive_set listrel :: "('a \ 'b) set \ ('a list \ 'b list) set" for r :: "('a \ 'b) set" where Nil: "([],[]) \ listrel r" | Cons: "\(x,y) \ r; (xs,ys) \ listrel r\ \ (x#xs, y#ys) \ listrel r" inductive_cases listrel_Nil1 [elim!]: "([],xs) \ listrel r" inductive_cases listrel_Nil2 [elim!]: "(xs,[]) \ listrel r" inductive_cases listrel_Cons1 [elim!]: "(y#ys,xs) \ listrel r" inductive_cases listrel_Cons2 [elim!]: "(xs,y#ys) \ listrel r" lemma listrel_eq_len: "(xs, ys) \ listrel r \ length xs = length ys" by(induct rule: listrel.induct) auto lemma listrel_iff_zip [code_unfold]: "(xs,ys) \ listrel r \ length xs = length ys \ (\(x,y) \ set(zip xs ys). 
(x,y) \ r)" (is "?L \ ?R") proof assume ?L thus ?R by induct (auto intro: listrel_eq_len) next assume ?R thus ?L apply (clarify) by (induct rule: list_induct2) (auto intro: listrel.intros) qed lemma listrel_iff_nth: "(xs,ys) \ listrel r \ length xs = length ys \ (\n < length xs. (xs!n, ys!n) \ r)" (is "?L \ ?R") by (auto simp add: all_set_conv_all_nth listrel_iff_zip) lemma listrel_mono: "r \ s \ listrel r \ listrel s" by (meson listrel_iff_nth subrelI subset_eq) lemma listrel_subset: assumes "r \ A \ A" shows "listrel r \ lists A \ lists A" proof clarify show "a \ lists A \ b \ lists A" if "(a, b) \ listrel r" for a b using that assms by (induction rule: listrel.induct, auto) qed lemma listrel_refl_on: assumes "refl_on A r" shows "refl_on (lists A) (listrel r)" proof - have "l \ lists A \ (l, l) \ listrel r" for l using assms unfolding refl_on_def by (induction l, auto intro: listrel.intros) then show ?thesis by (meson assms listrel_subset refl_on_def) qed lemma listrel_sym: "sym r \ sym (listrel r)" by (simp add: listrel_iff_nth sym_def) lemma listrel_trans: assumes "trans r" shows "trans (listrel r)" proof - have "(x, z) \ listrel r" if "(x, y) \ listrel r" "(y, z) \ listrel r" for x y z using that proof induction case (Cons x y xs ys) then show ?case by clarsimp (metis assms listrel.Cons listrel_iff_nth transD) qed auto then show ?thesis using transI by blast qed theorem equiv_listrel: "equiv A r \ equiv (lists A) (listrel r)" by (simp add: equiv_def listrel_refl_on listrel_sym listrel_trans) lemma listrel_rtrancl_refl[iff]: "(xs,xs) \ listrel(r\<^sup>*)" using listrel_refl_on[of UNIV, OF refl_rtrancl] by(auto simp: refl_on_def) lemma listrel_rtrancl_trans: "\(xs,ys) \ listrel(r\<^sup>*); (ys,zs) \ listrel(r\<^sup>*)\ \ (xs,zs) \ listrel(r\<^sup>*)" by (metis listrel_trans trans_def trans_rtrancl) lemma listrel_Nil [simp]: "listrel r `` {[]} = {[]}" by (blast intro: listrel.intros) lemma listrel_Cons: "listrel r `` {x#xs} = set_Cons (r``{x}) (listrel r `` 
{xs})" by (auto simp add: set_Cons_def intro: listrel.intros) text \Relating \<^term>\listrel1\, \<^term>\listrel\ and closures:\ lemma listrel1_rtrancl_subset_rtrancl_listrel1: "listrel1 (r\<^sup>*) \ (listrel1 r)\<^sup>*" proof (rule subrelI) fix xs ys assume 1: "(xs,ys) \ listrel1 (r\<^sup>*)" { fix x y us vs have "(x,y) \ r\<^sup>* \ (us @ x # vs, us @ y # vs) \ (listrel1 r)\<^sup>*" proof(induct rule: rtrancl.induct) case rtrancl_refl show ?case by simp next case rtrancl_into_rtrancl thus ?case by (metis listrel1I rtrancl.rtrancl_into_rtrancl) qed } thus "(xs,ys) \ (listrel1 r)\<^sup>*" using 1 by(blast elim: listrel1E) qed lemma rtrancl_listrel1_eq_len: "(x,y) \ (listrel1 r)\<^sup>* \ length x = length y" by (induct rule: rtrancl.induct) (auto intro: listrel1_eq_len) lemma rtrancl_listrel1_ConsI1: "(xs,ys) \ (listrel1 r)\<^sup>* \ (x#xs,x#ys) \ (listrel1 r)\<^sup>*" proof (induction rule: rtrancl.induct) case (rtrancl_into_rtrancl a b c) then show ?case by (metis listrel1I2 rtrancl.rtrancl_into_rtrancl) qed auto lemma rtrancl_listrel1_ConsI2: "(x,y) \ r\<^sup>* \ (xs, ys) \ (listrel1 r)\<^sup>* \ (x # xs, y # ys) \ (listrel1 r)\<^sup>*" by (meson in_mono listrel1I1 listrel1_rtrancl_subset_rtrancl_listrel1 rtrancl_listrel1_ConsI1 rtrancl_trans) lemma listrel1_subset_listrel: "r \ r' \ refl r' \ listrel1 r \ listrel(r')" by(auto elim!: listrel1E simp add: listrel_iff_zip set_zip refl_on_def) lemma listrel_reflcl_if_listrel1: "(xs,ys) \ listrel1 r \ (xs,ys) \ listrel(r\<^sup>*)" by(erule listrel1E)(auto simp add: listrel_iff_zip set_zip) lemma listrel_rtrancl_eq_rtrancl_listrel1: "listrel (r\<^sup>*) = (listrel1 r)\<^sup>*" proof { fix x y assume "(x,y) \ listrel (r\<^sup>*)" then have "(x,y) \ (listrel1 r)\<^sup>*" by induct (auto intro: rtrancl_listrel1_ConsI2) } then show "listrel (r\<^sup>*) \ (listrel1 r)\<^sup>*" by (rule subrelI) next show "listrel (r\<^sup>*) \ (listrel1 r)\<^sup>*" proof(rule subrelI) fix xs ys assume "(xs,ys) \ (listrel1 r)\<^sup>*" 
then show "(xs,ys) \ listrel (r\<^sup>*)" proof induct case base show ?case by(auto simp add: listrel_iff_zip set_zip) next case (step ys zs) thus ?case by (metis listrel_reflcl_if_listrel1 listrel_rtrancl_trans) qed qed qed lemma rtrancl_listrel1_if_listrel: "(xs,ys) \ listrel r \ (xs,ys) \ (listrel1 r)\<^sup>*" by(metis listrel_rtrancl_eq_rtrancl_listrel1 subsetD[OF listrel_mono] r_into_rtrancl subsetI) lemma listrel_subset_rtrancl_listrel1: "listrel r \ (listrel1 r)\<^sup>*" by(fast intro:rtrancl_listrel1_if_listrel) subsection \Size function\ lemma [measure_function]: "is_measure f \ is_measure (size_list f)" by (rule is_measure_trivial) lemma [measure_function]: "is_measure f \ is_measure (size_option f)" by (rule is_measure_trivial) lemma size_list_estimation[termination_simp]: "x \ set xs \ y < f x \ y < size_list f xs" by (induct xs) auto lemma size_list_estimation'[termination_simp]: "x \ set xs \ y \ f x \ y \ size_list f xs" by (induct xs) auto lemma size_list_map[simp]: "size_list f (map g xs) = size_list (f \ g) xs" by (induct xs) auto lemma size_list_append[simp]: "size_list f (xs @ ys) = size_list f xs + size_list f ys" by (induct xs, auto) lemma size_list_pointwise[termination_simp]: "(\x. x \ set xs \ f x \ g x) \ size_list f xs \ size_list g xs" by (induct xs) force+ subsection \Monad operation\ definition bind :: "'a list \ ('a \ 'b list) \ 'b list" where "bind xs f = concat (map f xs)" hide_const (open) bind lemma bind_simps [simp]: "List.bind [] f = []" "List.bind (x # xs) f = f x @ List.bind xs f" by (simp_all add: bind_def) lemma list_bind_cong [fundef_cong]: assumes "xs = ys" "(\x. x \ set xs \ f x = g x)" shows "List.bind xs f = List.bind ys g" proof - from assms(2) have "List.bind xs f = List.bind xs g" by (induction xs) simp_all with assms(1) show ?thesis by simp qed lemma set_list_bind: "set (List.bind xs f) = (\x\set xs. 
set (f x))" by (induction xs) simp_all subsection \Code generation\ text\Optional tail recursive version of \<^const>\map\. Can avoid stack overflow in some target languages.\ fun map_tailrec_rev :: "('a \ 'b) \ 'a list \ 'b list \ 'b list" where "map_tailrec_rev f [] bs = bs" | "map_tailrec_rev f (a#as) bs = map_tailrec_rev f as (f a # bs)" lemma map_tailrec_rev: "map_tailrec_rev f as bs = rev(map f as) @ bs" by(induction as arbitrary: bs) simp_all definition map_tailrec :: "('a \ 'b) \ 'a list \ 'b list" where "map_tailrec f as = rev (map_tailrec_rev f as [])" text\Code equation:\ lemma map_eq_map_tailrec: "map = map_tailrec" by(simp add: fun_eq_iff map_tailrec_def map_tailrec_rev) subsubsection \Counterparts for set-related operations\ definition member :: "'a list \ 'a \ bool" where [code_abbrev]: "member xs x \ x \ set xs" text \ Use \member\ only for generating executable code. Otherwise use \<^prop>\x \ set xs\ instead --- it is much easier to reason about. \ lemma member_rec [code]: "member (x # xs) y \ x = y \ member xs y" "member [] y \ False" by (auto simp add: member_def) lemma in_set_member (* FIXME delete candidate *): "x \ set xs \ member xs x" by (simp add: member_def) lemmas list_all_iff [code_abbrev] = fun_cong[OF list.pred_set] definition list_ex :: "('a \ bool) \ 'a list \ bool" where list_ex_iff [code_abbrev]: "list_ex P xs \ Bex (set xs) P" definition list_ex1 :: "('a \ bool) \ 'a list \ bool" where list_ex1_iff [code_abbrev]: "list_ex1 P xs \ (\! x. x \ set xs \ P x)" text \ Usually you should prefer \\x\set xs\, \\x\set xs\ and \\!x. x\set xs \ _\ over \<^const>\list_all\, \<^const>\list_ex\ and \<^const>\list_ex1\ in specifications. 
\ lemma list_all_simps [code]: "list_all P (x # xs) \ P x \ list_all P xs" "list_all P [] \ True" by (simp_all add: list_all_iff) lemma list_ex_simps [simp, code]: "list_ex P (x # xs) \ P x \ list_ex P xs" "list_ex P [] \ False" by (simp_all add: list_ex_iff) lemma list_ex1_simps [simp, code]: "list_ex1 P [] = False" "list_ex1 P (x # xs) = (if P x then list_all (\y. \ P y \ x = y) xs else list_ex1 P xs)" by (auto simp add: list_ex1_iff list_all_iff) lemma Ball_set_list_all: (* FIXME delete candidate *) "Ball (set xs) P \ list_all P xs" by (simp add: list_all_iff) lemma Bex_set_list_ex: (* FIXME delete candidate *) "Bex (set xs) P \ list_ex P xs" by (simp add: list_ex_iff) lemma list_all_append [simp]: "list_all P (xs @ ys) \ list_all P xs \ list_all P ys" by (auto simp add: list_all_iff) lemma list_ex_append [simp]: "list_ex P (xs @ ys) \ list_ex P xs \ list_ex P ys" by (auto simp add: list_ex_iff) lemma list_all_rev [simp]: "list_all P (rev xs) \ list_all P xs" by (simp add: list_all_iff) lemma list_ex_rev [simp]: "list_ex P (rev xs) \ list_ex P xs" by (simp add: list_ex_iff) lemma list_all_length: "list_all P xs \ (\n < length xs. P (xs ! n))" by (auto simp add: list_all_iff set_conv_nth) lemma list_ex_length: "list_ex P xs \ (\n < length xs. P (xs ! n))" by (auto simp add: list_ex_iff set_conv_nth) lemmas list_all_cong [fundef_cong] = list.pred_cong lemma list_ex_cong [fundef_cong]: "xs = ys \ (\x. x \ set ys \ f x = g x) \ list_ex f xs = list_ex g ys" by (simp add: list_ex_iff) definition can_select :: "('a \ bool) \ 'a set \ bool" where [code_abbrev]: "can_select P A = (\!x\A. P x)" lemma can_select_set_list_ex1 [code]: "can_select P (set A) = list_ex1 P A" by (simp add: list_ex1_iff can_select_def) text \Executable checks for relations on sets\ definition listrel1p :: "('a \ 'a \ bool) \ 'a list \ 'a list \ bool" where "listrel1p r xs ys = ((xs, ys) \ listrel1 {(x, y). r x y})" lemma [code_unfold]: "(xs, ys) \ listrel1 r = listrel1p (\x y. 
(x, y) \ r) xs ys" unfolding listrel1p_def by auto lemma [code]: "listrel1p r [] xs = False" "listrel1p r xs [] = False" "listrel1p r (x # xs) (y # ys) \ r x y \ xs = ys \ x = y \ listrel1p r xs ys" by (simp add: listrel1p_def)+ definition lexordp :: "('a \ 'a \ bool) \ 'a list \ 'a list \ bool" where "lexordp r xs ys = ((xs, ys) \ lexord {(x, y). r x y})" lemma [code_unfold]: "(xs, ys) \ lexord r = lexordp (\x y. (x, y) \ r) xs ys" unfolding lexordp_def by auto lemma [code]: "lexordp r xs [] = False" "lexordp r [] (y#ys) = True" "lexordp r (x # xs) (y # ys) = (r x y \ (x = y \ lexordp r xs ys))" unfolding lexordp_def by auto text \Bounded quantification and summation over nats.\ lemma atMost_upto [code_unfold]: "{..n} = set [0..m (\m \ {0..m (\m \ {0..m\n::nat. P m) \ (\m \ {0..n}. P m)" by auto lemma ex_nat_less [code_unfold]: "(\m\n::nat. P m) \ (\m \ {0..n}. P m)" by auto text\Bounded \LEAST\ operator:\ definition "Bleast S P = (LEAST x. x \ S \ P x)" definition "abort_Bleast S P = (LEAST x. x \ S \ P x)" declare [[code abort: abort_Bleast]] lemma Bleast_code [code]: "Bleast (set xs) P = (case filter P (sort xs) of x#xs \ x | [] \ abort_Bleast (set xs) P)" proof (cases "filter P (sort xs)") case Nil thus ?thesis by (simp add: Bleast_def abort_Bleast_def) next case (Cons x ys) have "(LEAST x. x \ set xs \ P x) = x" proof (rule Least_equality) show "x \ set xs \ P x" by (metis Cons Cons_eq_filter_iff in_set_conv_decomp set_sort) next fix y assume "y \ set xs \ P y" hence "y \ set (filter P xs)" by auto thus "x \ y" by (metis Cons eq_iff filter_sort set_ConsD set_sort sorted_wrt.simps(2) sorted_sort) qed thus ?thesis using Cons by (simp add: Bleast_def) qed declare Bleast_def[symmetric, code_unfold] text \Summation over ints.\ lemma greaterThanLessThan_upto [code_unfold]: "{i<..Optimizing by rewriting\ definition null :: "'a list \ bool" where [code_abbrev]: "null xs \ xs = []" text \ An efficient emptiness check is implemented by \<^const>\null\.
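Operationally, the \Bleast_code\ equation above sorts the list, filters with \P\ and takes the head; the empty case falls through to the non-executable \abort_Bleast\ stub. A minimal Python sketch under these assumptions (hypothetical function name; an exception stands in for the abort stub):

```python
def bleast(xs, P):
    # Bleast (set xs) P = head of 'filter P (sort xs)':
    # sorting first guarantees the first survivor is the LEAST witness.
    candidates = [x for x in sorted(xs) if P(x)]
    if candidates:
        return candidates[0]
    # Corresponds to abort_Bleast: no element satisfies P, so the
    # LEAST operator is underspecified and execution must stop.
    raise ValueError("Bleast: no element of the set satisfies P")
```

The separate abort constant keeps the code equation total for the code generator while still failing loudly at run time when the operator is applied outside its specification.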
\ lemma null_rec [code]: "null (x # xs) \ False" "null [] \ True" by (simp_all add: null_def) lemma eq_Nil_null: (* FIXME delete candidate *) "xs = [] \ null xs" by (simp add: null_def) lemma equal_Nil_null [code_unfold]: "HOL.equal xs [] \ null xs" "HOL.equal [] = null" by (auto simp add: equal null_def) definition maps :: "('a \ 'b list) \ 'a list \ 'b list" where [code_abbrev]: "maps f xs = concat (map f xs)" definition map_filter :: "('a \ 'b option) \ 'a list \ 'b list" where [code_post]: "map_filter f xs = map (the \ f) (filter (\x. f x \ None) xs)" text \ Operations \<^const>\maps\ and \<^const>\map_filter\ avoid intermediate lists on execution -- do not use for proving. \ lemma maps_simps [code]: "maps f (x # xs) = f x @ maps f xs" "maps f [] = []" by (simp_all add: maps_def) lemma map_filter_simps [code]: "map_filter f (x # xs) = (case f x of None \ map_filter f xs | Some y \ y # map_filter f xs)" "map_filter f [] = []" by (simp_all add: map_filter_def split: option.split) lemma concat_map_maps: (* FIXME delete candidate *) "concat (map f xs) = maps f xs" by (simp add: maps_def) lemma map_filter_map_filter [code_unfold]: "map f (filter P xs) = map_filter (\x. if P x then Some (f x) else None) xs" by (simp add: map_filter_def) text \Optimized code for \\i\{a..b::int}\ and \\n:{a.. and similarly for \\\.\ definition all_interval_nat :: "(nat \ bool) \ nat \ nat \ bool" where "all_interval_nat P i j \ (\n \ {i.. i \ j \ P i \ all_interval_nat P (Suc i) j" proof - have *: "\n. P i \ \n\{Suc i.. i \ n \ n < j \ P n" proof - fix n assume "P i" "\n\{Suc i.. n" "n < j" then show "P n" by (cases "n = i") simp_all qed show ?thesis by (auto simp add: all_interval_nat_def intro: *) qed lemma list_all_iff_all_interval_nat [code_unfold]: "list_all P [i.. all_interval_nat P i j" by (simp add: list_all_iff all_interval_nat_def) lemma list_ex_iff_not_all_inverval_nat [code_unfold]: "list_ex P [i..
\ (all_interval_nat (Not \ P) i j)" by (simp add: list_ex_iff all_interval_nat_def) definition all_interval_int :: "(int \ bool) \ int \ int \ bool" where "all_interval_int P i j \ (\k \ {i..j}. P k)" lemma [code]: "all_interval_int P i j \ i > j \ P i \ all_interval_int P (i + 1) j" proof - have *: "\k. P i \ \k\{i+1..j}. P k \ i \ k \ k \ j \ P k" proof - fix k assume "P i" "\k\{i+1..j}. P k" "i \ k" "k \ j" then show "P k" by (cases "k = i") simp_all qed show ?thesis by (auto simp add: all_interval_int_def intro: *) qed lemma list_all_iff_all_interval_int [code_unfold]: "list_all P [i..j] \ all_interval_int P i j" by (simp add: list_all_iff all_interval_int_def) lemma list_ex_iff_not_all_inverval_int [code_unfold]: "list_ex P [i..j] \ \ (all_interval_int (Not \ P) i j)" by (simp add: list_ex_iff all_interval_int_def) text \optimized code (tail-recursive) for \<^term>\length\\ definition gen_length :: "nat \ 'a list \ nat" where "gen_length n xs = n + length xs" lemma gen_length_code [code]: "gen_length n [] = n" "gen_length n (x # xs) = gen_length (Suc n) xs" by(simp_all add: gen_length_def) declare list.size(3-4)[code del] lemma length_code [code]: "length = gen_length 0" by(simp add: gen_length_def fun_eq_iff) hide_const (open) member null maps map_filter all_interval_nat all_interval_int gen_length subsubsection \Pretty lists\ ML \ (* Code generation for list literals. *) signature LIST_CODE = sig val add_literal_list: string -> theory -> theory end; structure List_Code : LIST_CODE = struct open Basic_Code_Thingol; fun implode_list t = let fun dest_cons (IConst { sym = Code_Symbol.Constant \<^const_name>\Cons\, ... } `$ t1 `$ t2) = SOME (t1, t2) | dest_cons _ = NONE; val (ts, t') = Code_Thingol.unfoldr dest_cons t; in case t' of IConst { sym = Code_Symbol.Constant \<^const_name>\Nil\, ... 
} => SOME ts | _ => NONE end; fun print_list (target_fxy, target_cons) pr fxy t1 t2 = Code_Printer.brackify_infix (target_fxy, Code_Printer.R) fxy ( pr (Code_Printer.INFX (target_fxy, Code_Printer.X)) t1, Code_Printer.str target_cons, pr (Code_Printer.INFX (target_fxy, Code_Printer.R)) t2 ); fun add_literal_list target = let fun pretty literals pr _ vars fxy [(t1, _), (t2, _)] = case Option.map (cons t1) (implode_list t2) of SOME ts => Code_Printer.literal_list literals (map (pr vars Code_Printer.NOBR) ts) | NONE => print_list (Code_Printer.infix_cons literals) (pr vars) fxy t1 t2; in Code_Target.set_printings (Code_Symbol.Constant (\<^const_name>\Cons\, [(target, SOME (Code_Printer.complex_const_syntax (2, pretty)))])) end end; \ code_printing type_constructor list \ (SML) "_ list" and (OCaml) "_ list" and (Haskell) "![(_)]" and (Scala) "List[(_)]" | constant Nil \ (SML) "[]" and (OCaml) "[]" and (Haskell) "[]" and (Scala) "!Nil" | class_instance list :: equal \ (Haskell) - | constant "HOL.equal :: 'a list \ 'a list \ bool" \ (Haskell) infix 4 "==" setup \fold (List_Code.add_literal_list) ["SML", "OCaml", "Haskell", "Scala"]\ code_reserved SML list code_reserved OCaml list subsubsection \Use convenient predefined operations\ code_printing constant "(@)" \ (SML) infixr 7 "@" and (OCaml) infixr 6 "@" and (Haskell) infixr 5 "++" and (Scala) infixl 7 "++" | constant map \ (Haskell) "map" | constant filter \ (Haskell) "filter" | constant concat \ (Haskell) "concat" | constant List.maps \ (Haskell) "concatMap" | constant rev \ (Haskell) "reverse" | constant zip \ (Haskell) "zip" | constant List.null \ (Haskell) "null" | constant takeWhile \ (Haskell) "takeWhile" | constant dropWhile \ (Haskell) "dropWhile" | constant list_all \ (Haskell) "all" | constant list_ex \ (Haskell) "any" subsubsection \Implementation of sets by lists\ lemma is_empty_set [code]: "Set.is_empty (set xs) \ List.null xs" by (simp add: Set.is_empty_def null_def) lemma empty_set [code]: "{} = set []" 
by simp lemma UNIV_coset [code]: "UNIV = List.coset []" by simp lemma compl_set [code]: "- set xs = List.coset xs" by simp lemma compl_coset [code]: "- List.coset xs = set xs" by simp lemma [code]: "x \ set xs \ List.member xs x" "x \ List.coset xs \ \ List.member xs x" by (simp_all add: member_def) lemma insert_code [code]: "insert x (set xs) = set (List.insert x xs)" "insert x (List.coset xs) = List.coset (removeAll x xs)" by simp_all lemma remove_code [code]: "Set.remove x (set xs) = set (removeAll x xs)" "Set.remove x (List.coset xs) = List.coset (List.insert x xs)" by (simp_all add: remove_def Compl_insert) lemma filter_set [code]: "Set.filter P (set xs) = set (filter P xs)" by auto lemma image_set [code]: "image f (set xs) = set (map f xs)" by simp lemma subset_code [code]: "set xs \ B \ (\x\set xs. x \ B)" "A \ List.coset ys \ (\y\set ys. y \ A)" "List.coset [] \ set [] \ False" by auto text \A frequent case -- avoid intermediate sets\ lemma [code_unfold]: "set xs \ set ys \ list_all (\x. x \ set ys) xs" by (auto simp: list_all_iff) lemma Ball_set [code]: "Ball (set xs) P \ list_all P xs" by (simp add: list_all_iff) lemma Bex_set [code]: "Bex (set xs) P \ list_ex P xs" by (simp add: list_ex_iff) lemma card_set [code]: "card (set xs) = length (remdups xs)" proof - have "card (set (remdups xs)) = length (remdups xs)" by (rule distinct_card) simp then show ?thesis by simp qed lemma the_elem_set [code]: "the_elem (set [x]) = x" by simp lemma Pow_set [code]: "Pow (set []) = {{}}" "Pow (set (x # xs)) = (let A = Pow (set xs) in A \ insert x ` A)" by (simp_all add: Pow_insert Let_def) definition map_project :: "('a \ 'b option) \ 'a set \ 'b set" where "map_project f A = {b. \ a \ A. f a = Some b}" lemma [code]: "map_project f (set xs) = set (List.map_filter f xs)" by (auto simp add: map_project_def map_filter_def image_def) hide_const (open) map_project text \Operations on relations\ lemma product_code [code]: "Product_Type.product (set xs) (set ys) = set [(x, y). 
x \ xs, y \ ys]" by (auto simp add: Product_Type.product_def) lemma Id_on_set [code]: "Id_on (set xs) = set [(x, x). x \ xs]" by (auto simp add: Id_on_def) lemma [code]: "R `` S = List.map_project (\(x, y). if x \ S then Some y else None) R" unfolding map_project_def by (auto split: prod.split if_split_asm) lemma trancl_set_ntrancl [code]: "trancl (set xs) = ntrancl (card (set xs) - 1) (set xs)" by (simp add: finite_trancl_ntranl) lemma set_relcomp [code]: "set xys O set yzs = set ([(fst xy, snd yz). xy \ xys, yz \ yzs, snd xy = fst yz])" by auto (auto simp add: Bex_def image_def) lemma wf_set [code]: "wf (set xs) = acyclic (set xs)" by (simp add: wf_iff_acyclic_if_finite) subsection \Setup for Lifting/Transfer\ subsubsection \Transfer rules for the Transfer package\ context includes lifting_syntax begin lemma tl_transfer [transfer_rule]: "(list_all2 A ===> list_all2 A) tl tl" unfolding tl_def[abs_def] by transfer_prover lemma butlast_transfer [transfer_rule]: "(list_all2 A ===> list_all2 A) butlast butlast" by (rule rel_funI, erule list_all2_induct, auto) lemma map_rec: "map f xs = rec_list Nil (%x _ y. 
Cons (f x) y) xs" by (induct xs) auto lemma append_transfer [transfer_rule]: "(list_all2 A ===> list_all2 A ===> list_all2 A) append append" unfolding List.append_def by transfer_prover lemma rev_transfer [transfer_rule]: "(list_all2 A ===> list_all2 A) rev rev" unfolding List.rev_def by transfer_prover lemma filter_transfer [transfer_rule]: "((A ===> (=)) ===> list_all2 A ===> list_all2 A) filter filter" unfolding List.filter_def by transfer_prover lemma fold_transfer [transfer_rule]: "((A ===> B ===> B) ===> list_all2 A ===> B ===> B) fold fold" unfolding List.fold_def by transfer_prover lemma foldr_transfer [transfer_rule]: "((A ===> B ===> B) ===> list_all2 A ===> B ===> B) foldr foldr" unfolding List.foldr_def by transfer_prover lemma foldl_transfer [transfer_rule]: "((B ===> A ===> B) ===> B ===> list_all2 A ===> B) foldl foldl" unfolding List.foldl_def by transfer_prover lemma concat_transfer [transfer_rule]: "(list_all2 (list_all2 A) ===> list_all2 A) concat concat" unfolding List.concat_def by transfer_prover lemma drop_transfer [transfer_rule]: "((=) ===> list_all2 A ===> list_all2 A) drop drop" unfolding List.drop_def by transfer_prover lemma take_transfer [transfer_rule]: "((=) ===> list_all2 A ===> list_all2 A) take take" unfolding List.take_def by transfer_prover lemma list_update_transfer [transfer_rule]: "(list_all2 A ===> (=) ===> A ===> list_all2 A) list_update list_update" unfolding list_update_def by transfer_prover lemma takeWhile_transfer [transfer_rule]: "((A ===> (=)) ===> list_all2 A ===> list_all2 A) takeWhile takeWhile" unfolding takeWhile_def by transfer_prover lemma dropWhile_transfer [transfer_rule]: "((A ===> (=)) ===> list_all2 A ===> list_all2 A) dropWhile dropWhile" unfolding dropWhile_def by transfer_prover lemma zip_transfer [transfer_rule]: "(list_all2 A ===> list_all2 B ===> list_all2 (rel_prod A B)) zip zip" unfolding zip_def by transfer_prover lemma product_transfer [transfer_rule]: "(list_all2 A ===> list_all2 B ===> 
list_all2 (rel_prod A B)) List.product List.product" unfolding List.product_def by transfer_prover lemma product_lists_transfer [transfer_rule]: "(list_all2 (list_all2 A) ===> list_all2 (list_all2 A)) product_lists product_lists" unfolding product_lists_def by transfer_prover lemma insert_transfer [transfer_rule]: assumes [transfer_rule]: "bi_unique A" shows "(A ===> list_all2 A ===> list_all2 A) List.insert List.insert" unfolding List.insert_def [abs_def] by transfer_prover lemma find_transfer [transfer_rule]: "((A ===> (=)) ===> list_all2 A ===> rel_option A) List.find List.find" unfolding List.find_def by transfer_prover lemma those_transfer [transfer_rule]: "(list_all2 (rel_option P) ===> rel_option (list_all2 P)) those those" unfolding List.those_def by transfer_prover lemma remove1_transfer [transfer_rule]: assumes [transfer_rule]: "bi_unique A" shows "(A ===> list_all2 A ===> list_all2 A) remove1 remove1" unfolding remove1_def by transfer_prover lemma removeAll_transfer [transfer_rule]: assumes [transfer_rule]: "bi_unique A" shows "(A ===> list_all2 A ===> list_all2 A) removeAll removeAll" unfolding removeAll_def by transfer_prover lemma successively_transfer [transfer_rule]: "((A ===> A ===> (=)) ===> list_all2 A ===> (=)) successively successively" unfolding successively_altdef by transfer_prover lemma distinct_transfer [transfer_rule]: assumes [transfer_rule]: "bi_unique A" shows "(list_all2 A ===> (=)) distinct distinct" unfolding distinct_def by transfer_prover lemma distinct_adj_transfer [transfer_rule]: assumes "bi_unique A" shows "(list_all2 A ===> (=)) distinct_adj distinct_adj" unfolding rel_fun_def proof (intro allI impI) fix xs ys assume "list_all2 A xs ys" thus "distinct_adj xs \ distinct_adj ys" proof (induction rule: list_all2_induct) case (Cons x xs y ys) note * = this show ?case proof (cases xs) case [simp]: (Cons x' xs') with * obtain y' ys' where [simp]: "ys = y' # ys'" by (cases ys) auto from * show ?thesis using assms by (auto simp: 
distinct_adj_Cons bi_unique_def) qed (use * in auto) qed auto qed lemma remdups_transfer [transfer_rule]: assumes [transfer_rule]: "bi_unique A" shows "(list_all2 A ===> list_all2 A) remdups remdups" unfolding remdups_def by transfer_prover lemma remdups_adj_transfer [transfer_rule]: assumes [transfer_rule]: "bi_unique A" shows "(list_all2 A ===> list_all2 A) remdups_adj remdups_adj" proof (rule rel_funI, erule list_all2_induct) qed (auto simp: remdups_adj_Cons assms[unfolded bi_unique_def] split: list.splits) lemma replicate_transfer [transfer_rule]: "((=) ===> A ===> list_all2 A) replicate replicate" unfolding replicate_def by transfer_prover lemma length_transfer [transfer_rule]: "(list_all2 A ===> (=)) length length" unfolding size_list_overloaded_def size_list_def by transfer_prover lemma rotate1_transfer [transfer_rule]: "(list_all2 A ===> list_all2 A) rotate1 rotate1" unfolding rotate1_def by transfer_prover lemma rotate_transfer [transfer_rule]: "((=) ===> list_all2 A ===> list_all2 A) rotate rotate" unfolding rotate_def [abs_def] by transfer_prover lemma nths_transfer [transfer_rule]: "(list_all2 A ===> rel_set (=) ===> list_all2 A) nths nths" unfolding nths_def [abs_def] by transfer_prover lemma subseqs_transfer [transfer_rule]: "(list_all2 A ===> list_all2 (list_all2 A)) subseqs subseqs" unfolding subseqs_def [abs_def] by transfer_prover lemma partition_transfer [transfer_rule]: "((A ===> (=)) ===> list_all2 A ===> rel_prod (list_all2 A) (list_all2 A)) partition partition" unfolding partition_def by transfer_prover lemma lists_transfer [transfer_rule]: "(rel_set A ===> rel_set (list_all2 A)) lists lists" proof (rule rel_funI, rule rel_setI) show "\l \ lists X; rel_set A X Y\ \ \y\lists Y. list_all2 A l y" for X Y l proof (induction l rule: lists.induct) case (Cons a l) then show ?case by (simp only: rel_set_def list_all2_Cons1, metis lists.Cons) qed auto show "\l \ lists Y; rel_set A X Y\ \ \x\lists X. 
list_all2 A x l" for X Y l proof (induction l rule: lists.induct) case (Cons a l) then show ?case by (simp only: rel_set_def list_all2_Cons2, metis lists.Cons) qed auto qed lemma set_Cons_transfer [transfer_rule]: "(rel_set A ===> rel_set (list_all2 A) ===> rel_set (list_all2 A)) set_Cons set_Cons" unfolding rel_fun_def rel_set_def set_Cons_def by (fastforce simp add: list_all2_Cons1 list_all2_Cons2) lemma listset_transfer [transfer_rule]: "(list_all2 (rel_set A) ===> rel_set (list_all2 A)) listset listset" unfolding listset_def by transfer_prover lemma null_transfer [transfer_rule]: "(list_all2 A ===> (=)) List.null List.null" unfolding rel_fun_def List.null_def by auto lemma list_all_transfer [transfer_rule]: "((A ===> (=)) ===> list_all2 A ===> (=)) list_all list_all" unfolding list_all_iff [abs_def] by transfer_prover lemma list_ex_transfer [transfer_rule]: "((A ===> (=)) ===> list_all2 A ===> (=)) list_ex list_ex" unfolding list_ex_iff [abs_def] by transfer_prover lemma splice_transfer [transfer_rule]: "(list_all2 A ===> list_all2 A ===> list_all2 A) splice splice" apply (rule rel_funI, erule list_all2_induct, simp add: rel_fun_def, simp) apply (rule rel_funI) apply (erule_tac xs=x in list_all2_induct, simp, simp add: rel_fun_def) done lemma shuffles_transfer [transfer_rule]: "(list_all2 A ===> list_all2 A ===> rel_set (list_all2 A)) shuffles shuffles" proof (intro rel_funI, goal_cases) case (1 xs xs' ys ys') thus ?case proof (induction xs ys arbitrary: xs' ys' rule: shuffles.induct) case (3 x xs y ys xs' ys') from "3.prems" obtain x' xs'' where xs': "xs' = x' # xs''" by (cases xs') auto from "3.prems" obtain y' ys'' where ys': "ys' = y' # ys''" by (cases ys') auto have [transfer_rule]: "A x x'" "A y y'" "list_all2 A xs xs''" "list_all2 A ys ys''" using "3.prems" by (simp_all add: xs' ys') have [transfer_rule]: "rel_set (list_all2 A) (shuffles xs (y # ys)) (shuffles xs'' ys')" and [transfer_rule]: "rel_set (list_all2 A) (shuffles (x # xs) ys) (shuffles xs' 
ys'')" using "3.prems" by (auto intro!: "3.IH" simp: xs' ys') have "rel_set (list_all2 A) ((#) x ` shuffles xs (y # ys) \ (#) y ` shuffles (x # xs) ys) ((#) x' ` shuffles xs'' ys' \ (#) y' ` shuffles xs' ys'')" by transfer_prover thus ?case by (simp add: xs' ys') qed (auto simp: rel_set_def) qed lemma rtrancl_parametric [transfer_rule]: assumes [transfer_rule]: "bi_unique A" "bi_total A" shows "(rel_set (rel_prod A A) ===> rel_set (rel_prod A A)) rtrancl rtrancl" unfolding rtrancl_def by transfer_prover lemma monotone_parametric [transfer_rule]: assumes [transfer_rule]: "bi_total A" shows "((A ===> A ===> (=)) ===> (B ===> B ===> (=)) ===> (A ===> B) ===> (=)) monotone monotone" unfolding monotone_def[abs_def] by transfer_prover lemma fun_ord_parametric [transfer_rule]: assumes [transfer_rule]: "bi_total C" shows "((A ===> B ===> (=)) ===> (C ===> A) ===> (C ===> B) ===> (=)) fun_ord fun_ord" unfolding fun_ord_def[abs_def] by transfer_prover lemma fun_lub_parametric [transfer_rule]: assumes [transfer_rule]: "bi_total A" "bi_unique A" shows "((rel_set A ===> B) ===> rel_set (C ===> A) ===> C ===> B) fun_lub fun_lub" unfolding fun_lub_def[abs_def] by transfer_prover end end diff --git a/src/HOL/Main.thy b/src/HOL/Main.thy --- a/src/HOL/Main.thy +++ b/src/HOL/Main.thy @@ -1,74 +1,76 @@ section \Main HOL\ text \ Classical Higher-order Logic -- only ``Main'', excluding real and complex numbers etc. 
\ theory Main imports Predicate_Compile Quickcheck_Narrowing Mirabelle Extraction Nunchaku BNF_Greatest_Fixpoint Filter Conditionally_Complete_Lattices Binomial GCD begin text \Namespace cleanup\ hide_const (open) czero cinfinite cfinite csum cone ctwo Csum cprod cexp image2 image2p vimage2p Gr Grp collect fsts snds setl setr convol pick_middlep fstOp sndOp csquare relImage relInvImage Succ Shift shift proj id_bnf hide_fact (open) id_bnf_def type_definition_id_bnf_UNIV text \Syntax cleanup\ no_notation bot ("\") and top ("\") and inf (infixl "\" 70) and sup (infixl "\" 65) and Inf ("\") and Sup ("\") and ordLeq2 (infix "<=o" 50) and ordLeq3 (infix "\o" 50) and ordLess2 (infix "(_,/ _)\") bundle cardinal_syntax begin notation ordLeq2 (infix "<=o" 50) and ordLeq3 (infix "\o" 50) and ordLess2 (infix " 'b \ 'b" ("(3\_./ _)" [0, 10] 10) "_INF" :: "pttrn \ 'a set \ 'b \ 'b" ("(3\_\_./ _)" [0, 0, 10] 10) "_SUP1" :: "pttrns \ 'b \ 'b" ("(3\_./ _)" [0, 10] 10) "_SUP" :: "pttrn \ 'a set \ 'b \ 'b" ("(3\_\_./ _)" [0, 0, 10] 10) +thy_deps + end diff --git a/src/HOL/Nominal/Examples/Class1.thy b/src/HOL/Nominal/Examples/Class1.thy --- a/src/HOL/Nominal/Examples/Class1.thy +++ b/src/HOL/Nominal/Examples/Class1.thy @@ -1,8111 +1,8113 @@ theory Class1 imports "HOL-Nominal.Nominal" begin section \Term-Calculus from Urban's PhD\ atom_decl name coname text \types\ +no_notation not ("NOT") + nominal_datatype ty = PR "string" | NOT "ty" | AND "ty" "ty" ("_ AND _" [100,100] 100) | OR "ty" "ty" ("_ OR _" [100,100] 100) | IMP "ty" "ty" ("_ IMP _" [100,100] 100) instantiation ty :: size begin nominal_primrec size_ty where "size (PR s) = (1::nat)" | "size (NOT T) = 1 + size T" | "size (T1 AND T2) = 1 + size T1 + size T2" | "size (T1 OR T2) = 1 + size T1 + size T2" | "size (T1 IMP T2) = 1 + size T1 + size T2" by (rule TrueI)+ instance .. end lemma ty_cases: fixes T::ty shows "(\s. T=PR s) \ (\T'. T=NOT T') \ (\S U. T=S OR U) \ (\S U. T=S AND U) \ (\S U. 
T=S IMP U)" by (induct T rule:ty.induct) (auto) lemma fresh_ty: fixes a::"coname" and x::"name" and T::"ty" shows "a\T" and "x\T" by (nominal_induct T rule: ty.strong_induct) (auto simp add: fresh_string) text \terms\ nominal_datatype trm = Ax "name" "coname" | Cut "\coname\trm" "\name\trm" ("Cut <_>._ (_)._" [100,100,100,100] 100) | NotR "\name\trm" "coname" ("NotR (_)._ _" [100,100,100] 100) | NotL "\coname\trm" "name" ("NotL <_>._ _" [100,100,100] 100) | AndR "\coname\trm" "\coname\trm" "coname" ("AndR <_>._ <_>._ _" [100,100,100,100,100] 100) | AndL1 "\name\trm" "name" ("AndL1 (_)._ _" [100,100,100] 100) | AndL2 "\name\trm" "name" ("AndL2 (_)._ _" [100,100,100] 100) | OrR1 "\coname\trm" "coname" ("OrR1 <_>._ _" [100,100,100] 100) | OrR2 "\coname\trm" "coname" ("OrR2 <_>._ _" [100,100,100] 100) | OrL "\name\trm" "\name\trm" "name" ("OrL (_)._ (_)._ _" [100,100,100,100,100] 100) | ImpR "\name\(\coname\trm)" "coname" ("ImpR (_).<_>._ _" [100,100,100,100] 100) | ImpL "\coname\trm" "\name\trm" "name" ("ImpL <_>._ (_)._ _" [100,100,100,100,100] 100) text \named terms\ nominal_datatype ntrm = Na "\name\trm" ("((_):_)" [100,100] 100) text \conamed terms\ nominal_datatype ctrm = Co "\coname\trm" ("(<_>:_)" [100,100] 100) text \renaming functions\ nominal_primrec (freshness_context: "(d::coname,e::coname)") crename :: "trm \ coname \ coname \ trm" ("_[_\c>_]" [100,100,100] 100) where "(Ax x a)[d\c>e] = (if a=d then Ax x e else Ax x a)" | "\a\(d,e,N);x\M\ \ (Cut .M (x).N)[d\c>e] = Cut .(M[d\c>e]) (x).(N[d\c>e])" | "(NotR (x).M a)[d\c>e] = (if a=d then NotR (x).(M[d\c>e]) e else NotR (x).(M[d\c>e]) a)" | "a\(d,e) \ (NotL .M x)[d\c>e] = (NotL .(M[d\c>e]) x)" | "\a\(d,e,N,c);b\(d,e,M,c);b\a\ \ (AndR .M .N c)[d\c>e] = (if c=d then AndR .(M[d\c>e]) .(N[d \c>e]) e else AndR .(M[d\c>e]) .(N[d\c>e]) c)" | "x\y \ (AndL1 (x).M y)[d\c>e] = AndL1 (x).(M[d\c>e]) y" | "x\y \ (AndL2 (x).M y)[d\c>e] = AndL2 (x).(M[d\c>e]) y" | "a\(d,e,b) \ (OrR1 .M b)[d\c>e] = (if b=d then OrR1 
.(M[d\c>e]) e else OrR1 .(M[d\c>e]) b)" | "a\(d,e,b) \ (OrR2 .M b)[d\c>e] = (if b=d then OrR2 .(M[d\c>e]) e else OrR2 .(M[d\c>e]) b)" | "\x\(N,z);y\(M,z);y\x\ \ (OrL (x).M (y).N z)[d\c>e] = OrL (x).(M[d\c>e]) (y).(N[d\c>e]) z" | "a\(d,e,b) \ (ImpR (x)..M b)[d\c>e] = (if b=d then ImpR (x)..(M[d\c>e]) e else ImpR (x)..(M[d\c>e]) b)" | "\a\(d,e,N);x\(M,y)\ \ (ImpL .M (x).N y)[d\c>e] = ImpL .(M[d\c>e]) (x).(N[d\c>e]) y" apply(finite_guess)+ apply(rule TrueI)+ apply(simp add: abs_fresh abs_supp fin_supp)+ apply(fresh_guess)+ done nominal_primrec (freshness_context: "(u::name,v::name)") nrename :: "trm \ name \ name \ trm" ("_[_\n>_]" [100,100,100] 100) where "(Ax x a)[u\n>v] = (if x=u then Ax v a else Ax x a)" | "\a\N;x\(u,v,M)\ \ (Cut .M (x).N)[u\n>v] = Cut .(M[u\n>v]) (x).(N[u\n>v])" | "x\(u,v) \ (NotR (x).M a)[u\n>v] = NotR (x).(M[u\n>v]) a" | "(NotL .M x)[u\n>v] = (if x=u then NotL .(M[u\n>v]) v else NotL .(M[u\n>v]) x)" | "\a\(N,c);b\(M,c);b\a\ \ (AndR .M .N c)[u\n>v] = AndR .(M[u\n>v]) .(N[u\n>v]) c" | "x\(u,v,y) \ (AndL1 (x).M y)[u\n>v] = (if y=u then AndL1 (x).(M[u\n>v]) v else AndL1 (x).(M[u\n>v]) y)" | "x\(u,v,y) \ (AndL2 (x).M y)[u\n>v] = (if y=u then AndL2 (x).(M[u\n>v]) v else AndL2 (x).(M[u\n>v]) y)" | "a\b \ (OrR1 .M b)[u\n>v] = OrR1 .(M[u\n>v]) b" | "a\b \ (OrR2 .M b)[u\n>v] = OrR2 .(M[u\n>v]) b" | "\x\(u,v,N,z);y\(u,v,M,z);y\x\ \ (OrL (x).M (y).N z)[u\n>v] = (if z=u then OrL (x).(M[u\n>v]) (y).(N[u\n>v]) v else OrL (x).(M[u\n>v]) (y).(N[u\n>v]) z)" | "\a\b; x\(u,v)\ \ (ImpR (x)..M b)[u\n>v] = ImpR (x)..(M[u\n>v]) b" | "\a\N;x\(u,v,M,y)\ \ (ImpL .M (x).N y)[u\n>v] = (if y=u then ImpL .(M[u\n>v]) (x).(N[u\n>v]) v else ImpL .(M[u\n>v]) (x).(N[u\n>v]) y)" apply(finite_guess)+ apply(rule TrueI)+ apply(simp add: abs_fresh abs_supp fs_name1 fs_coname1)+ apply(fresh_guess)+ done lemmas eq_bij = pt_bij[OF pt_name_inst, OF at_name_inst] pt_bij[OF pt_coname_inst, OF at_coname_inst] lemma crename_name_eqvt[eqvt]: fixes pi::"name prm" shows "pi\(M[d\c>e]) = 
(pi\M)[(pi\d)\c>(pi\e)]" apply(nominal_induct M avoiding: d e rule: trm.strong_induct) apply(auto simp add: fresh_bij eq_bij) done lemma crename_coname_eqvt[eqvt]: fixes pi::"coname prm" shows "pi\(M[d\c>e]) = (pi\M)[(pi\d)\c>(pi\e)]" apply(nominal_induct M avoiding: d e rule: trm.strong_induct) apply(auto simp add: fresh_bij eq_bij) done lemma nrename_name_eqvt[eqvt]: fixes pi::"name prm" shows "pi\(M[x\n>y]) = (pi\M)[(pi\x)\n>(pi\y)]" apply(nominal_induct M avoiding: x y rule: trm.strong_induct) apply(auto simp add: fresh_bij eq_bij) done lemma nrename_coname_eqvt[eqvt]: fixes pi::"coname prm" shows "pi\(M[x\n>y]) = (pi\M)[(pi\x)\n>(pi\y)]" apply(nominal_induct M avoiding: x y rule: trm.strong_induct) apply(auto simp add: fresh_bij eq_bij) done lemmas rename_eqvts = crename_name_eqvt crename_coname_eqvt nrename_name_eqvt nrename_coname_eqvt lemma nrename_fresh: assumes a: "x\M" shows "M[x\n>y] = M" using a by (nominal_induct M avoiding: x y rule: trm.strong_induct) (auto simp add: trm.inject fresh_atm abs_fresh fin_supp abs_supp) lemma crename_fresh: assumes a: "a\M" shows "M[a\c>b] = M" using a by (nominal_induct M avoiding: a b rule: trm.strong_induct) (auto simp add: trm.inject fresh_atm abs_fresh) lemma nrename_nfresh: fixes x::"name" shows "x\y\x\M[x\n>y]" by (nominal_induct M avoiding: x y rule: trm.strong_induct) (auto simp add: fresh_atm abs_fresh abs_supp fin_supp) lemma crename_nfresh: fixes x::"name" shows "x\M\x\M[a\c>b]" by (nominal_induct M avoiding: a b rule: trm.strong_induct) (auto simp add: fresh_atm abs_fresh abs_supp fin_supp) lemma crename_cfresh: fixes a::"coname" shows "a\b\a\M[a\c>b]" by (nominal_induct M avoiding: a b rule: trm.strong_induct) (auto simp add: fresh_atm abs_fresh abs_supp fin_supp) lemma nrename_cfresh: fixes c::"coname" shows "c\M\c\M[x\n>y]" by (nominal_induct M avoiding: x y rule: trm.strong_induct) (auto simp add: fresh_atm abs_fresh abs_supp fin_supp) lemma nrename_nfresh': fixes x::"name" shows "x\(M,z,y)\x\M[z\n>y]" 
by (nominal_induct M avoiding: x z y rule: trm.strong_induct) (auto simp add: fresh_prod fresh_atm abs_fresh abs_supp fin_supp) lemma crename_cfresh': fixes a::"coname" shows "a\(M,b,c)\a\M[b\c>c]" by (nominal_induct M avoiding: a b c rule: trm.strong_induct) (auto simp add: fresh_prod fresh_atm abs_fresh abs_supp fin_supp) lemma nrename_rename: assumes a: "x'\M" shows "([(x',x)]\M)[x'\n>y]= M[x\n>y]" using a apply(nominal_induct M avoiding: x x' y rule: trm.strong_induct) apply(auto simp add: abs_fresh fresh_bij fresh_atm fresh_prod fresh_right calc_atm abs_supp fin_supp) apply(auto simp add: fresh_left calc_atm fresh_prod fresh_atm) done lemma crename_rename: assumes a: "a'\M" shows "([(a',a)]\M)[a'\c>b]= M[a\c>b]" using a apply(nominal_induct M avoiding: a a' b rule: trm.strong_induct) apply(auto simp add: abs_fresh fresh_bij fresh_atm fresh_prod fresh_right calc_atm abs_supp fin_supp) apply(auto simp add: fresh_left calc_atm fresh_prod fresh_atm) done lemmas rename_fresh = nrename_fresh crename_fresh nrename_nfresh crename_nfresh crename_cfresh nrename_cfresh nrename_nfresh' crename_cfresh' nrename_rename crename_rename lemma better_nrename_Cut: assumes a: "x\(u,v)" shows "(Cut .M (x).N)[u\n>v] = Cut .(M[u\n>v]) (x).(N[u\n>v])" proof - obtain x'::"name" where fs1: "x'\(M,N,a,x,u,v)" by (rule exists_fresh(1), rule fin_supp, blast) obtain a'::"coname" where fs2: "a'\(M,N,a,x,u,v)" by (rule exists_fresh(2), rule fin_supp, blast) have eq1: "(Cut .M (x).N) = (Cut .([(a',a)]\M) (x').([(x',x)]\N))" using fs1 fs2 by (rule_tac sym, auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm) have "(Cut .([(a',a)]\M) (x').([(x',x)]\N))[u\n>v] = Cut .(([(a',a)]\M)[u\n>v]) (x').(([(x',x)]\N)[u\n>v])" using fs1 fs2 apply - apply(rule nrename.simps) apply(simp add: fresh_left calc_atm) apply(simp add: fresh_left calc_atm) done also have "\ = Cut .(M[u\n>v]) (x).(N[u\n>v])" using fs1 fs2 a apply - apply(simp add: trm.inject alpha fresh_atm fresh_prod rename_eqvts) 
apply(simp add: calc_atm) apply(simp add: rename_fresh fresh_atm) done finally show "(Cut .M (x).N)[u\n>v] = Cut .(M[u\n>v]) (x).(N[u\n>v])" using eq1 by simp qed lemma better_crename_Cut: assumes a: "a\(b,c)" shows "(Cut .M (x).N)[b\c>c] = Cut .(M[b\c>c]) (x).(N[b\c>c])" proof - obtain x'::"name" where fs1: "x'\(M,N,a,x,b,c)" by (rule exists_fresh(1), rule fin_supp, blast) obtain a'::"coname" where fs2: "a'\(M,N,a,x,b,c)" by (rule exists_fresh(2), rule fin_supp, blast) have eq1: "(Cut .M (x).N) = (Cut .([(a',a)]\M) (x').([(x',x)]\N))" using fs1 fs2 by (rule_tac sym, auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm) have "(Cut .([(a',a)]\M) (x').([(x',x)]\N))[b\c>c] = Cut .(([(a',a)]\M)[b\c>c]) (x').(([(x',x)]\N)[b\c>c])" using fs1 fs2 apply - apply(rule crename.simps) apply(simp add: fresh_left calc_atm) apply(simp add: fresh_left calc_atm) done also have "\ = Cut .(M[b\c>c]) (x).(N[b\c>c])" using fs1 fs2 a apply - apply(simp add: trm.inject alpha fresh_atm fresh_prod rename_eqvts) apply(simp add: calc_atm) apply(simp add: rename_fresh fresh_atm) done finally show "(Cut .M (x).N)[b\c>c] = Cut .(M[b\c>c]) (x).(N[b\c>c])" using eq1 by simp qed lemma crename_id: shows "M[a\c>a] = M" by (nominal_induct M avoiding: a rule: trm.strong_induct) (auto) lemma nrename_id: shows "M[x\n>x] = M" by (nominal_induct M avoiding: x rule: trm.strong_induct) (auto) lemma nrename_swap: shows "x\M \ [(x,y)]\M = M[y\n>x]" by (nominal_induct M avoiding: x y rule: trm.strong_induct) (simp_all add: calc_atm fresh_atm trm.inject alpha abs_fresh abs_supp fin_supp) lemma crename_swap: shows "a\M \ [(a,b)]\M = M[b\c>a]" by (nominal_induct M avoiding: a b rule: trm.strong_induct) (simp_all add: calc_atm fresh_atm trm.inject alpha abs_fresh abs_supp fin_supp) lemma crename_ax: assumes a: "M[a\c>b] = Ax x c" "c\a" "c\b" shows "M = Ax x c" using a apply(nominal_induct M avoiding: a b x c rule: trm.strong_induct) apply(simp_all add: trm.inject split: if_splits) done lemma nrename_ax: 
  assumes a: "M[x\<turnstile>n>y] = Ax z a" "z\<noteq>x" "z\<noteq>y"
  shows "M = Ax z a"
using a
apply(nominal_induct M avoiding: x y z a rule: trm.strong_induct)
apply(simp_all add: trm.inject split: if_splits)
done

text \<open>substitution functions\<close>

lemma fresh_perm_coname:
  fixes c::"coname"
  and   pi::"coname prm"
  and   M::"trm"
  assumes a: "c\<sharp>pi" "c\<sharp>M"
  shows "c\<sharp>(pi\<bullet>M)"
using a
apply -
apply(simp add: fresh_left)
apply(simp add: at_prm_fresh[OF at_coname_inst] fresh_list_rev)
done

lemma fresh_perm_name:
  fixes x::"name"
  and   pi::"name prm"
  and   M::"trm"
  assumes a: "x\<sharp>pi" "x\<sharp>M"
  shows "x\<sharp>(pi\<bullet>M)"
using a
apply -
apply(simp add: fresh_left)
apply(simp add: at_prm_fresh[OF at_name_inst] fresh_list_rev)
done

lemma fresh_fun_simp_NotL:
  assumes a: "x'\<sharp>P" "x'\<sharp>M"
  shows "fresh_fun (\<lambda>x'. Cut <c>.P (x').NotL <a>.M x') = Cut <c>.P (x').NotL <a>.M x'"
using a
apply -
apply(rule fresh_fun_app)
apply(rule pt_name_inst)
apply(rule at_name_inst)
apply(finite_guess)
apply(subgoal_tac "\<exists>n::name. n\<sharp>(c,P,a,M)")
apply(erule exE)
apply(rule_tac x="n" in exI)
apply(simp add: fresh_prod abs_fresh)
apply(fresh_guess)
apply(rule exists_fresh')
apply(simp add: fin_supp)
apply(fresh_guess)
done

lemma fresh_fun_NotL[eqvt_force]:
  fixes pi1::"name prm"
  and   pi2::"coname prm"
  shows "pi1\<bullet>fresh_fun (\<lambda>x'. Cut <c>.P (x').NotL <a>.M x') =
             fresh_fun (pi1\<bullet>(\<lambda>x'. Cut <c>.P (x').NotL <a>.M x'))"
  and   "pi2\<bullet>fresh_fun (\<lambda>x'. Cut <c>.P (x').NotL <a>.M x') =
             fresh_fun (pi2\<bullet>(\<lambda>x'. Cut <c>.P (x').NotL <a>.M x'))"
apply -
apply(perm_simp)
apply(generate_fresh "name")
apply(auto simp add: fresh_prod)
apply(simp add: fresh_fun_simp_NotL)
apply(rule sym)
apply(rule trans)
apply(rule fresh_fun_simp_NotL)
apply(rule fresh_perm_name)
apply(assumption)
apply(assumption)
apply(rule fresh_perm_name)
apply(assumption)
apply(assumption)
apply(simp add: at_prm_fresh[OF at_name_inst] swap_simps)
apply(perm_simp)
apply(subgoal_tac "\<exists>n::name.
                      n\<sharp>(P,M,pi2\<bullet>P,pi2\<bullet>M,pi2)")
apply(simp add: fresh_prod)
apply(auto)
apply(simp add: fresh_fun_simp_NotL calc_atm)
apply(rule exists_fresh')
apply(simp add: fin_supp)
done

lemma fresh_fun_simp_AndL1:
  assumes a: "z'\<sharp>P" "z'\<sharp>M" "z'\<sharp>x"
  shows "fresh_fun (\<lambda>z'. Cut <c>.P (z').AndL1 (x).M z') = Cut <c>.P (z').AndL1 (x).M z'"
using a
apply -
apply(rule fresh_fun_app)
apply(rule pt_name_inst)
apply(rule at_name_inst)
apply(finite_guess)
apply(subgoal_tac "\<exists>n::name. n\<sharp>(c,P,x,M)")
apply(erule exE)
apply(rule_tac x="n" in exI)
apply(simp add: fresh_prod abs_fresh)
apply(fresh_guess)
apply(rule exists_fresh')
apply(simp add: fin_supp)
apply(fresh_guess)
done

lemma fresh_fun_AndL1[eqvt_force]:
  fixes pi1::"name prm"
  and   pi2::"coname prm"
  shows "pi1\<bullet>fresh_fun (\<lambda>z'. Cut <c>.P (z').AndL1 (x).M z') =
             fresh_fun (pi1\<bullet>(\<lambda>z'. Cut <c>.P (z').AndL1 (x).M z'))"
  and   "pi2\<bullet>fresh_fun (\<lambda>z'. Cut <c>.P (z').AndL1 (x).M z') =
             fresh_fun (pi2\<bullet>(\<lambda>z'. Cut <c>.P (z').AndL1 (x).M z'))"
apply -
apply(perm_simp)
apply(subgoal_tac "\<exists>n::name. n\<sharp>(P,M,x,pi1\<bullet>P,pi1\<bullet>M,pi1\<bullet>x,pi1)")
apply(simp add: fresh_prod)
apply(auto)
apply(simp add: fresh_fun_simp_AndL1 at_prm_fresh[OF at_name_inst] swap_simps)
apply(rule exists_fresh')
apply(simp add: fin_supp)
apply(perm_simp)
apply(subgoal_tac "\<exists>n::name. n\<sharp>(P,M,x,pi2\<bullet>P,pi2\<bullet>M,pi2\<bullet>x,pi2)")
apply(simp add: fresh_prod)
apply(auto)
apply(simp add: fresh_fun_simp_AndL1 calc_atm)
apply(rule exists_fresh')
apply(simp add: fin_supp)
done

lemma fresh_fun_simp_AndL2:
  assumes a: "z'\<sharp>P" "z'\<sharp>M" "z'\<sharp>x"
  shows "fresh_fun (\<lambda>z'. Cut <c>.P (z').AndL2 (x).M z') = Cut <c>.P (z').AndL2 (x).M z'"
using a
apply -
apply(rule fresh_fun_app)
apply(rule pt_name_inst)
apply(rule at_name_inst)
apply(finite_guess)
apply(subgoal_tac "\<exists>n::name. n\<sharp>(c,P,x,M)")
apply(erule exE)
apply(rule_tac x="n" in exI)
apply(simp add: fresh_prod abs_fresh)
apply(fresh_guess)
apply(rule exists_fresh')
apply(simp add: fin_supp)
apply(fresh_guess)
done

lemma fresh_fun_AndL2[eqvt_force]:
  fixes pi1::"name prm"
  and   pi2::"coname prm"
  shows "pi1\<bullet>fresh_fun (\<lambda>z'.
Cut .P (z').AndL2 (x).M z')= fresh_fun (pi1\(\z'. Cut .P (z').AndL2 (x).M z'))" and "pi2\fresh_fun (\z'. Cut .P (z').AndL2 (x).M z')= fresh_fun (pi2\(\z'. Cut .P (z').AndL2 (x).M z'))" apply - apply(perm_simp) apply(subgoal_tac "\n::name. n\(P,M,x,pi1\P,pi1\M,pi1\x,pi1)") apply(simp add: fresh_prod) apply(auto) apply(simp add: fresh_fun_simp_AndL2 at_prm_fresh[OF at_name_inst] swap_simps) apply(rule exists_fresh') apply(simp add: fin_supp) apply(perm_simp) apply(subgoal_tac "\n::name. n\(P,M,x,pi2\P,pi2\M,pi2\x,pi2)") apply(simp add: fresh_prod) apply(auto) apply(simp add: fresh_fun_simp_AndL2 calc_atm) apply(rule exists_fresh') apply(simp add: fin_supp) done lemma fresh_fun_simp_OrL: assumes a: "z'\P" "z'\M" "z'\N" "z'\u" "z'\x" shows "fresh_fun (\z'. Cut .P (z').OrL (x).M (u).N z') = Cut .P (z').OrL (x).M (u).N z'" using a apply - apply(rule fresh_fun_app) apply(rule pt_name_inst) apply(rule at_name_inst) apply(finite_guess) apply(subgoal_tac "\n::name. n\(c,P,x,M,u,N)") apply(erule exE) apply(rule_tac x="n" in exI) apply(simp add: fresh_prod abs_fresh) apply(fresh_guess) apply(rule exists_fresh') apply(simp add: fin_supp) apply(fresh_guess) done lemma fresh_fun_OrL[eqvt_force]: fixes pi1::"name prm" and pi2::"coname prm" shows "pi1\fresh_fun (\z'. Cut .P (z').OrL (x).M (u).N z')= fresh_fun (pi1\(\z'. Cut .P (z').OrL (x).M (u).N z'))" and "pi2\fresh_fun (\z'. Cut .P (z').OrL (x).M (u).N z')= fresh_fun (pi2\(\z'. Cut .P (z').OrL (x).M (u).N z'))" apply - apply(perm_simp) apply(subgoal_tac "\n::name. n\(P,M,N,x,u,pi1\P,pi1\M,pi1\N,pi1\x,pi1\u,pi1)") apply(simp add: fresh_prod) apply(auto) apply(simp add: fresh_fun_simp_OrL at_prm_fresh[OF at_name_inst] swap_simps) apply(rule exists_fresh') apply(simp add: fin_supp) apply(perm_simp) apply(subgoal_tac "\n::name. 
n\(P,M,N,x,u,pi2\P,pi2\M,pi2\N,pi2\x,pi2\u,pi2)") apply(simp add: fresh_prod) apply(auto) apply(simp add: fresh_fun_simp_OrL calc_atm) apply(rule exists_fresh') apply(simp add: fin_supp) done lemma fresh_fun_simp_ImpL: assumes a: "z'\P" "z'\M" "z'\N" "z'\x" shows "fresh_fun (\z'. Cut .P (z').ImpL .M (x).N z') = Cut .P (z').ImpL .M (x).N z'" using a apply - apply(rule fresh_fun_app) apply(rule pt_name_inst) apply(rule at_name_inst) apply(finite_guess) apply(subgoal_tac "\n::name. n\(c,P,x,M,N)") apply(erule exE) apply(rule_tac x="n" in exI) apply(simp add: fresh_prod abs_fresh) apply(fresh_guess) apply(rule exists_fresh') apply(simp add: fin_supp) apply(fresh_guess) done lemma fresh_fun_ImpL[eqvt_force]: fixes pi1::"name prm" and pi2::"coname prm" shows "pi1\fresh_fun (\z'. Cut .P (z').ImpL .M (x).N z')= fresh_fun (pi1\(\z'. Cut .P (z').ImpL .M (x).N z'))" and "pi2\fresh_fun (\z'. Cut .P (z').ImpL .M (x).N z')= fresh_fun (pi2\(\z'. Cut .P (z').ImpL .M (x).N z'))" apply - apply(perm_simp) apply(subgoal_tac "\n::name. n\(P,M,N,x,pi1\P,pi1\M,pi1\N,pi1\x,pi1)") apply(simp add: fresh_prod) apply(auto) apply(simp add: fresh_fun_simp_ImpL at_prm_fresh[OF at_name_inst] swap_simps) apply(rule exists_fresh') apply(simp add: fin_supp) apply(perm_simp) apply(subgoal_tac "\n::name. n\(P,M,N,x,pi2\P,pi2\M,pi2\N,pi2\x,pi2)") apply(simp add: fresh_prod) apply(auto) apply(simp add: fresh_fun_simp_ImpL calc_atm) apply(rule exists_fresh') apply(simp add: fin_supp) done lemma fresh_fun_simp_NotR: assumes a: "a'\P" "a'\M" shows "fresh_fun (\a'. Cut .(NotR (y).M a') (x).P) = Cut .(NotR (y).M a') (x).P" using a apply - apply(rule fresh_fun_app) apply(rule pt_coname_inst) apply(rule at_coname_inst) apply(finite_guess) apply(subgoal_tac "\n::coname. 
n\(x,P,y,M)") apply(erule exE) apply(rule_tac x="n" in exI) apply(simp add: fresh_prod abs_fresh) apply(fresh_guess) apply(rule exists_fresh') apply(simp add: fin_supp) apply(fresh_guess) done lemma fresh_fun_NotR[eqvt_force]: fixes pi1::"name prm" and pi2::"coname prm" shows "pi1\fresh_fun (\a'. Cut .(NotR (y).M a') (x).P)= fresh_fun (pi1\(\a'. Cut .(NotR (y).M a') (x).P))" and "pi2\fresh_fun (\a'. Cut .(NotR (y).M a') (x).P)= fresh_fun (pi2\(\a'. Cut .(NotR (y).M a') (x).P))" apply - apply(perm_simp) apply(subgoal_tac "\n::coname. n\(P,M,pi1\P,pi1\M,pi1)") apply(simp add: fresh_prod) apply(auto) apply(simp add: fresh_fun_simp_NotR calc_atm) apply(rule exists_fresh') apply(simp add: fin_supp) apply(perm_simp) apply(subgoal_tac "\n::coname. n\(P,M,pi2\P,pi2\M,pi2)") apply(simp add: fresh_prod) apply(auto) apply(simp add: fresh_fun_simp_NotR at_prm_fresh[OF at_coname_inst] swap_simps) apply(rule exists_fresh') apply(simp add: fin_supp) done lemma fresh_fun_simp_AndR: assumes a: "a'\P" "a'\M" "a'\N" "a'\b" "a'\c" shows "fresh_fun (\a'. Cut .(AndR .M .N a') (x).P) = Cut .(AndR .M .N a') (x).P" using a apply - apply(rule fresh_fun_app) apply(rule pt_coname_inst) apply(rule at_coname_inst) apply(finite_guess) apply(subgoal_tac "\n::coname. n\(x,P,b,M,c,N)") apply(erule exE) apply(rule_tac x="n" in exI) apply(simp add: fresh_prod abs_fresh) apply(fresh_guess) apply(rule exists_fresh') apply(simp add: fin_supp) apply(fresh_guess) done lemma fresh_fun_AndR[eqvt_force]: fixes pi1::"name prm" and pi2::"coname prm" shows "pi1\fresh_fun (\a'. Cut .(AndR .M .N a') (x).P)= fresh_fun (pi1\(\a'. Cut .(AndR .M .N a') (x).P))" and "pi2\fresh_fun (\a'. Cut .(AndR .M .N a') (x).P)= fresh_fun (pi2\(\a'. Cut .(AndR .M .N a') (x).P))" apply - apply(perm_simp) apply(subgoal_tac "\n::coname. 
n\(P,M,N,b,c,pi1\P,pi1\M,pi1\N,pi1\b,pi1\c,pi1)") apply(simp add: fresh_prod) apply(auto) apply(simp add: fresh_fun_simp_AndR calc_atm) apply(rule exists_fresh') apply(simp add: fin_supp) apply(perm_simp) apply(subgoal_tac "\n::coname. n\(P,M,N,b,c,pi2\P,pi2\M,pi2\N,pi2\b,pi2\c,pi2)") apply(simp add: fresh_prod) apply(auto) apply(simp add: fresh_fun_simp_AndR at_prm_fresh[OF at_coname_inst] swap_simps) apply(rule exists_fresh') apply(simp add: fin_supp) done lemma fresh_fun_simp_OrR1: assumes a: "a'\P" "a'\M" "a'\b" shows "fresh_fun (\a'. Cut .(OrR1 .M a') (x).P) = Cut .(OrR1 .M a') (x).P" using a apply - apply(rule fresh_fun_app) apply(rule pt_coname_inst) apply(rule at_coname_inst) apply(finite_guess) apply(subgoal_tac "\n::coname. n\(x,P,b,M)") apply(erule exE) apply(rule_tac x="n" in exI) apply(simp add: fresh_prod abs_fresh) apply(fresh_guess) apply(rule exists_fresh') apply(simp add: fin_supp) apply(fresh_guess) done lemma fresh_fun_OrR1[eqvt_force]: fixes pi1::"name prm" and pi2::"coname prm" shows "pi1\fresh_fun (\a'. Cut .(OrR1 .M a') (x).P)= fresh_fun (pi1\(\a'. Cut .(OrR1 .M a') (x).P))" and "pi2\fresh_fun (\a'. Cut .(OrR1 .M a') (x).P)= fresh_fun (pi2\(\a'. Cut .(OrR1 .M a') (x).P))" apply - apply(perm_simp) apply(subgoal_tac "\n::coname. n\(P,M,b,pi1\P,pi1\M,pi1\b,pi1)") apply(simp add: fresh_prod) apply(auto) apply(simp add: fresh_fun_simp_OrR1 calc_atm) apply(rule exists_fresh') apply(simp add: fin_supp) apply(perm_simp) apply(subgoal_tac "\n::coname. n\(P,M,b,pi2\P,pi2\M,pi2\b,pi2)") apply(simp add: fresh_prod) apply(auto) apply(simp add: fresh_fun_simp_OrR1 at_prm_fresh[OF at_coname_inst] swap_simps) apply(rule exists_fresh') apply(simp add: fin_supp) done lemma fresh_fun_simp_OrR2: assumes a: "a'\P" "a'\M" "a'\b" shows "fresh_fun (\a'. Cut .(OrR2 .M a') (x).P) = Cut .(OrR2 .M a') (x).P" using a apply - apply(rule fresh_fun_app) apply(rule pt_coname_inst) apply(rule at_coname_inst) apply(finite_guess) apply(subgoal_tac "\n::coname. 
n\(x,P,b,M)") apply(erule exE) apply(rule_tac x="n" in exI) apply(simp add: fresh_prod abs_fresh) apply(fresh_guess) apply(rule exists_fresh') apply(simp add: fin_supp) apply(fresh_guess) done lemma fresh_fun_OrR2[eqvt_force]: fixes pi1::"name prm" and pi2::"coname prm" shows "pi1\fresh_fun (\a'. Cut .(OrR2 .M a') (x).P)= fresh_fun (pi1\(\a'. Cut .(OrR2 .M a') (x).P))" and "pi2\fresh_fun (\a'. Cut .(OrR2 .M a') (x).P)= fresh_fun (pi2\(\a'. Cut .(OrR2 .M a') (x).P))" apply - apply(perm_simp) apply(subgoal_tac "\n::coname. n\(P,M,b,pi1\P,pi1\M,pi1\b,pi1)") apply(simp add: fresh_prod) apply(auto) apply(simp add: fresh_fun_simp_OrR2 calc_atm) apply(rule exists_fresh') apply(simp add: fin_supp) apply(perm_simp) apply(subgoal_tac "\n::coname. n\(P,M,b,pi2\P,pi2\M,pi2\b,pi2)") apply(simp add: fresh_prod) apply(auto) apply(simp add: fresh_fun_simp_OrR2 at_prm_fresh[OF at_coname_inst] swap_simps) apply(rule exists_fresh') apply(simp add: fin_supp) done lemma fresh_fun_simp_ImpR: assumes a: "a'\P" "a'\M" "a'\b" shows "fresh_fun (\a'. Cut .(ImpR (y)..M a') (x).P) = Cut .(ImpR (y)..M a') (x).P" using a apply - apply(rule fresh_fun_app) apply(rule pt_coname_inst) apply(rule at_coname_inst) apply(finite_guess) apply(subgoal_tac "\n::coname. n\(x,P,y,b,M)") apply(erule exE) apply(rule_tac x="n" in exI) apply(simp add: fresh_prod abs_fresh) apply(fresh_guess) apply(rule exists_fresh') apply(simp add: fin_supp) apply(fresh_guess) done lemma fresh_fun_ImpR[eqvt_force]: fixes pi1::"name prm" and pi2::"coname prm" shows "pi1\fresh_fun (\a'. Cut .(ImpR (y)..M a') (x).P)= fresh_fun (pi1\(\a'. Cut .(ImpR (y)..M a') (x).P))" and "pi2\fresh_fun (\a'. Cut .(ImpR (y)..M a') (x).P)= fresh_fun (pi2\(\a'. Cut .(ImpR (y)..M a') (x).P))" apply - apply(perm_simp) apply(subgoal_tac "\n::coname. 
                      n\<sharp>(P,M,b,pi1\<bullet>P,pi1\<bullet>M,pi1\<bullet>b,pi1)")
apply(simp add: fresh_prod)
apply(auto)
apply(simp add: fresh_fun_simp_ImpR calc_atm)
apply(rule exists_fresh')
apply(simp add: fin_supp)
apply(perm_simp)
apply(subgoal_tac "\<exists>n::coname. n\<sharp>(P,M,b,pi2\<bullet>P,pi2\<bullet>M,pi2\<bullet>b,pi2)")
apply(simp add: fresh_prod)
apply(auto)
apply(simp add: fresh_fun_simp_ImpR at_prm_fresh[OF at_coname_inst] swap_simps)
apply(rule exists_fresh')
apply(simp add: fin_supp)
done

nominal_primrec (freshness_context: "(y::name,c::coname,P::trm)")
  substn :: "trm \<Rightarrow> name \<Rightarrow> coname \<Rightarrow> trm \<Rightarrow> trm" ("_{_:=<_>._}" [100,100,100,100] 100)
where
  "(Ax x a){y:=<c>.P} = (if x=y then Cut <c>.P (y).Ax y a else Ax x a)"
| "\<lbrakk>a\<sharp>(c,P,N);x\<sharp>(y,P,M)\<rbrakk> \<Longrightarrow> (Cut <a>.M (x).N){y:=<c>.P} =
  (if M=Ax y a then Cut <c>.P (x).(N{y:=<c>.P}) else Cut <a>.(M{y:=<c>.P}) (x).(N{y:=<c>.P}))"
| "x\<sharp>(y,P) \<Longrightarrow> (NotR (x).M a){y:=<c>.P} = NotR (x).(M{y:=<c>.P}) a"
| "a\<sharp>(c,P) \<Longrightarrow> (NotL <a>.M x){y:=<c>.P} =
  (if x=y then fresh_fun (\<lambda>x'. Cut <c>.P (x').NotL <a>.(M{y:=<c>.P}) x') else NotL <a>.(M{y:=<c>.P}) x)"
| "\<lbrakk>a\<sharp>(c,P,N,d);b\<sharp>(c,P,M,d);b\<noteq>a\<rbrakk> \<Longrightarrow>
  (AndR <a>.M <b>.N d){y:=<c>.P} = AndR <a>.(M{y:=<c>.P}) <b>.(N{y:=<c>.P}) d"
| "x\<sharp>(y,P,z) \<Longrightarrow> (AndL1 (x).M z){y:=<c>.P} =
  (if z=y then fresh_fun (\<lambda>z'. Cut <c>.P (z').AndL1 (x).(M{y:=<c>.P}) z') else AndL1 (x).(M{y:=<c>.P}) z)"
| "x\<sharp>(y,P,z) \<Longrightarrow> (AndL2 (x).M z){y:=<c>.P} =
  (if z=y then fresh_fun (\<lambda>z'. Cut <c>.P (z').AndL2 (x).(M{y:=<c>.P}) z') else AndL2 (x).(M{y:=<c>.P}) z)"
| "a\<sharp>(c,P,b) \<Longrightarrow> (OrR1 <a>.M b){y:=<c>.P} = OrR1 <a>.(M{y:=<c>.P}) b"
| "a\<sharp>(c,P,b) \<Longrightarrow> (OrR2 <a>.M b){y:=<c>.P} = OrR2 <a>.(M{y:=<c>.P}) b"
| "\<lbrakk>x\<sharp>(y,N,P,z);u\<sharp>(y,M,P,z);x\<noteq>u\<rbrakk> \<Longrightarrow> (OrL (x).M (u).N z){y:=<c>.P} =
  (if z=y then fresh_fun (\<lambda>z'. Cut <c>.P (z').OrL (x).(M{y:=<c>.P}) (u).(N{y:=<c>.P}) z')
   else OrL (x).(M{y:=<c>.P}) (u).(N{y:=<c>.P}) z)"
| "\<lbrakk>a\<sharp>(b,c,P); x\<sharp>(y,P)\<rbrakk> \<Longrightarrow> (ImpR (x).<a>.M b){y:=<c>.P} = ImpR (x).<a>.(M{y:=<c>.P}) b"
| "\<lbrakk>a\<sharp>(N,c,P);x\<sharp>(y,P,M,z)\<rbrakk> \<Longrightarrow> (ImpL <a>.M (x).N z){y:=<c>.P} =
  (if y=z then fresh_fun (\<lambda>z'. Cut <c>.P (z').ImpL <a>.(M{y:=<c>.P}) (x).(N{y:=<c>.P}) z')
   else ImpL <a>.(M{y:=<c>.P}) (x).(N{y:=<c>.P}) z)"
apply(finite_guess)+
apply(rule TrueI)+
apply(simp add: abs_fresh abs_supp)+
apply(rule impI)
apply(subgoal_tac "\<exists>x::name.
x\(x1,P,y1)", erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotL abs_fresh fresh_atm) apply(rule exists_fresh', simp add: fin_supp) apply(simp add: abs_fresh abs_supp)+ apply(rule impI) apply(subgoal_tac "\x::name. x\(x1,P,y1)", erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL1 abs_fresh fresh_atm) apply(rule exists_fresh', simp add: fin_supp) apply(simp add: abs_fresh abs_supp)+ apply(rule impI) apply(subgoal_tac "\x::name. x\(x1,P,y1)", erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL2 abs_fresh fresh_atm) apply(rule exists_fresh', simp add: fin_supp) apply(simp add: abs_fresh abs_supp)+ apply(rule impI) apply(subgoal_tac "\x::name. x\(x1,P,y1,x3,y2)", erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrL abs_fresh fresh_atm) apply(rule exists_fresh', simp add: fin_supp) apply(simp add: abs_fresh abs_supp)+ apply(rule impI) apply(subgoal_tac "\x::name. x\(x1,P,y1,x3,y2)", erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrL abs_fresh fresh_atm) apply(rule exists_fresh', simp add: fin_supp) apply(simp add: abs_fresh abs_supp)+ apply(rule impI) apply(subgoal_tac "\x::name. x\(x3,P,y1,y2)", erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpL abs_fresh fresh_atm) apply(rule exists_fresh', simp add: fin_supp) apply(simp add: abs_fresh abs_supp)+ apply(rule impI) apply(subgoal_tac "\x::name. 
x\(x3,P,y1,y2)", erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpL abs_fresh fresh_atm) apply(rule exists_fresh', simp add: fin_supp) apply(fresh_guess)+ done nominal_primrec (freshness_context: "(d::name,z::coname,P::trm)") substc :: "trm \ coname \ name \ trm \ trm" ("_{_:=(_)._}" [100,100,100,100] 100) where "(Ax x a){d:=(z).P} = (if d=a then Cut .(Ax x a) (z).P else Ax x a)" | "\a\(d,P,N);x\(z,P,M)\ \ (Cut .M (x).N){d:=(z).P} = (if N=Ax x d then Cut .(M{d:=(z).P}) (z).P else Cut .(M{d:=(z).P}) (x).(N{d:=(z).P}))" | "x\(z,P) \ (NotR (x).M a){d:=(z).P} = (if d=a then fresh_fun (\a'. Cut .NotR (x).(M{d:=(z).P}) a' (z).P) else NotR (x).(M{d:=(z).P}) a)" | "a\(d,P) \ (NotL .M x){d:=(z).P} = NotL .(M{d:=(z).P}) x" | "\a\(P,c,N,d);b\(P,c,M,d);b\a\ \ (AndR .M .N c){d:=(z).P} = (if d=c then fresh_fun (\a'. Cut .(AndR .(M{d:=(z).P}) .(N{d:=(z).P}) a') (z).P) else AndR .(M{d:=(z).P}) .(N{d:=(z).P}) c)" | "x\(y,z,P) \ (AndL1 (x).M y){d:=(z).P} = AndL1 (x).(M{d:=(z).P}) y" | "x\(y,P,z) \ (AndL2 (x).M y){d:=(z).P} = AndL2 (x).(M{d:=(z).P}) y" | "a\(d,P,b) \ (OrR1 .M b){d:=(z).P} = (if d=b then fresh_fun (\a'. Cut .OrR1 .(M{d:=(z).P}) a' (z).P) else OrR1 .(M{d:=(z).P}) b)" | "a\(d,P,b) \ (OrR2 .M b){d:=(z).P} = (if d=b then fresh_fun (\a'. Cut .OrR2 .(M{d:=(z).P}) a' (z).P) else OrR2 .(M{d:=(z).P}) b)" | "\x\(N,z,P,u);y\(M,z,P,u);x\y\ \ (OrL (x).M (y).N u){d:=(z).P} = OrL (x).(M{d:=(z).P}) (y).(N{d:=(z).P}) u" | "\a\(b,d,P); x\(z,P)\ \ (ImpR (x)..M b){d:=(z).P} = (if d=b then fresh_fun (\a'. Cut .ImpR (x)..(M{d:=(z).P}) a' (z).P) else ImpR (x)..(M{d:=(z).P}) b)" | "\a\(N,d,P);x\(y,z,P,M)\ \ (ImpL .M (x).N y){d:=(z).P} = ImpL .(M{d:=(z).P}) (x).(N{d:=(z).P}) y" apply(finite_guess)+ apply(rule TrueI)+ apply(simp add: abs_fresh abs_supp fs_name1 fs_coname1)+ apply(rule impI) apply(subgoal_tac "\x::coname. 
x\(x1,P,y1)", erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotR abs_fresh fresh_atm) apply(rule exists_fresh', simp add: fin_supp) apply(simp add: abs_fresh abs_supp)+ apply(rule impI) apply(subgoal_tac "\x::coname. x\(x1,P,y1,x3,y2)", erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndR abs_fresh fresh_atm) apply(rule exists_fresh', simp add: fin_supp) apply(simp add: abs_fresh abs_supp)+ apply(rule impI) apply(subgoal_tac "\x::coname. x\(x1,P,y1,x3,y2)", erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndR abs_fresh fresh_atm) apply(rule exists_fresh', simp add: fin_supp) apply(simp add: abs_fresh abs_supp)+ apply(rule impI) apply(subgoal_tac "\x::coname. x\(x1,P,y1)", erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR1 abs_fresh fresh_atm) apply(rule exists_fresh', simp add: fin_supp) apply(simp add: abs_fresh abs_supp)+ apply(rule impI) apply(subgoal_tac "\x::coname. x\(x1,P,y1)", erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR2 abs_fresh fresh_atm) apply(rule exists_fresh', simp add: fin_supp) apply(simp add: abs_fresh abs_supp)+ apply(rule impI) apply(subgoal_tac "\x::coname. x\(x1,P,x2,y1)", erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpR abs_fresh fresh_atm abs_supp) apply(rule exists_fresh', simp add: fin_supp) apply(simp add: abs_fresh abs_supp)+ apply(rule impI) apply(subgoal_tac "\x::coname. 
x\(x1,P,x2,y1)", erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpR abs_fresh fresh_atm) apply(rule exists_fresh', simp add: fin_supp) apply(simp add: abs_fresh abs_supp)+ apply(fresh_guess add: abs_fresh fresh_prod)+ done lemma csubst_eqvt[eqvt]: fixes pi1::"name prm" and pi2::"coname prm" shows "pi1\(M{c:=(x).N}) = (pi1\M){(pi1\c):=(pi1\x).(pi1\N)}" and "pi2\(M{c:=(x).N}) = (pi2\M){(pi2\c):=(pi2\x).(pi2\N)}" apply(nominal_induct M avoiding: c x N rule: trm.strong_induct) apply(auto simp add: eq_bij fresh_bij eqvts) apply(perm_simp)+ done lemma nsubst_eqvt[eqvt]: fixes pi1::"name prm" and pi2::"coname prm" shows "pi1\(M{x:=.N}) = (pi1\M){(pi1\x):=<(pi1\c)>.(pi1\N)}" and "pi2\(M{x:=.N}) = (pi2\M){(pi2\x):=<(pi2\c)>.(pi2\N)}" apply(nominal_induct M avoiding: c x N rule: trm.strong_induct) apply(auto simp add: eq_bij fresh_bij eqvts) apply(perm_simp)+ done lemma supp_subst1: shows "supp (M{y:=.P}) \ ((supp M) - {y}) \ (supp P)" apply(nominal_induct M avoiding: y P c rule: trm.strong_induct) apply(auto) apply(auto simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast)+ apply(subgoal_tac "\x'::name. x'\(trm{y:=.P},P)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm{y:=.P},P)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(blast)+ apply(subgoal_tac "\x'::name. 
x'\(trm{y:=.P},P,name1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL1 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm{y:=.P},P,name1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL1 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm{y:=.P},P,name1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL1 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm{y:=.P},P,name1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL2 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm{y:=.P},P,name1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL2 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm{y:=.P},P,name1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL2 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. 
x'\(trm1{y:=.P},P,name1,trm2{y:=.P},name2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm1{y:=.P},P,name1,trm2{y:=.P},name2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm1{y:=.P},P,name1,trm2{y:=.P},name2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm1{y:=.P},P,name1,trm2{y:=.P},name2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(blast)+ apply(subgoal_tac "\x'::name. x'\(trm1{name2:=.P},P,name1,trm2{name2:=.P})") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm1{name2:=.P},P,name1,trm2{name2:=.P})") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. 
x'\(trm1{name2:=.P},P,name1,trm2{name2:=.P})") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(blast)+ done lemma supp_subst2: shows "supp (M{y:=.P}) \ supp (M) \ ((supp P) - {c})" apply(nominal_induct M avoiding: y P c rule: trm.strong_induct) apply(auto) apply(auto simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast)+ apply(subgoal_tac "\x'::name. x'\(trm{y:=.P},P)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm{y:=.P},P)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm{y:=.P},P)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(blast)+ apply(subgoal_tac "\x'::name. x'\(trm{y:=.P},P,name1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL1 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm{y:=.P},P,name1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL1 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. 
x'\(trm{y:=.P},P,name1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL2 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm{y:=.P},P,name1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL2 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm1{y:=.P},P,name1,trm2{y:=.P},name2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm1{y:=.P},P,name1,trm2{y:=.P},name2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(blast)+ apply(subgoal_tac "\x'::name. x'\(trm1{name2:=.P},P,name1,trm2{name2:=.P})") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm1{name2:=.P},P,name1,trm2{name2:=.P})") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. 
x'\(trm1{name2:=.P},P,name1,trm2{name2:=.P})") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(subgoal_tac "\x'::name. x'\(trm1{name2:=.P},P,name1,trm2{name2:=.P})") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(blast)+ done lemma supp_subst3: shows "supp (M{c:=(x).P}) \ ((supp M) - {c}) \ (supp P)" apply(nominal_induct M avoiding: x P c rule: trm.strong_induct) apply(auto) apply(auto simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast)+ apply(subgoal_tac "\x'::coname. x'\(trm{coname:=(x).P},P)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotR abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm{coname:=(x).P},P)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotR abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm1{coname3:=(x).P},P,trm2{coname3:=(x).P},coname1,coname2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndR abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. 
x'\(trm1{coname3:=(x).P},P,trm2{coname3:=(x).P},coname1,coname2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndR abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm1{coname3:=(x).P},P,trm2{coname3:=(x).P},coname1,coname2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndR abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm1{coname3:=(x).P},P,trm2{coname3:=(x).P},coname1,coname2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndR abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(blast)+ apply(subgoal_tac "\x'::coname. x'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR1 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR1 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR1 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. 
x'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR2 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR2 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR2 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(blast)+ apply(subgoal_tac "\x'::coname. x'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpR abs_fresh abs_supp fin_supp fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpR abs_fresh abs_supp fin_supp fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. 
x'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpR abs_fresh abs_supp fin_supp fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(blast)+ done lemma supp_subst4: shows "supp (M{c:=(x).P}) \ (supp M) \ ((supp P) - {x})" apply(nominal_induct M avoiding: x P c rule: trm.strong_induct) apply(auto) apply(auto simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast)+ apply(subgoal_tac "\x'::coname. x'\(trm{coname:=(x).P},P)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotR abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm{coname:=(x).P},P)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotR abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm{coname:=(x).P},P)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotR abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm1{coname3:=(x).P},P,trm2{coname3:=(x).P},coname1,coname2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndR abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. 
x'\(trm1{coname3:=(x).P},P,trm2{coname3:=(x).P},coname1,coname2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndR abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(blast)+ apply(subgoal_tac "\x'::coname. x'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR1 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR1 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR2 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR2 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(blast)+ apply(subgoal_tac "\x'::coname. x'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpR abs_fresh abs_supp fin_supp fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. 
x'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpR abs_fresh abs_supp fin_supp fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpR abs_fresh abs_supp fin_supp fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(blast)+ done lemma supp_subst5: shows "(supp M - {y}) \ supp (M{y:=.P})" apply(nominal_induct M avoiding: y P c rule: trm.strong_induct) apply(auto) apply(auto simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast)+ apply(subgoal_tac "\x'::name. x'\(trm{y:=.P},P)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(blast) apply(subgoal_tac "\x'::name. x'\(trm{y:=.P},P,name1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL1 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm{y:=.P},P,name1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL2 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. 
x'\(trm1{y:=.P},P,name1,trm2{y:=.P},name2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm1{y:=.P},P,name1,trm2{y:=.P},name2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(blast) apply(subgoal_tac "\x'::name. x'\(trm1{name2:=.P},P,name1,trm2{name2:=.P})") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm1{name2:=.P},P,name1,trm2{name2:=.P})") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(blast) done lemma supp_subst6: shows "(supp M) \ ((supp (M{y:=.P}))::coname set)" apply(nominal_induct M avoiding: y P c rule: trm.strong_induct) apply(auto) apply(auto simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast)+ apply(subgoal_tac "\x'::name. x'\(trm{y:=.P},P)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(blast) apply(subgoal_tac "\x'::name. 
x'\(trm{y:=.P},P,name1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL1 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm{y:=.P},P,name1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL2 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm1{y:=.P},P,name1,trm2{y:=.P},name2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm1{y:=.P},P,name1,trm2{y:=.P},name2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(blast) apply(subgoal_tac "\x'::name. x'\(trm1{name2:=.P},P,name1,trm2{name2:=.P})") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. 
x'\(trm1{name2:=.P},P,name1,trm2{name2:=.P})") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpL abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm) apply(blast) apply(rule exists_fresh'(1)[OF fs_name1]) apply(blast) done lemma supp_subst7: shows "(supp M - {c}) \ supp (M{c:=(x).P})" apply(nominal_induct M avoiding: x P c rule: trm.strong_induct) apply(auto) apply(auto simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast)+ apply(subgoal_tac "\x'::coname. x'\(trm{coname:=(x).P},P)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotR abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm1{coname3:=(x).P},P,trm2{coname3:=(x).P},coname1,coname2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndR abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm1{coname3:=(x).P},P,trm2{coname3:=(x).P},coname1,coname2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndR abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(blast)+ apply(subgoal_tac "\x'::coname. x'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR1 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. 
x'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR2 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(blast)+ apply(subgoal_tac "\x'::coname. x'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpR abs_fresh abs_supp fin_supp fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(blast) done lemma supp_subst8: shows "(supp M) \ ((supp (M{c:=(x).P}))::name set)" apply(nominal_induct M avoiding: x P c rule: trm.strong_induct) apply(auto) apply(auto simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast)+ apply(subgoal_tac "\x'::coname. x'\(trm{coname:=(x).P},P)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotR abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm1{coname3:=(x).P},P,trm2{coname3:=(x).P},coname1,coname2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndR abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm1{coname3:=(x).P},P,trm2{coname3:=(x).P},coname1,coname2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndR abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(blast)+ apply(subgoal_tac "\x'::coname. 
x'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR1 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR2 abs_fresh fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(blast) apply(subgoal_tac "\x'::coname. x'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpR abs_fresh abs_supp fin_supp fresh_atm) apply(simp add: fresh_def abs_supp trm.supp supp_atm fin_supp) apply(blast) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(blast)+ done lemmas subst_supp = supp_subst1 supp_subst2 supp_subst3 supp_subst4 supp_subst5 supp_subst6 supp_subst7 supp_subst8 lemma subst_fresh: fixes x::"name" and c::"coname" shows "x\P \ x\M{x:=.P}" and "b\P \ b\M{b:=(y).P}" and "x\(M,P) \ x\M{y:=.P}" and "x\M \ x\M{c:=(x).P}" and "x\(M,P) \ x\M{c:=(y).P}" and "b\(M,P) \ b\M{c:=(y).P}" and "b\M \ b\M{y:=.P}" and "b\(M,P) \ b\M{y:=.P}" apply - apply(insert subst_supp) apply(simp_all add: fresh_def supp_prod) apply(blast)+ done lemma forget: shows "x\M \ M{x:=.P} = M" and "c\M \ M{c:=(x).P} = M" apply(nominal_induct M avoiding: x c P rule: trm.strong_induct) apply(auto simp add: fresh_atm abs_fresh abs_supp fin_supp) done lemma substc_rename1: assumes a: "c\(M,a)" shows "M{a:=(x).N} = ([(c,a)]\M){c:=(x).N}" using a proof(nominal_induct M avoiding: c a x N rule: trm.strong_induct) case (Ax z d) then show ?case by (auto simp add: fresh_prod fresh_atm calc_atm trm.inject alpha) next case NotL then show ?case by (auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod) next case 
(NotR y M d) then show ?case by(auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod) next case (AndR c1 M c2 M' c3) then show ?case apply(auto simp add: fresh_prod calc_atm fresh_atm abs_fresh fresh_left) apply (metis (erased, opaque_lifting)) by metis next case AndL1 then show ?case by (auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod) next case AndL2 then show ?case by (auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod) next case (OrR1 d M e) then show ?case by(auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod) next case (OrR2 d M e) then show ?case by(auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod) next case (OrL x1 M x2 M' x3) then show ?case by (auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) next case ImpL then show ?case by (auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) metis next case (ImpR y d M e) then show ?case by(auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) next case (Cut d M y M') then show ?case by(simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) (metis crename.simps(1) crename_id crename_rename) qed lemma substc_rename2: assumes a: "y\(N,x)" shows "M{a:=(x).N} = M{a:=(y).([(y,x)]\N)}" using a proof(nominal_induct M avoiding: a x y N rule: trm.strong_induct) case (Ax z d) then show ?case by (auto simp add: fresh_prod fresh_atm calc_atm trm.inject alpha perm_swap fresh_left) next case NotL then show ?case by (auto simp add: fresh_prod fresh_atm calc_atm trm.inject alpha perm_swap fresh_left) next case (NotR y M d) then show ?case apply(auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) apply(subgoal_tac "\a'::coname. 
a'\(N,M{d:=(y).([(y,x)]\N)},[(y,x)]\N)") apply(erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotR) apply(simp add: trm.inject alpha perm_swap fresh_left calc_atm) apply(rule exists_fresh'(2)[OF fs_coname1]) done next case (AndR c1 M c2 M' c3) then show ?case apply(auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) apply(subgoal_tac "\a'::coname. a'\(N,M{c3:=(y).([(y,x)]\N)},M'{c3:=(y).([(y,x)]\N)},[(y,x)]\N,c1,c2,c3)") apply(erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndR) apply (auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh subst_fresh perm_swap fresh_left) apply(rule exists_fresh'(2)[OF fs_coname1]) done next case AndL1 then show ?case by (auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) next case AndL2 then show ?case by (auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) next case (OrR1 d M e) then show ?case apply(auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) apply(subgoal_tac "\a'::coname. a'\(N,M{e:=(y).([(y,x)]\N)},[(y,x)]\N,d,e)") apply(erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR1) apply(simp add: trm.inject alpha perm_swap fresh_left calc_atm) apply(rule exists_fresh'(2)[OF fs_coname1]) done next case (OrR2 d M e) then show ?case apply(auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) apply(subgoal_tac "\a'::coname. 
a'\(N,M{e:=(y).([(y,x)]\N)},[(y,x)]\N,d,e)") apply(erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR2) apply(simp add: trm.inject alpha perm_swap fresh_left calc_atm) apply(rule exists_fresh'(2)[OF fs_coname1]) done next case (OrL x1 M x2 M' x3) then show ?case by (auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) next case ImpL then show ?case by (auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) next case (ImpR y d M e) then show ?case apply(auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) apply(subgoal_tac "\a'::coname. a'\(N,M{e:=(y).([(y,x)]\N)},[(y,x)]\N,d,e)") apply(erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpR) apply(simp add: trm.inject alpha perm_swap fresh_left calc_atm) apply(rule exists_fresh'(2)[OF fs_coname1]) done next case (Cut d M y M') then show ?case by (auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left perm_swap) qed lemma substn_rename3: assumes a: "y\(M,x)" shows "M{x:=.N} = ([(y,x)]\M){y:=.N}" using a proof(nominal_induct M avoiding: a x y N rule: trm.strong_induct) case (Ax z d) then show ?case by (auto simp add: fresh_prod fresh_atm calc_atm trm.inject alpha) next case NotR then show ?case by (auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod) next case (NotL d M z) then show ?case by(auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod) next case (AndR c1 M c2 M' c3) then show ?case by (auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) next case OrR1 then show ?case by (auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod) next case OrR2 then show ?case by (auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod) next case (AndL1 u M v) then show ?case by(auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh 
fresh_prod) next case (AndL2 u M v) then show ?case by(auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod) next case (OrL x1 M x2 M' x3) then show ?case by(simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) (metis (poly_guards_query)) next case ImpR then show ?case by(auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_left abs_supp fin_supp fresh_prod) next case (ImpL d M v M' u) then show ?case by(simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) (metis (poly_guards_query)) next case (Cut d M y M') then show ?case apply(auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) apply(drule pt_bij1[OF pt_name_inst, OF at_name_inst]) apply(simp add: calc_atm) apply metis done qed lemma substn_rename4: assumes a: "c\(N,a)" shows "M{x:=.N} = M{x:=.([(c,a)]\N)}" using a proof(nominal_induct M avoiding: x c a N rule: trm.strong_induct) case (Ax z d) then show ?case by (auto simp add: fresh_prod fresh_atm calc_atm trm.inject alpha perm_swap fresh_left) next case NotR then show ?case by (auto simp add: fresh_prod fresh_atm calc_atm trm.inject alpha perm_swap fresh_left) next case (NotL d M y) then show ?case apply(auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) apply(subgoal_tac "\a'::name. a'\(N,M{x:=.([(c,a)]\N)},[(c,a)]\N)") apply(erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotL) apply(simp add: trm.inject alpha perm_swap fresh_left calc_atm) apply(rule exists_fresh'(1)[OF fs_name1]) done next case (OrL x1 M x2 M' x3) then show ?case apply(auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) apply(subgoal_tac "\a'::name. 
a'\(N,M{x:=.([(c,a)]\N)},M'{x:=.([(c,a)]\N)},[(c,a)]\N,x1,x2,x3)") apply(erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrL) apply (auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh subst_fresh perm_swap fresh_left) apply(rule exists_fresh'(1)[OF fs_name1]) done next case OrR1 then show ?case by (auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) next case OrR2 then show ?case by (auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) next case (AndL1 u M v) then show ?case apply(auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) apply(subgoal_tac "\a'::name. a'\(N,M{x:=.([(c,a)]\N)},[(c,a)]\N,u,v)") apply(erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL1) apply(simp add: trm.inject alpha perm_swap fresh_left calc_atm) apply(rule exists_fresh'(1)[OF fs_name1]) done next case (AndL2 u M v) then show ?case apply(auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) apply(subgoal_tac "\a'::name. a'\(N,M{x:=.([(c,a)]\N)},[(c,a)]\N,u,v)") apply(erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL2) apply(simp add: trm.inject alpha perm_swap fresh_left calc_atm) apply(rule exists_fresh'(1)[OF fs_name1]) done next case (AndR c1 M c2 M' c3) then show ?case by (auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) next case ImpR then show ?case by (auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) next case (ImpL d M y M' u) then show ?case apply(auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left) apply(subgoal_tac "\a'::name. 
a'\(N,M{u:=.([(c,a)]\N)},M'{u:=.([(c,a)]\N)},[(c,a)]\N,y,u)") apply(erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpL) apply(simp add: trm.inject alpha perm_swap fresh_left calc_atm) apply(rule exists_fresh'(1)[OF fs_name1]) done next case (Cut d M y M') then show ?case by (auto simp add: calc_atm trm.inject alpha fresh_atm abs_fresh fresh_prod fresh_left perm_swap) qed lemma subst_rename5: assumes a: "c'\(c,N)" "x'\(x,M)" shows "M{x:=.N} = ([(x',x)]\M){x':=.([(c',c)]\N)}" proof - have "M{x:=.N} = ([(x',x)]\M){x':=.N}" using a by (simp add: substn_rename3) also have "\ = ([(x',x)]\M){x':=.([(c',c)]\N)}" using a by (simp add: substn_rename4) finally show ?thesis by simp qed lemma subst_rename6: assumes a: "c'\(c,M)" "x'\(x,N)" shows "M{c:=(x).N} = ([(c',c)]\M){c':=(x').([(x',x)]\N)}" proof - have "M{c:=(x).N} = ([(c',c)]\M){c':=(x).N}" using a by (simp add: substc_rename1) also have "\ = ([(c',c)]\M){c':=(x').([(x',x)]\N)}" using a by (simp add: substc_rename2) finally show ?thesis by simp qed lemmas subst_rename = substc_rename1 substc_rename2 substn_rename3 substn_rename4 subst_rename5 subst_rename6 lemma better_Cut_substn[simp]: assumes a: "a\[c].P" "x\(y,P)" shows "(Cut .M (x).N){y:=.P} = (if M=Ax y a then Cut .P (x).(N{y:=.P}) else Cut .(M{y:=.P}) (x).(N{y:=.P}))" proof - obtain x'::"name" where fs1: "x'\(M,N,c,P,x,y)" by (rule exists_fresh(1), rule fin_supp, blast) obtain a'::"coname" where fs2: "a'\(M,N,c,P,a)" by (rule exists_fresh(2), rule fin_supp, blast) have eq1: "(Cut .M (x).N) = (Cut .([(a',a)]\M) (x').([(x',x)]\N))" using fs1 fs2 by (rule_tac sym, auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm) have eq2: "(M=Ax y a) = (([(a',a)]\M)=Ax y a')" apply(auto simp add: calc_atm) apply(drule pt_bij1[OF pt_coname_inst, OF at_coname_inst]) apply(simp add: calc_atm) done have "(Cut .M (x).N){y:=.P} = (Cut .([(a',a)]\M) (x').([(x',x)]\N)){y:=.P}" using eq1 by simp also have "\ = (if ([(a',a)]\M)=Ax y a' then 
Cut .P (x').(([(x',x)]\N){y:=.P}) else Cut .(([(a',a)]\M){y:=.P}) (x').(([(x',x)]\N){y:=.P}))" using fs1 fs2 by (auto simp add: fresh_prod fresh_left calc_atm fresh_atm) also have "\ =(if M=Ax y a then Cut .P (x).(N{y:=.P}) else Cut .(M{y:=.P}) (x).(N{y:=.P}))" using fs1 fs2 a apply - apply(simp only: eq2[symmetric]) apply(auto simp add: trm.inject) apply(simp_all add: alpha fresh_atm fresh_prod subst_fresh) apply(simp_all add: eqvts perm_fresh_fresh calc_atm) apply(auto) apply(rule subst_rename) apply(simp add: fresh_prod fresh_atm) apply(simp add: abs_fresh) apply(simp add: perm_fresh_fresh) done finally show ?thesis by simp qed lemma better_Cut_substc[simp]: assumes a: "a\(c,P)" "x\[y].P" shows "(Cut .M (x).N){c:=(y).P} = (if N=Ax x c then Cut .(M{c:=(y).P}) (y).P else Cut .(M{c:=(y).P}) (x).(N{c:=(y).P}))" proof - obtain x'::"name" where fs1: "x'\(M,N,c,P,x,y)" by (rule exists_fresh(1), rule fin_supp, blast) obtain a'::"coname" where fs2: "a'\(M,N,c,P,a)" by (rule exists_fresh(2), rule fin_supp, blast) have eq1: "(Cut .M (x).N) = (Cut .([(a',a)]\M) (x').([(x',x)]\N))" using fs1 fs2 by (rule_tac sym, auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm) have eq2: "(N=Ax x c) = (([(x',x)]\N)=Ax x' c)" apply(auto simp add: calc_atm) apply(drule pt_bij1[OF pt_name_inst, OF at_name_inst]) apply(simp add: calc_atm) done have "(Cut .M (x).N){c:=(y).P} = (Cut .([(a',a)]\M) (x').([(x',x)]\N)){c:=(y).P}" using eq1 by simp also have "\ = (if ([(x',x)]\N)=Ax x' c then Cut .(([(a',a)]\M){c:=(y).P}) (y).P else Cut .(([(a',a)]\M){c:=(y).P}) (x').(([(x',x)]\N){c:=(y).P}))" using fs1 fs2 by (simp add: fresh_prod fresh_left calc_atm fresh_atm trm.inject) also have "\ =(if N=Ax x c then Cut .(M{c:=(y).P}) (y).P else Cut .(M{c:=(y).P}) (x).(N{c:=(y).P}))" using fs1 fs2 a apply - apply(simp only: eq2[symmetric]) apply(auto simp add: trm.inject) apply(simp_all add: alpha fresh_atm fresh_prod subst_fresh) apply(simp_all add: eqvts perm_fresh_fresh calc_atm) apply(auto) 
apply(rule subst_rename)
apply(simp add: fresh_prod fresh_atm)
apply(simp add: abs_fresh)
apply(simp add: perm_fresh_fresh)
done
  finally show ?thesis by simp
qed

lemma better_Cut_substn':
  assumes a: "a\<sharp>[c].P" "y\<sharp>(N,x)" "M\<noteq>Ax y a"
  shows "(Cut <a>.M (x).N){y:=<c>.P} = Cut <a>.(M{y:=<c>.P}) (x).N"
using a
apply -
apply(generate_fresh "name")
apply(subgoal_tac "Cut <a>.M (x).N = Cut <a>.M (ca).([(ca,x)]\<bullet>N)")
apply(simp)
apply(subgoal_tac"y\<sharp>([(ca,x)]\<bullet>N)")
apply(simp add: forget)
apply(simp add: trm.inject)
apply(auto simp add: fresh_left calc_atm fresh_prod fresh_atm)[1]
apply(simp add: trm.inject)
apply(rule sym)
apply(simp add: alpha fresh_prod fresh_atm)
done

lemma better_NotR_substc:
  assumes a: "d\<sharp>M"
  shows "(NotR (x).M d){d:=(z).P} = fresh_fun (\<lambda>a'. Cut <a'>.NotR (x).M a' (z).P)"
using a
apply -
apply(generate_fresh "name")
apply(subgoal_tac "NotR (x).M d = NotR (c).([(c,x)]\<bullet>M) d")
apply(auto simp add: fresh_left calc_atm forget)
apply(generate_fresh "coname")
apply(rule_tac f="fresh_fun" in arg_cong)
apply(simp add: fun_eq_iff)
apply(rule allI)
apply(simp add: trm.inject alpha fresh_prod fresh_atm)
apply(perm_simp add: trm.inject alpha fresh_prod fresh_atm fresh_left, auto)
done

lemma better_NotL_substn:
  assumes a: "y\<sharp>M"
  shows "(NotL <a>.M y){y:=<c>.P} = fresh_fun (\<lambda>x'. Cut <c>.P (x').NotL <a>.M x')"
using a
apply -
apply(generate_fresh "coname")
apply(subgoal_tac "NotL <a>.M y = NotL <ca>.([(ca,a)]\<bullet>M) y")
apply(auto simp add: fresh_left calc_atm forget)
apply(generate_fresh "name")
apply(rule_tac f="fresh_fun" in arg_cong)
apply(simp add: fun_eq_iff)
apply(rule allI)
apply(simp add: trm.inject alpha fresh_prod fresh_atm)
apply(perm_simp add: trm.inject alpha fresh_prod fresh_atm fresh_left, auto)
done

lemma better_AndL1_substn:
  assumes a: "y\<sharp>[x].M"
  shows "(AndL1 (x).M y){y:=<c>.P} = fresh_fun (\<lambda>z'. Cut <c>.P (z').AndL1 (x).M z')"
using a
apply -
apply(generate_fresh "name")
apply(subgoal_tac "AndL1 (x).M y = AndL1 (ca).([(ca,x)]\<bullet>M) y")
apply(auto simp add: fresh_left calc_atm forget abs_fresh)[1]
apply(generate_fresh "name")
apply(rule_tac f="fresh_fun" in arg_cong)
apply(simp add: fun_eq_iff)
apply(rule allI)
apply(simp add: trm.inject alpha fresh_prod fresh_atm)
apply(rule forget)
apply(simp add: fresh_left calc_atm)
apply(rule_tac f="fresh_fun" in arg_cong)
apply(simp add: fun_eq_iff)
apply(rule allI)
apply(simp add: trm.inject alpha fresh_prod fresh_atm)
apply(rule forget)
apply(simp add: fresh_left calc_atm)
apply(perm_simp add: trm.inject alpha fresh_left calc_atm fresh_prod fresh_atm)
apply(auto)
done

lemma better_AndL2_substn:
  assumes a: "y\<sharp>[x].M"
  shows "(AndL2 (x).M y){y:=<c>.P} = fresh_fun (\<lambda>z'. Cut <c>.P (z').AndL2 (x).M z')"
using a
apply -
apply(generate_fresh "name")
apply(subgoal_tac "AndL2 (x).M y = AndL2 (ca).([(ca,x)]\<bullet>M) y")
apply(auto simp add: fresh_left calc_atm forget abs_fresh)[1]
apply(generate_fresh "name")
apply(rule_tac f="fresh_fun" in arg_cong)
apply(simp add: fun_eq_iff)
apply(rule allI)
apply(simp add: trm.inject alpha fresh_prod fresh_atm)
apply(rule forget)
apply(simp add: fresh_left calc_atm)
apply(rule_tac f="fresh_fun" in arg_cong)
apply(simp add: fun_eq_iff)
apply(rule allI)
apply(simp add: trm.inject alpha fresh_prod fresh_atm)
apply(rule forget)
apply(simp add: fresh_left calc_atm)
apply(perm_simp add: trm.inject alpha fresh_left calc_atm fresh_prod fresh_atm)
apply(auto)
done

lemma better_AndR_substc:
  assumes a: "c\<sharp>([a].M,[b].N)"
  shows "(AndR <a>.M <b>.N c){c:=(z).P} = fresh_fun (\<lambda>a'. Cut <a'>.(AndR <a>.M <b>.N a') (z).P)"
using a
apply -
apply(generate_fresh "coname")
apply(generate_fresh "coname")
apply(subgoal_tac "AndR <a>.M <b>.N c = AndR <ca>.([(ca,a)]\<bullet>M) <caa>.([(caa,b)]\<bullet>N) c")
apply(auto simp add: fresh_left calc_atm forget abs_fresh)[1]
apply(rule trans)
apply(rule substc.simps)
apply(auto simp add: fresh_left calc_atm fresh_prod fresh_atm)[1]
apply(auto simp add: fresh_left calc_atm fresh_prod fresh_atm)[1]
apply(auto simp add: fresh_prod fresh_atm)[1]
apply(simp)
apply(rule_tac f="fresh_fun" in arg_cong)
apply(simp add: fun_eq_iff)
apply(rule allI)
apply(simp add: trm.inject alpha fresh_prod fresh_atm)
apply(rule conjI)
apply(rule forget)
apply(auto simp add: fresh_left calc_atm abs_fresh)[1]
apply(rule forget)
apply(auto simp add: fresh_left calc_atm abs_fresh)[1]
apply(perm_simp add: trm.inject alpha fresh_left calc_atm fresh_prod fresh_atm)
apply(auto)
done

lemma better_OrL_substn:
  assumes a: "x\<sharp>([y].M,[z].N)"
  shows "(OrL (y).M (z).N x){x:=<c>.P} = fresh_fun (\<lambda>z'. Cut <c>.P (z').OrL (y).M (z).N z')"
using a
apply -
apply(generate_fresh "name")
apply(generate_fresh "name")
apply(subgoal_tac "OrL (y).M (z).N x = OrL (ca).([(ca,y)]\<bullet>M) (caa).([(caa,z)]\<bullet>N) x")
apply(auto simp add: fresh_left calc_atm forget abs_fresh)[1]
apply(rule trans)
apply(rule substn.simps)
apply(auto simp add: fresh_left calc_atm fresh_prod fresh_atm)[1]
apply(auto simp add: fresh_left calc_atm fresh_prod fresh_atm)[1]
apply(auto simp add: fresh_prod fresh_atm)[1]
apply(simp)
apply(rule_tac f="fresh_fun" in arg_cong)
apply(simp add: fun_eq_iff)
apply(rule allI)
apply(simp add: trm.inject alpha fresh_prod fresh_atm)
apply(rule conjI)
apply(rule forget)
apply(auto simp add: fresh_left calc_atm abs_fresh)[1]
apply(rule forget)
apply(auto simp add: fresh_left calc_atm abs_fresh)[1]
apply(perm_simp add: trm.inject alpha fresh_left calc_atm fresh_prod fresh_atm)
apply(auto)
done

lemma better_OrR1_substc:
  assumes a: "d\<sharp>[a].M"
  shows "(OrR1 <a>.M d){d:=(z).P} = fresh_fun (\<lambda>a'. Cut <a'>.OrR1 <a>.M a' (z).P)"
using a
apply -
apply(generate_fresh "coname")
apply(subgoal_tac "OrR1 <a>.M d = OrR1 <c>.([(c,a)]\<bullet>M) d")
apply(auto simp add: fresh_left calc_atm forget abs_fresh)[1]
apply(rule_tac f="fresh_fun" in arg_cong)
apply(simp add: fun_eq_iff)
apply(rule allI)
apply(simp add: trm.inject alpha fresh_prod fresh_atm)
apply(rule forget)
apply(simp add: fresh_left calc_atm)
apply(rule_tac f="fresh_fun" in arg_cong)
apply(simp add: fun_eq_iff)
apply(rule allI)
apply(simp add: trm.inject alpha fresh_prod fresh_atm)
apply(rule forget)
apply(simp add: fresh_left calc_atm)
apply(perm_simp add: trm.inject alpha fresh_left calc_atm fresh_prod fresh_atm)
apply(auto)
done

lemma better_OrR2_substc:
  assumes a: "d\<sharp>[a].M"
  shows "(OrR2 <a>.M d){d:=(z).P} = fresh_fun (\<lambda>a'. Cut <a'>.OrR2 <a>.M a' (z).P)"
using a
apply -
apply(generate_fresh "coname")
apply(subgoal_tac "OrR2 <a>.M d = OrR2 <c>.([(c,a)]\<bullet>M) d")
apply(auto simp add: fresh_left calc_atm forget abs_fresh)[1]
apply(rule_tac f="fresh_fun" in arg_cong)
apply(simp add: fun_eq_iff)
apply(rule allI)
apply(simp add: trm.inject alpha fresh_prod fresh_atm)
apply(rule forget)
apply(simp add: fresh_left calc_atm)
apply(rule_tac f="fresh_fun" in arg_cong)
apply(simp add: fun_eq_iff)
apply(rule allI)
apply(simp add: trm.inject alpha fresh_prod fresh_atm)
apply(rule forget)
apply(simp add: fresh_left calc_atm)
apply(perm_simp add: trm.inject alpha fresh_left calc_atm fresh_prod fresh_atm)
apply(auto)
done

lemma better_ImpR_substc:
  assumes a: "d\<sharp>[a].M"
  shows "(ImpR (x).<a>.M d){d:=(z).P} = fresh_fun (\<lambda>a'. Cut <a'>.ImpR (x).<a>.M a' (z).P)"
using a
apply -
apply(generate_fresh "coname")
apply(generate_fresh "name")
apply(subgoal_tac "ImpR (x).<a>.M d = ImpR (ca).<c>.([(c,a)]\<bullet>[(ca,x)]\<bullet>M) d")
apply(auto simp add: fresh_left calc_atm forget abs_fresh)[1]
apply(rule_tac f="fresh_fun" in arg_cong)
apply(simp add: fun_eq_iff)
apply(rule allI)
apply(simp add: trm.inject alpha fresh_prod fresh_atm abs_perm abs_fresh fresh_left calc_atm)
apply(rule forget)
apply(simp add: fresh_left calc_atm)
apply(rule_tac f="fresh_fun" in arg_cong)
apply(simp add: fun_eq_iff)
apply(rule allI)
apply(simp add: trm.inject alpha fresh_prod fresh_atm abs_perm fresh_left calc_atm abs_fresh)
apply(rule forget)
apply(simp add: fresh_left calc_atm)
apply(rule sym)
apply(perm_simp add: trm.inject alpha fresh_left calc_atm fresh_prod fresh_atm abs_fresh abs_perm)
done

lemma better_ImpL_substn:
  assumes a: "y\<sharp>(M,[x].N)"
  shows "(ImpL <a>.M (x).N y){y:=<c>.P} = fresh_fun (\<lambda>z'. Cut <c>.P (z').ImpL <a>.M (x).N z')"
using a
apply -
apply(generate_fresh "coname")
apply(generate_fresh "name")
apply(subgoal_tac "ImpL <a>.M (x).N y = ImpL <ca>.([(ca,a)]\<bullet>M) (caa).([(caa,x)]\<bullet>N) y")
apply(auto simp add: fresh_left calc_atm forget abs_fresh)[1]
apply(rule_tac f="fresh_fun" in arg_cong)
apply(simp add: fun_eq_iff)
apply(rule allI)
apply(simp add: trm.inject alpha fresh_prod fresh_atm abs_perm abs_fresh fresh_left calc_atm)
apply(rule forget)
apply(simp add: fresh_left calc_atm)
apply(auto)[1]
apply(rule sym)
apply(perm_simp add: trm.inject alpha fresh_left calc_atm fresh_prod fresh_atm abs_fresh abs_perm)
done

lemma freshn_after_substc:
  fixes x::"name"
  assumes a: "x\<sharp>M{c:=(y).P}"
  shows "x\<sharp>M"
using a supp_subst8
apply(simp add: fresh_def)
apply(blast)
done

lemma freshn_after_substn:
  fixes x::"name"
  assumes a: "x\<sharp>M{y:=<c>.P}" "x\<noteq>y"
  shows "x\<sharp>M"
using a
using a supp_subst5
apply(simp add: fresh_def)
apply(blast)
done

lemma freshc_after_substc:
  fixes a::"coname"
  assumes a: "a\<sharp>M{c:=(y).P}" "a\<noteq>c"
  shows "a\<sharp>M"
using a supp_subst7
apply(simp add: fresh_def)
apply(blast)
done

lemma freshc_after_substn:
  fixes a::"coname"
  assumes a: "a\<sharp>M{y:=<c>.P}"
  shows "a\<sharp>M"
using a supp_subst6
apply(simp add: fresh_def)
apply(blast)
done

lemma substn_crename_comm:
  assumes a: "c\<noteq>a" "c\<noteq>b"
  shows "M{x:=<c>.P}[a\<turnstile>c>b] = M[a\<turnstile>c>b]{x:=<c>.(P[a\<turnstile>c>b])}"
using a
apply(nominal_induct M avoiding: x c P a b rule: trm.strong_induct)
apply(auto simp add: subst_fresh rename_fresh trm.inject)
apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(P,x,c)")
apply(erule exE)
apply(subgoal_tac "Cut <c>.P (x).Ax x a = Cut <c>.P (x').Ax x' a")
apply(simp)
apply(rule trans)
apply(rule crename.simps)
apply(simp add: fresh_prod fresh_atm)
apply(simp)
apply(simp add: trm.inject)
apply(simp add: alpha trm.inject calc_atm fresh_atm)
apply(simp add: trm.inject)
apply(simp add: alpha trm.inject calc_atm fresh_atm)
apply(rule exists_fresh')
apply(rule fin_supp)
apply(rule trans)
apply(rule better_crename_Cut)
apply(simp add: fresh_atm)
apply(simp)
apply(simp add: crename_id)
apply(rule trans)
apply(rule better_crename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh fresh_atm)
apply(auto simp add: fresh_atm)[1]
apply(rule trans)
apply(rule better_crename_Cut)
apply(simp add: fresh_atm)
apply(auto simp add: fresh_atm)[1]
apply(drule crename_ax)
apply(simp add: fresh_atm)
apply(simp add: fresh_atm)
apply(simp)
apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(trm{x:=<c>.P},P,P[a\<turnstile>c>b],x,trm[a\<turnstile>c>b]{x:=<c>.P[a\<turnstile>c>b]})")
apply(erule exE, simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_NotL)
apply(rule trans)
apply(rule better_crename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh fresh_atm)
apply(rule exists_fresh')
apply(rule fin_supp)
apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(trm{x:=<c>.P},P,P[a\<turnstile>c>b],name1,trm[a\<turnstile>c>b]{x:=<c>.P[a\<turnstile>c>b]})")
apply(erule exE, simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_AndL1)
apply(rule trans)
apply(rule better_crename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh fresh_atm)
apply(rule exists_fresh')
apply(rule fin_supp)
apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(trm{x:=<c>.P},P,P[a\<turnstile>c>b],name1,trm[a\<turnstile>c>b]{x:=<c>.P[a\<turnstile>c>b]})")
apply(erule exE, simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_AndL2)
apply(rule trans)
apply(rule better_crename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh fresh_atm)
apply(rule exists_fresh')
apply(rule fin_supp)
apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(trm1{x:=<c>.P},trm2{x:=<c>.P},P,P[a\<turnstile>c>b],name1,name2,
                   trm1[a\<turnstile>c>b]{x:=<c>.P[a\<turnstile>c>b]},trm2[a\<turnstile>c>b]{x:=<c>.P[a\<turnstile>c>b]})")
apply(erule exE, simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_OrL)
apply(rule trans)
apply(rule better_crename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh subst_fresh fresh_atm)
apply(rule exists_fresh')
apply(rule fin_supp)
apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(trm1{name2:=<c>.P},trm2{name2:=<c>.P},P,P[a\<turnstile>c>b],name1,
                   trm1[a\<turnstile>c>b]{name2:=<c>.P[a\<turnstile>c>b]},trm2[a\<turnstile>c>b]{name2:=<c>.P[a\<turnstile>c>b]})")
apply(erule exE, simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_ImpL)
apply(rule trans)
apply(rule better_crename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh subst_fresh fresh_atm)
apply(rule exists_fresh')
apply(rule fin_supp)
done

lemma substc_crename_comm:
  assumes a: "c\<noteq>a" "c\<noteq>b"
  shows "M{c:=(x).P}[a\<turnstile>c>b] = M[a\<turnstile>c>b]{c:=(x).(P[a\<turnstile>c>b])}"
using a
apply(nominal_induct M avoiding: x c P a b rule: trm.strong_induct)
apply(auto simp add: subst_fresh rename_fresh trm.inject)
apply(rule trans)
apply(rule better_crename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh fresh_atm)
apply(rule trans)
apply(rule better_crename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh fresh_atm)
apply(drule crename_ax)
apply(simp add: fresh_atm)
apply(simp add: fresh_atm)
apply(simp)
apply(subgoal_tac "\<exists>c'::coname. c'\<sharp>(a,b,trm{coname:=(x).P},P,P[a\<turnstile>c>b],x,trm[a\<turnstile>c>b]{coname:=(x).P[a\<turnstile>c>b]})")
apply(erule exE, simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_NotR)
apply(rule trans)
apply(rule better_crename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh fresh_atm)
apply(rule exists_fresh')
apply(rule fin_supp)
apply(subgoal_tac "\<exists>c'::coname. c'\<sharp>(coname1,coname2,a,b,trm1{coname3:=(x).P},trm2{coname3:=(x).P},
                   P,P[a\<turnstile>c>b],x,trm1[a\<turnstile>c>b]{coname3:=(x).P[a\<turnstile>c>b]},trm2[a\<turnstile>c>b]{coname3:=(x).P[a\<turnstile>c>b]})")
apply(erule exE, simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_AndR)
apply(rule trans)
apply(rule better_crename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh subst_fresh fresh_atm)
apply(rule exists_fresh')
apply(rule fin_supp)
apply(subgoal_tac "\<exists>c'::coname.
                   c'\<sharp>(coname1,trm{coname2:=(x).P},P,P[a\<turnstile>c>b],a,b,
                   trm[a\<turnstile>c>b]{coname2:=(x).P[a\<turnstile>c>b]})")
apply(erule exE, simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_OrR1)
apply(rule trans)
apply(rule better_crename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh fresh_atm)
apply(rule exists_fresh')
apply(rule fin_supp)
apply(subgoal_tac "\<exists>c'::coname. c'\<sharp>(coname1,trm{coname2:=(x).P},P,P[a\<turnstile>c>b],a,b,
                   trm[a\<turnstile>c>b]{coname2:=(x).P[a\<turnstile>c>b]})")
apply(erule exE, simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_OrR2)
apply(rule trans)
apply(rule better_crename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh fresh_atm)
apply(rule exists_fresh')
apply(rule fin_supp)
apply(subgoal_tac "\<exists>c'::coname. c'\<sharp>(coname1,trm{coname2:=(x).P},P,P[a\<turnstile>c>b],a,b,
                   trm[a\<turnstile>c>b]{coname2:=(x).P[a\<turnstile>c>b]})")
apply(erule exE, simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_ImpR)
apply(rule trans)
apply(rule better_crename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh fresh_atm)
apply(rule exists_fresh')
apply(rule fin_supp)
done

lemma substn_nrename_comm:
  assumes a: "x\<noteq>y" "x\<noteq>z"
  shows "M{x:=<c>.P}[y\<turnstile>n>z] = M[y\<turnstile>n>z]{x:=<c>.(P[y\<turnstile>n>z])}"
using a
apply(nominal_induct M avoiding: x c P y z rule: trm.strong_induct)
apply(auto simp add: subst_fresh rename_fresh trm.inject)
apply(rule trans)
apply(rule better_nrename_Cut)
apply(simp add: fresh_prod fresh_atm)
apply(simp add: trm.inject)
apply(rule trans)
apply(rule better_nrename_Cut)
apply(simp add: fresh_atm)
apply(simp)
apply(drule nrename_ax)
apply(simp add: fresh_atm)
apply(simp add: fresh_atm)
apply(simp)
apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(y,z,trm{x:=<c>.P},P,P[y\<turnstile>n>z],x,trm[y\<turnstile>n>z]{x:=<c>.P[y\<turnstile>n>z]})")
apply(erule exE, simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_NotL)
apply(rule trans)
apply(rule better_nrename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: fresh_atm)
apply(rule exists_fresh')
apply(rule fin_supp)
apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(trm{x:=<c>.P},P,P[y\<turnstile>n>z],name1,trm[y\<turnstile>n>z]{x:=<c>.P[y\<turnstile>n>z]},y,z)")
apply(erule exE, simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_AndL1)
apply(rule trans)
apply(rule better_nrename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh fresh_atm)
apply(rule exists_fresh')
apply(rule fin_supp)
apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(y,z,trm{x:=<c>.P},P,P[y\<turnstile>n>z],name1,trm[y\<turnstile>n>z]{x:=<c>.P[y\<turnstile>n>z]})")
apply(erule exE, simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_AndL2)
apply(rule trans)
apply(rule better_nrename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh fresh_atm)
apply(rule exists_fresh')
apply(rule fin_supp)
apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(trm1{x:=<c>.P},trm2{x:=<c>.P},P,P[y\<turnstile>n>z],name1,name2,y,z,
                   trm1[y\<turnstile>n>z]{x:=<c>.P[y\<turnstile>n>z]},trm2[y\<turnstile>n>z]{x:=<c>.P[y\<turnstile>n>z]})")
apply(erule exE, simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_OrL)
apply(rule trans)
apply(rule better_nrename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh subst_fresh fresh_atm)
apply(rule exists_fresh')
apply(rule fin_supp)
apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(trm1{name2:=<c>.P},trm2{name2:=<c>.P},P,P[y\<turnstile>n>z],y,z,name1,
                   trm1[y\<turnstile>n>z]{name2:=<c>.P[y\<turnstile>n>z]},trm2[y\<turnstile>n>z]{name2:=<c>.P[y\<turnstile>n>z]})")
apply(erule exE, simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_ImpL)
apply(rule trans)
apply(rule better_nrename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh subst_fresh fresh_atm)
apply(rule exists_fresh')
apply(rule fin_supp)
done

lemma substc_nrename_comm:
  assumes a: "x\<noteq>y" "x\<noteq>z"
  shows "M{c:=(x).P}[y\<turnstile>n>z] = M[y\<turnstile>n>z]{c:=(x).(P[y\<turnstile>n>z])}"
using a
apply(nominal_induct M avoiding: x c P y z rule: trm.strong_induct)
apply(auto simp add: subst_fresh rename_fresh trm.inject)
apply(rule trans)
apply(rule better_nrename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh fresh_atm)
apply(rule trans)
apply(rule better_nrename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh fresh_atm)
apply(rule trans)
apply(rule better_nrename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh fresh_atm)
apply(rule trans)
apply(rule better_nrename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh fresh_atm)
apply(drule nrename_ax)
apply(simp add: fresh_atm)
apply(simp add: fresh_atm)
apply(simp)
apply(rule trans)
apply(rule better_nrename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh fresh_atm)
apply(drule nrename_ax)
apply(simp add: fresh_atm)
apply(simp add: fresh_atm)
apply(simp)
apply(subgoal_tac "\<exists>c'::coname. c'\<sharp>(y,z,trm{coname:=(x).P},P,P[y\<turnstile>n>z],x,trm[y\<turnstile>n>z]{coname:=(x).P[y\<turnstile>n>z]})")
apply(erule exE, simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_NotR)
apply(rule trans)
apply(rule better_nrename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh fresh_atm)
apply(rule exists_fresh')
apply(rule fin_supp)
apply(subgoal_tac "\<exists>c'::coname. c'\<sharp>(coname1,coname2,y,z,trm1{coname3:=(x).P},trm2{coname3:=(x).P},
                   P,P[y\<turnstile>n>z],x,trm1[y\<turnstile>n>z]{coname3:=(x).P[y\<turnstile>n>z]},trm2[y\<turnstile>n>z]{coname3:=(x).P[y\<turnstile>n>z]})")
apply(erule exE, simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_AndR)
apply(rule trans)
apply(rule better_nrename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh subst_fresh fresh_atm)
apply(rule exists_fresh')
apply(rule fin_supp)
apply(subgoal_tac "\<exists>c'::coname. c'\<sharp>(coname1,trm{coname2:=(x).P},P,P[y\<turnstile>n>z],y,z,
                   trm[y\<turnstile>n>z]{coname2:=(x).P[y\<turnstile>n>z]})")
apply(erule exE, simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_OrR1)
apply(rule trans)
apply(rule better_nrename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh fresh_atm)
apply(rule exists_fresh')
apply(rule fin_supp)
apply(subgoal_tac "\<exists>c'::coname. c'\<sharp>(coname1,trm{coname2:=(x).P},P,P[y\<turnstile>n>z],y,z,
                   trm[y\<turnstile>n>z]{coname2:=(x).P[y\<turnstile>n>z]})")
apply(erule exE, simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_OrR2)
apply(rule trans)
apply(rule better_nrename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh fresh_atm)
apply(rule exists_fresh')
apply(rule fin_supp)
apply(subgoal_tac "\<exists>c'::coname.
                   c'\<sharp>(coname1,trm{coname2:=(x).P},P,P[y\<turnstile>n>z],y,z,
                   trm[y\<turnstile>n>z]{coname2:=(x).P[y\<turnstile>n>z]})")
apply(erule exE, simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_ImpR)
apply(rule trans)
apply(rule better_nrename_Cut)
apply(simp add: fresh_atm fresh_prod)
apply(simp add: rename_fresh fresh_atm)
apply(rule exists_fresh')
apply(rule fin_supp)
done

lemma substn_crename_comm':
  assumes a: "a\<noteq>c" "a\<sharp>P"
  shows "M{x:=<c>.P}[a\<turnstile>c>b] = M[a\<turnstile>c>b]{x:=<c>.P}"
using a
proof -
  assume a1: "a\<noteq>c"
  assume a2: "a\<sharp>P"
  obtain c'::"coname" where fs2: "c'\<sharp>(c,P,a,b)" by (rule exists_fresh(2), rule fin_supp, blast)
  have eq: "M{x:=<c>.P} = M{x:=<c'>.([(c',c)]\<bullet>P)}"
    using fs2
    apply -
    apply(rule subst_rename)
    apply(simp)
    done
  have eq': "M[a\<turnstile>c>b]{x:=<c>.P} = M[a\<turnstile>c>b]{x:=<c'>.([(c',c)]\<bullet>P)}"
    using fs2
    apply -
    apply(rule subst_rename)
    apply(simp)
    done
  have eq2: "([(c',c)]\<bullet>P)[a\<turnstile>c>b] = ([(c',c)]\<bullet>P)"
    using fs2 a2 a1
    apply -
    apply(rule rename_fresh)
    apply(simp add: fresh_left calc_atm fresh_prod fresh_atm)
    done
  have "M{x:=<c>.P}[a\<turnstile>c>b] = M{x:=<c'>.([(c',c)]\<bullet>P)}[a\<turnstile>c>b]" using eq by simp
  also have "\<dots> = M[a\<turnstile>c>b]{x:=<c'>.(([(c',c)]\<bullet>P)[a\<turnstile>c>b])}"
    using fs2
    apply -
    apply(rule substn_crename_comm)
    apply(simp_all add: fresh_prod fresh_atm)
    done
  also have "\<dots> = M[a\<turnstile>c>b]{x:=<c'>.(([(c',c)]\<bullet>P))}" using eq2 by simp
  also have "\<dots> = M[a\<turnstile>c>b]{x:=<c>.P}" using eq' by simp
  finally show ?thesis by simp
qed

lemma substc_crename_comm':
  assumes a: "c\<noteq>a" "c\<noteq>b" "a\<sharp>P"
  shows "M{c:=(x).P}[a\<turnstile>c>b] = M[a\<turnstile>c>b]{c:=(x).P}"
using a
proof -
  assume a1: "c\<noteq>a"
  assume a1': "c\<noteq>b"
  assume a2: "a\<sharp>P"
  obtain c'::"coname" where fs2: "c'\<sharp>(c,M,a,b)" by (rule exists_fresh(2), rule fin_supp, blast)
  have eq: "M{c:=(x).P} = ([(c',c)]\<bullet>M){c':=(x).P}"
    using fs2
    apply -
    apply(rule subst_rename)
    apply(simp)
    done
  have eq': "([(c',c)]\<bullet>(M[a\<turnstile>c>b])){c':=(x).P} = M[a\<turnstile>c>b]{c:=(x).P}"
    using fs2
    apply -
    apply(rule subst_rename[symmetric])
    apply(simp add: rename_fresh)
    done
  have eq2: "([(c',c)]\<bullet>M)[a\<turnstile>c>b] = ([(c',c)]\<bullet>(M[a\<turnstile>c>b]))"
    using fs2 a2 a1 a1'
    apply -
    apply(simp add: rename_eqvts)
    apply(simp add: fresh_left calc_atm fresh_prod fresh_atm)
    done
  have "M{c:=(x).P}[a\<turnstile>c>b] = ([(c',c)]\<bullet>M){c':=(x).P}[a\<turnstile>c>b]" using eq by simp
  also have "\<dots> = ([(c',c)]\<bullet>M)[a\<turnstile>c>b]{c':=(x).P[a\<turnstile>c>b]}"
    using fs2
    apply -
    apply(rule substc_crename_comm)
    apply(simp_all add: fresh_prod fresh_atm)
    done
  also have "\<dots> = ([(c',c)]\<bullet>(M[a\<turnstile>c>b])){c':=(x).P[a\<turnstile>c>b]}" using eq2 by simp
  also have "\<dots> = ([(c',c)]\<bullet>(M[a\<turnstile>c>b])){c':=(x).P}" using a2 by (simp add: rename_fresh)
  also have "\<dots> = M[a\<turnstile>c>b]{c:=(x).P}" using eq' by simp
  finally show ?thesis by simp
qed

lemma substn_nrename_comm':
  assumes a: "x\<noteq>y" "x\<noteq>z" "y\<sharp>P"
  shows "M{x:=<c>.P}[y\<turnstile>n>z] = M[y\<turnstile>n>z]{x:=<c>.P}"
using a
proof -
  assume a1: "x\<noteq>y"
  assume a1': "x\<noteq>z"
  assume a2: "y\<sharp>P"
  obtain x'::"name" where fs2: "x'\<sharp>(x,M,y,z)" by (rule exists_fresh(1), rule fin_supp, blast)
  have eq: "M{x:=<c>.P} = ([(x',x)]\<bullet>M){x':=<c>.P}"
    using fs2
    apply -
    apply(rule subst_rename)
    apply(simp)
    done
  have eq': "([(x',x)]\<bullet>(M[y\<turnstile>n>z])){x':=<c>.P} = M[y\<turnstile>n>z]{x:=<c>.P}"
    using fs2
    apply -
    apply(rule subst_rename[symmetric])
    apply(simp add: rename_fresh)
    done
  have eq2: "([(x',x)]\<bullet>M)[y\<turnstile>n>z] = ([(x',x)]\<bullet>(M[y\<turnstile>n>z]))"
    using fs2 a2 a1 a1'
    apply -
    apply(simp add: rename_eqvts)
    apply(simp add: fresh_left calc_atm fresh_prod fresh_atm)
    done
  have "M{x:=<c>.P}[y\<turnstile>n>z] = ([(x',x)]\<bullet>M){x':=<c>.P}[y\<turnstile>n>z]" using eq by simp
  also have "\<dots> = ([(x',x)]\<bullet>M)[y\<turnstile>n>z]{x':=<c>.P[y\<turnstile>n>z]}"
    using fs2
    apply -
    apply(rule substn_nrename_comm)
    apply(simp_all add: fresh_prod fresh_atm)
    done
  also have "\<dots> = ([(x',x)]\<bullet>(M[y\<turnstile>n>z])){x':=<c>.P[y\<turnstile>n>z]}" using eq2 by simp
  also have "\<dots> = ([(x',x)]\<bullet>(M[y\<turnstile>n>z])){x':=<c>.P}" using a2 by (simp add: rename_fresh)
  also have "\<dots> = M[y\<turnstile>n>z]{x:=<c>.P}" using eq' by simp
  finally show ?thesis by simp
qed

lemma substc_nrename_comm':
  assumes a: "x\<noteq>y" "y\<sharp>P"
  shows "M{c:=(x).P}[y\<turnstile>n>z] = M[y\<turnstile>n>z]{c:=(x).P}"
using a
proof -
  assume a1: "x\<noteq>y"
  assume a2: "y\<sharp>P"
  obtain x'::"name" where fs2: "x'\<sharp>(x,P,y,z)" by (rule exists_fresh(1), rule fin_supp, blast)
  have eq: "M{c:=(x).P} = M{c:=(x').([(x',x)]\<bullet>P)}"
    using fs2
    apply -
    apply(rule subst_rename)
    apply(simp)
    done
  have eq': "M[y\<turnstile>n>z]{c:=(x).P} = M[y\<turnstile>n>z]{c:=(x').([(x',x)]\<bullet>P)}"
    using fs2
    apply -
    apply(rule subst_rename)
    apply(simp)
    done
  have eq2: "([(x',x)]\<bullet>P)[y\<turnstile>n>z] = ([(x',x)]\<bullet>P)"
    using fs2 a2 a1
    apply -
    apply(rule rename_fresh)
    apply(simp add: fresh_left calc_atm fresh_prod fresh_atm)
    done
  have "M{c:=(x).P}[y\<turnstile>n>z] = M{c:=(x').([(x',x)]\<bullet>P)}[y\<turnstile>n>z]" using eq by simp
  also have "\<dots> = M[y\<turnstile>n>z]{c:=(x').(([(x',x)]\<bullet>P)[y\<turnstile>n>z])}"
    using fs2
    apply -
    apply(rule substc_nrename_comm)
    apply(simp_all add: fresh_prod fresh_atm)
    done
  also have "\<dots> = M[y\<turnstile>n>z]{c:=(x').(([(x',x)]\<bullet>P))}" using eq2 by simp
  also have "\<dots> = M[y\<turnstile>n>z]{c:=(x).P}" using eq' by simp
  finally show ?thesis by simp
qed

lemmas subst_comm = substn_crename_comm substc_crename_comm
                    substn_nrename_comm substc_nrename_comm
lemmas subst_comm' = substn_crename_comm' substc_crename_comm'
                     substn_nrename_comm' substc_nrename_comm'

text \<open>typing contexts\<close>

type_synonym ctxtn = "(name\<times>ty) list"
type_synonym ctxtc = "(coname\<times>ty) list"

inductive
  validc :: "ctxtc \<Rightarrow> bool"
where
  vc1[intro]: "validc []"
| vc2[intro]: "\<lbrakk>a\<sharp>\<Delta>; validc \<Delta>\<rbrakk> \<Longrightarrow> validc ((a,T)#\<Delta>)"

equivariance validc

inductive
  validn :: "ctxtn \<Rightarrow> bool"
where
  vn1[intro]: "validn []"
| vn2[intro]: "\<lbrakk>x\<sharp>\<Gamma>; validn \<Gamma>\<rbrakk> \<Longrightarrow> validn ((x,T)#\<Gamma>)"

equivariance validn

lemma fresh_ctxt:
  fixes a::"coname"
  and   x::"name"
  and   \<Gamma>::"ctxtn"
  and   \<Delta>::"ctxtc"
  shows "a\<sharp>\<Gamma>" and "x\<sharp>\<Delta>"
proof -
  show "a\<sharp>\<Gamma>" by (induct \<Gamma>) (auto simp add: fresh_list_nil fresh_list_cons fresh_prod fresh_atm fresh_ty)
next
  show "x\<sharp>\<Delta>" by (induct \<Delta>) (auto simp add: fresh_list_nil fresh_list_cons fresh_prod fresh_atm fresh_ty)
qed

text \<open>cut-reductions\<close>

declare abs_perm[eqvt]

inductive
  fin :: "trm \<Rightarrow> name \<Rightarrow> bool"
where
  [intro]: "fin (Ax x a) x"
| [intro]: "x\<sharp>M \<Longrightarrow> fin (NotL <a>.M x) x"
| [intro]: "y\<sharp>[x].M \<Longrightarrow> fin (AndL1 (x).M y) y"
| [intro]: "y\<sharp>[x].M \<Longrightarrow> fin (AndL2 (x).M y) y"
| [intro]: "\<lbrakk>z\<sharp>[x].M;z\<sharp>[y].N\<rbrakk> \<Longrightarrow> fin (OrL (x).M (y).N z) z"
| [intro]: "\<lbrakk>y\<sharp>M;y\<sharp>[x].N\<rbrakk> \<Longrightarrow> fin (ImpL <a>.M (x).N y) y"

equivariance fin

lemma fin_Ax_elim:
  assumes a: "fin (Ax x a)
 y"
  shows "x=y"
using a
apply(erule_tac fin.cases)
apply(auto simp add: trm.inject)
done

lemma fin_NotL_elim:
  assumes a: "fin (NotL <a>.M x) y"
  shows "x=y \<and> x\<sharp>M"
using a
apply(erule_tac fin.cases)
apply(auto simp add: trm.inject)
apply(subgoal_tac "y\<sharp>[aa].Ma")
apply(drule sym)
apply(simp_all add: abs_fresh)
done

lemma fin_AndL1_elim:
  assumes a: "fin (AndL1 (x).M y) z"
  shows "z=y \<and> z\<sharp>[x].M"
using a
apply(erule_tac fin.cases)
apply(auto simp add: trm.inject)
done

lemma fin_AndL2_elim:
  assumes a: "fin (AndL2 (x).M y) z"
  shows "z=y \<and> z\<sharp>[x].M"
using a
apply(erule_tac fin.cases)
apply(auto simp add: trm.inject)
done

lemma fin_OrL_elim:
  assumes a: "fin (OrL (x).M (y).N u) z"
  shows "z=u \<and> z\<sharp>[x].M \<and> z\<sharp>[y].N"
using a
apply(erule_tac fin.cases)
apply(auto simp add: trm.inject)
done

lemma fin_ImpL_elim:
  assumes a: "fin (ImpL <a>.M (x).N z) y"
  shows "z=y \<and> z\<sharp>M \<and> z\<sharp>[x].N"
using a
apply(erule_tac fin.cases)
apply(auto simp add: trm.inject)
apply(subgoal_tac "y\<sharp>[aa].Ma")
apply(drule sym)
apply(simp_all add: abs_fresh)
apply (metis abs_fresh(5))
done

lemma fin_rest_elims:
  shows "fin (Cut <a>.M (x).N) y \<Longrightarrow> False"
  and   "fin (NotR (x).M c) y \<Longrightarrow> False"
  and   "fin (AndR <a>.M <b>.N c) y \<Longrightarrow> False"
  and   "fin (OrR1 <a>.M b) y \<Longrightarrow> False"
  and   "fin (OrR2 <a>.M b) y \<Longrightarrow> False"
  and   "fin (ImpR (x).<a>.M b) y \<Longrightarrow> False"
by (erule fin.cases, simp_all add: trm.inject)+

lemmas fin_elims = fin_Ax_elim fin_NotL_elim fin_AndL1_elim fin_AndL2_elim fin_OrL_elim
                   fin_ImpL_elim fin_rest_elims

lemma fin_rename:
  shows "fin M x \<Longrightarrow> fin ([(x',x)]\<bullet>M) x'"
by (induct rule: fin.induct)
   (auto simp add: calc_atm simp add: fresh_left abs_fresh)

lemma not_fin_subst1:
  assumes a: "\<not>(fin M x)"
  shows "\<not>(fin (M{c:=(y).P}) x)"
using a [[simproc del: defined_all]]
apply(nominal_induct M avoiding: x c y P rule: trm.strong_induct)
apply(auto)
apply(drule fin_elims, simp)
apply(drule fin_elims, simp)
apply(drule fin_elims, simp)
apply(subgoal_tac "\<exists>a'::coname. a'\<sharp>(trm{coname:=(y).P},P,x)")
apply(erule exE)
apply(simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_NotR)
apply(erule fin.cases, simp_all add: trm.inject)
apply(rule exists_fresh'(2)[OF fs_coname1])
apply(drule fin_elims, simp)
apply(drule fin_elims)
apply(auto)[1]
apply(drule freshn_after_substc)
apply(simp add: fin.intros)
apply(subgoal_tac "\<exists>a'::coname. a'\<sharp>(trm1{coname3:=(y).P},trm2{coname3:=(y).P},P,coname1,coname2,coname3,x)")
apply(erule exE)
apply(simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_AndR)
apply(erule fin.cases, simp_all add: trm.inject)
apply(subgoal_tac "\<exists>a'::coname. a'\<sharp>(trm1{coname3:=(y).P},trm2{coname3:=(y).P},P,coname1,coname2,coname3,x)")
apply(erule exE)
apply(simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_AndR)
apply(erule fin.cases, simp_all add: trm.inject)
apply(rule exists_fresh'(2)[OF fs_coname1])
apply(erule fin.cases, simp_all add: trm.inject)
apply(drule fin_AndL1_elim)
apply(auto simp add: abs_fresh)[1]
apply(drule freshn_after_substc)
apply(subgoal_tac "name2\<sharp>[name1]. trm")
apply(simp add: fin.intros)
apply(simp add: abs_fresh)
apply(drule fin_AndL2_elim)
apply(auto simp add: abs_fresh)[1]
apply(drule freshn_after_substc)
apply(subgoal_tac "name2\<sharp>[name1].trm")
apply(simp add: fin.intros)
apply(simp add: abs_fresh)
apply(subgoal_tac "\<exists>a'::coname. a'\<sharp>(trm{coname2:=(y).P},coname1,P,x)")
apply(erule exE)
apply(simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_OrR1)
apply(erule fin.cases, simp_all add: trm.inject)
apply(rule exists_fresh'(2)[OF fs_coname1])
apply(erule fin.cases, simp_all add: trm.inject)
apply(subgoal_tac "\<exists>a'::coname. a'\<sharp>(trm{coname2:=(y).P},coname1,P,x)")
apply(erule exE)
apply(simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_OrR2)
apply(erule fin.cases, simp_all add: trm.inject)
apply(rule exists_fresh'(2)[OF fs_coname1])
apply(erule fin.cases, simp_all add: trm.inject)
apply(drule fin_OrL_elim)
apply(auto simp add: abs_fresh)[1]
apply(drule freshn_after_substc)+
apply(subgoal_tac "name3\<sharp>[name1].trm1 \<and> name3\<sharp>[name2].trm2")
apply(simp add: fin.intros)
apply(simp add: abs_fresh)
apply(subgoal_tac "\<exists>a'::coname. a'\<sharp>(trm{coname2:=(y).P},coname1,P,x)")
apply(erule exE)
apply(simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_ImpR)
apply(erule fin.cases, simp_all add: trm.inject)
apply(rule exists_fresh'(2)[OF fs_coname1])
apply(erule fin.cases, simp_all add: trm.inject)
apply(drule fin_ImpL_elim)
apply(auto simp add: abs_fresh)[1]
apply(drule freshn_after_substc)+
apply(subgoal_tac "x\<sharp>[name1].trm2")
apply(simp add: fin.intros)
apply(simp add: abs_fresh)
done

lemma not_fin_subst2:
  assumes a: "\<not>(fin M x)"
  shows "\<not>(fin (M{y:=<c>.P}) x)"
using a [[simproc del: defined_all]]
apply(nominal_induct M avoiding: x c y P rule: trm.strong_induct)
apply(auto)
apply(erule fin.cases, simp_all add: trm.inject)
apply(erule fin.cases, simp_all add: trm.inject)
apply(erule fin.cases, simp_all add: trm.inject)
apply(erule fin.cases, simp_all add: trm.inject)
apply(subgoal_tac "\<exists>a'::name. a'\<sharp>(trm{y:=<c>.P},P,x)")
apply(erule exE)
apply(simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_NotL)
apply(erule fin.cases, simp_all add: trm.inject)
apply(rule exists_fresh'(1)[OF fs_name1])
apply(drule fin_NotL_elim)
apply(auto)[1]
apply(drule freshn_after_substn)
apply(simp)
apply(simp add: fin.intros)
apply(erule fin.cases, simp_all add: trm.inject)
apply(subgoal_tac "\<exists>a'::name. a'\<sharp>(trm{y:=<c>.P},P,name1,x)")
apply(erule exE)
apply(simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_AndL1)
apply(erule fin.cases, simp_all add: trm.inject)
apply(rule exists_fresh'(1)[OF fs_name1])
apply(drule fin_AndL1_elim)
apply(auto simp add: abs_fresh)[1]
apply(drule freshn_after_substn)
apply(simp)
apply(subgoal_tac "name2\<sharp>[name1]. trm")
apply(simp add: fin.intros)
apply(simp add: abs_fresh)
apply(subgoal_tac "\<exists>a'::name. a'\<sharp>(trm{y:=<c>.P},P,name1,x)")
apply(erule exE)
apply(simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_AndL2)
apply(erule fin.cases, simp_all add: trm.inject)
apply(rule exists_fresh'(1)[OF fs_name1])
apply(drule fin_AndL2_elim)
apply(auto simp add: abs_fresh)[1]
apply(drule freshn_after_substn)
apply(simp)
apply(subgoal_tac "name2\<sharp>[name1].trm")
apply(simp add: fin.intros)
apply(simp add: abs_fresh)
apply(erule fin.cases, simp_all add: trm.inject)
apply(erule fin.cases, simp_all add: trm.inject)
apply(subgoal_tac "\<exists>a'::name. a'\<sharp>(trm1{y:=<c>.P},trm2{y:=<c>.P},name1,name2,P,x)")
apply(erule exE)
apply(simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_OrL)
apply(erule fin.cases, simp_all add: trm.inject)
apply(rule exists_fresh'(1)[OF fs_name1])
apply(drule fin_OrL_elim)
apply(auto simp add: abs_fresh)[1]
apply(drule freshn_after_substn)
apply(simp)
apply(drule freshn_after_substn)
apply(simp)
apply(subgoal_tac "name3\<sharp>[name1].trm1 \<and> name3\<sharp>[name2].trm2")
apply(simp add: fin.intros)
apply(simp add: abs_fresh)
apply(erule fin.cases, simp_all add: trm.inject)
apply(subgoal_tac "\<exists>a'::name.
a'\(trm1{name2:=.P},trm2{name2:=.P},name1,P,x)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpL) apply(erule fin.cases, simp_all add: trm.inject) apply(rule exists_fresh'(1)[OF fs_name1]) apply(drule fin_ImpL_elim) apply(auto simp add: abs_fresh)[1] apply(drule freshn_after_substn) apply(simp) apply(drule freshn_after_substn) apply(simp) apply(subgoal_tac "x\[name1].trm2") apply(simp add: fin.intros) apply(simp add: abs_fresh) done lemma fin_subst1: assumes a: "fin M x" "x\y" "x\P" shows "fin (M{y:=.P}) x" using a apply(nominal_induct M avoiding: x y c P rule: trm.strong_induct) apply(auto dest!: fin_elims simp add: subst_fresh abs_fresh) apply(rule fin.intros, simp add: subst_fresh abs_fresh) apply(rule fin.intros, simp add: subst_fresh abs_fresh) apply(rule fin.intros, simp add: subst_fresh abs_fresh) apply(rule fin.intros, simp add: subst_fresh abs_fresh) apply(rule fin.intros, simp add: subst_fresh abs_fresh, simp add: subst_fresh abs_fresh) apply(rule fin.intros, simp add: subst_fresh abs_fresh, simp add: subst_fresh abs_fresh) apply(rule fin.intros, simp add: subst_fresh abs_fresh, simp add: subst_fresh abs_fresh) apply(rule fin.intros, simp add: subst_fresh abs_fresh, simp add: subst_fresh abs_fresh) apply(rule fin.intros, simp add: subst_fresh abs_fresh, simp add: subst_fresh abs_fresh) done lemma fin_subst2: assumes a: "fin M y" "x\y" "y\P" "M\Ax y c" shows "fin (M{c:=(x).P}) y" using a apply(nominal_induct M avoiding: x y c P rule: trm.strong_induct) apply(drule fin_elims) apply(simp add: trm.inject) apply(rule fin.intros) apply(drule fin_elims, simp) apply(drule fin_elims, simp) apply(drule fin_elims, simp) apply(rule fin.intros) apply(rule subst_fresh) apply(simp) apply(drule fin_elims, simp) apply(drule fin_elims, simp) apply(rule fin.intros) apply(simp add: abs_fresh fresh_atm) apply(rule subst_fresh) apply(auto)[1] apply(drule fin_elims, simp) apply(rule fin.intros) apply(simp add: abs_fresh 
fresh_atm) apply(rule subst_fresh) apply(auto)[1] apply(drule fin_elims, simp) apply(drule fin_elims, simp) apply(drule fin_elims, simp) apply(rule fin.intros) apply(simp add: abs_fresh fresh_atm) apply(rule subst_fresh) apply(auto)[1] apply(simp add: abs_fresh fresh_atm) apply(rule subst_fresh) apply(auto)[1] apply(drule fin_elims, simp) apply(drule fin_elims, simp) apply(rule fin.intros) apply(simp add: abs_fresh fresh_atm) apply(rule subst_fresh) apply(auto)[1] apply(simp add: abs_fresh fresh_atm) apply(rule subst_fresh) apply(auto)[1] done lemma fin_substn_nrename: assumes a: "fin M x" "x\y" "x\P" shows "M[x\n>y]{y:=.P} = Cut .P (x).(M{y:=.P})" using a [[simproc del: defined_all]] apply(nominal_induct M avoiding: x y c P rule: trm.strong_induct) apply(drule fin_Ax_elim) apply(simp) apply(simp add: trm.inject) apply(simp add: alpha calc_atm fresh_atm) apply(simp) apply(drule fin_rest_elims) apply(simp) apply(drule fin_rest_elims) apply(simp) apply(drule fin_NotL_elim) apply(simp) apply(subgoal_tac "\z::name. z\(trm,y,x,P,trm[x\n>y]{y:=.P})") apply(erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotL) apply(simp add: trm.inject alpha fresh_atm calc_atm abs_fresh) apply(rule conjI) apply(simp add: nsubst_eqvt calc_atm) apply(simp add: perm_fresh_fresh) apply(simp add: nrename_fresh) apply(rule subst_fresh) apply(simp) apply(rule exists_fresh') apply(rule fin_supp) apply(drule fin_rest_elims) apply(simp) apply(drule fin_AndL1_elim) apply(simp) apply(subgoal_tac "\z::name. 
z\(name2,name1,P,trm[name2\n>y]{y:=.P},y,P,trm)") apply(erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL1) apply(simp add: trm.inject alpha fresh_atm calc_atm abs_fresh) apply(rule conjI) apply(simp add: nsubst_eqvt calc_atm) apply(simp add: perm_fresh_fresh) apply(simp add: nrename_fresh) apply(rule subst_fresh) apply(simp) apply(rule exists_fresh') apply(rule fin_supp) apply(drule fin_AndL2_elim) apply(simp) apply(subgoal_tac "\z::name. z\(name2,name1,P,trm[name2\n>y]{y:=.P},y,P,trm)") apply(erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL2) apply(simp add: trm.inject alpha fresh_atm calc_atm abs_fresh) apply(rule conjI) apply(simp add: nsubst_eqvt calc_atm) apply(simp add: perm_fresh_fresh) apply(simp add: nrename_fresh) apply(rule subst_fresh) apply(simp) apply(rule exists_fresh') apply(rule fin_supp) apply(drule fin_rest_elims) apply(simp) apply(drule fin_rest_elims) apply(simp) apply(drule fin_OrL_elim) apply(simp add: abs_fresh) apply(simp add: subst_fresh rename_fresh) apply(subgoal_tac "\z::name. z\(name3,name2,name1,P,trm1[name3\n>y]{y:=.P},trm2[name3\n>y]{y:=.P},y,P,trm1,trm2)") apply(erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrL) apply(simp add: trm.inject alpha fresh_atm calc_atm abs_fresh) apply(rule conjI) apply(simp add: nsubst_eqvt calc_atm) apply(simp add: perm_fresh_fresh) apply(simp add: nrename_fresh) apply(simp add: nsubst_eqvt calc_atm) apply(simp add: perm_fresh_fresh) apply(simp add: nrename_fresh) apply(rule exists_fresh') apply(rule fin_supp) apply(drule fin_rest_elims) apply(simp) apply(drule fin_ImpL_elim) apply(simp add: abs_fresh) apply(simp add: subst_fresh rename_fresh) apply(subgoal_tac "\z::name. 
z\<sharp>(name1,x,P,trm1[x\<turnstile>n>y]{y:=<c>.P},trm2[x\<turnstile>n>y]{y:=<c>.P},y,P,trm1,trm2)")
apply(erule exE, simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_ImpL)
apply(simp add: trm.inject alpha fresh_atm calc_atm abs_fresh)
apply(rule conjI)
apply(simp add: nsubst_eqvt calc_atm)
apply(simp add: perm_fresh_fresh)
apply(simp add: nrename_fresh)
apply(simp add: nsubst_eqvt calc_atm)
apply(simp add: perm_fresh_fresh)
apply(simp add: nrename_fresh)
apply(rule exists_fresh')
apply(rule fin_supp)
done

inductive
  fic :: "trm \<Rightarrow> coname \<Rightarrow> bool"
where
  [intro]: "fic (Ax x a) a"
| [intro]: "a\<sharp>M \<Longrightarrow> fic (NotR (x).M a) a"
| [intro]: "\<lbrakk>c\<sharp>[a].M; c\<sharp>[b].N\<rbrakk> \<Longrightarrow> fic (AndR <a>.M <b>.N c) c"
| [intro]: "b\<sharp>[a].M \<Longrightarrow> fic (OrR1 <a>.M b) b"
| [intro]: "b\<sharp>[a].M \<Longrightarrow> fic (OrR2 <a>.M b) b"
| [intro]: "\<lbrakk>b\<sharp>[a].M\<rbrakk> \<Longrightarrow> fic (ImpR (x).<a>.M b) b"

equivariance fic

lemma fic_Ax_elim:
  assumes a: "fic (Ax x a) b"
  shows "a=b"
using a
apply(erule_tac fic.cases)
apply(auto simp add: trm.inject)
done

lemma fic_NotR_elim:
  assumes a: "fic (NotR (x).M a) b"
  shows "a=b \<and> b\<sharp>M"
using a
apply(erule_tac fic.cases)
apply(auto simp add: trm.inject)
apply(subgoal_tac "b\<sharp>[xa].Ma")
apply(drule sym)
apply(simp_all add: abs_fresh)
done

lemma fic_OrR1_elim:
  assumes a: "fic (OrR1 <a>.M b) c"
  shows "b=c \<and> c\<sharp>[a].M"
using a
apply(erule_tac fic.cases)
apply(auto simp add: trm.inject)
done

lemma fic_OrR2_elim:
  assumes a: "fic (OrR2 <a>.M b) c"
  shows "b=c \<and> c\<sharp>[a].M"
using a
apply(erule_tac fic.cases)
apply(auto simp add: trm.inject)
done

lemma fic_AndR_elim:
  assumes a: "fic (AndR <a>.M <b>.N c) d"
  shows "c=d \<and> d\<sharp>[a].M \<and> d\<sharp>[b].N"
using a
apply(erule_tac fic.cases)
apply(auto simp add: trm.inject)
done

lemma fic_ImpR_elim:
  assumes a: "fic (ImpR (x).<a>.M b) c"
  shows "b=c \<and> b\<sharp>[a].M"
using a
apply(erule_tac fic.cases)
apply(auto simp add: trm.inject)
apply(subgoal_tac "c\<sharp>[xa].[aa].Ma")
apply(drule sym)
apply(simp_all add: abs_fresh)
done

lemma fic_rest_elims:
  shows "fic (Cut <a>.M (x).N) d \<Longrightarrow> False"
  and   "fic (NotL <a>.M x) d \<Longrightarrow> False"
  and   "fic (OrL (x).M (y).N z) d \<Longrightarrow> False"
  and   "fic
(AndL1 (x).M y) d \ False" and "fic (AndL2 (x).M y) d \ False" and "fic (ImpL .M (x).N y) d \ False" by (erule fic.cases, simp_all add: trm.inject)+ lemmas fic_elims = fic_Ax_elim fic_NotR_elim fic_OrR1_elim fic_OrR2_elim fic_AndR_elim fic_ImpR_elim fic_rest_elims lemma fic_rename: shows "fic M a \ fic ([(a',a)]\M) a'" by (induct rule: fic.induct) (auto simp add: calc_atm simp add: fresh_left abs_fresh) lemma not_fic_subst1: assumes a: "\(fic M a)" shows "\(fic (M{y:=.P}) a)" using a apply(nominal_induct M avoiding: a c y P rule: trm.strong_induct) apply(auto) apply(drule fic_elims, simp) apply(drule fic_elims, simp) apply(drule fic_elims, simp) apply(drule fic_elims) apply(auto)[1] apply(drule freshc_after_substn) apply(simp add: fic.intros) apply(subgoal_tac "\x'::name. x'\(trm{y:=.P},P,a)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotL) apply(drule fic_elims, simp) apply(rule exists_fresh'(1)[OF fs_name1]) apply(drule fic_elims, simp) apply(drule fic_elims) apply(auto)[1] apply(simp add: abs_fresh fresh_atm) apply(drule freshc_after_substn) apply(drule freshc_after_substn) apply(simp add: fic.intros abs_fresh) apply(subgoal_tac "\x'::name. x'\(trm{y:=.P},P,name1,a)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL1) apply(drule fic_elims, simp) apply(rule exists_fresh'(1)[OF fs_name1]) apply(drule fic_elims, simp) apply(subgoal_tac "\x'::name. 
x'\(trm{y:=.P},P,name1,a)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL2) apply(drule fic_elims, simp) apply(rule exists_fresh'(1)[OF fs_name1]) apply(drule fic_elims, simp) apply(drule fic_elims) apply(auto)[1] apply(simp add: abs_fresh fresh_atm) apply(drule freshc_after_substn) apply(simp add: fic.intros abs_fresh) apply(drule fic_elims) apply(auto)[1] apply(simp add: abs_fresh fresh_atm) apply(drule freshc_after_substn) apply(simp add: fic.intros abs_fresh) apply(subgoal_tac "\x'::name. x'\(trm1{y:=.P},trm2{y:=.P},P,name1,name2,a)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrL) apply(drule fic_elims, simp) apply(rule exists_fresh'(1)[OF fs_name1]) apply(drule fic_elims, simp) apply(drule fic_elims, simp) apply(simp add: abs_fresh fresh_atm) apply(drule freshc_after_substn) apply(simp add: fic.intros abs_fresh) apply(subgoal_tac "\x'::name. x'\(trm1{name2:=.P},trm2{name2:=.P},P,name1,name2,a)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpL) apply(drule fic_elims, simp) apply(rule exists_fresh'(1)[OF fs_name1]) apply(drule fic_elims, simp) done lemma not_fic_subst2: assumes a: "\(fic M a)" shows "\(fic (M{c:=(y).P}) a)" using a apply(nominal_induct M avoiding: a c y P rule: trm.strong_induct) apply(auto) apply(drule fic_elims, simp) apply(drule fic_elims, simp) apply(drule fic_elims, simp) apply(subgoal_tac "\c'::coname. c'\(trm{coname:=(y).P},P,a)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotR) apply(drule fic_elims, simp) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(drule fic_elims, simp) apply(drule freshc_after_substc) apply(simp) apply(simp add: fic.intros abs_fresh) apply(drule fic_elims, simp) apply(subgoal_tac "\c'::coname. 
c'\(trm1{coname3:=(y).P},trm2{coname3:=(y).P},P,coname1,coname2,a)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndR) apply(drule fic_elims, simp) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(drule fic_elims, simp) apply(auto)[1] apply(simp add: abs_fresh fresh_atm) apply(drule freshc_after_substc) apply(simp) apply(drule freshc_after_substc) apply(simp) apply(simp add: fic.intros abs_fresh) apply(drule fic_elims, simp) apply(drule fic_elims, simp) apply(subgoal_tac "\c'::coname. c'\(trm{coname2:=(y).P},P,coname1,a)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR1) apply(drule fic_elims, simp) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(drule fic_elims, simp) apply(simp add: abs_fresh fresh_atm) apply(drule freshc_after_substc) apply(simp) apply(simp add: fic.intros abs_fresh) apply(subgoal_tac "\c'::coname. c'\(trm{coname2:=(y).P},P,coname1,a)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR2) apply(drule fic_elims, simp) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(drule fic_elims, simp) apply(simp add: abs_fresh fresh_atm) apply(drule freshc_after_substc) apply(simp) apply(simp add: fic.intros abs_fresh) apply(drule fic_elims, simp) apply(subgoal_tac "\c'::coname. 
c'\(trm{coname2:=(y).P},P,coname1,a)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpR) apply(drule fic_elims, simp) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(drule fic_elims, simp) apply(simp add: abs_fresh fresh_atm) apply(drule freshc_after_substc) apply(simp) apply(simp add: fic.intros abs_fresh) apply(drule fic_elims, simp) done lemma fic_subst1: assumes a: "fic M a" "a\b" "a\P" shows "fic (M{b:=(x).P}) a" using a apply(nominal_induct M avoiding: x b a P rule: trm.strong_induct) apply(drule fic_elims) apply(simp add: fic.intros) apply(drule fic_elims, simp) apply(drule fic_elims, simp) apply(rule fic.intros) apply(rule subst_fresh) apply(simp) apply(drule fic_elims, simp) apply(drule fic_elims, simp) apply(rule fic.intros) apply(simp add: abs_fresh fresh_atm) apply(rule subst_fresh) apply(auto)[1] apply(simp add: abs_fresh fresh_atm) apply(rule subst_fresh) apply(auto)[1] apply(drule fic_elims, simp) apply(drule fic_elims, simp) apply(drule fic_elims, simp) apply(rule fic.intros) apply(simp add: abs_fresh fresh_atm) apply(rule subst_fresh) apply(auto)[1] apply(drule fic_elims, simp) apply(rule fic.intros) apply(simp add: abs_fresh fresh_atm) apply(rule subst_fresh) apply(auto)[1] apply(drule fic_elims, simp) apply(drule fic_elims, simp) apply(rule fic.intros) apply(simp add: abs_fresh fresh_atm) apply(rule subst_fresh) apply(auto)[1] apply(drule fic_elims, simp) done lemma fic_subst2: assumes a: "fic M a" "c\a" "a\P" "M\Ax x a" shows "fic (M{x:=.P}) a" using a apply(nominal_induct M avoiding: x a c P rule: trm.strong_induct) apply(drule fic_elims) apply(simp add: trm.inject) apply(rule fic.intros) apply(drule fic_elims, simp) apply(drule fic_elims, simp) apply(rule fic.intros) apply(rule subst_fresh) apply(simp) apply(drule fic_elims, simp) apply(drule fic_elims, simp) apply(rule fic.intros) apply(simp add: abs_fresh fresh_atm) apply(rule subst_fresh) apply(auto)[1] apply(simp add: abs_fresh 
fresh_atm) apply(rule subst_fresh) apply(auto)[1] apply(drule fic_elims, simp) apply(drule fic_elims, simp) apply(drule fic_elims, simp) apply(rule fic.intros) apply(simp add: abs_fresh fresh_atm) apply(rule subst_fresh) apply(auto)[1] apply(drule fic_elims, simp) apply(rule fic.intros) apply(simp add: abs_fresh fresh_atm) apply(rule subst_fresh) apply(auto)[1] apply(drule fic_elims, simp) apply(drule fic_elims, simp) apply(rule fic.intros) apply(simp add: abs_fresh fresh_atm) apply(rule subst_fresh) apply(auto)[1] apply(drule fic_elims, simp) done lemma fic_substc_crename: assumes a: "fic M a" "a\b" "a\P" shows "M[a\c>b]{b:=(y).P} = Cut .(M{b:=(y).P}) (y).P" using a apply(nominal_induct M avoiding: a b y P rule: trm.strong_induct) apply(drule fic_Ax_elim) apply(simp) apply(simp add: trm.inject) apply(simp add: alpha calc_atm fresh_atm trm.inject) apply(simp) apply(drule fic_rest_elims) apply(simp) apply(drule fic_NotR_elim) apply(simp) apply(generate_fresh "coname") apply(fresh_fun_simp) apply(simp add: trm.inject alpha fresh_atm fresh_prod fresh_atm calc_atm abs_fresh) apply(rule conjI) apply(simp add: csubst_eqvt calc_atm) apply(simp add: perm_fresh_fresh) apply(simp add: crename_fresh) apply(rule subst_fresh) apply(simp) apply(drule fic_rest_elims) apply(simp) apply(drule fic_AndR_elim) apply(simp add: abs_fresh fresh_atm subst_fresh rename_fresh) apply(generate_fresh "coname") apply(fresh_fun_simp) apply(simp add: trm.inject alpha fresh_atm calc_atm abs_fresh fresh_prod) apply(rule conjI) apply(simp add: csubst_eqvt calc_atm) apply(simp add: perm_fresh_fresh) apply(simp add: csubst_eqvt calc_atm) apply(simp add: perm_fresh_fresh) apply(simp add: subst_fresh) apply(drule fic_rest_elims) apply(simp) apply(drule fic_rest_elims) apply(simp) apply(drule fic_OrR1_elim) apply(simp) apply(generate_fresh "coname") apply(fresh_fun_simp) apply(simp add: trm.inject alpha fresh_atm calc_atm abs_fresh fresh_prod) apply(simp add: csubst_eqvt calc_atm) apply(simp add: 
perm_fresh_fresh)
apply(simp add: subst_fresh rename_fresh)
apply(drule fic_OrR2_elim)
apply(simp add: abs_fresh fresh_atm)
apply(generate_fresh "coname")
apply(fresh_fun_simp)
apply(simp add: trm.inject alpha fresh_atm calc_atm abs_fresh fresh_prod)
apply(simp add: csubst_eqvt calc_atm)
apply(simp add: perm_fresh_fresh)
apply(simp add: subst_fresh rename_fresh)
apply(drule fic_rest_elims)
apply(simp)
apply(drule fic_ImpR_elim)
apply(simp add: abs_fresh fresh_atm)
apply(generate_fresh "coname")
apply(fresh_fun_simp)
apply(simp add: trm.inject alpha fresh_atm calc_atm abs_fresh fresh_prod)
apply(simp add: csubst_eqvt calc_atm)
apply(simp add: perm_fresh_fresh)
apply(simp add: subst_fresh rename_fresh)
apply(drule fic_rest_elims)
apply(simp)
done

inductive
  l_redu :: "trm \<Rightarrow> trm \<Rightarrow> bool" ("_ \<longrightarrow>\<^sub>l _" [100,100] 100)
where
  LAxR:  "\<lbrakk>x\<sharp>M; a\<noteq>b; fic M a\<rbrakk> \<Longrightarrow> Cut <a>.M (x).(Ax x b) \<longrightarrow>\<^sub>l M[a\<turnstile>c>b]"
| LAxL:  "\<lbrakk>a\<sharp>M; x\<noteq>y; fin M x\<rbrakk> \<Longrightarrow> Cut <a>.(Ax y a) (x).M \<longrightarrow>\<^sub>l M[x\<turnstile>n>y]"
| LNot:  "\<lbrakk>y\<sharp>(M,N); x\<sharp>(N,y); a\<sharp>(M,N,b); b\<sharp>M; y\<noteq>x; b\<noteq>a\<rbrakk> \<Longrightarrow>
          Cut <a>.(NotR (x).M a) (y).(NotL <b>.N y) \<longrightarrow>\<^sub>l Cut <b>.N (x).M"
| LAnd1: "\<lbrakk>b\<sharp>([a1].M1,[a2].M2,N,a1,a2); y\<sharp>([x].N,M1,M2,x); x\<sharp>(M1,M2); a1\<sharp>(M2,N); a2\<sharp>(M1,N); a1\<noteq>a2\<rbrakk> \<Longrightarrow>
          Cut <b>.(AndR <a1>.M1 <a2>.M2 b) (y).(AndL1 (x).N y) \<longrightarrow>\<^sub>l Cut <a1>.M1 (x).N"
| LAnd2: "\<lbrakk>b\<sharp>([a1].M1,[a2].M2,N,a1,a2); y\<sharp>([x].N,M1,M2,x); x\<sharp>(M1,M2); a1\<sharp>(M2,N); a2\<sharp>(M1,N); a1\<noteq>a2\<rbrakk> \<Longrightarrow>
          Cut <b>.(AndR <a1>.M1 <a2>.M2 b) (y).(AndL2 (x).N y) \<longrightarrow>\<^sub>l Cut <a2>.M2 (x).N"
| LOr1:  "\<lbrakk>b\<sharp>([a].M,N1,N2,a); y\<sharp>([x1].N1,[x2].N2,M,x1,x2); x1\<sharp>(M,N2); x2\<sharp>(M,N1); a\<sharp>(N1,N2); x1\<noteq>x2\<rbrakk> \<Longrightarrow>
          Cut <b>.(OrR1 <a>.M b) (y).(OrL (x1).N1 (x2).N2 y) \<longrightarrow>\<^sub>l Cut <a>.M (x1).N1"
| LOr2:  "\<lbrakk>b\<sharp>([a].M,N1,N2,a); y\<sharp>([x1].N1,[x2].N2,M,x1,x2); x1\<sharp>(M,N2); x2\<sharp>(M,N1); a\<sharp>(N1,N2); x1\<noteq>x2\<rbrakk> \<Longrightarrow>
          Cut <b>.(OrR2 <a>.M b) (y).(OrL (x1).N1 (x2).N2 y) \<longrightarrow>\<^sub>l Cut <a>.M (x2).N2"
| LImp:  "\<lbrakk>z\<sharp>(N,[y].P,[x].M,y,x); b\<sharp>([a].M,[c].N,P,c,a); x\<sharp>(N,[y].P,y); c\<sharp>(P,[a].M,b,a); a\<sharp>([c].N,P); y\<sharp>(N,[x].M)\<rbrakk> \<Longrightarrow>
          Cut <b>.(ImpR (x).<a>.M b) (z).(ImpL <c>.N (y).P z) \<longrightarrow>\<^sub>l Cut <a>.(Cut <c>.N (x).M) (y).P"

equivariance l_redu

lemma
l_redu_eqvt':
  fixes pi1::"name prm"
  and   pi2::"coname prm"
  shows "(pi1\<bullet>M) \<longrightarrow>\<^sub>l (pi1\<bullet>M') \<Longrightarrow> M \<longrightarrow>\<^sub>l M'"
  and   "(pi2\<bullet>M) \<longrightarrow>\<^sub>l (pi2\<bullet>M') \<Longrightarrow> M \<longrightarrow>\<^sub>l M'"
apply -
apply(drule_tac pi="rev pi1" in l_redu.eqvt(1))
apply(perm_simp)
apply(drule_tac pi="rev pi2" in l_redu.eqvt(2))
apply(perm_simp)
done

nominal_inductive l_redu
  apply(simp_all add: abs_fresh fresh_atm rename_fresh fresh_prod abs_supp fin_supp)
  apply(force)+
  done

lemma fresh_l_redu:
  fixes x::"name"
  and   a::"coname"
  shows "M \<longrightarrow>\<^sub>l M' \<Longrightarrow> x\<sharp>M \<Longrightarrow> x\<sharp>M'"
  and   "M \<longrightarrow>\<^sub>l M' \<Longrightarrow> a\<sharp>M \<Longrightarrow> a\<sharp>M'"
apply -
apply(induct rule: l_redu.induct)
apply(auto simp add: abs_fresh rename_fresh)
apply(case_tac "xa=x")
apply(simp add: rename_fresh)
apply(simp add: rename_fresh fresh_atm)
apply(simp add: fresh_prod abs_fresh abs_supp fin_supp)+
apply(induct rule: l_redu.induct)
apply(auto simp add: abs_fresh rename_fresh)
apply(case_tac "aa=a")
apply(simp add: rename_fresh)
apply(simp add: rename_fresh fresh_atm)
apply(simp add: fresh_prod abs_fresh abs_supp fin_supp)+
done

lemma better_LAxR_intro[intro]:
  shows "fic M a \<Longrightarrow> Cut <a>.M (x).(Ax x b) \<longrightarrow>\<^sub>l M[a\<turnstile>c>b]"
proof -
  assume fin: "fic M a"
  obtain x'::"name" where fs1: "x'\<sharp>(M,x)" by (rule exists_fresh(1), rule fin_supp, blast)
  obtain a'::"coname" where fs2: "a'\<sharp>(a,M,b)" by (rule exists_fresh(2), rule fin_supp, blast)
  have "Cut <a>.M (x).(Ax x b) = Cut <a'>.([(a',a)]\<bullet>M) (x').(Ax x' b)"
    using fs1 fs2 by (rule_tac sym, auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm)
  also have "\<dots> \<longrightarrow>\<^sub>l ([(a',a)]\<bullet>M)[a'\<turnstile>c>b]"
    using fs1 fs2 fin by (auto intro: l_redu.intros simp add: fresh_left calc_atm fic_rename)
  also have "\<dots> = M[a\<turnstile>c>b]" using fs1 fs2 by (simp add: crename_rename)
  finally show ?thesis by simp
qed

lemma better_LAxL_intro[intro]:
  shows "fin M x \<Longrightarrow> Cut <a>.(Ax y a) (x).M \<longrightarrow>\<^sub>l M[x\<turnstile>n>y]"
proof -
  assume fin: "fin M x"
  obtain x'::"name" where fs1: "x'\<sharp>(y,M,x)" by (rule exists_fresh(1), rule fin_supp, blast)
  obtain a'::"coname" where fs2: "a'\<sharp>(a,M)" by (rule exists_fresh(2), rule
fin_supp, blast) have "Cut .(Ax y a) (x).M = Cut .(Ax y a') (x').([(x',x)]\M)" using fs1 fs2 by (rule_tac sym, auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm) also have "\ \\<^sub>l ([(x',x)]\M)[x'\n>y]" using fs1 fs2 fin by (auto intro: l_redu.intros simp add: fresh_left calc_atm fin_rename) also have "\ = M[x\n>y]" using fs1 fs2 by (simp add: nrename_rename) finally show ?thesis by simp qed lemma better_LNot_intro[intro]: shows "\y\N; a\M\ \ Cut .(NotR (x).M a) (y).(NotL .N y) \\<^sub>l Cut .N (x).M" proof - assume fs: "y\N" "a\M" obtain x'::"name" where f1: "x'\(y,N,M,x)" by (rule exists_fresh(1), rule fin_supp, blast) obtain y'::"name" where f2: "y'\(y,N,M,x,x')" by (rule exists_fresh(1), rule fin_supp, blast) obtain a'::"coname" where f3: "a'\(a,M,N,b)" by (rule exists_fresh(2), rule fin_supp, blast) obtain b'::"coname" where f4: "b'\(a,M,N,b,a')" by (rule exists_fresh(2), rule fin_supp, blast) have "Cut .(NotR (x).M a) (y).(NotL .N y) = Cut .(NotR (x).([(a',a)]\M) a') (y').(NotL .([(y',y)]\N) y')" using f1 f2 f3 f4 by (rule_tac sym, auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm abs_fresh) also have "\ = Cut .(NotR (x).M a') (y').(NotL .N y')" using f1 f2 f3 f4 fs by (perm_simp) also have "\ = Cut .(NotR (x').([(x',x)]\M) a') (y').(NotL .([(b',b)]\N) y')" using f1 f2 f3 f4 by (rule_tac sym, auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm) also have "\ \\<^sub>l Cut .([(b',b)]\N) (x').([(x',x)]\M)" using f1 f2 f3 f4 fs by (auto intro: l_redu.intros simp add: fresh_prod fresh_left calc_atm fresh_atm) also have "\ = Cut .N (x).M" using f1 f2 f3 f4 by (auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm) finally show ?thesis by simp qed lemma better_LAnd1_intro[intro]: shows "\a\([b1].M1,[b2].M2); y\[x].N\ \ Cut .(AndR .M1 .M2 a) (y).(AndL1 (x).N y) \\<^sub>l Cut .M1 (x).N" proof - assume fs: "a\([b1].M1,[b2].M2)" "y\[x].N" obtain x'::"name" where f1: "x'\(y,N,M1,M2,x)" by (rule exists_fresh(1), rule fin_supp, 
blast) obtain y'::"name" where f2: "y'\(y,N,M1,M2,x,x')" by (rule exists_fresh(1), rule fin_supp, blast) obtain a'::"coname" where f3: "a'\(a,M1,M2,N,b1,b2)" by (rule exists_fresh(2), rule fin_supp, blast) obtain b1'::"coname" where f4:"b1'\(a,M1,M2,N,b1,b2,a')" by (rule exists_fresh(2), rule fin_supp, blast) obtain b2'::"coname" where f5:"b2'\(a,M1,M2,N,b1,b2,a',b1')" by (rule exists_fresh(2),rule fin_supp, blast) have "Cut .(AndR .M1 .M2 a) (y).(AndL1 (x).N y) = Cut .(AndR .M1 .M2 a') (y').(AndL1 (x).N y')" using f1 f2 f3 f4 fs apply(rule_tac sym) apply(perm_simp add: trm.inject alpha calc_atm fresh_prod fresh_left fresh_atm abs_fresh) apply(auto simp add: perm_fresh_fresh) done also have "\ = Cut .(AndR .([(b1',b1)]\M1) .([(b2',b2)]\M2) a') (y').(AndL1 (x').([(x',x)]\N) y')" using f1 f2 f3 f4 f5 fs apply(rule_tac sym) apply(perm_simp add: trm.inject alpha calc_atm fresh_prod fresh_left fresh_atm abs_fresh) done also have "\ \\<^sub>l Cut .([(b1',b1)]\M1) (x').([(x',x)]\N)" using f1 f2 f3 f4 f5 fs apply - apply(rule l_redu.intros) apply(auto simp add: abs_fresh fresh_prod fresh_left calc_atm fresh_atm) done also have "\ = Cut .M1 (x).N" using f1 f2 f3 f4 f5 fs by (auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm) finally show ?thesis by simp qed lemma better_LAnd2_intro[intro]: shows "\a\([b1].M1,[b2].M2); y\[x].N\ \ Cut .(AndR .M1 .M2 a) (y).(AndL2 (x).N y) \\<^sub>l Cut .M2 (x).N" proof - assume fs: "a\([b1].M1,[b2].M2)" "y\[x].N" obtain x'::"name" where f1: "x'\(y,N,M1,M2,x)" by (rule exists_fresh(1), rule fin_supp, blast) obtain y'::"name" where f2: "y'\(y,N,M1,M2,x,x')" by (rule exists_fresh(1), rule fin_supp, blast) obtain a'::"coname" where f3: "a'\(a,M1,M2,N,b1,b2)" by (rule exists_fresh(2), rule fin_supp, blast) obtain b1'::"coname" where f4:"b1'\(a,M1,M2,N,b1,b2,a')" by (rule exists_fresh(2), rule fin_supp, blast) obtain b2'::"coname" where f5:"b2'\(a,M1,M2,N,b1,b2,a',b1')" by (rule exists_fresh(2),rule fin_supp, blast) have "Cut .(AndR .M1 
.M2 a) (y).(AndL2 (x).N y) = Cut .(AndR .M1 .M2 a') (y').(AndL2 (x).N y')" using f1 f2 f3 f4 fs apply(rule_tac sym) apply(perm_simp add: trm.inject alpha calc_atm fresh_prod fresh_left fresh_atm abs_fresh) apply(auto simp add: perm_fresh_fresh) done also have "\ = Cut .(AndR .([(b1',b1)]\M1) .([(b2',b2)]\M2) a') (y').(AndL2 (x').([(x',x)]\N) y')" using f1 f2 f3 f4 f5 fs apply(rule_tac sym) apply(perm_simp add: trm.inject alpha calc_atm fresh_prod fresh_left fresh_atm abs_fresh) done also have "\ \\<^sub>l Cut .([(b2',b2)]\M2) (x').([(x',x)]\N)" using f1 f2 f3 f4 f5 fs apply - apply(rule l_redu.intros) apply(auto simp add: abs_fresh fresh_prod fresh_left calc_atm fresh_atm) done also have "\ = Cut .M2 (x).N" using f1 f2 f3 f4 f5 fs by (auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm) finally show ?thesis by simp qed lemma better_LOr1_intro[intro]: shows "\y\([x1].N1,[x2].N2); b\[a].M\ \ Cut .(OrR1 .M b) (y).(OrL (x1).N1 (x2).N2 y) \\<^sub>l Cut .M (x1).N1" proof - assume fs: "y\([x1].N1,[x2].N2)" "b\[a].M" obtain y'::"name" where f1: "y'\(y,M,N1,N2,x1,x2)" by (rule exists_fresh(1), rule fin_supp, blast) obtain x1'::"name" where f2: "x1'\(y,M,N1,N2,x1,x2,y')" by (rule exists_fresh(1), rule fin_supp, blast) obtain x2'::"name" where f3: "x2'\(y,M,N1,N2,x1,x2,y',x1')" by (rule exists_fresh(1), rule fin_supp, blast) obtain a'::"coname" where f4: "a'\(a,N1,N2,M,b)" by (rule exists_fresh(2), rule fin_supp, blast) obtain b'::"coname" where f5: "b'\(a,N1,N2,M,b,a')" by (rule exists_fresh(2),rule fin_supp, blast) have "Cut .(OrR1 .M b) (y).(OrL (x1).N1 (x2).N2 y) = Cut .(OrR1 .M b') (y').(OrL (x1).N1 (x2).N2 y')" using f1 f2 f3 f4 f5 fs apply(rule_tac sym) apply(perm_simp add: trm.inject alpha calc_atm fresh_prod fresh_left fresh_atm abs_fresh) apply(auto simp add: perm_fresh_fresh) done also have "\ = Cut .(OrR1 .([(a',a)]\M) b') (y').(OrL (x1').([(x1',x1)]\N1) (x2').([(x2',x2)]\N2) y')" using f1 f2 f3 f4 f5 fs apply(rule_tac sym) apply(perm_simp add: 
trm.inject alpha calc_atm fresh_prod fresh_left fresh_atm abs_fresh) done also have "\ \\<^sub>l Cut .([(a',a)]\M) (x1').([(x1',x1)]\N1)" using f1 f2 f3 f4 f5 fs apply - apply(rule l_redu.intros) apply(auto simp add: abs_fresh fresh_prod fresh_left calc_atm fresh_atm) done also have "\ = Cut .M (x1).N1" using f1 f2 f3 f4 f5 fs by (auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm) finally show ?thesis by simp qed lemma better_LOr2_intro[intro]: shows "\y\([x1].N1,[x2].N2); b\[a].M\ \ Cut .(OrR2 .M b) (y).(OrL (x1).N1 (x2).N2 y) \\<^sub>l Cut .M (x2).N2" proof - assume fs: "y\([x1].N1,[x2].N2)" "b\[a].M" obtain y'::"name" where f1: "y'\(y,M,N1,N2,x1,x2)" by (rule exists_fresh(1), rule fin_supp, blast) obtain x1'::"name" where f2: "x1'\(y,M,N1,N2,x1,x2,y')" by (rule exists_fresh(1), rule fin_supp, blast) obtain x2'::"name" where f3: "x2'\(y,M,N1,N2,x1,x2,y',x1')" by (rule exists_fresh(1), rule fin_supp, blast) obtain a'::"coname" where f4: "a'\(a,N1,N2,M,b)" by (rule exists_fresh(2), rule fin_supp, blast) obtain b'::"coname" where f5: "b'\(a,N1,N2,M,b,a')" by (rule exists_fresh(2),rule fin_supp, blast) have "Cut .(OrR2 .M b) (y).(OrL (x1).N1 (x2).N2 y) = Cut .(OrR2 .M b') (y').(OrL (x1).N1 (x2).N2 y')" using f1 f2 f3 f4 f5 fs apply(rule_tac sym) apply(perm_simp add: trm.inject alpha calc_atm fresh_prod fresh_left fresh_atm abs_fresh) apply(auto simp add: perm_fresh_fresh) done also have "\ = Cut .(OrR2 .([(a',a)]\M) b') (y').(OrL (x1').([(x1',x1)]\N1) (x2').([(x2',x2)]\N2) y')" using f1 f2 f3 f4 f5 fs apply(rule_tac sym) apply(perm_simp add: trm.inject alpha calc_atm fresh_prod fresh_left fresh_atm abs_fresh) done also have "\ \\<^sub>l Cut .([(a',a)]\M) (x2').([(x2',x2)]\N2)" using f1 f2 f3 f4 f5 fs apply - apply(rule l_redu.intros) apply(auto simp add: abs_fresh fresh_prod fresh_left calc_atm fresh_atm) done also have "\ = Cut .M (x2).N2" using f1 f2 f3 f4 f5 fs by (auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm) finally show ?thesis by 
simp qed lemma better_LImp_intro[intro]: shows "\z\(N,[y].P); b\[a].M; a\N\ \ Cut .(ImpR (x)..M b) (z).(ImpL .N (y).P z) \\<^sub>l Cut .(Cut .N (x).M) (y).P" proof - assume fs: "z\(N,[y].P)" "b\[a].M" "a\N" obtain y'::"name" where f1: "y'\(y,M,N,P,z,x)" by (rule exists_fresh(1), rule fin_supp, blast) obtain x'::"name" where f2: "x'\(y,M,N,P,z,x,y')" by (rule exists_fresh(1), rule fin_supp, blast) obtain z'::"name" where f3: "z'\(y,M,N,P,z,x,y',x')" by (rule exists_fresh(1), rule fin_supp, blast) obtain a'::"coname" where f4: "a'\(a,N,P,M,b)" by (rule exists_fresh(2), rule fin_supp, blast) obtain b'::"coname" where f5: "b'\(a,N,P,M,b,c,a')" by (rule exists_fresh(2),rule fin_supp, blast) obtain c'::"coname" where f6: "c'\(a,N,P,M,b,c,a',b')" by (rule exists_fresh(2),rule fin_supp, blast) have " Cut .(ImpR (x)..M b) (z).(ImpL .N (y).P z) = Cut .(ImpR (x)..M b') (z').(ImpL .N (y).P z')" using f1 f2 f3 f4 f5 fs apply(rule_tac sym) apply(perm_simp add: trm.inject alpha calc_atm fresh_prod fresh_left fresh_atm abs_fresh) apply(auto simp add: perm_fresh_fresh) done also have "\ = Cut .(ImpR (x')..([(a',a)]\([(x',x)]\M)) b') (z').(ImpL .([(c',c)]\N) (y').([(y',y)]\P) z')" using f1 f2 f3 f4 f5 f6 fs apply(rule_tac sym) apply(simp add: trm.inject) apply(simp add: alpha) apply(rule conjI) apply(simp add: trm.inject) apply(simp add: alpha fresh_prod fresh_atm abs_perm calc_atm fresh_left abs_fresh) apply(simp add: trm.inject) apply(simp add: alpha) apply(rule conjI) apply(simp add: alpha fresh_prod fresh_atm abs_perm calc_atm fresh_left abs_fresh) apply(simp add: alpha fresh_prod fresh_atm abs_perm calc_atm fresh_left abs_fresh) done also have "\ \\<^sub>l Cut .(Cut .([(c',c)]\N) (x').([(a',a)]\[(x',x)]\M)) (y').([(y',y)]\P)" using f1 f2 f3 f4 f5 f6 fs apply - apply(rule l_redu.intros) apply(auto simp add: abs_fresh fresh_prod fresh_left calc_atm fresh_atm) done also have "\ = Cut .(Cut .N (x).M) (y).P" using f1 f2 f3 f4 f5 f6 fs apply(simp add: trm.inject) apply(rule conjI) 
apply(simp add: alpha)
apply(rule disjI2)
apply(simp add: trm.inject)
apply(rule conjI)
apply(simp add: fresh_prod fresh_atm)
apply(rule conjI)
apply(perm_simp add: calc_atm)
apply(auto simp add: fresh_prod fresh_atm)[1]
apply(perm_simp add: alpha)
apply(perm_simp add: alpha)
apply(perm_simp add: alpha)
apply(rule conjI)
apply(perm_simp add: calc_atm)
apply(rule_tac pi="[(a',a)]" in pt_bij4[OF pt_coname_inst, OF at_coname_inst])
apply(perm_simp add: abs_perm calc_atm)
apply(perm_simp add: alpha fresh_prod fresh_atm)
apply(simp add: abs_fresh)
apply(perm_simp add: alpha fresh_prod fresh_atm)
done
  finally show ?thesis by simp
qed

lemma alpha_coname:
  fixes M::"trm"
  and   a::"coname"
  assumes a: "[a].M = [b].N" "c\<sharp>(a,b,M,N)"
  shows "M = [(a,c)]\<bullet>[(b,c)]\<bullet>N"
using a
apply(auto simp add: alpha_fresh fresh_prod fresh_atm)
apply(drule sym)
apply(perm_simp)
done

lemma alpha_name:
  fixes M::"trm"
  and   x::"name"
  assumes a: "[x].M = [y].N" "z\<sharp>(x,y,M,N)"
  shows "M = [(x,z)]\<bullet>[(y,z)]\<bullet>N"
using a
apply(auto simp add: alpha_fresh fresh_prod fresh_atm)
apply(drule sym)
apply(perm_simp)
done

lemma alpha_name_coname:
  fixes M::"trm"
  and   x::"name"
  and   a::"coname"
  assumes a: "[x].[b].M = [y].[c].N" "z\<sharp>(x,y,M,N)" "a\<sharp>(b,c,M,N)"
  shows "M = [(x,z)]\<bullet>[(b,a)]\<bullet>[(c,a)]\<bullet>[(y,z)]\<bullet>N"
using a
apply(auto simp add: alpha_fresh fresh_prod fresh_atm abs_supp fin_supp abs_fresh abs_perm fresh_left calc_atm)
apply(drule sym)
apply(simp)
apply(perm_simp)
done

lemma Cut_l_redu_elim:
  assumes a: "Cut <a>.M (x).N \<longrightarrow>\<^sub>l R"
  shows "(\<exists>b. R = M[a\<turnstile>c>b]) \<or>
         (\<exists>y. R = N[x\<turnstile>n>y]) \<or>
         (\<exists>y M' b N'. M = NotR (y).M' a \<and> N = NotL <b>.N' x \<and> R = Cut <b>.N' (y).M' \<and> fic M a \<and> fin N x) \<or>
         (\<exists>b M1 c M2 y N'. M = AndR <b>.M1 <c>.M2 a \<and> N = AndL1 (y).N' x \<and> R = Cut <b>.M1 (y).N' \<and> fic M a \<and> fin N x) \<or>
         (\<exists>b M1 c M2 y N'. M = AndR <b>.M1 <c>.M2 a \<and> N = AndL2 (y).N' x \<and> R = Cut <c>.M2 (y).N' \<and> fic M a \<and> fin N x) \<or>
         (\<exists>b N' z M1 y M2. M = OrR1 <b>.N' a \<and> N = OrL (z).M1 (y).M2 x \<and> R = Cut <b>.N' (z).M1 \<and> fic M a \<and> fin N x) \<or>
         (\<exists>b N' z M1 y M2.
M = OrR2 .N' a \ N = OrL (z).M1 (y).M2 x \ R = Cut .N' (y).M2 \ fic M a \ fin N x) \ (\z b M' c N1 y N2. M = ImpR (z)..M' a \ N = ImpL .N1 (y).N2 x \ R = Cut .(Cut .N1 (z).M') (y).N2 \ b\(c,N1) \ fic M a \ fin N x)" using a apply(erule_tac l_redu.cases) apply(rule disjI1) (* ax case *) apply(simp add: trm.inject) apply(rule_tac x="b" in exI) apply(erule conjE) apply(simp add: alpha) apply(erule disjE) apply(simp) apply(simp) apply(simp add: rename_fresh) apply(rule disjI2) apply(rule disjI1) (* ax case *) apply(simp add: trm.inject) apply(rule_tac x="y" in exI) apply(erule conjE) apply(thin_tac "[a].M = [aa].Ax y aa") apply(simp add: alpha) apply(erule disjE) apply(simp) apply(simp) apply(simp add: rename_fresh) apply(rule disjI2) apply(rule disjI2) apply(rule disjI1) (* not case *) apply(simp add: trm.inject) apply(erule conjE)+ apply(generate_fresh "coname") apply(simp add: abs_fresh fresh_prod fresh_atm) apply(auto)[1] apply(drule_tac c="c" in alpha_coname) apply(simp add: fresh_prod fresh_atm abs_fresh) apply(simp add: calc_atm) apply(rule exI)+ apply(rule conjI) apply(rule refl) apply(generate_fresh "name") apply(simp add: calc_atm abs_fresh fresh_prod fresh_atm fresh_left) apply(auto)[1] apply(drule_tac z="ca" in alpha_name) apply(simp add: fresh_prod fresh_atm abs_fresh) apply(simp add: calc_atm) apply(rule exI)+ apply(rule conjI) apply(rule refl) apply(auto simp add: calc_atm abs_fresh fresh_left)[1] apply(case_tac "y=x") apply(perm_simp) apply(perm_simp) apply(case_tac "aa=a") apply(perm_simp) apply(perm_simp) (* and1 case *) apply(rule disjI2) apply(rule disjI2) apply(rule disjI2) apply(rule disjI1) apply(simp add: trm.inject) apply(erule conjE)+ apply(generate_fresh "coname") apply(simp add: abs_fresh fresh_prod fresh_atm) apply(auto)[1] apply(drule_tac c="c" in alpha_coname) apply(simp add: fresh_prod fresh_atm abs_fresh) apply(simp) apply(rule exI)+ apply(rule conjI) apply(rule exI)+ apply(rule_tac s="a" and t="[(a,c)]\[(b,c)]\b" in subst) apply(simp 
add: calc_atm) apply(rule refl) apply(generate_fresh "name") apply(simp add: abs_fresh fresh_prod fresh_atm) apply(auto)[1] apply(drule_tac z="ca" in alpha_name) apply(simp add: fresh_prod fresh_atm abs_fresh) apply(simp) apply(rule exI)+ apply(rule conjI) apply(rule_tac s="x" and t="[(x,ca)]\[(y,ca)]\y" in subst) apply(simp add: calc_atm) apply(rule refl) apply(auto simp add: fresh_left calc_atm abs_fresh split: if_splits)[1] apply(generate_fresh "name") apply(simp add: abs_fresh fresh_prod fresh_atm) apply(auto)[1] apply(drule_tac z="cb" in alpha_name) apply(simp add: fresh_prod fresh_atm abs_fresh) apply(simp) apply(rule exI)+ apply(rule conjI) apply(rule_tac s="x" and t="[(x,cb)]\[(y,cb)]\y" in subst) apply(simp add: calc_atm) apply(rule refl) apply(auto simp add: fresh_left calc_atm abs_fresh alpha perm_fresh_fresh split: if_splits)[1] apply(perm_simp)+ apply(generate_fresh "name") apply(simp add: abs_fresh fresh_prod fresh_atm) apply(auto)[1] apply(drule_tac z="cb" in alpha_name) apply(simp add: fresh_prod fresh_atm abs_fresh) apply(simp) apply(rule exI)+ apply(rule conjI) apply(rule_tac s="x" and t="[(x,cb)]\[(y,cb)]\y" in subst) apply(simp add: calc_atm) apply(rule refl) apply(auto simp add: fresh_left calc_atm abs_fresh alpha perm_fresh_fresh split: if_splits)[1] apply(perm_simp)+ apply(generate_fresh "name") apply(simp add: abs_fresh fresh_prod fresh_atm) apply(auto)[1] apply(drule_tac z="cb" in alpha_name) apply(simp add: fresh_prod fresh_atm abs_fresh) apply(simp) apply(rule exI)+ apply(rule conjI) apply(rule_tac s="x" and t="[(x,cb)]\[(y,cb)]\y" in subst) apply(simp add: calc_atm) apply(rule refl) apply(auto simp add: fresh_left calc_atm abs_fresh alpha perm_fresh_fresh split: if_splits)[1] apply(perm_simp)+ (* and2 case *) apply(rule disjI2) apply(rule disjI2) apply(rule disjI2) apply(rule disjI2) apply(rule disjI1) apply(simp add: trm.inject) apply(erule conjE)+ apply(generate_fresh "coname") apply(simp add: abs_fresh fresh_prod fresh_atm) 
apply(auto)[1] apply(drule_tac c="c" in alpha_coname) apply(simp add: fresh_prod fresh_atm abs_fresh) apply(simp) apply(rule exI)+ apply(rule conjI) apply(rule_tac s="a" and t="[(a,c)]\[(b,c)]\b" in subst) apply(simp add: calc_atm) apply(rule refl) apply(generate_fresh "name") apply(simp add: abs_fresh fresh_prod fresh_atm) apply(auto)[1] apply(drule_tac z="ca" in alpha_name) apply(simp add: fresh_prod fresh_atm abs_fresh) apply(simp) apply(rule exI)+ apply(rule conjI) apply(rule_tac s="x" and t="[(x,ca)]\[(y,ca)]\y" in subst) apply(simp add: calc_atm) apply(rule refl) apply(auto simp add: fresh_left calc_atm abs_fresh split: if_splits)[1] apply(generate_fresh "name") apply(simp add: abs_fresh fresh_prod fresh_atm) apply(auto)[1] apply(drule_tac z="cb" in alpha_name) apply(simp add: fresh_prod fresh_atm abs_fresh) apply(simp) apply(rule exI)+ apply(rule conjI) apply(rule_tac s="x" and t="[(x,cb)]\[(y,cb)]\y" in subst) apply(simp add: calc_atm) apply(rule refl) apply(auto simp add: fresh_left calc_atm abs_fresh alpha perm_fresh_fresh split: if_splits)[1] apply(perm_simp)+ apply(generate_fresh "name") apply(simp add: abs_fresh fresh_prod fresh_atm) apply(auto)[1] apply(drule_tac z="cb" in alpha_name) apply(simp add: fresh_prod fresh_atm abs_fresh) apply(simp) apply(rule exI)+ apply(rule conjI) apply(rule_tac s="x" and t="[(x,cb)]\[(y,cb)]\y" in subst) apply(simp add: calc_atm) apply(rule refl) apply(auto simp add: fresh_left calc_atm abs_fresh alpha perm_fresh_fresh split: if_splits)[1] apply(perm_simp)+ apply(generate_fresh "name") apply(simp add: abs_fresh fresh_prod fresh_atm) apply(auto)[1] apply(drule_tac z="cb" in alpha_name) apply(simp add: fresh_prod fresh_atm abs_fresh) apply(simp) apply(rule exI)+ apply(rule conjI) apply(rule_tac s="x" and t="[(x,cb)]\[(y,cb)]\y" in subst) apply(simp add: calc_atm) apply(rule refl) apply(auto simp add: fresh_left calc_atm abs_fresh alpha perm_fresh_fresh split: if_splits)[1] apply(perm_simp)+ (* or1 case *) apply(rule 
disjI2) apply(rule disjI2) apply(rule disjI2) apply(rule disjI2) apply(rule disjI2) apply(rule disjI1) apply(simp add: trm.inject) apply(erule conjE)+ apply(generate_fresh "coname") apply(simp add: abs_fresh fresh_prod fresh_atm) apply(auto)[1] apply(drule_tac c="c" in alpha_coname) apply(simp add: fresh_prod fresh_atm abs_fresh) apply(simp) apply(rule exI)+ apply(rule conjI) apply(rule_tac s="a" and t="[(a,c)]\[(b,c)]\b" in subst) apply(simp add: calc_atm) apply(rule refl) apply(generate_fresh "name") apply(simp add: abs_fresh fresh_prod fresh_atm) apply(auto)[1] apply(drule_tac z="ca" in alpha_name) apply(simp add: fresh_prod fresh_atm abs_fresh) apply(simp) apply(rule exI)+ apply(rule conjI) apply(rule exI)+ apply(rule_tac s="x" and t="[(x,ca)]\[(y,ca)]\y" in subst) apply(simp add: calc_atm) apply(rule refl) apply(auto simp add: fresh_left calc_atm abs_fresh alpha perm_fresh_fresh split: if_splits)[1] apply(perm_simp)+ apply(generate_fresh "name") apply(simp add: abs_fresh fresh_prod fresh_atm) apply(auto)[1] apply(drule_tac z="cb" in alpha_name) apply(simp add: fresh_prod fresh_atm abs_fresh) apply(simp) apply(rule exI)+ apply(rule conjI) apply(rule exI)+ apply(rule_tac s="x" and t="[(x,cb)]\[(y,cb)]\y" in subst) apply(simp add: calc_atm) apply(rule refl) apply(auto simp add: fresh_left calc_atm abs_fresh alpha perm_fresh_fresh split: if_splits)[1] apply(perm_simp)+ (* or2 case *) apply(rule disjI2) apply(rule disjI2) apply(rule disjI2) apply(rule disjI2) apply(rule disjI2) apply(rule disjI2) apply(rule disjI1) apply(simp add: trm.inject) apply(erule conjE)+ apply(generate_fresh "coname") apply(simp add: abs_fresh fresh_prod fresh_atm) apply(auto)[1] apply(drule_tac c="c" in alpha_coname) apply(simp add: fresh_prod fresh_atm abs_fresh) apply(simp) apply(rule exI)+ apply(rule conjI) apply(rule_tac s="a" and t="[(a,c)]\[(b,c)]\b" in subst) apply(simp add: calc_atm) apply(rule refl) apply(generate_fresh "name") apply(simp add: abs_fresh fresh_prod fresh_atm) 
apply(auto)[1] apply(drule_tac z="ca" in alpha_name) apply(simp add: fresh_prod fresh_atm abs_fresh) apply(simp) apply(rule exI)+ apply(rule conjI) apply(rule_tac s="x" and t="[(x,ca)]\[(y,ca)]\y" in subst) apply(simp add: calc_atm) apply(rule refl) apply(auto simp add: fresh_left calc_atm abs_fresh alpha perm_fresh_fresh split: if_splits)[1] apply(perm_simp)+ apply(generate_fresh "name") apply(simp add: abs_fresh fresh_prod fresh_atm) apply(auto)[1] apply(drule_tac z="cb" in alpha_name) apply(simp add: fresh_prod fresh_atm abs_fresh) apply(simp) apply(rule exI)+ apply(rule conjI) apply(rule_tac s="x" and t="[(x,cb)]\[(y,cb)]\y" in subst) apply(simp add: calc_atm) apply(rule refl) apply(auto simp add: fresh_left calc_atm abs_fresh alpha perm_fresh_fresh split: if_splits)[1] apply(perm_simp)+ (* imp-case *) apply(rule disjI2) apply(rule disjI2) apply(rule disjI2) apply(rule disjI2) apply(rule disjI2) apply(rule disjI2) apply(rule disjI2) apply(simp add: trm.inject) apply(erule conjE)+ apply(generate_fresh "coname") apply(simp add: abs_fresh fresh_prod fresh_atm) apply(auto)[1] apply(drule_tac c="ca" in alpha_coname) apply(simp add: fresh_prod fresh_atm abs_fresh) apply(simp) apply(rule exI)+ apply(rule conjI) apply(rule_tac s="a" and t="[(a,ca)]\[(b,ca)]\b" in subst) apply(simp add: calc_atm) apply(rule refl) apply(generate_fresh "name") apply(simp add: abs_fresh fresh_prod fresh_atm) apply(auto)[1] apply(drule_tac z="cb" in alpha_name) apply(simp add: fresh_prod fresh_atm abs_fresh) apply(simp) apply(rule exI)+ apply(rule conjI) apply(rule_tac s="x" and t="[(x,cb)]\[(z,cb)]\z" in subst) apply(simp add: calc_atm) apply(rule refl) apply(auto simp add: fresh_left calc_atm abs_fresh alpha perm_fresh_fresh split: if_splits)[1] apply(perm_simp)+ apply(generate_fresh "name") apply(simp add: abs_fresh fresh_prod fresh_atm) apply(auto)[1] apply(drule_tac z="cc" in alpha_name) apply(simp add: fresh_prod fresh_atm abs_fresh) apply(simp) apply(rule exI)+ apply(rule conjI) 
apply(rule_tac s="x" and t="[(x,cc)]\<bullet>[(z,cc)]\<bullet>z" in subst)
apply(simp add: calc_atm)
apply(rule refl)
apply(auto simp add: fresh_left calc_atm abs_fresh alpha perm_fresh_fresh split: if_splits)[1]
apply(perm_simp)+
done

inductive
  c_redu :: "trm \<Rightarrow> trm \<Rightarrow> bool" ("_ \<longrightarrow>\<^sub>c _" [100,100] 100)
where
  left[intro]:  "\<lbrakk>\<not>fic M a; a\<sharp>N; x\<sharp>M\<rbrakk> \<Longrightarrow> Cut <a>.M (x).N \<longrightarrow>\<^sub>c M{a:=(x).N}"
| right[intro]: "\<lbrakk>\<not>fin N x; a\<sharp>N; x\<sharp>M\<rbrakk> \<Longrightarrow> Cut <a>.M (x).N \<longrightarrow>\<^sub>c N{x:=<a>.M}"

equivariance c_redu

nominal_inductive c_redu
  by (simp_all add: abs_fresh subst_fresh)

lemma better_left[intro]:
  shows "\<not>fic M a \<Longrightarrow> Cut <a>.M (x).N \<longrightarrow>\<^sub>c M{a:=(x).N}"
proof -
  assume not_fic: "\<not>fic M a"
  obtain x'::"name" where fs1: "x'\<sharp>(N,M,x)" by (rule exists_fresh(1), rule fin_supp, blast)
  obtain a'::"coname" where fs2: "a'\<sharp>(a,M,N)" by (rule exists_fresh(2), rule fin_supp, blast)
  have "Cut <a>.M (x).N = Cut <a'>.([(a',a)]\<bullet>M) (x').([(x',x)]\<bullet>N)"
    using fs1 fs2 by (rule_tac sym, auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm)
  also have "\<dots> \<longrightarrow>\<^sub>c ([(a',a)]\<bullet>M){a':=(x').([(x',x)]\<bullet>N)}"
    using fs1 fs2 not_fic
    apply -
    apply(rule left)
    apply(clarify)
    apply(drule_tac a'="a" in fic_rename)
    apply(simp add: perm_swap)
    apply(simp add: fresh_left calc_atm)+
    done
  also have "\<dots> = M{a:=(x).N}"
    using fs1 fs2 by (simp add: subst_rename[symmetric] fresh_atm fresh_prod fresh_left calc_atm)
  finally show ?thesis by simp
qed

lemma better_right[intro]:
  shows "\<not>fin N x \<Longrightarrow> Cut <a>.M (x).N \<longrightarrow>\<^sub>c N{x:=<a>.M}"
proof -
  assume not_fin: "\<not>fin N x"
  obtain x'::"name" where fs1: "x'\<sharp>(N,M,x)" by (rule exists_fresh(1), rule fin_supp, blast)
  obtain a'::"coname" where fs2: "a'\<sharp>(a,M,N)" by (rule exists_fresh(2), rule fin_supp, blast)
  have "Cut <a>.M (x).N = Cut <a'>.([(a',a)]\<bullet>M) (x').([(x',x)]\<bullet>N)"
    using fs1 fs2 by (rule_tac sym, auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm)
  also have "\<dots> \<longrightarrow>\<^sub>c ([(x',x)]\<bullet>N){x':=<a'>.([(a',a)]\<bullet>M)}"
    using fs1 fs2 not_fin
    apply -
    apply(rule right)
    apply(clarify)
    apply(drule_tac x'="x" in fin_rename)
    apply(simp add: perm_swap)
    apply(simp add: fresh_left calc_atm)+
    done
  also have "\<dots> = N{x:=<a>.M}"
    using fs1 fs2 by (simp add: subst_rename[symmetric] fresh_atm fresh_prod fresh_left calc_atm)
  finally show ?thesis by simp
qed

lemma fresh_c_redu:
  fixes x::"name"
  and   c::"coname"
  shows "M \<longrightarrow>\<^sub>c M' \<Longrightarrow> x\<sharp>M \<Longrightarrow> x\<sharp>M'"
  and   "M \<longrightarrow>\<^sub>c M' \<Longrightarrow> c\<sharp>M \<Longrightarrow> c\<sharp>M'"
apply -
apply(induct rule: c_redu.induct)
apply(auto simp add: abs_fresh rename_fresh subst_fresh)
apply(induct rule: c_redu.induct)
apply(auto simp add: abs_fresh rename_fresh subst_fresh)
done

inductive
  a_redu :: "trm \<Rightarrow> trm \<Rightarrow> bool" ("_ \<longrightarrow>\<^sub>a _" [100,100] 100)
where
  al_redu[intro]: "M\<longrightarrow>\<^sub>l M' \<Longrightarrow> M \<longrightarrow>\<^sub>a M'"
| ac_redu[intro]: "M\<longrightarrow>\<^sub>c M' \<Longrightarrow> M \<longrightarrow>\<^sub>a M'"
| a_Cut_l: "\<lbrakk>a\<sharp>N; x\<sharp>M; M\<longrightarrow>\<^sub>a M'\<rbrakk> \<Longrightarrow> Cut <a>.M (x).N \<longrightarrow>\<^sub>a Cut <a>.M' (x).N"
| a_Cut_r: "\<lbrakk>a\<sharp>N; x\<sharp>M; N\<longrightarrow>\<^sub>a N'\<rbrakk> \<Longrightarrow> Cut <a>.M (x).N \<longrightarrow>\<^sub>a Cut <a>.M (x).N'"
| a_NotL[intro]: "M\<longrightarrow>\<^sub>a M' \<Longrightarrow> NotL <a>.M x \<longrightarrow>\<^sub>a NotL <a>.M' x"
| a_NotR[intro]: "M\<longrightarrow>\<^sub>a M' \<Longrightarrow> NotR (x).M a \<longrightarrow>\<^sub>a NotR (x).M' a"
| a_AndR_l: "\<lbrakk>a\<sharp>(N,c); b\<sharp>(M,c); b\<noteq>a; M\<longrightarrow>\<^sub>a M'\<rbrakk> \<Longrightarrow> AndR <a>.M <b>.N c \<longrightarrow>\<^sub>a AndR <a>.M' <b>.N c"
| a_AndR_r: "\<lbrakk>a\<sharp>(N,c); b\<sharp>(M,c); b\<noteq>a; N\<longrightarrow>\<^sub>a N'\<rbrakk> \<Longrightarrow> AndR <a>.M <b>.N c \<longrightarrow>\<^sub>a AndR <a>.M <b>.N' c"
| a_AndL1: "\<lbrakk>x\<noteq>y; M\<longrightarrow>\<^sub>a M'\<rbrakk> \<Longrightarrow> AndL1 (x).M y \<longrightarrow>\<^sub>a AndL1 (x).M' y"
| a_AndL2: "\<lbrakk>x\<noteq>y; M\<longrightarrow>\<^sub>a M'\<rbrakk> \<Longrightarrow> AndL2 (x).M y \<longrightarrow>\<^sub>a AndL2 (x).M' y"
| a_OrL_l: "\<lbrakk>x\<sharp>(N,z); y\<sharp>(M,z); y\<noteq>x; M\<longrightarrow>\<^sub>a M'\<rbrakk> \<Longrightarrow> OrL (x).M (y).N z \<longrightarrow>\<^sub>a OrL (x).M' (y).N z"
| a_OrL_r: "\<lbrakk>x\<sharp>(N,z); y\<sharp>(M,z); y\<noteq>x; N\<longrightarrow>\<^sub>a N'\<rbrakk> \<Longrightarrow> OrL (x).M (y).N z \<longrightarrow>\<^sub>a OrL (x).M (y).N' z"
| a_OrR1: "\<lbrakk>a\<noteq>b; M\<longrightarrow>\<^sub>a M'\<rbrakk> \<Longrightarrow> OrR1 <a>.M b \<longrightarrow>\<^sub>a OrR1 <a>.M' b"
| a_OrR2: "\<lbrakk>a\<noteq>b; M\<longrightarrow>\<^sub>a M'\<rbrakk> \<Longrightarrow> OrR2 <a>.M b \<longrightarrow>\<^sub>a OrR2 <a>.M' b"
| a_ImpL_l: "\<lbrakk>a\<sharp>N; x\<sharp>(M,y); M\<longrightarrow>\<^sub>a M'\<rbrakk> \<Longrightarrow> ImpL <a>.M (x).N y \<longrightarrow>\<^sub>a ImpL <a>.M' (x).N y"
| a_ImpL_r: "\<lbrakk>a\<sharp>N; x\<sharp>(M,y); N\<longrightarrow>\<^sub>a N'\<rbrakk> \<Longrightarrow> ImpL <a>.M (x).N y \<longrightarrow>\<^sub>a ImpL <a>.M (x).N' y"
| a_ImpR: "\<lbrakk>a\<noteq>b; M\<longrightarrow>\<^sub>a M'\<rbrakk> \<Longrightarrow> ImpR (x).<a>.M b \<longrightarrow>\<^sub>a ImpR (x).<a>.M' b"

lemma fresh_a_redu:
  fixes x::"name"
  and   c::"coname"
  shows "M \<longrightarrow>\<^sub>a M' \<Longrightarrow> x\<sharp>M \<Longrightarrow> x\<sharp>M'"
  and   "M
        \<longrightarrow>\<^sub>a M' \<Longrightarrow> c\<sharp>M \<Longrightarrow> c\<sharp>M'"
apply -
apply(induct rule: a_redu.induct)
apply(simp add: fresh_l_redu)
apply(simp add: fresh_c_redu)
apply(auto simp add: abs_fresh abs_supp fin_supp)
apply(induct rule: a_redu.induct)
apply(simp add: fresh_l_redu)
apply(simp add: fresh_c_redu)
apply(auto simp add: abs_fresh abs_supp fin_supp)
done

equivariance a_redu

nominal_inductive a_redu
  by (simp_all add: abs_fresh fresh_atm fresh_prod abs_supp fin_supp fresh_a_redu)

lemma better_CutL_intro[intro]:
  shows "M\<longrightarrow>\<^sub>a M' \<Longrightarrow> Cut <a>.M (x).N \<longrightarrow>\<^sub>a Cut <a>.M' (x).N"
proof -
  assume red: "M\<longrightarrow>\<^sub>a M'"
  obtain x'::"name" where fs1: "x'\<sharp>(M,N,x)" by (rule exists_fresh(1), rule fin_supp, blast)
  obtain a'::"coname" where fs2: "a'\<sharp>(M,N,a)" by (rule exists_fresh(2), rule fin_supp, blast)
  have "Cut <a>.M (x).N = Cut <a'>.([(a',a)]\<bullet>M) (x').([(x',x)]\<bullet>N)"
    using fs1 fs2 by (rule_tac sym, auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm)
  also have "\<dots> \<longrightarrow>\<^sub>a Cut <a'>.([(a',a)]\<bullet>M') (x').([(x',x)]\<bullet>N)"
    using fs1 fs2 red by (auto intro: a_redu.intros simp add: fresh_left calc_atm a_redu.eqvt)
  also have "\<dots> = Cut <a>.M' (x).N"
    using fs1 fs2 red by (auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm fresh_a_redu)
  finally show ?thesis by simp
qed

lemma better_CutR_intro[intro]:
  shows "N\<longrightarrow>\<^sub>a N' \<Longrightarrow> Cut <a>.M (x).N \<longrightarrow>\<^sub>a Cut <a>.M (x).N'"
proof -
  assume red: "N\<longrightarrow>\<^sub>a N'"
  obtain x'::"name" where fs1: "x'\<sharp>(M,N,x)" by (rule exists_fresh(1), rule fin_supp, blast)
  obtain a'::"coname" where fs2: "a'\<sharp>(M,N,a)" by (rule exists_fresh(2), rule fin_supp, blast)
  have "Cut <a>.M (x).N = Cut <a'>.([(a',a)]\<bullet>M) (x').([(x',x)]\<bullet>N)"
    using fs1 fs2 by (rule_tac sym, auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm)
  also have "\<dots> \<longrightarrow>\<^sub>a Cut <a'>.([(a',a)]\<bullet>M) (x').([(x',x)]\<bullet>N')"
    using fs1 fs2 red by (auto intro: a_redu.intros simp add: fresh_left calc_atm a_redu.eqvt)
  also have "\<dots> = Cut <a>.M (x).N'"
    using fs1 fs2 red by (auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm fresh_a_redu)
  finally show ?thesis by
    simp
qed

lemma better_AndRL_intro[intro]:
  shows "M\<longrightarrow>\<^sub>a M' \<Longrightarrow> AndR <a>.M <b>.N c \<longrightarrow>\<^sub>a AndR <a>.M' <b>.N c"
proof -
  assume red: "M\<longrightarrow>\<^sub>a M'"
  obtain b'::"coname" where fs1: "b'\<sharp>(M,N,a,b,c)" by (rule exists_fresh(2), rule fin_supp, blast)
  obtain a'::"coname" where fs2: "a'\<sharp>(M,N,a,b,c,b')" by (rule exists_fresh(2), rule fin_supp, blast)
  have "AndR <a>.M <b>.N c = AndR <a'>.([(a',a)]\<bullet>M) <b'>.([(b',b)]\<bullet>N) c"
    using fs1 fs2 by (rule_tac sym, auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm)
  also have "\<dots> \<longrightarrow>\<^sub>a AndR <a'>.([(a',a)]\<bullet>M') <b'>.([(b',b)]\<bullet>N) c"
    using fs1 fs2 red by (auto intro: a_redu.intros simp add: fresh_left calc_atm a_redu.eqvt fresh_atm fresh_prod)
  also have "\<dots> = AndR <a>.M' <b>.N c"
    using fs1 fs2 red by (auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm fresh_a_redu)
  finally show ?thesis by simp
qed

lemma better_AndRR_intro[intro]:
  shows "N\<longrightarrow>\<^sub>a N' \<Longrightarrow> AndR <a>.M <b>.N c \<longrightarrow>\<^sub>a AndR <a>.M <b>.N' c"
proof -
  assume red: "N\<longrightarrow>\<^sub>a N'"
  obtain b'::"coname" where fs1: "b'\<sharp>(M,N,a,b,c)" by (rule exists_fresh(2), rule fin_supp, blast)
  obtain a'::"coname" where fs2: "a'\<sharp>(M,N,a,b,c,b')" by (rule exists_fresh(2), rule fin_supp, blast)
  have "AndR <a>.M <b>.N c = AndR <a'>.([(a',a)]\<bullet>M) <b'>.([(b',b)]\<bullet>N) c"
    using fs1 fs2 by (rule_tac sym, auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm)
  also have "\<dots> \<longrightarrow>\<^sub>a AndR <a'>.([(a',a)]\<bullet>M) <b'>.([(b',b)]\<bullet>N') c"
    using fs1 fs2 red by (auto intro: a_redu.intros simp add: fresh_left calc_atm a_redu.eqvt fresh_atm fresh_prod)
  also have "\<dots> = AndR <a>.M <b>.N' c"
    using fs1 fs2 red by (auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm fresh_a_redu)
  finally show ?thesis by simp
qed

lemma better_AndL1_intro[intro]:
  shows "M\<longrightarrow>\<^sub>a M' \<Longrightarrow> AndL1 (x).M y \<longrightarrow>\<^sub>a AndL1 (x).M' y"
proof -
  assume red: "M\<longrightarrow>\<^sub>a M'"
  obtain x'::"name" where fs1: "x'\<sharp>(M,y,x)" by (rule exists_fresh(1), rule fin_supp, blast)
  have "AndL1 (x).M y = AndL1 (x').([(x',x)]\<bullet>M) y"
    using fs1 by (rule_tac sym, auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm)
  also
  have "\<dots> \<longrightarrow>\<^sub>a AndL1 (x').([(x',x)]\<bullet>M') y"
    using fs1 red by (auto intro: a_redu.intros simp add: fresh_left calc_atm a_redu.eqvt fresh_atm fresh_prod)
  also have "\<dots> = AndL1 (x).M' y"
    using fs1 red by (auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm fresh_a_redu)
  finally show ?thesis by simp
qed

lemma better_AndL2_intro[intro]:
  shows "M\<longrightarrow>\<^sub>a M' \<Longrightarrow> AndL2 (x).M y \<longrightarrow>\<^sub>a AndL2 (x).M' y"
proof -
  assume red: "M\<longrightarrow>\<^sub>a M'"
  obtain x'::"name" where fs1: "x'\<sharp>(M,y,x)" by (rule exists_fresh(1), rule fin_supp, blast)
  have "AndL2 (x).M y = AndL2 (x').([(x',x)]\<bullet>M) y"
    using fs1 by (rule_tac sym, auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm)
  also have "\<dots> \<longrightarrow>\<^sub>a AndL2 (x').([(x',x)]\<bullet>M') y"
    using fs1 red by (auto intro: a_redu.intros simp add: fresh_left calc_atm a_redu.eqvt fresh_atm fresh_prod)
  also have "\<dots> = AndL2 (x).M' y"
    using fs1 red by (auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm fresh_a_redu)
  finally show ?thesis by simp
qed

lemma better_OrLL_intro[intro]:
  shows "M\<longrightarrow>\<^sub>a M' \<Longrightarrow> OrL (x).M (y).N z \<longrightarrow>\<^sub>a OrL (x).M' (y).N z"
proof -
  assume red: "M\<longrightarrow>\<^sub>a M'"
  obtain x'::"name" where fs1: "x'\<sharp>(M,N,x,y,z)" by (rule exists_fresh(1), rule fin_supp, blast)
  obtain y'::"name" where fs2: "y'\<sharp>(M,N,x,y,z,x')" by (rule exists_fresh(1), rule fin_supp, blast)
  have "OrL (x).M (y).N z = OrL (x').([(x',x)]\<bullet>M) (y').([(y',y)]\<bullet>N) z"
    using fs1 fs2 by (rule_tac sym, auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm)
  also have "\<dots> \<longrightarrow>\<^sub>a OrL (x').([(x',x)]\<bullet>M') (y').([(y',y)]\<bullet>N) z"
    using fs1 fs2 red by (auto intro: a_redu.intros simp add: fresh_left calc_atm a_redu.eqvt fresh_atm fresh_prod)
  also have "\<dots> = OrL (x).M' (y).N z"
    using fs1 fs2 red by (auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm fresh_a_redu)
  finally show ?thesis by simp
qed

lemma better_OrLR_intro[intro]:
  shows "N\<longrightarrow>\<^sub>a N' \<Longrightarrow> OrL (x).M (y).N z \<longrightarrow>\<^sub>a OrL (x).M (y).N' z"
proof -
  assume red: "N\<longrightarrow>\<^sub>a N'"
  obtain x'::"name" where fs1:
    "x'\<sharp>(M,N,x,y,z)" by (rule exists_fresh(1), rule fin_supp, blast)
  obtain y'::"name" where fs2: "y'\<sharp>(M,N,x,y,z,x')" by (rule exists_fresh(1), rule fin_supp, blast)
  have "OrL (x).M (y).N z = OrL (x').([(x',x)]\<bullet>M) (y').([(y',y)]\<bullet>N) z"
    using fs1 fs2 by (rule_tac sym, auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm)
  also have "\<dots> \<longrightarrow>\<^sub>a OrL (x').([(x',x)]\<bullet>M) (y').([(y',y)]\<bullet>N') z"
    using fs1 fs2 red by (auto intro: a_redu.intros simp add: fresh_left calc_atm a_redu.eqvt fresh_atm fresh_prod)
  also have "\<dots> = OrL (x).M (y).N' z"
    using fs1 fs2 red by (auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm fresh_a_redu)
  finally show ?thesis by simp
qed

lemma better_OrR1_intro[intro]:
  shows "M\<longrightarrow>\<^sub>a M' \<Longrightarrow> OrR1 <a>.M b \<longrightarrow>\<^sub>a OrR1 <a>.M' b"
proof -
  assume red: "M\<longrightarrow>\<^sub>a M'"
  obtain a'::"coname" where fs1: "a'\<sharp>(M,b,a)" by (rule exists_fresh(2), rule fin_supp, blast)
  have "OrR1 <a>.M b = OrR1 <a'>.([(a',a)]\<bullet>M) b"
    using fs1 by (rule_tac sym, auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm)
  also have "\<dots> \<longrightarrow>\<^sub>a OrR1 <a'>.([(a',a)]\<bullet>M') b"
    using fs1 red by (auto intro: a_redu.intros simp add: fresh_left calc_atm a_redu.eqvt fresh_atm fresh_prod)
  also have "\<dots> = OrR1 <a>.M' b"
    using fs1 red by (auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm fresh_a_redu)
  finally show ?thesis by simp
qed

lemma better_OrR2_intro[intro]:
  shows "M\<longrightarrow>\<^sub>a M' \<Longrightarrow> OrR2 <a>.M b \<longrightarrow>\<^sub>a OrR2 <a>.M' b"
proof -
  assume red: "M\<longrightarrow>\<^sub>a M'"
  obtain a'::"coname" where fs1: "a'\<sharp>(M,b,a)" by (rule exists_fresh(2), rule fin_supp, blast)
  have "OrR2 <a>.M b = OrR2 <a'>.([(a',a)]\<bullet>M) b"
    using fs1 by (rule_tac sym, auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm)
  also have "\<dots> \<longrightarrow>\<^sub>a OrR2 <a'>.([(a',a)]\<bullet>M') b"
    using fs1 red by (auto intro: a_redu.intros simp add: fresh_left calc_atm a_redu.eqvt fresh_atm fresh_prod)
  also have "\<dots> = OrR2 <a>.M' b"
    using fs1 red by (auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm fresh_a_redu)
  finally show ?thesis by simp
qed

lemma
  better_ImpLL_intro[intro]:
  shows "M\<longrightarrow>\<^sub>a M' \<Longrightarrow> ImpL <a>.M (x).N y \<longrightarrow>\<^sub>a ImpL <a>.M' (x).N y"
proof -
  assume red: "M\<longrightarrow>\<^sub>a M'"
  obtain x'::"name" where fs1: "x'\<sharp>(M,N,x,y)" by (rule exists_fresh(1), rule fin_supp, blast)
  obtain a'::"coname" where fs2: "a'\<sharp>(M,N,a)" by (rule exists_fresh(2), rule fin_supp, blast)
  have "ImpL <a>.M (x).N y = ImpL <a'>.([(a',a)]\<bullet>M) (x').([(x',x)]\<bullet>N) y"
    using fs1 fs2 by (rule_tac sym, auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm)
  also have "\<dots> \<longrightarrow>\<^sub>a ImpL <a'>.([(a',a)]\<bullet>M') (x').([(x',x)]\<bullet>N) y"
    using fs1 fs2 red by (auto intro: a_redu.intros simp add: fresh_left calc_atm a_redu.eqvt fresh_atm fresh_prod)
  also have "\<dots> = ImpL <a>.M' (x).N y"
    using fs1 fs2 red by (auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm fresh_a_redu)
  finally show ?thesis by simp
qed

lemma better_ImpLR_intro[intro]:
  shows "N\<longrightarrow>\<^sub>a N' \<Longrightarrow> ImpL <a>.M (x).N y \<longrightarrow>\<^sub>a ImpL <a>.M (x).N' y"
proof -
  assume red: "N\<longrightarrow>\<^sub>a N'"
  obtain x'::"name" where fs1: "x'\<sharp>(M,N,x,y)" by (rule exists_fresh(1), rule fin_supp, blast)
  obtain a'::"coname" where fs2: "a'\<sharp>(M,N,a)" by (rule exists_fresh(2), rule fin_supp, blast)
  have "ImpL <a>.M (x).N y = ImpL <a'>.([(a',a)]\<bullet>M) (x').([(x',x)]\<bullet>N) y"
    using fs1 fs2 by (rule_tac sym, auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm)
  also have "\<dots> \<longrightarrow>\<^sub>a ImpL <a'>.([(a',a)]\<bullet>M) (x').([(x',x)]\<bullet>N') y"
    using fs1 fs2 red by (auto intro: a_redu.intros simp add: fresh_left calc_atm a_redu.eqvt fresh_atm fresh_prod)
  also have "\<dots> = ImpL <a>.M (x).N' y"
    using fs1 fs2 red by (auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm fresh_a_redu)
  finally show ?thesis by simp
qed

lemma better_ImpR_intro[intro]:
  shows "M\<longrightarrow>\<^sub>a M' \<Longrightarrow> ImpR (x).<a>.M b \<longrightarrow>\<^sub>a ImpR (x).<a>.M' b"
proof -
  assume red: "M\<longrightarrow>\<^sub>a M'"
  obtain a'::"coname" where fs2: "a'\<sharp>(M,a,b)" by (rule exists_fresh(2), rule fin_supp, blast)
  have "ImpR (x).<a>.M b = ImpR (x).<a'>.([(a',a)]\<bullet>M) b"
    using fs2 by (rule_tac sym, auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm)
  also
  have "\<dots> \<longrightarrow>\<^sub>a ImpR (x).<a'>.([(a',a)]\<bullet>M') b"
    using fs2 red by (auto intro: a_redu.intros simp add: fresh_left calc_atm a_redu.eqvt fresh_atm fresh_prod)
  also have "\<dots> = ImpR (x).<a>.M' b"
    using fs2 red by (auto simp add: trm.inject alpha fresh_atm fresh_prod calc_atm fresh_a_redu)
  finally show ?thesis by simp
qed

text \<open>axioms do not reduce\<close>

lemma ax_do_not_l_reduce:
  shows "Ax x a \<longrightarrow>\<^sub>l M \<Longrightarrow> False"
  by (erule_tac l_redu.cases) (simp_all add: trm.inject)

lemma ax_do_not_c_reduce:
  shows "Ax x a \<longrightarrow>\<^sub>c M \<Longrightarrow> False"
  by (erule_tac c_redu.cases) (simp_all add: trm.inject)

lemma ax_do_not_a_reduce:
  shows "Ax x a \<longrightarrow>\<^sub>a M \<Longrightarrow> False"
apply(erule_tac a_redu.cases)
apply(auto simp add: trm.inject)
apply(drule ax_do_not_l_reduce)
apply(simp)
apply(drule ax_do_not_c_reduce)
apply(simp)
done

lemma a_redu_NotL_elim:
  assumes a: "NotL <a>.M x \<longrightarrow>\<^sub>a R"
  shows "\<exists>M'. R = NotL <a>.M' x \<and> M\<longrightarrow>\<^sub>aM'"
using a [[simproc del: defined_all]]
apply(erule_tac a_redu.cases, simp_all add: trm.inject)
apply(erule_tac l_redu.cases, simp_all add: trm.inject)
apply(erule_tac c_redu.cases, simp_all add: trm.inject)
apply(auto)
apply(rotate_tac 2)
apply(erule_tac a_redu.cases, simp_all add: trm.inject)
apply(erule_tac l_redu.cases, simp_all add: trm.inject)
apply(erule_tac c_redu.cases, simp_all add: trm.inject)
apply(auto simp add: alpha a_redu.eqvt)
apply(rule_tac x="([(a,aa)]\<bullet>M'a)" in exI)
apply(auto simp add: fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)
apply(simp add: perm_swap)
apply(rule_tac x="([(a,aaa)]\<bullet>M'a)" in exI)
apply(auto simp add: fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)
apply(simp add: perm_swap)
done

lemma a_redu_NotR_elim:
  assumes a: "NotR (x).M a \<longrightarrow>\<^sub>a R"
  shows "\<exists>M'.
R = NotR (x).M' a \ M\\<^sub>aM'" using a [[simproc del: defined_all]] apply(erule_tac a_redu.cases, simp_all add: trm.inject) apply(erule_tac l_redu.cases, simp_all add: trm.inject) apply(erule_tac c_redu.cases, simp_all add: trm.inject) apply(auto) apply(rotate_tac 2) apply(erule_tac a_redu.cases, simp_all add: trm.inject) apply(erule_tac l_redu.cases, simp_all add: trm.inject) apply(erule_tac c_redu.cases, simp_all add: trm.inject) apply(auto simp add: alpha a_redu.eqvt) apply(rule_tac x="([(x,xa)]\M'a)" in exI) apply(auto simp add: fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu) apply(simp add: perm_swap) apply(rule_tac x="([(x,xaa)]\M'a)" in exI) apply(auto simp add: fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu) apply(simp add: perm_swap) done lemma a_redu_AndR_elim: assumes a: "AndR .M .N c\\<^sub>a R" shows "(\M'. R = AndR .M' .N c \ M\\<^sub>aM') \ (\N'. R = AndR .M .N' c \ N\\<^sub>aN')" using a [[simproc del: defined_all]] apply(erule_tac a_redu.cases, simp_all add: trm.inject) apply(erule_tac l_redu.cases, simp_all add: trm.inject) apply(erule_tac c_redu.cases, simp_all add: trm.inject) apply(rotate_tac 6) apply(erule_tac a_redu.cases, simp_all add: trm.inject) apply(erule_tac l_redu.cases, simp_all add: trm.inject) apply(erule_tac c_redu.cases, simp_all add: trm.inject) apply(rule disjI1) apply(auto simp add: alpha a_redu.eqvt)[1] apply(rule_tac x="([(a,aa)]\M'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(a,aa)]\M'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(a,aa)]\M'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(a,aa)]\M'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(a,aaa)]\M'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm 
fresh_a_redu)[1] apply(rule_tac x="([(a,aaa)]\M'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(a,aaa)]\M'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(a,aaa)]\M'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule disjI2) apply(auto simp add: alpha a_redu.eqvt)[1] apply(rule_tac x="([(b,ba)]\N')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(b,baa)]\N')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(b,ba)]\N')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(b,baa)]\N')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(b,ba)]\N')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(b,baa)]\N')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(b,ba)]\N')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(b,baa)]\N')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rotate_tac 6) apply(erule_tac a_redu.cases, simp_all add: trm.inject) apply(erule_tac l_redu.cases, simp_all add: trm.inject) apply(erule_tac c_redu.cases, simp_all add: trm.inject) apply(rule disjI1) apply(auto simp add: alpha a_redu.eqvt)[1] apply(rule_tac x="([(a,aa)]\M')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(a,aa)]\M')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] 
apply(rule_tac x="([(a,aa)]\M')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(a,aa)]\M')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(a,aaa)]\M')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(a,aaa)]\M')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(a,aaa)]\M')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(a,aaa)]\M')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule disjI2) apply(auto simp add: alpha a_redu.eqvt)[1] apply(rule_tac x="([(b,ba)]\N'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(b,ba)]\N'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(b,ba)]\N'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(b,ba)]\N'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(b,baa)]\N'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(b,baa)]\N'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(b,baa)]\N'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(b,baa)]\N'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] done lemma a_redu_AndL1_elim: assumes a: "AndL1 (x).M y \\<^sub>a R" shows "\M'. 
R = AndL1 (x).M' y \ M\\<^sub>aM'" using a [[simproc del: defined_all]] apply(erule_tac a_redu.cases, simp_all add: trm.inject) apply(erule_tac l_redu.cases, simp_all add: trm.inject) apply(erule_tac c_redu.cases, simp_all add: trm.inject) apply(auto) apply(rotate_tac 3) apply(erule_tac a_redu.cases, simp_all add: trm.inject) apply(erule_tac l_redu.cases, simp_all add: trm.inject) apply(erule_tac c_redu.cases, simp_all add: trm.inject) apply(auto simp add: alpha a_redu.eqvt) apply(rule_tac x="([(x,xa)]\M'a)" in exI) apply(auto simp add: fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu) apply(simp add: perm_swap) apply(rule_tac x="([(x,xaa)]\M'a)" in exI) apply(auto simp add: fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu) apply(simp add: perm_swap) done lemma a_redu_AndL2_elim: assumes a: "AndL2 (x).M y \\<^sub>a R" shows "\M'. R = AndL2 (x).M' y \ M\\<^sub>aM'" using a [[simproc del: defined_all]] apply(erule_tac a_redu.cases, simp_all add: trm.inject) apply(erule_tac l_redu.cases, simp_all add: trm.inject) apply(erule_tac c_redu.cases, simp_all add: trm.inject) apply(auto) apply(rotate_tac 3) apply(erule_tac a_redu.cases, simp_all add: trm.inject) apply(erule_tac l_redu.cases, simp_all add: trm.inject) apply(erule_tac c_redu.cases, simp_all add: trm.inject) apply(auto simp add: alpha a_redu.eqvt) apply(rule_tac x="([(x,xa)]\M'a)" in exI) apply(auto simp add: fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu) apply(simp add: perm_swap) apply(rule_tac x="([(x,xaa)]\M'a)" in exI) apply(auto simp add: fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu) apply(simp add: perm_swap) done lemma a_redu_OrL_elim: assumes a: "OrL (x).M (y).N z\\<^sub>a R" shows "(\M'. R = OrL (x).M' (y).N z \ M\\<^sub>aM') \ (\N'. 
R = OrL (x).M (y).N' z \ N\\<^sub>aN')" using a [[simproc del: defined_all]] apply(erule_tac a_redu.cases, simp_all add: trm.inject) apply(erule_tac l_redu.cases, simp_all add: trm.inject) apply(erule_tac c_redu.cases, simp_all add: trm.inject) apply(rotate_tac 6) apply(erule_tac a_redu.cases, simp_all add: trm.inject) apply(erule_tac l_redu.cases, simp_all add: trm.inject) apply(erule_tac c_redu.cases, simp_all add: trm.inject) apply(rule disjI1) apply(auto simp add: alpha a_redu.eqvt)[1] apply(rule_tac x="([(x,xa)]\M'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(x,xa)]\M'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(x,xa)]\M'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(x,xa)]\M'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(x,xaa)]\M'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(x,xaa)]\M'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(x,xaa)]\M'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(x,xaa)]\M'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule disjI2) apply(auto simp add: alpha a_redu.eqvt)[1] apply(rule_tac x="([(y,ya)]\N')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,yaa)]\N')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,ya)]\N')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac 
x="([(y,yaa)]\N')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,ya)]\N')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,yaa)]\N')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,ya)]\N')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,yaa)]\N')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rotate_tac 6) apply(erule_tac a_redu.cases, simp_all add: trm.inject) apply(erule_tac l_redu.cases, simp_all add: trm.inject) apply(erule_tac c_redu.cases, simp_all add: trm.inject) apply(rule disjI1) apply(auto simp add: alpha a_redu.eqvt)[1] apply(rule_tac x="([(x,xa)]\M')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(x,xa)]\M')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(x,xa)]\M')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(x,xa)]\M')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(x,xaa)]\M')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(x,xaa)]\M')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(x,xaa)]\M')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(x,xaa)]\M')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule disjI2) apply(auto simp add: alpha a_redu.eqvt)[1] apply(rule_tac x="([(y,ya)]\N'a)" in exI) 
apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,ya)]\N'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,ya)]\N'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,ya)]\N'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,yaa)]\N'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,yaa)]\N'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,yaa)]\N'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,yaa)]\N'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] done lemma a_redu_OrR1_elim: assumes a: "OrR1 .M b \\<^sub>a R" shows "\M'. R = OrR1 .M' b \ M\\<^sub>aM'" using a [[simproc del: defined_all]] apply(erule_tac a_redu.cases, simp_all add: trm.inject) apply(erule_tac l_redu.cases, simp_all add: trm.inject) apply(erule_tac c_redu.cases, simp_all add: trm.inject) apply(auto) apply(rotate_tac 3) apply(erule_tac a_redu.cases, simp_all add: trm.inject) apply(erule_tac l_redu.cases, simp_all add: trm.inject) apply(erule_tac c_redu.cases, simp_all add: trm.inject) apply(auto simp add: alpha a_redu.eqvt) apply(rule_tac x="([(a,aa)]\M'a)" in exI) apply(auto simp add: fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu) apply(simp add: perm_swap) apply(rule_tac x="([(a,aaa)]\M'a)" in exI) apply(auto simp add: fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu) apply(simp add: perm_swap) done lemma a_redu_OrR2_elim: assumes a: "OrR2 .M b \\<^sub>a R" shows "\M'. 
R = OrR2 .M' b \ M\\<^sub>aM'" using a [[simproc del: defined_all]] apply(erule_tac a_redu.cases, simp_all add: trm.inject) apply(erule_tac l_redu.cases, simp_all add: trm.inject) apply(erule_tac c_redu.cases, simp_all add: trm.inject) apply(auto) apply(rotate_tac 3) apply(erule_tac a_redu.cases, simp_all add: trm.inject) apply(erule_tac l_redu.cases, simp_all add: trm.inject) apply(erule_tac c_redu.cases, simp_all add: trm.inject) apply(auto simp add: alpha a_redu.eqvt) apply(rule_tac x="([(a,aa)]\M'a)" in exI) apply(auto simp add: fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu) apply(simp add: perm_swap) apply(rule_tac x="([(a,aaa)]\M'a)" in exI) apply(auto simp add: fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu) apply(simp add: perm_swap) done lemma a_redu_ImpL_elim: assumes a: "ImpL .M (y).N z\\<^sub>a R" shows "(\M'. R = ImpL .M' (y).N z \ M\\<^sub>aM') \ (\N'. R = ImpL .M (y).N' z \ N\\<^sub>aN')" using a [[simproc del: defined_all]] apply(erule_tac a_redu.cases, simp_all add: trm.inject) apply(erule_tac l_redu.cases, simp_all add: trm.inject) apply(erule_tac c_redu.cases, simp_all add: trm.inject) apply(rotate_tac 5) apply(erule_tac a_redu.cases, simp_all add: trm.inject) apply(erule_tac l_redu.cases, simp_all add: trm.inject) apply(erule_tac c_redu.cases, simp_all add: trm.inject) apply(rule disjI1) apply(auto simp add: alpha a_redu.eqvt)[1] apply(rule_tac x="([(a,aa)]\M'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(a,aa)]\M'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(a,aa)]\M'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(a,aa)]\M'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(a,aaa)]\M'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt 
calc_atm fresh_a_redu)[1] apply(rule_tac x="([(a,aaa)]\M'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(a,aaa)]\M'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(a,aaa)]\M'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule disjI2) apply(auto simp add: alpha a_redu.eqvt)[1] apply(rule_tac x="([(y,xa)]\N')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,xa)]\N')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,xa)]\N')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,xa)]\N')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,xa)]\N')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,xa)]\N')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,xa)]\N')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,xa)]\N')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rotate_tac 5) apply(erule_tac a_redu.cases, simp_all add: trm.inject) apply(erule_tac l_redu.cases, simp_all add: trm.inject) apply(erule_tac c_redu.cases, simp_all add: trm.inject) apply(rule disjI1) apply(auto simp add: alpha a_redu.eqvt)[1] apply(rule_tac x="([(a,aa)]\M')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(a,aa)]\M')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] 
apply(rule_tac x="([(a,aa)]\M')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(a,aa)]\M')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(a,aaa)]\M')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(a,aaa)]\M')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(a,aaa)]\M')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(a,aaa)]\M')" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule disjI2) apply(auto simp add: alpha a_redu.eqvt)[1] apply(rule_tac x="([(y,xa)]\N'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,xa)]\N'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,xa)]\N'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,xa)]\N'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,xa)]\N'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,xa)]\N'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,xa)]\N'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] apply(rule_tac x="([(y,xa)]\N'a)" in exI) apply(auto simp add: perm_swap fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu)[1] done lemma a_redu_ImpR_elim: assumes a: "ImpR (x)..M b \\<^sub>a R" shows "\M'. 
R = ImpR (x)..M' b \ M\\<^sub>aM'" using a [[simproc del: defined_all]] apply(erule_tac a_redu.cases, simp_all add: trm.inject) apply(erule_tac l_redu.cases, simp_all add: trm.inject) apply(erule_tac c_redu.cases, simp_all add: trm.inject) apply(auto) apply(rotate_tac 3) apply(erule_tac a_redu.cases, simp_all add: trm.inject) apply(erule_tac l_redu.cases, simp_all add: trm.inject) apply(erule_tac c_redu.cases, simp_all add: trm.inject) apply(auto simp add: alpha a_redu.eqvt abs_perm abs_fresh) apply(rule_tac x="([(a,aa)]\M'a)" in exI) apply(auto simp add: fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu perm_swap) apply(rule_tac x="([(a,aaa)]\M'a)" in exI) apply(auto simp add: fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu perm_swap) apply(rule_tac x="([(a,aa)]\M'a)" in exI) apply(auto simp add: fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu perm_swap) apply(rule_tac x="([(a,aaa)]\M'a)" in exI) apply(auto simp add: fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu perm_swap) apply(rule_tac x="([(x,xa)]\M'a)" in exI) apply(auto simp add: fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu perm_swap) apply(rule_tac x="([(x,xa)]\M'a)" in exI) apply(auto simp add: fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu perm_swap) apply(rule_tac x="([(a,aa)]\[(x,xa)]\M'a)" in exI) apply(auto simp add: fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu perm_swap) apply(rule sym) apply(rule trans) apply(rule perm_compose) apply(simp add: calc_atm perm_swap) apply(rule_tac x="([(a,aaa)]\[(x,xa)]\M'a)" in exI) apply(auto simp add: fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu perm_swap) apply(rule sym) apply(rule trans) apply(rule perm_compose) apply(simp add: calc_atm perm_swap) apply(rule_tac x="([(x,xaa)]\M'a)" in exI) apply(auto simp add: fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu perm_swap) apply(rule_tac x="([(x,xaa)]\M'a)" in exI) apply(auto simp add: fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu perm_swap) apply(rule_tac 
x="([(a,aa)]\<bullet>[(x,xaa)]\<bullet>M'a)" in exI)
apply(auto simp add: fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu perm_swap)
apply(rule sym)
apply(rule trans)
apply(rule perm_compose)
apply(simp add: calc_atm perm_swap)
apply(rule_tac x="([(a,aaa)]\<bullet>[(x,xaa)]\<bullet>M'a)" in exI)
apply(auto simp add: fresh_left alpha a_redu.eqvt calc_atm fresh_a_redu perm_swap)
apply(rule sym)
apply(rule trans)
apply(rule perm_compose)
apply(simp add: calc_atm perm_swap)
done

text \<open>Transitive Closure\<close>

abbreviation
 a_star_redu :: "trm \<Rightarrow> trm \<Rightarrow> bool" ("_ \<longrightarrow>\<^sub>a* _" [100,100] 100)
where
  "M \<longrightarrow>\<^sub>a* M' \<equiv> (a_redu)\<^sup>*\<^sup>* M M'"

lemma a_starI:
  assumes a: "M \<longrightarrow>\<^sub>a M'"
  shows "M \<longrightarrow>\<^sub>a* M'"
using a by blast

lemma a_starE:
  assumes a: "M \<longrightarrow>\<^sub>a* M'"
  shows "M = M' \<or> (\<exists>N. M \<longrightarrow>\<^sub>a N \<and> N \<longrightarrow>\<^sub>a* M')"
using a
by (induct) (auto)

lemma a_star_refl:
  shows "M \<longrightarrow>\<^sub>a* M"
by blast

lemma a_star_trans[trans]:
  assumes a1: "M1 \<longrightarrow>\<^sub>a* M2"
  and     a2: "M2 \<longrightarrow>\<^sub>a* M3"
  shows "M1 \<longrightarrow>\<^sub>a* M3"
using a2 a1 by (induct) (auto)

text \<open>congruence rules for \<open>\<longrightarrow>\<^sub>a*\<close>\<close>

lemma ax_do_not_a_star_reduce:
  shows "Ax x a \<longrightarrow>\<^sub>a* M \<Longrightarrow> M = Ax x a"
apply(induct set: rtranclp)
apply(auto)
apply(drule ax_do_not_a_reduce)
apply(simp)
done

lemma a_star_CutL:
    "M \<longrightarrow>\<^sub>a* M' \<Longrightarrow> Cut <a>.M (x).N \<longrightarrow>\<^sub>a* Cut <a>.M' (x).N"
by (induct set: rtranclp) (blast intro: rtranclp.rtrancl_into_rtrancl)+

lemma a_star_CutR:
    "N \<longrightarrow>\<^sub>a* N' \<Longrightarrow> Cut <a>.M (x).N \<longrightarrow>\<^sub>a* Cut <a>.M (x).N'"
by (induct set: rtranclp) (blast intro: rtranclp.rtrancl_into_rtrancl)+

lemma a_star_Cut:
    "\<lbrakk>M \<longrightarrow>\<^sub>a* M'; N \<longrightarrow>\<^sub>a* N'\<rbrakk> \<Longrightarrow> Cut <a>.M (x).N \<longrightarrow>\<^sub>a* Cut <a>.M' (x).N'"
by (blast intro!: a_star_CutL a_star_CutR intro: rtranclp_trans)

lemma a_star_NotR:
    "M \<longrightarrow>\<^sub>a* M' \<Longrightarrow> NotR (x).M a \<longrightarrow>\<^sub>a* NotR (x).M' a"
by (induct set: rtranclp) (blast intro: rtranclp.rtrancl_into_rtrancl)+

lemma a_star_NotL:
    "M \<longrightarrow>\<^sub>a* M' \<Longrightarrow> NotL <a>.M x \<longrightarrow>\<^sub>a* NotL <a>.M' x"
by (induct set: rtranclp) (blast intro: rtranclp.rtrancl_into_rtrancl)+

lemma a_star_AndRL:
    "M \<longrightarrow>\<^sub>a*
M' \<Longrightarrow> AndR <a>.M <b>.N c \<longrightarrow>\<^sub>a* AndR <a>.M' <b>.N c"
by (induct set: rtranclp) (blast intro: rtranclp.rtrancl_into_rtrancl)+

lemma a_star_AndRR:
    "N \<longrightarrow>\<^sub>a* N' \<Longrightarrow> AndR <a>.M <b>.N c \<longrightarrow>\<^sub>a* AndR <a>.M <b>.N' c"
by (induct set: rtranclp) (blast intro: rtranclp.rtrancl_into_rtrancl)+

lemma a_star_AndR:
    "\<lbrakk>M \<longrightarrow>\<^sub>a* M'; N \<longrightarrow>\<^sub>a* N'\<rbrakk> \<Longrightarrow> AndR <a>.M <b>.N c \<longrightarrow>\<^sub>a* AndR <a>.M' <b>.N' c"
by (blast intro!: a_star_AndRL a_star_AndRR intro: rtranclp_trans)

lemma a_star_AndL1:
    "M \<longrightarrow>\<^sub>a* M' \<Longrightarrow> AndL1 (x).M y \<longrightarrow>\<^sub>a* AndL1 (x).M' y"
by (induct set: rtranclp) (blast intro: rtranclp.rtrancl_into_rtrancl)+

lemma a_star_AndL2:
    "M \<longrightarrow>\<^sub>a* M' \<Longrightarrow> AndL2 (x).M y \<longrightarrow>\<^sub>a* AndL2 (x).M' y"
by (induct set: rtranclp) (blast intro: rtranclp.rtrancl_into_rtrancl)+

lemma a_star_OrLL:
    "M \<longrightarrow>\<^sub>a* M' \<Longrightarrow> OrL (x).M (y).N z \<longrightarrow>\<^sub>a* OrL (x).M' (y).N z"
by (induct set: rtranclp) (blast intro: rtranclp.rtrancl_into_rtrancl)+

lemma a_star_OrLR:
    "N \<longrightarrow>\<^sub>a* N' \<Longrightarrow> OrL (x).M (y).N z \<longrightarrow>\<^sub>a* OrL (x).M (y).N' z"
by (induct set: rtranclp) (blast intro: rtranclp.rtrancl_into_rtrancl)+

lemma a_star_OrL:
    "\<lbrakk>M \<longrightarrow>\<^sub>a* M'; N \<longrightarrow>\<^sub>a* N'\<rbrakk> \<Longrightarrow> OrL (x).M (y).N z \<longrightarrow>\<^sub>a* OrL (x).M' (y).N' z"
by (blast intro!: a_star_OrLL a_star_OrLR intro: rtranclp_trans)

lemma a_star_OrR1:
    "M \<longrightarrow>\<^sub>a* M' \<Longrightarrow> OrR1 <a>.M b \<longrightarrow>\<^sub>a* OrR1 <a>.M' b"
by (induct set: rtranclp) (blast intro: rtranclp.rtrancl_into_rtrancl)+

lemma a_star_OrR2:
    "M \<longrightarrow>\<^sub>a* M' \<Longrightarrow> OrR2 <a>.M b \<longrightarrow>\<^sub>a* OrR2 <a>.M' b"
by (induct set: rtranclp) (blast intro: rtranclp.rtrancl_into_rtrancl)+

lemma a_star_ImpLL:
    "M \<longrightarrow>\<^sub>a* M' \<Longrightarrow> ImpL <a>.M (y).N z \<longrightarrow>\<^sub>a* ImpL <a>.M' (y).N z"
by (induct set: rtranclp) (blast intro: rtranclp.rtrancl_into_rtrancl)+

lemma a_star_ImpLR:
    "N \<longrightarrow>\<^sub>a* N' \<Longrightarrow> ImpL <a>.M (y).N z \<longrightarrow>\<^sub>a* ImpL <a>.M (y).N' z"
by (induct set: rtranclp) (blast intro: rtranclp.rtrancl_into_rtrancl)+

lemma a_star_ImpL:
    "\<lbrakk>M \<longrightarrow>\<^sub>a* M'; N \<longrightarrow>\<^sub>a* N'\<rbrakk> \<Longrightarrow> ImpL <a>.M (y).N z \<longrightarrow>\<^sub>a* ImpL <a>.M' (y).N' z"
by (blast intro!: a_star_ImpLL a_star_ImpLR intro: rtranclp_trans)

lemma
a_star_ImpR:
    "M \<longrightarrow>\<^sub>a* M' \<Longrightarrow> ImpR (x).<a>.M b \<longrightarrow>\<^sub>a* ImpR (x).<a>.M' b"
by (induct set: rtranclp) (blast intro: rtranclp.rtrancl_into_rtrancl)+

lemmas a_star_congs = a_star_Cut a_star_NotR a_star_NotL a_star_AndR a_star_AndL1 a_star_AndL2
                      a_star_OrL a_star_OrR1 a_star_OrR2 a_star_ImpL a_star_ImpR

lemma a_star_redu_NotL_elim:
  assumes a: "NotL <a>.M x \<longrightarrow>\<^sub>a* R"
  shows "\<exists>M'. R = NotL <a>.M' x \<and> M \<longrightarrow>\<^sub>a* M'"
using a
apply(induct set: rtranclp)
apply(auto)
apply(drule a_redu_NotL_elim)
apply(auto)
done

lemma a_star_redu_NotR_elim:
  assumes a: "NotR (x).M a \<longrightarrow>\<^sub>a* R"
  shows "\<exists>M'. R = NotR (x).M' a \<and> M \<longrightarrow>\<^sub>a* M'"
using a
apply(induct set: rtranclp)
apply(auto)
apply(drule a_redu_NotR_elim)
apply(auto)
done

lemma a_star_redu_AndR_elim:
  assumes a: "AndR <a>.M <b>.N c \<longrightarrow>\<^sub>a* R"
  shows "(\<exists>M' N'. R = AndR <a>.M' <b>.N' c \<and> M \<longrightarrow>\<^sub>a* M' \<and> N \<longrightarrow>\<^sub>a* N')"
using a
apply(induct set: rtranclp)
apply(auto)
apply(drule a_redu_AndR_elim)
apply(auto simp add: alpha trm.inject)
done

lemma a_star_redu_AndL1_elim:
  assumes a: "AndL1 (x).M y \<longrightarrow>\<^sub>a* R"
  shows "\<exists>M'. R = AndL1 (x).M' y \<and> M \<longrightarrow>\<^sub>a* M'"
using a
apply(induct set: rtranclp)
apply(auto)
apply(drule a_redu_AndL1_elim)
apply(auto simp add: alpha trm.inject)
done

lemma a_star_redu_AndL2_elim:
  assumes a: "AndL2 (x).M y \<longrightarrow>\<^sub>a* R"
  shows "\<exists>M'. R = AndL2 (x).M' y \<and> M \<longrightarrow>\<^sub>a* M'"
using a
apply(induct set: rtranclp)
apply(auto)
apply(drule a_redu_AndL2_elim)
apply(auto simp add: alpha trm.inject)
done

lemma a_star_redu_OrL_elim:
  assumes a: "OrL (x).M (y).N z \<longrightarrow>\<^sub>a* R"
  shows "(\<exists>M' N'. R = OrL (x).M' (y).N' z \<and> M \<longrightarrow>\<^sub>a* M' \<and> N \<longrightarrow>\<^sub>a* N')"
using a
apply(induct set: rtranclp)
apply(auto)
apply(drule a_redu_OrL_elim)
apply(auto simp add: alpha trm.inject)
done

lemma a_star_redu_OrR1_elim:
  assumes a: "OrR1 <a>.M y \<longrightarrow>\<^sub>a* R"
  shows "\<exists>M'.
R = OrR1 <a>.M' y \<and> M \<longrightarrow>\<^sub>a* M'"
using a
apply(induct set: rtranclp)
apply(auto)
apply(drule a_redu_OrR1_elim)
apply(auto simp add: alpha trm.inject)
done

lemma a_star_redu_OrR2_elim:
  assumes a: "OrR2 <a>.M y \<longrightarrow>\<^sub>a* R"
  shows "\<exists>M'. R = OrR2 <a>.M' y \<and> M \<longrightarrow>\<^sub>a* M'"
using a
apply(induct set: rtranclp)
apply(auto)
apply(drule a_redu_OrR2_elim)
apply(auto simp add: alpha trm.inject)
done

lemma a_star_redu_ImpR_elim:
  assumes a: "ImpR (x).<a>.M y \<longrightarrow>\<^sub>a* R"
  shows "\<exists>M'. R = ImpR (x).<a>.M' y \<and> M \<longrightarrow>\<^sub>a* M'"
using a
apply(induct set: rtranclp)
apply(auto)
apply(drule a_redu_ImpR_elim)
apply(auto simp add: alpha trm.inject)
done

lemma a_star_redu_ImpL_elim:
  assumes a: "ImpL <a>.M (y).N z \<longrightarrow>\<^sub>a* R"
  shows "(\<exists>M' N'. R = ImpL <a>.M' (y).N' z \<and> M \<longrightarrow>\<^sub>a* M' \<and> N \<longrightarrow>\<^sub>a* N')"
using a
apply(induct set: rtranclp)
apply(auto)
apply(drule a_redu_ImpL_elim)
apply(auto simp add: alpha trm.inject)
done

text \<open>Substitution\<close>

lemma subst_not_fin1:
  shows "\<not>fin(M{x:=<c>.P}) x"
apply(nominal_induct M avoiding: x c P rule: trm.strong_induct)
apply(auto)
apply(drule fin_elims, simp)
apply(drule fin_elims, simp)
apply(erule fin.cases, simp_all add: trm.inject)
apply(erule fin.cases, simp_all add: trm.inject)
apply(erule fin.cases, simp_all add: trm.inject)
apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(trm{x:=<c>.P},P)")
apply(erule exE)
apply(simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_NotL)
apply(erule fin.cases, simp_all add: trm.inject)
apply(rule exists_fresh'(1)[OF fs_name1])
apply(erule fin.cases, simp_all add: trm.inject)
apply(erule fin.cases, simp_all add: trm.inject)
apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(trm{x:=<c>.P},P,name1)")
apply(erule exE)
apply(simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_AndL1)
apply(erule fin.cases, simp_all add: trm.inject)
apply(rule exists_fresh'(1)[OF fs_name1])
apply(erule fin.cases, simp_all add: trm.inject)
apply(subgoal_tac "\<exists>x'::name.
x'\(trm{x:=.P},P,name1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL2) apply(erule fin.cases, simp_all add: trm.inject) apply(rule exists_fresh'(1)[OF fs_name1]) apply(erule fin.cases, simp_all add: trm.inject) apply(erule fin.cases, simp_all add: trm.inject) apply(erule fin.cases, simp_all add: trm.inject) apply(subgoal_tac "\x'::name. x'\(trm1{x:=.P},P,name1,trm2{x:=.P},name2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrL) apply(erule fin.cases, simp_all add: trm.inject) apply(rule exists_fresh'(1)[OF fs_name1]) apply(erule fin.cases, simp_all add: trm.inject) apply(erule fin.cases, simp_all add: trm.inject) apply(subgoal_tac "\x'::name. x'\(trm1{name2:=.P},P,name1,trm2{name2:=.P})") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpL) apply(erule fin.cases, simp_all add: trm.inject) apply(rule exists_fresh'(1)[OF fs_name1]) apply(erule fin.cases, simp_all add: trm.inject) done lemma subst_not_fin2: assumes a: "\fin M y" shows "\fin(M{c:=(x).P}) y" using a apply(nominal_induct M avoiding: x c P y rule: trm.strong_induct) apply(auto) apply(drule fin_elims, simp) apply(drule fin_elims, simp) apply(drule fin_elims, simp) apply(subgoal_tac "\c'::coname. c'\(trm{coname:=(x).P},P)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotR) apply(drule fin_elims, simp) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(drule fin_elims, simp) apply(drule fin_elims, simp) apply(drule freshn_after_substc) apply(simp add: fin.intros) apply(subgoal_tac "\c'::coname. 
c'\(trm1{coname3:=(x).P},P,coname1,trm2{coname3:=(x).P},coname2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndR) apply(drule fin_elims, simp) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(drule fin_elims, simp) apply(drule fin_elims, simp) apply(simp add: abs_fresh fresh_atm) apply(drule freshn_after_substc) apply(simp add: fin.intros abs_fresh) apply(drule fin_elims, simp) apply(simp add: abs_fresh fresh_atm) apply(drule freshn_after_substc) apply(simp add: fin.intros abs_fresh) apply(subgoal_tac "\c'::coname. c'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR1) apply(drule fin_elims, simp) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(drule fin_elims, simp) apply(subgoal_tac "\c'::coname. c'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR2) apply(drule fin_elims, simp) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(drule fin_elims, simp) apply(drule fin_elims, simp) apply(auto)[1] apply(simp add: abs_fresh fresh_atm) apply(drule freshn_after_substc) apply(drule freshn_after_substc) apply(simp add: fin.intros abs_fresh) apply(subgoal_tac "\c'::coname. 
c'\(trm{coname2:=(x).P},P,coname1,coname2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpR) apply(drule fin_elims, simp) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(drule fin_elims, simp) apply(drule fin_elims, simp) apply(auto)[1] apply(simp add: abs_fresh fresh_atm) apply(drule freshn_after_substc) apply(drule freshn_after_substc) apply(simp add: fin.intros abs_fresh) done lemma subst_not_fic1: shows "\fic (M{a:=(x).P}) a" apply(nominal_induct M avoiding: a x P rule: trm.strong_induct) apply(auto) apply(erule fic.cases, simp_all add: trm.inject) apply(erule fic.cases, simp_all add: trm.inject) apply(erule fic.cases, simp_all add: trm.inject) apply(erule fic.cases, simp_all add: trm.inject) apply(subgoal_tac "\a'::coname. a'\(trm{coname:=(x).P},P)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotR) apply(erule fic.cases, simp_all add: trm.inject) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(erule fic.cases, simp_all add: trm.inject) apply(erule fic.cases, simp_all add: trm.inject) apply(subgoal_tac "\a'::coname. a'\(trm1{coname3:=(x).P},P,trm2{coname3:=(x).P},coname1,coname2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndR) apply(erule fic.cases, simp_all add: trm.inject) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(erule fic.cases, simp_all add: trm.inject) apply(erule fic.cases, simp_all add: trm.inject) apply(erule fic.cases, simp_all add: trm.inject) apply(subgoal_tac "\a'::coname. a'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR1) apply(erule fic.cases, simp_all add: trm.inject) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(erule fic.cases, simp_all add: trm.inject) apply(subgoal_tac "\a'::coname. 
a'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR2) apply(erule fic.cases, simp_all add: trm.inject) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(erule fic.cases, simp_all add: trm.inject) apply(erule fic.cases, simp_all add: trm.inject) apply(subgoal_tac "\a'::coname. a'\(trm{coname2:=(x).P},P,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpR) apply(erule fic.cases, simp_all add: trm.inject) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(erule fic.cases, simp_all add: trm.inject) apply(erule fic.cases, simp_all add: trm.inject) done lemma subst_not_fic2: assumes a: "\fic M a" shows "\fic(M{x:=.P}) a" using a apply(nominal_induct M avoiding: x a P b rule: trm.strong_induct) apply(auto) apply(drule fic_elims, simp) apply(drule fic_elims, simp) apply(drule fic_elims, simp) apply(drule fic_elims, simp) apply(drule freshc_after_substn) apply(simp add: fic.intros) apply(subgoal_tac "\x'::name. x'\(trm{x:=.P},P)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotL) apply(drule fic_elims, simp) apply(rule exists_fresh'(1)[OF fs_name1]) apply(drule fic_elims, simp) apply(drule fic_elims, simp) apply(auto)[1] apply(simp add: abs_fresh fresh_atm) apply(drule freshc_after_substn) apply(drule freshc_after_substn) apply(simp add: fic.intros abs_fresh) apply(subgoal_tac "\x'::name. x'\(trm{x:=.P},P,name1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL1) apply(drule fic_elims, simp) apply(rule exists_fresh'(1)[OF fs_name1]) apply(drule fic_elims, simp) apply(subgoal_tac "\x'::name. 
x'\(trm{x:=.P},P,name1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL2) apply(drule fic_elims, simp) apply(rule exists_fresh'(1)[OF fs_name1]) apply(drule fic_elims, simp) apply(drule fic_elims, simp) apply(simp add: abs_fresh fresh_atm) apply(drule freshc_after_substn) apply(simp add: fic.intros abs_fresh) apply(drule fic_elims, simp) apply(simp add: abs_fresh fresh_atm) apply(drule freshc_after_substn) apply(simp add: fic.intros abs_fresh) apply(subgoal_tac "\x'::name. x'\(trm1{x:=.P},P,name1,trm2{x:=.P},name2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrL) apply(drule fic_elims, simp) apply(rule exists_fresh'(1)[OF fs_name1]) apply(drule fic_elims, simp) apply(drule fic_elims, simp) apply(simp add: abs_fresh fresh_atm) apply(drule freshc_after_substn) apply(simp add: fic.intros abs_fresh) apply(subgoal_tac "\x'::name. x'\(trm1{name2:=.P},trm2{name2:=.P},P,name1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpL) apply(drule fic_elims, simp) apply(rule exists_fresh'(1)[OF fs_name1]) apply(drule fic_elims, simp) done text \Reductions\ lemma fin_l_reduce: assumes a: "fin M x" and b: "M \\<^sub>l M'" shows "fin M' x" using b a apply(induct) apply(erule fin.cases) apply(simp_all add: trm.inject) apply(rotate_tac 3) apply(erule fin.cases) apply(simp_all add: trm.inject) apply(erule fin.cases, simp_all add: trm.inject)+ done lemma fin_c_reduce: assumes a: "fin M x" and b: "M \\<^sub>c M'" shows "fin M' x" using b a apply(induct) apply(erule fin.cases, simp_all add: trm.inject)+ done lemma fin_a_reduce: assumes a: "fin M x" and b: "M \\<^sub>a M'" shows "fin M' x" using a b apply(induct) apply(drule ax_do_not_a_reduce) apply(simp) apply(drule a_redu_NotL_elim) apply(auto) apply(rule fin.intros) apply(simp add: fresh_a_redu) apply(drule a_redu_AndL1_elim) apply(auto) apply(rule fin.intros) apply(force simp 
add: abs_fresh fresh_a_redu) apply(drule a_redu_AndL2_elim) apply(auto) apply(rule fin.intros) apply(force simp add: abs_fresh fresh_a_redu) apply(drule a_redu_OrL_elim) apply(auto) apply(rule fin.intros) apply(force simp add: abs_fresh fresh_a_redu) apply(force simp add: abs_fresh fresh_a_redu) apply(rule fin.intros) apply(force simp add: abs_fresh fresh_a_redu) apply(force simp add: abs_fresh fresh_a_redu) apply(drule a_redu_ImpL_elim) apply(auto) apply(rule fin.intros) apply(force simp add: abs_fresh fresh_a_redu) apply(force simp add: abs_fresh fresh_a_redu) apply(rule fin.intros) apply(force simp add: abs_fresh fresh_a_redu) apply(force simp add: abs_fresh fresh_a_redu) done lemma fin_a_star_reduce: assumes a: "fin M x" and b: "M \\<^sub>a* M'" shows "fin M' x" using b a apply(induct set: rtranclp) apply(auto simp add: fin_a_reduce) done lemma fic_l_reduce: assumes a: "fic M x" and b: "M \\<^sub>l M'" shows "fic M' x" using b a apply(induct) apply(erule fic.cases) apply(simp_all add: trm.inject) apply(rotate_tac 3) apply(erule fic.cases) apply(simp_all add: trm.inject) apply(erule fic.cases, simp_all add: trm.inject)+ done lemma fic_c_reduce: assumes a: "fic M x" and b: "M \\<^sub>c M'" shows "fic M' x" using b a apply(induct) apply(erule fic.cases, simp_all add: trm.inject)+ done lemma fic_a_reduce: assumes a: "fic M x" and b: "M \\<^sub>a M'" shows "fic M' x" using a b apply(induct) apply(drule ax_do_not_a_reduce) apply(simp) apply(drule a_redu_NotR_elim) apply(auto) apply(rule fic.intros) apply(simp add: fresh_a_redu) apply(drule a_redu_AndR_elim) apply(auto) apply(rule fic.intros) apply(force simp add: abs_fresh fresh_a_redu) apply(force simp add: abs_fresh fresh_a_redu) apply(rule fic.intros) apply(force simp add: abs_fresh fresh_a_redu) apply(force simp add: abs_fresh fresh_a_redu) apply(drule a_redu_OrR1_elim) apply(auto) apply(rule fic.intros) apply(force simp add: abs_fresh fresh_a_redu) apply(drule a_redu_OrR2_elim) apply(auto) apply(rule fic.intros) 
apply(force simp add: abs_fresh fresh_a_redu) apply(drule a_redu_ImpR_elim) apply(auto) apply(rule fic.intros) apply(force simp add: abs_fresh fresh_a_redu) done lemma fic_a_star_reduce: assumes a: "fic M x" and b: "M \\<^sub>a* M'" shows "fic M' x" using b a apply(induct set: rtranclp) apply(auto simp add: fic_a_reduce) done text \substitution properties\ lemma subst_with_ax1: shows "M{x:=.Ax y a} \\<^sub>a* M[x\n>y]" proof(nominal_induct M avoiding: x a y rule: trm.strong_induct) case (Ax z b x a y) show "(Ax z b){x:=.Ax y a} \\<^sub>a* (Ax z b)[x\n>y]" proof (cases "z=x") case True assume eq: "z=x" have "(Ax z b){x:=.Ax y a} = Cut .Ax y a (x).Ax x b" using eq by simp also have "\ \\<^sub>a* (Ax x b)[x\n>y]" by blast finally show "Ax z b{x:=.Ax y a} \\<^sub>a* (Ax z b)[x\n>y]" using eq by simp next case False assume neq: "z\x" then show "(Ax z b){x:=.Ax y a} \\<^sub>a* (Ax z b)[x\n>y]" using neq by simp qed next case (Cut b M z N x a y) have fs: "b\x" "b\a" "b\y" "b\N" "z\x" "z\a" "z\y" "z\M" by fact+ have ih1: "M{x:=.Ax y a} \\<^sub>a* M[x\n>y]" by fact have ih2: "N{x:=.Ax y a} \\<^sub>a* N[x\n>y]" by fact show "(Cut .M (z).N){x:=.Ax y a} \\<^sub>a* (Cut .M (z).N)[x\n>y]" proof (cases "M = Ax x b") case True assume eq: "M = Ax x b" have "(Cut .M (z).N){x:=.Ax y a} = Cut .Ax y a (z).(N{x:=.Ax y a})" using fs eq by simp also have "\ \\<^sub>a* Cut .Ax y a (z).(N[x\n>y])" using ih2 a_star_congs by blast also have "\ = Cut .(M[x\n>y]) (z).(N[x\n>y])" using eq by (simp add: trm.inject alpha calc_atm fresh_atm) finally show "(Cut .M (z).N){x:=.Ax y a} \\<^sub>a* (Cut .M (z).N)[x\n>y]" using fs by simp next case False assume neq: "M \ Ax x b" have "(Cut .M (z).N){x:=.Ax y a} = Cut .(M{x:=.Ax y a}) (z).(N{x:=.Ax y a})" using fs neq by simp also have "\ \\<^sub>a* Cut .(M[x\n>y]) (z).(N[x\n>y])" using ih1 ih2 a_star_congs by blast finally show "(Cut .M (z).N){x:=.Ax y a} \\<^sub>a* (Cut .M (z).N)[x\n>y]" using fs by simp qed next case (NotR z M b x a y) have fs: "z\x" 
"z\a" "z\y" "z\b" by fact+ have ih: "M{x:=.Ax y a} \\<^sub>a* M[x\n>y]" by fact have "(NotR (z).M b){x:=.Ax y a} = NotR (z).(M{x:=.Ax y a}) b" using fs by simp also have "\ \\<^sub>a* NotR (z).(M[x\n>y]) b" using ih by (auto intro: a_star_congs) finally show "(NotR (z).M b){x:=.Ax y a} \\<^sub>a* (NotR (z).M b)[x\n>y]" using fs by simp next case (NotL b M z x a y) have fs: "b\x" "b\a" "b\y" "b\z" by fact+ have ih: "M{x:=.Ax y a} \\<^sub>a* M[x\n>y]" by fact show "(NotL .M z){x:=.Ax y a} \\<^sub>a* (NotL .M z)[x\n>y]" proof(cases "z=x") case True assume eq: "z=x" obtain x'::"name" where new: "x'\(Ax y a,M{x:=.Ax y a})" by (rule exists_fresh(1)[OF fs_name1]) have "(NotL .M z){x:=.Ax y a} = fresh_fun (\x'. Cut .Ax y a (x').NotL .(M{x:=.Ax y a}) x')" using eq fs by simp also have "\ = Cut .Ax y a (x').NotL .(M{x:=.Ax y a}) x'" using new by (simp add: fresh_fun_simp_NotL fresh_prod) also have "\ \\<^sub>a* (NotL .(M{x:=.Ax y a}) x')[x'\n>y]" using new apply(rule_tac a_starI) apply(rule al_redu) apply(rule better_LAxL_intro) apply(auto) done also have "\ = NotL .(M{x:=.Ax y a}) y" using new by (simp add: nrename_fresh) also have "\ \\<^sub>a* NotL .(M[x\n>y]) y" using ih by (auto intro: a_star_congs) also have "\ = (NotL .M z)[x\n>y]" using eq by simp finally show "(NotL .M z){x:=.Ax y a} \\<^sub>a* (NotL .M z)[x\n>y]" by simp next case False assume neq: "z\x" have "(NotL .M z){x:=.Ax y a} = NotL .(M{x:=.Ax y a}) z" using fs neq by simp also have "\ \\<^sub>a* NotL .(M[x\n>y]) z" using ih by (auto intro: a_star_congs) finally show "(NotL .M z){x:=.Ax y a} \\<^sub>a* (NotL .M z)[x\n>y]" using neq by simp qed next case (AndR c M d N e x a y) have fs: "c\x" "c\a" "c\y" "d\x" "d\a" "d\y" "d\c" "c\N" "c\e" "d\M" "d\e" by fact+ have ih1: "M{x:=.Ax y a} \\<^sub>a* M[x\n>y]" by fact have ih2: "N{x:=.Ax y a} \\<^sub>a* N[x\n>y]" by fact have "(AndR .M .N e){x:=.Ax y a} = AndR .(M{x:=.Ax y a}) .(N{x:=.Ax y a}) e" using fs by simp also have "\ \\<^sub>a* AndR .(M[x\n>y]) 
.(N[x\n>y]) e" using ih1 ih2 by (auto intro: a_star_congs) finally show "(AndR .M .N e){x:=.Ax y a} \\<^sub>a* (AndR .M .N e)[x\n>y]" using fs by simp next case (AndL1 u M v x a y) have fs: "u\x" "u\a" "u\y" "u\v" by fact+ have ih: "M{x:=.Ax y a} \\<^sub>a* M[x\n>y]" by fact show "(AndL1 (u).M v){x:=.Ax y a} \\<^sub>a* (AndL1 (u).M v)[x\n>y]" proof(cases "v=x") case True assume eq: "v=x" obtain v'::"name" where new: "v'\(Ax y a,M{x:=.Ax y a},u)" by (rule exists_fresh(1)[OF fs_name1]) have "(AndL1 (u).M v){x:=.Ax y a} = fresh_fun (\v'. Cut .Ax y a (v').AndL1 (u).(M{x:=.Ax y a}) v')" using eq fs by simp also have "\ = Cut .Ax y a (v').AndL1 (u).(M{x:=.Ax y a}) v'" using new by (simp add: fresh_fun_simp_AndL1 fresh_prod) also have "\ \\<^sub>a* (AndL1 (u).(M{x:=.Ax y a}) v')[v'\n>y]" using new apply(rule_tac a_starI) apply(rule a_redu.intros) apply(rule better_LAxL_intro) apply(rule fin.intros) apply(simp add: abs_fresh) done also have "\ = AndL1 (u).(M{x:=.Ax y a}) y" using fs new by (auto simp add: fresh_prod fresh_atm nrename_fresh) also have "\ \\<^sub>a* AndL1 (u).(M[x\n>y]) y" using ih by (auto intro: a_star_congs) also have "\ = (AndL1 (u).M v)[x\n>y]" using eq fs by simp finally show "(AndL1 (u).M v){x:=.Ax y a} \\<^sub>a* (AndL1 (u).M v)[x\n>y]" by simp next case False assume neq: "v\x" have "(AndL1 (u).M v){x:=.Ax y a} = AndL1 (u).(M{x:=.Ax y a}) v" using fs neq by simp also have "\ \\<^sub>a* AndL1 (u).(M[x\n>y]) v" using ih by (auto intro: a_star_congs) finally show "(AndL1 (u).M v){x:=.Ax y a} \\<^sub>a* (AndL1 (u).M v)[x\n>y]" using fs neq by simp qed next case (AndL2 u M v x a y) have fs: "u\x" "u\a" "u\y" "u\v" by fact+ have ih: "M{x:=.Ax y a} \\<^sub>a* M[x\n>y]" by fact show "(AndL2 (u).M v){x:=.Ax y a} \\<^sub>a* (AndL2 (u).M v)[x\n>y]" proof(cases "v=x") case True assume eq: "v=x" obtain v'::"name" where new: "v'\(Ax y a,M{x:=.Ax y a},u)" by (rule exists_fresh(1)[OF fs_name1]) have "(AndL2 (u).M v){x:=.Ax y a} = fresh_fun (\v'. 
Cut .Ax y a (v').AndL2 (u).(M{x:=.Ax y a}) v')" using eq fs by simp also have "\ = Cut .Ax y a (v').AndL2 (u).(M{x:=.Ax y a}) v'" using new by (simp add: fresh_fun_simp_AndL2 fresh_prod) also have "\ \\<^sub>a* (AndL2 (u).(M{x:=.Ax y a}) v')[v'\n>y]" using new apply(rule_tac a_starI) apply(rule a_redu.intros) apply(rule better_LAxL_intro) apply(rule fin.intros) apply(simp add: abs_fresh) done also have "\ = AndL2 (u).(M{x:=.Ax y a}) y" using fs new by (auto simp add: fresh_prod fresh_atm nrename_fresh) also have "\ \\<^sub>a* AndL2 (u).(M[x\n>y]) y" using ih by (auto intro: a_star_congs) also have "\ = (AndL2 (u).M v)[x\n>y]" using eq fs by simp finally show "(AndL2 (u).M v){x:=.Ax y a} \\<^sub>a* (AndL2 (u).M v)[x\n>y]" by simp next case False assume neq: "v\x" have "(AndL2 (u).M v){x:=.Ax y a} = AndL2 (u).(M{x:=.Ax y a}) v" using fs neq by simp also have "\ \\<^sub>a* AndL2 (u).(M[x\n>y]) v" using ih by (auto intro: a_star_congs) finally show "(AndL2 (u).M v){x:=.Ax y a} \\<^sub>a* (AndL2 (u).M v)[x\n>y]" using fs neq by simp qed next case (OrR1 c M d x a y) have fs: "c\x" "c\a" "c\y" "c\d" by fact+ have ih: "M{x:=.Ax y a} \\<^sub>a* M[x\n>y]" by fact have "(OrR1 .M d){x:=.Ax y a} = OrR1 .(M{x:=.Ax y a}) d" using fs by (simp add: fresh_atm) also have "\ \\<^sub>a* OrR1 .(M[x\n>y]) d" using ih by (auto intro: a_star_congs) finally show "(OrR1 .M d){x:=.Ax y a} \\<^sub>a* (OrR1 .M d)[x\n>y]" using fs by simp next case (OrR2 c M d x a y) have fs: "c\x" "c\a" "c\y" "c\d" by fact+ have ih: "M{x:=.Ax y a} \\<^sub>a* M[x\n>y]" by fact have "(OrR2 .M d){x:=.Ax y a} = OrR2 .(M{x:=.Ax y a}) d" using fs by (simp add: fresh_atm) also have "\ \\<^sub>a* OrR2 .(M[x\n>y]) d" using ih by (auto intro: a_star_congs) finally show "(OrR2 .M d){x:=.Ax y a} \\<^sub>a* (OrR2 .M d)[x\n>y]" using fs by simp next case (OrL u M v N z x a y) have fs: "u\x" "u\a" "u\y" "v\x" "v\a" "v\y" "v\u" "u\N" "u\z" "v\M" "v\z" by fact+ have ih1: "M{x:=.Ax y a} \\<^sub>a* M[x\n>y]" by fact have ih2: 
"N{x:=.Ax y a} \\<^sub>a* N[x\n>y]" by fact show "(OrL (u).M (v).N z){x:=.Ax y a} \\<^sub>a* (OrL (u).M (v).N z)[x\n>y]" proof(cases "z=x") case True assume eq: "z=x" obtain z'::"name" where new: "z'\(Ax y a,M{x:=.Ax y a},N{x:=.Ax y a},u,v,y,a)" by (rule exists_fresh(1)[OF fs_name1]) have "(OrL (u).M (v).N z){x:=.Ax y a} = fresh_fun (\z'. Cut .Ax y a (z').OrL (u).(M{x:=.Ax y a}) (v).(N{x:=.Ax y a}) z')" using eq fs by simp also have "\ = Cut .Ax y a (z').OrL (u).(M{x:=.Ax y a}) (v).(N{x:=.Ax y a}) z'" using new by (simp add: fresh_fun_simp_OrL) also have "\ \\<^sub>a* (OrL (u).(M{x:=.Ax y a}) (v).(N{x:=.Ax y a}) z')[z'\n>y]" using new apply(rule_tac a_starI) apply(rule a_redu.intros) apply(rule better_LAxL_intro) apply(rule fin.intros) apply(simp_all add: abs_fresh) done also have "\ = OrL (u).(M{x:=.Ax y a}) (v).(N{x:=.Ax y a}) y" using fs new by (auto simp add: fresh_prod fresh_atm nrename_fresh subst_fresh) also have "\ \\<^sub>a* OrL (u).(M[x\n>y]) (v).(N[x\n>y]) y" using ih1 ih2 by (auto intro: a_star_congs) also have "\ = (OrL (u).M (v).N z)[x\n>y]" using eq fs by simp finally show "(OrL (u).M (v).N z){x:=.Ax y a} \\<^sub>a* (OrL (u).M (v).N z)[x\n>y]" by simp next case False assume neq: "z\x" have "(OrL (u).M (v).N z){x:=.Ax y a} = OrL (u).(M{x:=.Ax y a}) (v).(N{x:=.Ax y a}) z" using fs neq by simp also have "\ \\<^sub>a* OrL (u).(M[x\n>y]) (v).(N[x\n>y]) z" using ih1 ih2 by (auto intro: a_star_congs) finally show "(OrL (u).M (v).N z){x:=.Ax y a} \\<^sub>a* (OrL (u).M (v).N z)[x\n>y]" using fs neq by simp qed next case (ImpR z c M d x a y) have fs: "z\x" "z\a" "z\y" "c\x" "c\a" "c\y" "z\d" "c\d" by fact+ have ih: "M{x:=.Ax y a} \\<^sub>a* M[x\n>y]" by fact have "(ImpR (z)..M d){x:=.Ax y a} = ImpR (z)..(M{x:=.Ax y a}) d" using fs by simp also have "\ \\<^sub>a* ImpR (z)..(M[x\n>y]) d" using ih by (auto intro: a_star_congs) finally show "(ImpR (z)..M d){x:=.Ax y a} \\<^sub>a* (ImpR (z)..M d)[x\n>y]" using fs by simp next case (ImpL c M u N v x a y) have fs: 
"c\x" "c\a" "c\y" "u\x" "u\a" "u\y" "c\N" "c\v" "u\M" "u\v" by fact+ have ih1: "M{x:=.Ax y a} \\<^sub>a* M[x\n>y]" by fact have ih2: "N{x:=.Ax y a} \\<^sub>a* N[x\n>y]" by fact show "(ImpL .M (u).N v){x:=.Ax y a} \\<^sub>a* (ImpL .M (u).N v)[x\n>y]" proof(cases "v=x") case True assume eq: "v=x" obtain v'::"name" where new: "v'\(Ax y a,M{x:=.Ax y a},N{x:=.Ax y a},y,a,u)" by (rule exists_fresh(1)[OF fs_name1]) have "(ImpL .M (u).N v){x:=.Ax y a} = fresh_fun (\v'. Cut .Ax y a (v').ImpL .(M{x:=.Ax y a}) (u).(N{x:=.Ax y a}) v')" using eq fs by simp also have "\ = Cut .Ax y a (v').ImpL .(M{x:=.Ax y a}) (u).(N{x:=.Ax y a}) v'" using new by (simp add: fresh_fun_simp_ImpL) also have "\ \\<^sub>a* (ImpL .(M{x:=.Ax y a}) (u).(N{x:=.Ax y a}) v')[v'\n>y]" using new apply(rule_tac a_starI) apply(rule a_redu.intros) apply(rule better_LAxL_intro) apply(rule fin.intros) apply(simp_all add: abs_fresh) done also have "\ = ImpL .(M{x:=.Ax y a}) (u).(N{x:=.Ax y a}) y" using fs new by (auto simp add: fresh_prod subst_fresh fresh_atm trm.inject alpha rename_fresh) also have "\ \\<^sub>a* ImpL .(M[x\n>y]) (u).(N[x\n>y]) y" using ih1 ih2 by (auto intro: a_star_congs) also have "\ = (ImpL .M (u).N v)[x\n>y]" using eq fs by simp finally show "(ImpL .M (u).N v){x:=.Ax y a} \\<^sub>a* (ImpL .M (u).N v)[x\n>y]" using fs by simp next case False assume neq: "v\x" have "(ImpL .M (u).N v){x:=.Ax y a} = ImpL .(M{x:=.Ax y a}) (u).(N{x:=.Ax y a}) v" using fs neq by simp also have "\ \\<^sub>a* ImpL .(M[x\n>y]) (u).(N[x\n>y]) v" using ih1 ih2 by (auto intro: a_star_congs) finally show "(ImpL .M (u).N v){x:=.Ax y a} \\<^sub>a* (ImpL .M (u).N v)[x\n>y]" using fs neq by simp qed qed lemma subst_with_ax2: shows "M{b:=(x).Ax x a} \\<^sub>a* M[b\c>a]" proof(nominal_induct M avoiding: b a x rule: trm.strong_induct) case (Ax z c b a x) show "(Ax z c){b:=(x).Ax x a} \\<^sub>a* (Ax z c)[b\c>a]" proof (cases "c=b") case True assume eq: "c=b" have "(Ax z c){b:=(x).Ax x a} = Cut .Ax z c (x).Ax x a" using eq by simp 
also have "\ \\<^sub>a* (Ax z c)[b\c>a]" using eq by blast finally show "(Ax z c){b:=(x).Ax x a} \\<^sub>a* (Ax z c)[b\c>a]" by simp next case False assume neq: "c\b" then show "(Ax z c){b:=(x).Ax x a} \\<^sub>a* (Ax z c)[b\c>a]" by simp qed next case (Cut c M z N b a x) have fs: "c\b" "c\a" "c\x" "c\N" "z\b" "z\a" "z\x" "z\M" by fact+ have ih1: "M{b:=(x).Ax x a} \\<^sub>a* M[b\c>a]" by fact have ih2: "N{b:=(x).Ax x a} \\<^sub>a* N[b\c>a]" by fact show "(Cut .M (z).N){b:=(x).Ax x a} \\<^sub>a* (Cut .M (z).N)[b\c>a]" proof (cases "N = Ax z b") case True assume eq: "N = Ax z b" have "(Cut .M (z).N){b:=(x).Ax x a} = Cut .(M{b:=(x).Ax x a}) (x).Ax x a" using eq fs by simp also have "\ \\<^sub>a* Cut .(M[b\c>a]) (x).Ax x a" using ih1 a_star_congs by blast also have "\ = Cut .(M[b\c>a]) (z).(N[b\c>a])" using eq fs by (simp add: trm.inject alpha calc_atm fresh_atm) finally show "(Cut .M (z).N){b:=(x).Ax x a} \\<^sub>a* (Cut .M (z).N)[b\c>a]" using fs by simp next case False assume neq: "N \ Ax z b" have "(Cut .M (z).N){b:=(x).Ax x a} = Cut .(M{b:=(x).Ax x a}) (z).(N{b:=(x).Ax x a})" using fs neq by simp also have "\ \\<^sub>a* Cut .(M[b\c>a]) (z).(N[b\c>a])" using ih1 ih2 a_star_congs by blast finally show "(Cut .M (z).N){b:=(x).Ax x a} \\<^sub>a* (Cut .M (z).N)[b\c>a]" using fs by simp qed next case (NotR z M c b a x) have fs: "z\b" "z\a" "z\x" "z\c" by fact+ have ih: "M{b:=(x).Ax x a} \\<^sub>a* M[b\c>a]" by fact show "(NotR (z).M c){b:=(x).Ax x a} \\<^sub>a* (NotR (z).M c)[b\c>a]" proof (cases "c=b") case True assume eq: "c=b" obtain a'::"coname" where new: "a'\(Ax x a,M{b:=(x).Ax x a})" by (rule exists_fresh(2)[OF fs_coname1]) have "(NotR (z).M c){b:=(x).Ax x a} = fresh_fun (\a'. 
Cut .NotR (z).M{b:=(x).Ax x a} a' (x).Ax x a)" using eq fs by simp also have "\ = Cut .NotR (z).M{b:=(x).Ax x a} a' (x).Ax x a" using new by (simp add: fresh_fun_simp_NotR fresh_prod) also have "\ \\<^sub>a* (NotR (z).(M{b:=(x).Ax x a}) a')[a'\c>a]" using new apply(rule_tac a_starI) apply(rule a_redu.intros) apply(rule better_LAxR_intro) apply(rule fic.intros) apply(simp) done also have "\ = NotR (z).(M{b:=(x).Ax x a}) a" using new by (simp add: crename_fresh) also have "\ \\<^sub>a* NotR (z).(M[b\c>a]) a" using ih by (auto intro: a_star_congs) also have "\ = (NotR (z).M c)[b\c>a]" using eq by simp finally show "(NotR (z).M c){b:=(x).Ax x a} \\<^sub>a* (NotR (z).M c)[b\c>a]" by simp next case False assume neq: "c\b" have "(NotR (z).M c){b:=(x).Ax x a} = NotR (z).(M{b:=(x).Ax x a}) c" using fs neq by simp also have "\ \\<^sub>a* NotR (z).(M[b\c>a]) c" using ih by (auto intro: a_star_congs) finally show "(NotR (z).M c){b:=(x).Ax x a} \\<^sub>a* (NotR (z).M c)[b\c>a]" using neq by simp qed next case (NotL c M z b a x) have fs: "c\b" "c\a" "c\x" "c\z" by fact+ have ih: "M{b:=(x).Ax x a} \\<^sub>a* M[b\c>a]" by fact have "(NotL .M z){b:=(x).Ax x a} = NotL .(M{b:=(x).Ax x a}) z" using fs by simp also have "\ \\<^sub>a* NotL .(M[b\c>a]) z" using ih by (auto intro: a_star_congs) finally show "(NotL .M z){b:=(x).Ax x a} \\<^sub>a* (NotL .M z)[b\c>a]" using fs by simp next case (AndR c M d N e b a x) have fs: "c\b" "c\a" "c\x" "d\b" "d\a" "d\x" "d\c" "c\N" "c\e" "d\M" "d\e" by fact+ have ih1: "M{b:=(x).Ax x a} \\<^sub>a* M[b\c>a]" by fact have ih2: "N{b:=(x).Ax x a} \\<^sub>a* N[b\c>a]" by fact show "(AndR .M .N e){b:=(x).Ax x a} \\<^sub>a* (AndR .M .N e)[b\c>a]" proof(cases "e=b") case True assume eq: "e=b" obtain e'::"coname" where new: "e'\(Ax x a,M{b:=(x).Ax x a},N{b:=(x).Ax x a},c,d)" by (rule exists_fresh(2)[OF fs_coname1]) have "(AndR .M .N e){b:=(x).Ax x a} = fresh_fun (\e'. 
Cut .AndR .(M{b:=(x).Ax x a}) .(N{b:=(x).Ax x a}) e' (x).Ax x a)" using eq fs by simp also have "\ = Cut .AndR .(M{b:=(x).Ax x a}) .(N{b:=(x).Ax x a}) e' (x).Ax x a" using new by (simp add: fresh_fun_simp_AndR fresh_prod) also have "\ \\<^sub>a* (AndR .(M{b:=(x).Ax x a}) .(N{b:=(x).Ax x a}) e')[e'\c>a]" using new apply(rule_tac a_starI) apply(rule a_redu.intros) apply(rule better_LAxR_intro) apply(rule fic.intros) apply(simp_all add: abs_fresh) done also have "\ = AndR .(M{b:=(x).Ax x a}) .(N{b:=(x).Ax x a}) a" using fs new by (auto simp add: fresh_prod fresh_atm subst_fresh crename_fresh) also have "\ \\<^sub>a* AndR .(M[b\c>a]) .(N[b\c>a]) a" using ih1 ih2 by (auto intro: a_star_congs) also have "\ = (AndR .M .N e)[b\c>a]" using eq fs by simp finally show "(AndR .M .N e){b:=(x).Ax x a} \\<^sub>a* (AndR .M .N e)[b\c>a]" by simp next case False assume neq: "e\b" have "(AndR .M .N e){b:=(x).Ax x a} = AndR .(M{b:=(x).Ax x a}) .(N{b:=(x).Ax x a}) e" using fs neq by simp also have "\ \\<^sub>a* AndR .(M[b\c>a]) .(N[b\c>a]) e" using ih1 ih2 by (auto intro: a_star_congs) finally show "(AndR .M .N e){b:=(x).Ax x a} \\<^sub>a* (AndR .M .N e)[b\c>a]" using fs neq by simp qed next case (AndL1 u M v b a x) have fs: "u\b" "u\a" "u\x" "u\v" by fact+ have ih: "M{b:=(x).Ax x a} \\<^sub>a* M[b\c>a]" by fact have "(AndL1 (u).M v){b:=(x).Ax x a} = AndL1 (u).(M{b:=(x).Ax x a}) v" using fs by simp also have "\ \\<^sub>a* AndL1 (u).(M[b\c>a]) v" using ih by (auto intro: a_star_congs) finally show "(AndL1 (u).M v){b:=(x).Ax x a} \\<^sub>a* (AndL1 (u).M v)[b\c>a]" using fs by simp next case (AndL2 u M v b a x) have fs: "u\b" "u\a" "u\x" "u\v" by fact+ have ih: "M{b:=(x).Ax x a} \\<^sub>a* M[b\c>a]" by fact have "(AndL2 (u).M v){b:=(x).Ax x a} = AndL2 (u).(M{b:=(x).Ax x a}) v" using fs by simp also have "\ \\<^sub>a* AndL2 (u).(M[b\c>a]) v" using ih by (auto intro: a_star_congs) finally show "(AndL2 (u).M v){b:=(x).Ax x a} \\<^sub>a* (AndL2 (u).M v)[b\c>a]" using fs by simp next case 
(OrR1 c M d b a x) have fs: "c\b" "c\a" "c\x" "c\d" by fact+ have ih: "M{b:=(x).Ax x a} \\<^sub>a* M[b\c>a]" by fact show "(OrR1 .M d){b:=(x).Ax x a} \\<^sub>a* (OrR1 .M d)[b\c>a]" proof(cases "d=b") case True assume eq: "d=b" obtain a'::"coname" where new: "a'\(Ax x a,M{b:=(x).Ax x a},c,x,a)" by (rule exists_fresh(2)[OF fs_coname1]) have "(OrR1 .M d){b:=(x).Ax x a} = fresh_fun (\a'. Cut .OrR1 .M{b:=(x).Ax x a} a' (x).Ax x a)" using fs eq by (simp) also have "\ = Cut .OrR1 .M{b:=(x).Ax x a} a' (x).Ax x a" using new by (simp add: fresh_fun_simp_OrR1) also have "\ \\<^sub>a* (OrR1 .M{b:=(x).Ax x a} a')[a'\c>a]" using new apply(rule_tac a_starI) apply(rule a_redu.intros) apply(rule better_LAxR_intro) apply(rule fic.intros) apply(simp_all add: abs_fresh) done also have "\ = OrR1 .M{b:=(x).Ax x a} a" using fs new by (auto simp add: fresh_prod fresh_atm crename_fresh subst_fresh) also have "\ \\<^sub>a* OrR1 .(M[b\c>a]) a" using ih by (auto intro: a_star_congs) also have "\ = (OrR1 .M d)[b\c>a]" using eq fs by simp finally show "(OrR1 .M d){b:=(x).Ax x a} \\<^sub>a* (OrR1 .M d)[b\c>a]" by simp next case False assume neq: "d\b" have "(OrR1 .M d){b:=(x).Ax x a} = OrR1 .(M{b:=(x).Ax x a}) d" using fs neq by (simp) also have "\ \\<^sub>a* OrR1 .(M[b\c>a]) d" using ih by (auto intro: a_star_congs) finally show "(OrR1 .M d){b:=(x).Ax x a} \\<^sub>a* (OrR1 .M d)[b\c>a]" using fs neq by simp qed next case (OrR2 c M d b a x) have fs: "c\b" "c\a" "c\x" "c\d" by fact+ have ih: "M{b:=(x).Ax x a} \\<^sub>a* M[b\c>a]" by fact show "(OrR2 .M d){b:=(x).Ax x a} \\<^sub>a* (OrR2 .M d)[b\c>a]" proof(cases "d=b") case True assume eq: "d=b" obtain a'::"coname" where new: "a'\(Ax x a,M{b:=(x).Ax x a},c,x,a)" by (rule exists_fresh(2)[OF fs_coname1]) have "(OrR2 .M d){b:=(x).Ax x a} = fresh_fun (\a'. 
Cut .OrR2 .M{b:=(x).Ax x a} a' (x).Ax x a)" using fs eq by (simp) also have "\ = Cut .OrR2 .M{b:=(x).Ax x a} a' (x).Ax x a" using new by (simp add: fresh_fun_simp_OrR2) also have "\ \\<^sub>a* (OrR2 .M{b:=(x).Ax x a} a')[a'\c>a]" using new apply(rule_tac a_starI) apply(rule a_redu.intros) apply(rule better_LAxR_intro) apply(rule fic.intros) apply(simp_all add: abs_fresh) done also have "\ = OrR2 .M{b:=(x).Ax x a} a" using fs new by (auto simp add: fresh_prod fresh_atm crename_fresh subst_fresh) also have "\ \\<^sub>a* OrR2 .(M[b\c>a]) a" using ih by (auto intro: a_star_congs) also have "\ = (OrR2 .M d)[b\c>a]" using eq fs by simp finally show "(OrR2 .M d){b:=(x).Ax x a} \\<^sub>a* (OrR2 .M d)[b\c>a]" by simp next case False assume neq: "d\b" have "(OrR2 .M d){b:=(x).Ax x a} = OrR2 .(M{b:=(x).Ax x a}) d" using fs neq by (simp) also have "\ \\<^sub>a* OrR2 .(M[b\c>a]) d" using ih by (auto intro: a_star_congs) finally show "(OrR2 .M d){b:=(x).Ax x a} \\<^sub>a* (OrR2 .M d)[b\c>a]" using fs neq by simp qed next case (OrL u M v N z b a x) have fs: "u\b" "u\a" "u\x" "v\b" "v\a" "v\x" "v\u" "u\N" "u\z" "v\M" "v\z" by fact+ have ih1: "M{b:=(x).Ax x a} \\<^sub>a* M[b\c>a]" by fact have ih2: "N{b:=(x).Ax x a} \\<^sub>a* N[b\c>a]" by fact have "(OrL (u).M (v).N z){b:=(x).Ax x a} = OrL (u).(M{b:=(x).Ax x a}) (v).(N{b:=(x).Ax x a}) z" using fs by simp also have "\ \\<^sub>a* OrL (u).(M[b\c>a]) (v).(N[b\c>a]) z" using ih1 ih2 by (auto intro: a_star_congs) finally show "(OrL (u).M (v).N z){b:=(x).Ax x a} \\<^sub>a* (OrL (u).M (v).N z)[b\c>a]" using fs by simp next case (ImpR z c M d b a x) have fs: "z\b" "z\a" "z\x" "c\b" "c\a" "c\x" "z\d" "c\d" by fact+ have ih: "M{b:=(x).Ax x a} \\<^sub>a* M[b\c>a]" by fact show "(ImpR (z)..M d){b:=(x).Ax x a} \\<^sub>a* (ImpR (z)..M d)[b\c>a]" proof(cases "b=d") case True assume eq: "b=d" obtain a'::"coname" where new: "a'\(Ax x a,M{b:=(x).Ax x a},x,a,c)" by (rule exists_fresh(2)[OF fs_coname1]) have "(ImpR (z)..M d){b:=(x).Ax x a} = fresh_fun 
(\<lambda>a'. Cut <a'>.ImpR z.<c>.M{b:=(x).Ax x a} a' (x).Ax x a)" using fs eq by simp
  also have "\<dots> = Cut <a'>.ImpR z.<c>.M{b:=(x).Ax x a} a' (x).Ax x a"
    using new by (simp add: fresh_fun_simp_ImpR)
  also have "\<dots> \<longrightarrow>\<^sub>a* (ImpR z.<c>.M{b:=(x).Ax x a} a')[a'\<turnstile>c>a]"
    using new
    apply(rule_tac a_starI)
    apply(rule a_redu.intros)
    apply(rule better_LAxR_intro)
    apply(rule fic.intros)
    apply(simp_all add: abs_fresh)
    done
  also have "\<dots> = ImpR z.<c>.M{b:=(x).Ax x a} a" using fs new
    by (auto simp add: fresh_prod crename_fresh subst_fresh fresh_atm)
  also have "\<dots> \<longrightarrow>\<^sub>a* ImpR z.<c>.(M[b\<turnstile>c>a]) a" using ih by (auto intro: a_star_congs)
  also have "\<dots> = (ImpR z.<c>.M b)[b\<turnstile>c>a]" using eq fs by simp
  finally show "(ImpR (z).<c>.M d){b:=(x).Ax x a} \<longrightarrow>\<^sub>a* (ImpR (z).<c>.M d)[b\<turnstile>c>a]"
    using eq by simp
next
  case False
  assume neq: "b\<noteq>d"
  have "(ImpR (z).<c>.M d){b:=(x).Ax x a} = ImpR (z).<c>.(M{b:=(x).Ax x a}) d" using fs neq by simp
  also have "\<dots> \<longrightarrow>\<^sub>a* ImpR (z).<c>.(M[b\<turnstile>c>a]) d" using ih by (auto intro: a_star_congs)
  finally show "(ImpR (z).<c>.M d){b:=(x).Ax x a} \<longrightarrow>\<^sub>a* (ImpR (z).<c>.M d)[b\<turnstile>c>a]"
    using neq fs by simp
qed
next
  case (ImpL c M u N v b a x)
  have fs: "c\<sharp>b" "c\<sharp>a" "c\<sharp>x" "u\<sharp>b" "u\<sharp>a" "u\<sharp>x" "c\<sharp>N" "c\<sharp>v" "u\<sharp>M" "u\<sharp>v" by fact+
  have ih1: "M{b:=(x).Ax x a} \<longrightarrow>\<^sub>a* M[b\<turnstile>c>a]" by fact
  have ih2: "N{b:=(x).Ax x a} \<longrightarrow>\<^sub>a* N[b\<turnstile>c>a]" by fact
  have "(ImpL <c>.M (u).N v){b:=(x).Ax x a} = ImpL <c>.(M{b:=(x).Ax x a}) (u).(N{b:=(x).Ax x a}) v"
    using fs by simp
  also have "\<dots> \<longrightarrow>\<^sub>a* ImpL <c>.(M[b\<turnstile>c>a]) (u).(N[b\<turnstile>c>a]) v"
    using ih1 ih2 by (auto intro: a_star_congs)
  finally show "(ImpL <c>.M (u).N v){b:=(x).Ax x a} \<longrightarrow>\<^sub>a* (ImpL <c>.M (u).N v)[b\<turnstile>c>a]"
    using fs by simp
qed

text \<open>substitution lemmas\<close>

lemma not_Ax1:
  shows "\<not>(b\<sharp>M) \<Longrightarrow> M{b:=(y).Q} \<noteq> Ax x a"
apply(nominal_induct M avoiding: b y Q x a rule: trm.strong_induct)
apply(auto simp add: fresh_atm abs_fresh abs_supp fin_supp)
apply(subgoal_tac "\<exists>x'::coname.
x'\(trm{coname:=(y).Q},Q)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotR abs_fresh fresh_atm) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm{coname:=(y).Q},Q)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotR abs_fresh fresh_atm) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm1{coname3:=(y).Q},Q,trm2{coname3:=(y).Q},coname1,coname2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndR abs_fresh fresh_atm) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm1{coname3:=(y).Q},Q,trm2{coname3:=(y).Q},coname1,coname2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndR abs_fresh fresh_atm) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm1{coname3:=(y).Q},Q,trm2{coname3:=(y).Q},coname1,coname2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndR abs_fresh fresh_atm) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm{coname2:=(y).Q},Q,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR1 abs_fresh fresh_atm) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm{coname2:=(y).Q},Q,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR1 abs_fresh fresh_atm) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. x'\(trm{coname2:=(y).Q},Q,coname1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrR2 abs_fresh fresh_atm) apply(rule exists_fresh'(2)[OF fs_coname1]) apply(subgoal_tac "\x'::coname. 
x'\<sharp>(trm{coname2:=(y).Q},Q,coname1)")
apply(erule exE)
apply(simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_OrR2 abs_fresh fresh_atm)
apply(rule exists_fresh'(2)[OF fs_coname1])
apply(subgoal_tac "\<exists>x'::coname. x'\<sharp>(trm{coname2:=(y).Q},Q,coname1)")
apply(erule exE)
apply(simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_ImpR abs_fresh abs_supp fin_supp fresh_atm)
apply(rule exists_fresh'(2)[OF fs_coname1])
apply(subgoal_tac "\<exists>x'::coname. x'\<sharp>(trm{coname2:=(y).Q},Q,coname1)")
apply(erule exE)
apply(simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_ImpR abs_fresh abs_supp fin_supp fresh_atm)
apply(rule exists_fresh'(2)[OF fs_coname1])
done

lemma not_Ax2:
  shows "\<not>(x\<sharp>M) \<Longrightarrow> M{x:=<b>.Q} \<noteq> Ax y a"
apply(nominal_induct M avoiding: b y Q x a rule: trm.strong_induct)
apply(auto simp add: fresh_atm abs_fresh abs_supp fin_supp)
apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(trm{x:=<b>.Q},Q)")
apply(erule exE)
apply(simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_NotL abs_fresh fresh_atm)
apply(rule exists_fresh'(1)[OF fs_name1])
apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(trm{x:=<b>.Q},Q)")
apply(erule exE)
apply(simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_NotL abs_fresh fresh_atm)
apply(rule exists_fresh'(1)[OF fs_name1])
apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(trm{x:=<b>.Q},Q,name1)")
apply(erule exE)
apply(simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_AndL1 abs_fresh fresh_atm)
apply(rule exists_fresh'(1)[OF fs_name1])
apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(trm{x:=<b>.Q},Q,name1)")
apply(erule exE)
apply(simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_AndL1 abs_fresh fresh_atm)
apply(rule exists_fresh'(1)[OF fs_name1])
apply(subgoal_tac "\<exists>x'::name.
x'\(trm{x:=.Q},Q,name1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL2 abs_fresh fresh_atm) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm{x:=.Q},Q,name1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL2 abs_fresh fresh_atm) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm1{x:=.Q},Q,trm2{x:=.Q},name1,name2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrL abs_fresh fresh_atm) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm1{x:=.Q},Q,trm2{x:=.Q},name1,name2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrL abs_fresh fresh_atm) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm1{x:=.Q},Q,trm2{x:=.Q},name1,name2)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrL abs_fresh fresh_atm) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm1{name2:=.Q},Q,trm2{name2:=.Q},name1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpL abs_fresh fresh_atm) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(trm1{name2:=.Q},Q,trm2{name2:=.Q},name1)") apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpL abs_fresh fresh_atm) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. 
x'\<sharp>(trm1{name2:=<b>.Q},Q,trm2{name2:=<b>.Q},name1)")
apply(erule exE)
apply(simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_ImpL abs_fresh fresh_atm)
apply(rule exists_fresh'(1)[OF fs_name1])
done

lemma interesting_subst1:
  assumes a: "x\<noteq>y" "x\<sharp>P" "y\<sharp>P"
  shows "N{y:=<c>.P}{x:=<c>.P} = N{x:=<c>.Ax y c}{y:=<c>.P}"
using a
proof(nominal_induct N avoiding: x y c P rule: trm.strong_induct)
  case Ax
  then show ?case by (auto simp add: abs_fresh fresh_atm forget trm.inject)
next
  case (Cut d M u M' x' y' c P)
  with assms show ?case
    apply(simp)
    apply(auto)
    apply(rule trans)
    apply(rule better_Cut_substn)
    apply(simp add: abs_fresh)
    apply(simp)
    apply(simp)
    apply(auto)
    apply(rule sym)
    apply(rule trans)
    apply(rule better_Cut_substn)
    apply(simp add: abs_fresh)
    apply(simp)
    apply(simp)
    apply(simp add: trm.inject alpha forget)
    apply(rule trans)
    apply(rule better_Cut_substn)
    apply(simp add: abs_fresh)
    apply(simp)
    apply(simp)
    apply(auto)
    apply(rule sym)
    apply(rule trans)
    apply(rule better_Cut_substn)
    apply(simp add: abs_fresh)
    apply(simp)
    apply(simp)
    apply(rule impI)
    apply(simp add: trm.inject alpha forget)
    apply(simp add: trm.inject alpha forget)
    apply(rule trans)
    apply(rule better_Cut_substn)
    apply(simp add: abs_fresh)
    apply(simp)
    apply(simp)
    apply(auto)
    apply(rule sym)
    apply(rule trans)
    apply(rule better_Cut_substn)
    apply(simp add: abs_fresh)
    apply(simp)
    apply(simp)
    apply(rule sym)
    apply(rule trans)
    apply(rule better_Cut_substn)
    apply(simp add: abs_fresh)
    apply(simp)
    apply(simp)
    apply(simp add: trm.inject alpha forget)
    apply(rule trans)
    apply(rule better_Cut_substn)
    apply(simp add: abs_fresh)
    apply(simp)
    apply(simp)
    apply(rule sym)
    apply(rule trans)
    apply(rule better_Cut_substn)
    apply(simp add: abs_fresh)
    apply(simp)
    apply(simp)
    apply(rule trans)
    apply(rule better_Cut_substn)
    apply(simp add: abs_fresh)
    apply(simp)
    apply(simp)
    apply(auto)
    apply(case_tac "y'\<sharp>M")
    apply(simp add: forget)
    apply(simp add: not_Ax2)
    apply(rule sym)
    apply(rule trans)
    apply(rule better_Cut_substn)
apply(simp add: abs_fresh) apply(simp) apply(simp) apply(auto) apply(case_tac "x'\M") apply(simp add: forget) apply(simp add: not_Ax2) done next case NotR then show ?case by (auto simp add: abs_fresh fresh_atm forget) next case (NotL d M u) then show ?case apply (auto simp add: abs_fresh fresh_atm forget) apply(subgoal_tac "\x'::name. x'\(P,M{y:=.P},M{x:=.Ax y c}{y:=.P},y,x)") apply(erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_NotL) apply(rule trans) apply(rule better_Cut_substn) apply(simp add: abs_fresh) apply(simp) apply(simp) apply(auto simp add: fresh_atm) apply(simp add: trm.inject alpha forget) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(P,M{x:=.Ax y c},M{x:=.Ax y c}{y:=.P},Ax y c,y,x)") apply(erule exE, simp only: fresh_prod) apply(erule conjE)+ apply(simp only: fresh_fun_simp_NotL) apply(rule sym) apply(rule trans) apply(rule better_Cut_substn) apply(simp add: abs_fresh) apply(simp) apply(simp) apply(simp add: trm.inject alpha forget subst_fresh) apply(rule trans) apply(rule substn.simps) apply(simp add: abs_fresh fresh_prod fresh_atm) apply(simp add: fresh_atm) apply(rule exists_fresh'(1)[OF fs_name1]) done next case (AndR d1 M d2 M' d3) then show ?case by (auto simp add: abs_fresh fresh_atm forget trm.inject subst_fresh) next case (AndL1 u M d) then show ?case apply(auto simp add: abs_fresh fresh_atm forget trm.inject subst_fresh) apply(subgoal_tac "\x'::name. x'\(P,M{y:=.P},M{x:=.Ax y c}{y:=.P},u,y,x)") apply(erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL1) apply(rule trans) apply(rule better_Cut_substn) apply(simp add: abs_fresh) apply(simp) apply(simp) apply(auto simp add: fresh_atm) apply(simp add: trm.inject alpha forget) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. 
x'\(P,Ax y c,M{x:=.Ax y c},M{x:=.Ax y c}{y:=.P},u,y,x)") apply(erule exE, simp only: fresh_prod) apply(erule conjE)+ apply(simp only: fresh_fun_simp_AndL1) apply(rule sym) apply(rule trans) apply(rule better_Cut_substn) apply(simp add: abs_fresh) apply(simp) apply(simp) apply(auto simp add: fresh_atm) apply(rule exists_fresh'(1)[OF fs_name1]) done next case (AndL2 u M d) then show ?case apply(auto simp add: abs_fresh fresh_atm forget trm.inject subst_fresh) apply(subgoal_tac "\x'::name. x'\(P,M{y:=.P},M{x:=.Ax y c}{y:=.P},u,y,x)") apply(erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL2) apply(rule trans) apply(rule better_Cut_substn) apply(simp add: abs_fresh) apply(simp) apply(simp) apply(auto simp add: fresh_atm) apply(simp add: trm.inject alpha forget) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(P,Ax y c,M{x:=.Ax y c},M{x:=.Ax y c}{y:=.P},u,y,x)") apply(erule exE, simp only: fresh_prod) apply(erule conjE)+ apply(simp only: fresh_fun_simp_AndL2) apply(rule sym) apply(rule trans) apply(rule better_Cut_substn) apply(simp add: abs_fresh) apply(simp) apply(simp) apply(auto simp add: fresh_atm) apply(rule exists_fresh'(1)[OF fs_name1]) done next case OrR1 then show ?case by (auto simp add: abs_fresh fresh_atm forget trm.inject subst_fresh) next case OrR2 then show ?case by (auto simp add: abs_fresh fresh_atm forget trm.inject subst_fresh) next case (OrL x1 M x2 M' x3) then show ?case apply(auto simp add: abs_fresh fresh_atm forget trm.inject subst_fresh) apply(subgoal_tac "\x'::name. 
x'\(P,M{y:=.P},M{x:=.Ax y c}{y:=.P}, M'{y:=.P},M'{x:=.Ax y c}{y:=.P},x1,x2,x3,y,x)") apply(erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrL) apply(rule trans) apply(rule better_Cut_substn) apply(simp add: abs_fresh) apply(simp) apply(simp) apply(auto simp add: fresh_atm) apply(simp add: trm.inject alpha forget) apply(rule trans) apply(rule substn.simps) apply(simp add: abs_fresh subst_fresh fresh_atm) apply(simp add: abs_fresh subst_fresh fresh_atm) apply(force) apply(simp) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. x'\(P,Ax y c,M{x:=.Ax y c},M{x:=.Ax y c}{y:=.P}, M'{x:=.Ax y c},M'{x:=.Ax y c}{y:=.P},x1,x2,x3,y,x)") apply(erule exE, simp only: fresh_prod) apply(erule conjE)+ apply(simp only: fresh_fun_simp_OrL) apply(rule sym) apply(rule trans) apply(rule better_Cut_substn) apply(simp add: abs_fresh) apply(simp) apply(simp) apply(simp add: trm.inject alpha) apply(rule trans) apply(rule substn.simps) apply(simp add: abs_fresh subst_fresh fresh_atm) apply(simp add: abs_fresh subst_fresh fresh_atm) apply(force) apply(simp) apply(auto simp add: fresh_atm) apply(rule exists_fresh'(1)[OF fs_name1]) done next case ImpR then show ?case by (auto simp add: abs_fresh fresh_atm forget trm.inject subst_fresh) next case (ImpL a M x1 M' x2) then show ?case apply(auto simp add: abs_fresh fresh_atm forget trm.inject subst_fresh) apply(subgoal_tac "\x'::name. 
x'\<sharp>(P,M{x2:=<c>.P},M{x:=<c>.Ax x2 c}{x2:=<c>.P},
     M'{x2:=<c>.P},M'{x:=<c>.Ax x2 c}{x2:=<c>.P},x1,y,x)")
apply(erule exE, simp add: fresh_prod)
apply(erule conjE)+
apply(simp add: fresh_fun_simp_ImpL)
apply(rule trans)
apply(rule better_Cut_substn)
apply(simp add: abs_fresh)
apply(simp)
apply(simp)
apply(auto simp add: fresh_atm)
apply(simp add: trm.inject alpha forget)
apply(rule trans)
apply(rule substn.simps)
apply(simp add: abs_fresh subst_fresh fresh_atm)
apply(simp add: abs_fresh subst_fresh fresh_atm)
apply(force)
apply(rule exists_fresh'(1)[OF fs_name1])
apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(P,Ax y c,M{x2:=<c>.Ax y c},M{x2:=<c>.Ax y c}{y:=<c>.P},
     M'{x2:=<c>.Ax y c},M'{x2:=<c>.Ax y c}{y:=<c>.P},x1,x2,y,x)")
apply(erule exE, simp only: fresh_prod)
apply(erule conjE)+
apply(simp only: fresh_fun_simp_ImpL)
apply(rule sym)
apply(rule trans)
apply(rule better_Cut_substn)
apply(simp add: abs_fresh)
apply(simp)
apply(simp)
apply(simp add: trm.inject alpha)
apply(rule trans)
apply(rule substn.simps)
apply(simp add: abs_fresh subst_fresh fresh_atm)
apply(simp add: abs_fresh subst_fresh fresh_atm)
apply(simp)
apply(auto simp add: fresh_atm)
apply(rule exists_fresh'(1)[OF fs_name1])
done
qed

lemma interesting_subst1':
  assumes a: "x\<noteq>y" "x\<sharp>P" "y\<sharp>P"
  shows "N{y:=<c>.P}{x:=<c>.P} = N{x:=<a>.Ax y a}{y:=<c>.P}"
proof -
  show ?thesis
  proof (cases "c=a")
    case True
    then show ?thesis using a by (simp add: interesting_subst1)
  next
    case False
    then show ?thesis using a
      apply -
      apply(subgoal_tac "N{x:=<a>.Ax y a} = N{x:=<c>.([(c,a)]\<bullet>Ax y a)}")
      apply(simp add: interesting_subst1 calc_atm)
      apply(rule subst_rename)
      apply(simp add: fresh_prod fresh_atm)
      done
  qed
qed

lemma interesting_subst2:
  assumes a: "a\<noteq>b" "a\<sharp>P" "b\<sharp>P"
  shows "N{a:=(y).P}{b:=(y).P} = N{b:=(y).Ax y a}{a:=(y).P}"
using a
proof(nominal_induct N avoiding: a b y P rule: trm.strong_induct)
  case Ax
  then show ?case by (auto simp add: abs_fresh fresh_atm forget trm.inject)
next
  case (Cut d M u M' x' y' c P)
  with assms show ?case
    apply(simp)
    apply(auto simp add: trm.inject)
    apply(rule
trans) apply(rule better_Cut_substc)
    apply(simp) apply(simp add: abs_fresh) apply(simp add: forget)
    apply(auto)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: abs_fresh) apply(simp add: abs_fresh) apply(simp)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: abs_fresh) apply(simp add: abs_fresh) apply(simp)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: abs_fresh) apply(simp add: abs_fresh) apply(simp)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: abs_fresh) apply(simp add: abs_fresh) apply(simp)
    apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: abs_fresh) apply(simp add: abs_fresh) apply(auto)[1]
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: abs_fresh) apply(simp add: abs_fresh) apply(simp)
    apply(rule impI)
    apply(simp add: fresh_atm trm.inject alpha forget)
    apply(case_tac "x'\<sharp>M'")
    apply(simp add: forget)
    apply(simp add: not_Ax1)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: abs_fresh) apply(simp add: abs_fresh) apply(auto)
    apply(case_tac "y'\<sharp>M'")
    apply(simp add: forget)
    apply(simp add: not_Ax1)
    done
next
  case NotL
  then show ?case by (auto simp add: abs_fresh fresh_atm forget)
next
  case (NotR u M d)
  then show ?case
    apply (auto simp add: abs_fresh fresh_atm forget)
    apply(subgoal_tac "\<exists>a'::coname. a'\<sharp>(b,P,M{d:=(y).P},M{b:=(y).Ax y d}{d:=(y).P},u,y)")
    apply(erule exE, simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_NotR)
    apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: abs_fresh) apply(simp add: abs_fresh) apply(simp)
    apply(auto simp add: fresh_atm)
    apply(simp add: trm.inject alpha forget)
    apply(rule exists_fresh'(2)[OF fs_coname1])
    apply(subgoal_tac "\<exists>a'::coname.
a'\<sharp>(P,M{d:=(y).Ax y a},M{d:=(y).Ax y a}{a:=(y).P},Ax y a,y,d)")
    apply(erule exE, simp only: fresh_prod) apply(erule conjE)+
    apply(simp only: fresh_fun_simp_NotR)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: abs_fresh) apply(simp add: abs_fresh) apply(simp)
    apply(simp add: trm.inject alpha forget subst_fresh)
    apply(rule trans) apply(rule substc.simps)
    apply(simp add: abs_fresh fresh_prod fresh_atm)
    apply(simp add: fresh_atm)
    apply(rule exists_fresh'(2)[OF fs_coname1])
    done
next
  case (AndR d1 M d2 M' d3)
  then show ?case
    apply(auto simp add: abs_fresh fresh_atm forget trm.inject subst_fresh)
    apply(subgoal_tac "\<exists>a'::coname. a'\<sharp>(P,M{d3:=(y).P},M{b:=(y).Ax y d3}{d3:=(y).P},
           M'{d3:=(y).P},M'{b:=(y).Ax y d3}{d3:=(y).P},d1,d2,d3,b,y)")
    apply(erule exE, simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_AndR)
    apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: abs_fresh fresh_atm) apply(simp add: abs_fresh fresh_atm) apply(simp)
    apply(auto simp add: fresh_atm)
    apply(simp add: trm.inject alpha forget)
    apply(rule trans) apply(rule substc.simps)
    apply(simp add: abs_fresh subst_fresh fresh_atm)
    apply(simp add: abs_fresh subst_fresh fresh_atm)
    apply(force) apply(simp)
    apply(rule exists_fresh'(2)[OF fs_coname1])
    apply(subgoal_tac "\<exists>a'::coname.
a'\<sharp>(P,Ax y a,M{d3:=(y).Ax y a},M{d3:=(y).Ax y a}{a:=(y).P},
           M'{d3:=(y).Ax y a},M'{d3:=(y).Ax y a}{a:=(y).P},d1,d2,d3,y,b)")
    apply(erule exE, simp only: fresh_prod) apply(erule conjE)+
    apply(simp only: fresh_fun_simp_AndR)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: abs_fresh) apply(simp add: abs_fresh) apply(simp)
    apply(simp add: trm.inject alpha)
    apply(rule trans) apply(rule substc.simps)
    apply(simp add: abs_fresh subst_fresh fresh_atm)
    apply(simp add: abs_fresh subst_fresh fresh_atm)
    apply(force) apply(simp)
    apply(auto simp add: fresh_atm)
    apply(rule exists_fresh'(2)[OF fs_coname1])
    done
next
  case (AndL1 u M d)
  then show ?case by (auto simp add: abs_fresh fresh_atm forget trm.inject subst_fresh)
next
  case (AndL2 u M d)
  then show ?case by (auto simp add: abs_fresh fresh_atm forget trm.inject subst_fresh)
next
  case (OrR1 d M e)
  then show ?case
    apply (auto simp add: abs_fresh fresh_atm forget)
    apply(subgoal_tac "\<exists>a'::coname. a'\<sharp>(b,P,M{e:=(y).P},M{b:=(y).Ax y e}{e:=(y).P},d,e)")
    apply(erule exE, simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_OrR1)
    apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: abs_fresh) apply(simp add: abs_fresh) apply(simp)
    apply(auto simp add: fresh_atm)
    apply(simp add: trm.inject alpha forget)
    apply(rule exists_fresh'(2)[OF fs_coname1])
    apply(subgoal_tac "\<exists>a'::coname.
a'\<sharp>(b,P,Ax y a,M{e:=(y).Ax y a},M{e:=(y).Ax y a}{a:=(y).P},d,e)")
    apply(erule exE, simp only: fresh_prod) apply(erule conjE)+
    apply(simp only: fresh_fun_simp_OrR1)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: abs_fresh) apply(simp add: abs_fresh) apply(simp)
    apply(simp add: trm.inject alpha forget subst_fresh)
    apply(rule trans) apply(rule substc.simps)
    apply(simp add: abs_fresh fresh_prod fresh_atm)
    apply(simp add: fresh_atm)
    apply(rule exists_fresh'(2)[OF fs_coname1])
    done
next
  case (OrR2 d M e)
  then show ?case
    apply (auto simp add: abs_fresh fresh_atm forget)
    apply(subgoal_tac "\<exists>a'::coname. a'\<sharp>(b,P,M{e:=(y).P},M{b:=(y).Ax y e}{e:=(y).P},d,e)")
    apply(erule exE, simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_OrR2)
    apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: abs_fresh) apply(simp add: abs_fresh) apply(simp)
    apply(auto simp add: fresh_atm)
    apply(simp add: trm.inject alpha forget)
    apply(rule exists_fresh'(2)[OF fs_coname1])
    apply(subgoal_tac "\<exists>a'::coname. a'\<sharp>(b,P,Ax y a,M{e:=(y).Ax y a},M{e:=(y).Ax y a}{a:=(y).P},d,e)")
    apply(erule exE, simp only: fresh_prod) apply(erule conjE)+
    apply(simp only: fresh_fun_simp_OrR2)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: abs_fresh) apply(simp add: abs_fresh) apply(simp)
    apply(simp add: trm.inject alpha forget subst_fresh)
    apply(rule trans) apply(rule substc.simps)
    apply(simp add: abs_fresh fresh_prod fresh_atm)
    apply(simp add: fresh_atm)
    apply(rule exists_fresh'(2)[OF fs_coname1])
    done
next
  case (OrL x1 M x2 M' x3)
  then show ?case by(auto simp add: abs_fresh fresh_atm forget trm.inject subst_fresh)
next
  case ImpL
  then show ?case by (auto simp add: abs_fresh fresh_atm forget trm.inject subst_fresh)
next
  case (ImpR u e M d)
  then show ?case
    apply(auto simp add: abs_fresh fresh_atm forget trm.inject subst_fresh)
    apply(subgoal_tac "\<exists>a'::coname.
a'\<sharp>(b,e,d,P,M{d:=(y).P},M{b:=(y).Ax y d}{d:=(y).P})")
    apply(erule exE, simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_ImpR)
    apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: abs_fresh) apply(simp add: abs_fresh) apply(simp)
    apply(auto simp add: fresh_atm)
    apply(simp add: trm.inject alpha forget)
    apply(rule exists_fresh'(2)[OF fs_coname1])
    apply(subgoal_tac "\<exists>a'::coname. a'\<sharp>(e,d,P,Ax y a,M{d:=(y).Ax y a},M{d:=(y).Ax y a}{a:=(y).P})")
    apply(erule exE, simp only: fresh_prod) apply(erule conjE)+
    apply(simp only: fresh_fun_simp_ImpR)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: abs_fresh) apply(simp add: abs_fresh) apply(simp)
    apply(simp add: trm.inject alpha)
    apply(rule trans) apply(rule substc.simps)
    apply(simp add: abs_fresh subst_fresh fresh_atm)
    apply(simp add: abs_fresh subst_fresh fresh_atm)
    apply(simp)
    apply(auto simp add: fresh_atm)
    apply(rule exists_fresh'(2)[OF fs_coname1])
    done
qed

lemma interesting_subst2':
  assumes a: "a\<noteq>b" "a\<sharp>P" "b\<sharp>P"
  shows "N{a:=(y).P}{b:=(y).P} = N{b:=(z).Ax z a}{a:=(y).P}"
proof -
  show ?thesis
  proof (cases "z=y")
    case True
    then show ?thesis using a by (simp add: interesting_subst2)
  next
    case False
    then show ?thesis using a
      apply -
      apply(subgoal_tac "N{b:=(z).Ax z a} = N{b:=(y).([(y,z)]\<bullet>Ax z a)}")
      apply(simp add: interesting_subst2 calc_atm)
      apply(rule subst_rename)
      apply(simp add: fresh_prod fresh_atm)
      done
  qed
qed

lemma subst_subst1:
  assumes a: "a\<sharp>(Q,b)" "x\<sharp>(y,P,Q)" "b\<sharp>Q" "y\<sharp>P"
  shows "M{x:=<a>.P}{b:=(y).Q} = M{b:=(y).Q}{x:=<a>.(P{b:=(y).Q})}"
using a
proof(nominal_induct M avoiding: x a P b y Q rule: trm.strong_induct)
  case (Ax z c)
  have fs: "a\<sharp>(Q,b)" "x\<sharp>(y,P,Q)" "b\<sharp>Q" "y\<sharp>P" by fact+
  { assume asm: "z=x \<and> c=b"
    have "(Ax x b){x:=<a>.P}{b:=(y).Q} = (Cut <a>.P (x).Ax x b){b:=(y).Q}" using fs by simp
    also have "\<dots> = Cut <a>.(P{b:=(y).Q}) (y).Q" using fs by (simp_all add: fresh_prod fresh_atm)
    also have "\<dots> = Cut <a>.(P{b:=(y).Q}) (y).(Q{x:=<a>.(P{b:=(y).Q})})" using fs by (simp add:
forget)
    also have "\<dots> = (Cut <a>.Ax x b (y).Q){x:=<a>.(P{b:=(y).Q})}" using fs asm
      by (auto simp add: fresh_prod fresh_atm subst_fresh)
    also have "\<dots> = (Ax x b){b:=(y).Q}{x:=<a>.(P{b:=(y).Q})}" using fs by simp
    finally have "(Ax z c){x:=<a>.P}{b:=(y).Q} = (Ax z c){b:=(y).Q}{x:=<a>.(P{b:=(y).Q})}" using asm by simp
  }
  moreover
  { assume asm: "z\<noteq>x \<and> c=b"
    have "(Ax z c){x:=<a>.P}{b:=(y).Q} = (Ax z c){b:=(y).Q}" using asm by simp
    also have "\<dots> = Cut <c>.Ax z c (y).Q" using fs asm by simp
    also have "\<dots> = Cut <c>.(Ax z c{x:=<a>.(P{b:=(y).Q})}) (y).(Q{x:=<a>.(P{b:=(y).Q})})"
      using fs asm by (simp add: forget)
    also have "\<dots> = (Cut <c>.Ax z c (y).Q){x:=<a>.(P{b:=(y).Q})}" using asm fs
      by (auto simp add: trm.inject subst_fresh fresh_prod fresh_atm abs_fresh)
    also have "\<dots> = (Ax z c){b:=(y).Q}{x:=<a>.(P{b:=(y).Q})}" using asm fs by simp
    finally have "(Ax z c){x:=<a>.P}{b:=(y).Q} = (Ax z c){b:=(y).Q}{x:=<a>.(P{b:=(y).Q})}" by simp
  }
  moreover
  { assume asm: "z=x \<and> c\<noteq>b"
    have "(Ax z c){x:=<a>.P}{b:=(y).Q} = (Cut <a>.P (x).Ax z c){b:=(y).Q}" using fs asm by simp
    also have "\<dots> = Cut <a>.(P{b:=(y).Q}) (x).Ax z c" using fs asm by (auto simp add: trm.inject abs_fresh)
    also have "\<dots> = (Ax z c){x:=<a>.(P{b:=(y).Q})}" using fs asm by simp
    also have "\<dots> = (Ax z c){b:=(y).Q}{x:=<a>.(P{b:=(y).Q})}" using asm by auto
    finally have "(Ax z c){x:=<a>.P}{b:=(y).Q} = (Ax z c){b:=(y).Q}{x:=<a>.(P{b:=(y).Q})}" by simp
  }
  moreover
  { assume asm: "z\<noteq>x \<and> c\<noteq>b"
    have "(Ax z c){x:=<a>.P}{b:=(y).Q} = (Ax z c){b:=(y).Q}{x:=<a>.(P{b:=(y).Q})}" using asm by auto
  }
  ultimately show ?case by blast
next
  case (Cut c M z N)
  { assume asm: "M = Ax x c \<and> N = Ax z b"
    have "(Cut <c>.M (z).N){x:=<a>.P}{b:=(y).Q} = (Cut <a>.P (z).(N{x:=<a>.P})){b:=(y).Q}"
      using Cut asm by simp
    also have "\<dots> = (Cut <a>.P (z).N){b:=(y).Q}" using Cut asm by (simp add: fresh_atm)
    also have "\<dots> = (Cut <a>.(P{b:=(y).Q}) (y).Q)" using Cut asm by (auto simp add: fresh_prod fresh_atm)
    finally have eq1: "(Cut <c>.M (z).N){x:=<a>.P}{b:=(y).Q} = (Cut <a>.(P{b:=(y).Q}) (y).Q)" by simp
    have "(Cut <c>.M (z).N){b:=(y).Q}{x:=<a>.(P{b:=(y).Q})} = (Cut <c>.M
(y).Q){x:=<a>.(P{b:=(y).Q})}" using Cut asm by (simp add: fresh_atm)
    also have "\<dots> = Cut <a>.(P{b:=(y).Q}) (y).(Q{x:=<a>.(P{b:=(y).Q})})" using Cut asm
      by (auto simp add: fresh_prod fresh_atm subst_fresh)
    also have "\<dots> = Cut <a>.(P{b:=(y).Q}) (y).Q" using Cut asm by (simp add: forget)
    finally have eq2: "(Cut <c>.M (z).N){b:=(y).Q}{x:=<a>.(P{b:=(y).Q})} = Cut <a>.(P{b:=(y).Q}) (y).Q" by simp
    have "(Cut <c>.M (z).N){x:=<a>.P}{b:=(y).Q} = (Cut <c>.M (z).N){b:=(y).Q}{x:=<a>.(P{b:=(y).Q})}"
      using eq1 eq2 by simp
  }
  moreover
  { assume asm: "M \<noteq> Ax x c \<and> N = Ax z b"
    have neq: "M{b:=(y).Q} \<noteq> Ax x c"
    proof (cases "b\<sharp>M")
      case True then show ?thesis using asm by (simp add: forget)
    next
      case False then show ?thesis by (simp add: not_Ax1)
    qed
    have "(Cut <c>.M (z).N){x:=<a>.P}{b:=(y).Q} = (Cut <c>.(M{x:=<a>.P}) (z).(N{x:=<a>.P})){b:=(y).Q}"
      using Cut asm by simp
    also have "\<dots> = (Cut <c>.(M{x:=<a>.P}) (z).N){b:=(y).Q}" using Cut asm by (simp add: fresh_atm)
    also have "\<dots> = Cut <c>.(M{x:=<a>.P}{b:=(y).Q}) (y).Q" using Cut asm by (simp add: abs_fresh)
    also have "\<dots> = Cut <c>.(M{b:=(y).Q}{x:=<a>.P{b:=(y).Q}}) (y).Q" using Cut asm by simp
    finally have eq1: "(Cut <c>.M (z).N){x:=<a>.P}{b:=(y).Q} = Cut <c>.(M{b:=(y).Q}{x:=<a>.P{b:=(y).Q}}) (y).Q" by simp
    have "(Cut <c>.M (z).N){b:=(y).Q}{x:=<a>.(P{b:=(y).Q})} = (Cut <c>.(M{b:=(y).Q}) (y).Q){x:=<a>.(P{b:=(y).Q})}"
      using Cut asm by simp
    also have "\<dots> = Cut <c>.(M{b:=(y).Q}{x:=<a>.(P{b:=(y).Q})}) (y).(Q{x:=<a>.(P{b:=(y).Q})})" using Cut asm neq
      by (auto simp add: fresh_prod fresh_atm subst_fresh abs_fresh)
    also have "\<dots> = Cut <c>.(M{b:=(y).Q}{x:=<a>.(P{b:=(y).Q})}) (y).Q" using Cut asm by (simp add: forget)
    finally have eq2: "(Cut <c>.M (z).N){b:=(y).Q}{x:=<a>.(P{b:=(y).Q})} = Cut <c>.(M{b:=(y).Q}{x:=<a>.(P{b:=(y).Q})}) (y).Q" by simp
    have "(Cut <c>.M (z).N){x:=<a>.P}{b:=(y).Q} = (Cut <c>.M (z).N){b:=(y).Q}{x:=<a>.(P{b:=(y).Q})}"
      using eq1 eq2 by simp
  }
  moreover
  { assume asm: "M = Ax x c \<and> N \<noteq> Ax z b"
    have neq: "N{x:=<a>.P} \<noteq> Ax z b"
    proof (cases "x\<sharp>N")
      case True then show ?thesis using asm by (simp add: forget)
    next
      case False then show ?thesis by (simp add: not_Ax2)
    qed
    have
"(Cut <c>.M (z).N){x:=<a>.P}{b:=(y).Q} = (Cut <a>.P (z).(N{x:=<a>.P})){b:=(y).Q}" using Cut asm by simp
    also have "\<dots> = Cut <a>.(P{b:=(y).Q}) (z).(N{x:=<a>.P}{b:=(y).Q})" using Cut asm neq by (simp add: abs_fresh)
    also have "\<dots> = Cut <a>.(P{b:=(y).Q}) (z).(N{b:=(y).Q}{x:=<a>.(P{b:=(y).Q})})" using Cut asm by simp
    finally have eq1: "(Cut <c>.M (z).N){x:=<a>.P}{b:=(y).Q} = Cut <a>.(P{b:=(y).Q}) (z).(N{b:=(y).Q}{x:=<a>.(P{b:=(y).Q})})" by simp
    have "(Cut <c>.M (z).N){b:=(y).Q}{x:=<a>.(P{b:=(y).Q})} = (Cut <c>.(M{b:=(y).Q}) (z).(N{b:=(y).Q})){x:=<a>.(P{b:=(y).Q})}"
      using Cut asm by auto
    also have "\<dots> = (Cut <c>.M (z).(N{b:=(y).Q})){x:=<a>.(P{b:=(y).Q})}" using Cut asm by (auto simp add: fresh_atm)
    also have "\<dots> = Cut <a>.(P{b:=(y).Q}) (z).(N{b:=(y).Q}{x:=<a>.(P{b:=(y).Q})})" using Cut asm
      by (simp add: fresh_prod fresh_atm subst_fresh)
    finally have eq2: "(Cut <c>.M (z).N){b:=(y).Q}{x:=<a>.(P{b:=(y).Q})} = Cut <a>.(P{b:=(y).Q}) (z).(N{b:=(y).Q}{x:=<a>.(P{b:=(y).Q})})" by simp
    have "(Cut <c>.M (z).N){x:=<a>.P}{b:=(y).Q} = (Cut <c>.M (z).N){b:=(y).Q}{x:=<a>.(P{b:=(y).Q})}"
      using eq1 eq2 by simp
  }
  moreover
  { assume asm: "M \<noteq> Ax x c \<and> N \<noteq> Ax z b"
    have neq1: "N{x:=<a>.P} \<noteq> Ax z b"
    proof (cases "x\<sharp>N")
      case True then show ?thesis using asm by (simp add: forget)
    next
      case False then show ?thesis by (simp add: not_Ax2)
    qed
    have neq2: "M{b:=(y).Q} \<noteq> Ax x c"
    proof (cases "b\<sharp>M")
      case True then show ?thesis using asm by (simp add: forget)
    next
      case False then show ?thesis by (simp add: not_Ax1)
    qed
    have "(Cut <c>.M (z).N){x:=<a>.P}{b:=(y).Q} = (Cut <c>.(M{x:=<a>.P}) (z).(N{x:=<a>.P})){b:=(y).Q}"
      using Cut asm by simp
    also have "\<dots> = Cut <c>.(M{x:=<a>.P}{b:=(y).Q}) (z).(N{x:=<a>.P}{b:=(y).Q})" using Cut asm neq1 by (simp add: abs_fresh)
    also have "\<dots> = Cut <c>.(M{b:=(y).Q}{x:=<a>.(P{b:=(y).Q})}) (z).(N{b:=(y).Q}{x:=<a>.(P{b:=(y).Q})})" using Cut asm by simp
    finally have eq1: "(Cut <c>.M (z).N){x:=<a>.P}{b:=(y).Q} = Cut <c>.(M{b:=(y).Q}{x:=<a>.(P{b:=(y).Q})}) (z).(N{b:=(y).Q}{x:=<a>.(P{b:=(y).Q})})" by simp
    have "(Cut <c>.M (z).N){b:=(y).Q}{x:=<a>.(P{b:=(y).Q})} = (Cut <c>.(M{b:=(y).Q}) (z).(N{b:=(y).Q})){x:=<a>.(P{b:=(y).Q})}" using
Cut asm neq1 by simp
    also have "\<dots> = Cut <c>.(M{b:=(y).Q}{x:=<a>.(P{b:=(y).Q})}) (z).(N{b:=(y).Q}{x:=<a>.(P{b:=(y).Q})})" using Cut asm neq2
      by (simp add: fresh_prod fresh_atm subst_fresh)
    finally have eq2: "(Cut <c>.M (z).N){b:=(y).Q}{x:=<a>.(P{b:=(y).Q})} = Cut <c>.(M{b:=(y).Q}{x:=<a>.(P{b:=(y).Q})}) (z).(N{b:=(y).Q}{x:=<a>.(P{b:=(y).Q})})" by simp
    have "(Cut <c>.M (z).N){x:=<a>.P}{b:=(y).Q} = (Cut <c>.M (z).N){b:=(y).Q}{x:=<a>.(P{b:=(y).Q})}"
      using eq1 eq2 by simp
  }
  ultimately show ?case by blast
next
  case (NotR z M c)
  then show ?case
    apply(auto simp add: fresh_prod fresh_atm subst_fresh)
    apply(subgoal_tac "\<exists>a'::coname. a'\<sharp>(M{c:=(y).Q},M{c:=(y).Q}{x:=<a>.P{c:=(y).Q}},Q,a,P,c,y)")
    apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_NotR abs_fresh fresh_atm)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substn)
    apply(simp add: fresh_prod fresh_atm subst_fresh abs_fresh)
    apply(simp add: fresh_prod fresh_atm subst_fresh abs_fresh)
    apply(simp add: forget)
    apply(simp add: fresh_prod fresh_atm subst_fresh)
    apply(rule exists_fresh'(2)[OF fs_coname1])
    done
next
  case (NotL c M z)
  then show ?case
    apply(auto simp add: fresh_prod fresh_atm subst_fresh)
    apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(P,M{x:=<a>.P},P{b:=(y).Q},M{b:=(y).Q}{x:=<a>.P{b:=(y).Q}},y,Q)")
    apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_NotL abs_fresh fresh_atm)
    apply(rule exists_fresh'(1)[OF fs_name1])
    done
next
  case (AndR c1 M c2 N c3)
  then show ?case
    apply(auto simp add: fresh_prod fresh_atm subst_fresh)
    apply(subgoal_tac "\<exists>a'::coname.
a'\<sharp>(Q,M{c3:=(y).Q},M{c3:=(y).Q}{x:=<a>.P{c3:=(y).Q}},c2,c3,a,
           P{c3:=(y).Q},N{c3:=(y).Q},N{c3:=(y).Q}{x:=<a>.P{c3:=(y).Q}},c1)")
    apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_AndR abs_fresh fresh_atm)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substn)
    apply(simp_all add: fresh_atm abs_fresh subst_fresh)
    apply(simp add: forget)
    apply(rule exists_fresh'(2)[OF fs_coname1])
    done
next
  case (AndL1 z1 M z2)
  then show ?case
    apply(auto simp add: fresh_prod fresh_atm subst_fresh)
    apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(P,M{x:=<a>.P},P{b:=(y).Q},z1,y,Q,M{b:=(y).Q}{x:=<a>.P{b:=(y).Q}})")
    apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_AndL1 abs_fresh fresh_atm)
    apply(rule exists_fresh'(1)[OF fs_name1])
    done
next
  case (AndL2 z1 M z2)
  then show ?case
    apply(auto simp add: fresh_prod fresh_atm subst_fresh)
    apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(P,M{x:=<a>.P},P{b:=(y).Q},z1,y,Q,M{b:=(y).Q}{x:=<a>.P{b:=(y).Q}})")
    apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_AndL2 abs_fresh fresh_atm)
    apply(rule exists_fresh'(1)[OF fs_name1])
    done
next
  case (OrL z1 M z2 N z3)
  then show ?case
    apply(auto simp add: fresh_prod fresh_atm subst_fresh)
    apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(P,M{x:=<a>.P},M{b:=(y).Q}{x:=<a>.P{b:=(y).Q}},z2,z3,a,y,Q,
           P{b:=(y).Q},N{x:=<a>.P},N{b:=(y).Q}{x:=<a>.P{b:=(y).Q}},z1)")
    apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_OrL abs_fresh fresh_atm)
    apply(simp add: trm.inject alpha)
    apply(rule trans) apply(rule substc.simps)
    apply(simp_all add: fresh_atm subst_fresh)
    apply(rule exists_fresh'(1)[OF fs_name1])
    done
next
  case (OrR1 c1 M c2)
  then show ?case
    apply(auto simp add: fresh_prod fresh_atm subst_fresh)
    apply(subgoal_tac "\<exists>a'::coname.
a'\<sharp>(Q,M{c2:=(y).Q},a,P{c2:=(y).Q},c1,
           M{c2:=(y).Q}{x:=<a>.P{c2:=(y).Q}})")
    apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_OrR1 abs_fresh fresh_atm)
    apply(simp_all add: fresh_atm subst_fresh abs_fresh)
    apply(simp add: forget)
    apply(rule exists_fresh'(2)[OF fs_coname1])
    done
next
  case (OrR2 c1 M c2)
  then show ?case
    apply(auto simp add: fresh_prod fresh_atm subst_fresh)
    apply(subgoal_tac "\<exists>a'::coname. a'\<sharp>(Q,M{c2:=(y).Q},a,P{c2:=(y).Q},c1,
           M{c2:=(y).Q}{x:=<a>.P{c2:=(y).Q}})")
    apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_OrR2 abs_fresh fresh_atm)
    apply(simp_all add: fresh_atm subst_fresh abs_fresh)
    apply(simp add: forget)
    apply(rule exists_fresh'(2)[OF fs_coname1])
    done
next
  case (ImpR z c M d)
  then show ?case
    apply(auto simp add: fresh_prod fresh_atm subst_fresh)
    apply(subgoal_tac "\<exists>a'::coname. a'\<sharp>(Q,M{d:=(y).Q},a,P{d:=(y).Q},c,
           M{d:=(y).Q}{x:=<a>.P{d:=(y).Q}})")
    apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_ImpR abs_fresh fresh_atm)
    apply(simp_all add: fresh_atm subst_fresh forget abs_fresh)
    apply(rule exists_fresh'(2)[OF fs_coname1])
    done
next
  case (ImpL c M z N u)
  then show ?case
    apply(auto simp add: fresh_prod fresh_atm subst_fresh)
    apply(subgoal_tac "\<exists>z'::name.
z'\<sharp>(P,P{b:=(y).Q},M{u:=<a>.P},N{u:=<a>.P},y,Q,
           M{b:=(y).Q}{u:=<a>.P{b:=(y).Q}},N{b:=(y).Q}{u:=<a>.P{b:=(y).Q}},z)")
    apply(erule exE) apply(simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_ImpL abs_fresh fresh_atm)
    apply(simp add: trm.inject alpha)
    apply(rule trans) apply(rule substc.simps)
    apply(simp_all add: fresh_atm subst_fresh forget)
    apply(rule exists_fresh'(1)[OF fs_name1])
    done
qed

lemma subst_subst2:
  assumes a: "a\<sharp>(b,P,N)" "x\<sharp>(y,P,M)" "b\<sharp>(M,N)" "y\<sharp>P"
  shows "M{a:=(x).N}{y:=<b>.P} = M{y:=<b>.P}{a:=(x).N{y:=<b>.P}}"
using a
proof(nominal_induct M avoiding: a x N y b P rule: trm.strong_induct)
  case (Ax z c)
  then show ?case
    by (auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget trm.inject)
next
  case (Cut d M' u M'')
  then show ?case
    apply(simp add: fresh_atm fresh_prod trm.inject abs_fresh)
    apply(auto)
    apply(simp add: fresh_atm)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: abs_fresh subst_fresh fresh_prod fresh_atm)
    apply(simp add: fresh_prod subst_fresh fresh_atm abs_fresh)
    apply(simp)
    apply(simp add: forget)
    apply(simp add: fresh_atm)
    apply(case_tac "a\<sharp>M'")
    apply(simp add: forget)
    apply(simp add: not_Ax1)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: abs_fresh subst_fresh fresh_prod fresh_atm)
    apply(simp add: fresh_prod subst_fresh fresh_atm abs_fresh)
    apply(auto)[1]
    apply(case_tac "y\<sharp>M''")
    apply(simp add: forget)
    apply(simp add: not_Ax2)
    apply(simp add: forget)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: subst_fresh fresh_atm)
    apply(simp add: abs_fresh subst_fresh)
    apply(auto)[1]
    apply(case_tac "y\<sharp>M''")
    apply(simp add: forget)
    apply(simp add: not_Ax2)
    apply(case_tac "a\<sharp>M'")
    apply(simp add: forget)
    apply(simp add: not_Ax1)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: subst_fresh)
    apply(simp add: subst_fresh abs_fresh)
    apply(simp)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substc)
    apply(simp
add: subst_fresh fresh_atm)
    apply(simp add: subst_fresh abs_fresh)
    apply(auto)[1]
    apply(case_tac "y\<sharp>M''")
    apply(simp add: forget)
    apply(simp add: not_Ax2)
    done
next
  case (NotR z M' d)
  then show ?case
    apply(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget)
    apply(subgoal_tac "\<exists>a'::coname. a'\<sharp>(y,P,N,N{y:=<b>.P},M'{d:=(x).N},M'{y:=<b>.P}{d:=(x).N{y:=<b>.P}})")
    apply(erule exE, simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_NotR)
    apply(rule trans) apply(rule better_Cut_substn)
    apply(simp add: abs_fresh subst_fresh)
    apply(simp add: fresh_prod subst_fresh fresh_atm)
    apply(simp)
    apply(simp add: trm.inject alpha)
    apply(rule trans) apply(rule substn.simps)
    apply(simp add: fresh_prod fresh_atm)
    apply(simp add: fresh_atm)
    apply(rule exists_fresh'(2)[OF fs_coname1])
    done
next
  case (NotL d M' z)
  then show ?case
    apply(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget trm.inject)
    apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(z,y,P,N,N{y:=<b>.P},M'{y:=<b>.P},M'{y:=<b>.P}{a:=(x).N{y:=<b>.P}})")
    apply(erule exE, simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_NotL)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: abs_fresh subst_fresh fresh_atm)
    apply(simp add: fresh_prod subst_fresh fresh_atm abs_fresh)
    apply(simp)
    apply(simp add: trm.inject alpha forget)
    apply(rule trans) apply(rule substc.simps)
    apply(simp add: fresh_prod fresh_atm)
    apply(simp add: fresh_atm subst_fresh)
    apply(simp)
    apply(rule exists_fresh'(1)[OF fs_name1])
    done
next
  case (AndR d M' e M'' f)
  then show ?case
    apply(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget trm.inject)
    apply(subgoal_tac "\<exists>a'::coname.
a'\<sharp>(P,b,d,e,N,N{y:=<b>.P},M'{f:=(x).N},M''{f:=(x).N},
           M'{y:=<b>.P}{f:=(x).N{y:=<b>.P}},M''{y:=<b>.P}{f:=(x).N{y:=<b>.P}})")
    apply(erule exE, simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_AndR)
    apply(rule trans) apply(rule better_Cut_substn)
    apply(simp add: abs_fresh subst_fresh fresh_atm)
    apply(simp add: fresh_prod subst_fresh fresh_atm abs_fresh)
    apply(simp)
    apply(simp add: trm.inject alpha forget)
    apply(rule trans) apply(rule substn.simps)
    apply(simp add: fresh_prod fresh_atm subst_fresh)
    apply(simp add: fresh_atm subst_fresh)
    apply(simp add: fresh_atm)
    apply(simp)
    apply(rule exists_fresh'(2)[OF fs_coname1])
    done
next
  case (AndL1 z M' u)
  then show ?case
    apply(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget trm.inject)
    apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(P,b,z,u,x,N,M'{y:=<b>.P},M'{y:=<b>.P}{a:=(x).N{y:=<b>.P}})")
    apply(erule exE, simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_AndL1)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: abs_fresh subst_fresh fresh_atm)
    apply(simp add: fresh_prod subst_fresh fresh_atm abs_fresh)
    apply(simp)
    apply(simp add: trm.inject alpha forget)
    apply(rule trans) apply(rule substc.simps)
    apply(simp add: fresh_prod fresh_atm subst_fresh)
    apply(simp)
    apply(rule exists_fresh'(1)[OF fs_name1])
    done
next
  case (AndL2 z M' u)
  then show ?case
    apply(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget trm.inject)
    apply(subgoal_tac "\<exists>x'::name.
x'\<sharp>(P,b,z,u,x,N,M'{y:=<b>.P},M'{y:=<b>.P}{a:=(x).N{y:=<b>.P}})")
    apply(erule exE, simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_AndL2)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: abs_fresh subst_fresh fresh_atm)
    apply(simp add: fresh_prod subst_fresh fresh_atm abs_fresh)
    apply(simp)
    apply(simp add: trm.inject alpha forget)
    apply(rule trans) apply(rule substc.simps)
    apply(simp add: fresh_prod fresh_atm subst_fresh)
    apply(simp)
    apply(rule exists_fresh'(1)[OF fs_name1])
    done
next
  case (OrL u M' v M'' w)
  then show ?case
    apply(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget trm.inject)
    apply(subgoal_tac "\<exists>z'::name. z'\<sharp>(P,b,u,w,v,N,N{y:=<b>.P},M'{y:=<b>.P},M''{y:=<b>.P},
           M'{y:=<b>.P}{a:=(x).N{y:=<b>.P}},M''{y:=<b>.P}{a:=(x).N{y:=<b>.P}})")
    apply(erule exE, simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_OrL)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: abs_fresh subst_fresh fresh_atm)
    apply(simp add: fresh_prod subst_fresh fresh_atm abs_fresh)
    apply(simp)
    apply(simp add: trm.inject alpha forget)
    apply(rule trans) apply(rule substc.simps)
    apply(simp add: fresh_prod fresh_atm subst_fresh)
    apply(simp add: fresh_atm subst_fresh)
    apply(simp add: fresh_atm)
    apply(simp)
    apply(rule exists_fresh'(1)[OF fs_name1])
    done
next
  case (OrR1 e M' f)
  then show ?case
    apply(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget trm.inject)
    apply(subgoal_tac "\<exists>c'::coname.
c'\<sharp>(P,b,e,f,x,N,N{y:=<b>.P},
           M'{f:=(x).N},M'{y:=<b>.P}{f:=(x).N{y:=<b>.P}})")
    apply(erule exE, simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_OrR1)
    apply(rule trans) apply(rule better_Cut_substn)
    apply(simp add: abs_fresh subst_fresh fresh_atm)
    apply(simp add: fresh_prod subst_fresh fresh_atm abs_fresh)
    apply(simp)
    apply(simp add: trm.inject alpha forget)
    apply(rule trans) apply(rule substn.simps)
    apply(simp add: fresh_prod fresh_atm subst_fresh)
    apply(simp)
    apply(rule exists_fresh'(2)[OF fs_coname1])
    done
next
  case (OrR2 e M' f)
  then show ?case
    apply(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget trm.inject)
    apply(subgoal_tac "\<exists>c'::coname. c'\<sharp>(P,b,e,f,x,N,N{y:=<b>.P},
           M'{f:=(x).N},M'{y:=<b>.P}{f:=(x).N{y:=<b>.P}})")
    apply(erule exE, simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_OrR2)
    apply(rule trans) apply(rule better_Cut_substn)
    apply(simp add: abs_fresh subst_fresh fresh_atm)
    apply(simp add: fresh_prod subst_fresh fresh_atm abs_fresh)
    apply(simp)
    apply(simp add: trm.inject alpha forget)
    apply(rule trans) apply(rule substn.simps)
    apply(simp add: fresh_prod fresh_atm subst_fresh)
    apply(simp)
    apply(rule exists_fresh'(2)[OF fs_coname1])
    done
next
  case (ImpR x e M' f)
  then show ?case
    apply(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget trm.inject)
    apply(subgoal_tac "\<exists>c'::coname.
c'\<sharp>(P,b,e,f,x,N,N{y:=<b>.P},
           M'{f:=(x).N},M'{y:=<b>.P}{f:=(x).N{y:=<b>.P}})")
    apply(erule exE, simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_ImpR)
    apply(rule trans) apply(rule better_Cut_substn)
    apply(simp add: abs_fresh subst_fresh fresh_atm)
    apply(simp add: fresh_prod subst_fresh fresh_atm abs_fresh)
    apply(simp)
    apply(simp add: trm.inject alpha forget)
    apply(rule trans) apply(rule substn.simps)
    apply(simp add: fresh_prod fresh_atm subst_fresh)
    apply(simp add: fresh_atm)
    apply(simp add: fresh_atm trm.inject alpha abs_fresh fin_supp abs_supp)
    apply(rule exists_fresh'(2)[OF fs_coname1])
    apply(simp add: fresh_atm trm.inject alpha abs_fresh fin_supp abs_supp)
    done
next
  case (ImpL e M' v M'' w)
  then show ?case
    apply(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget trm.inject)
    apply(subgoal_tac "\<exists>z'::name. z'\<sharp>(P,b,e,w,v,N,N{y:=<b>.P},M'{w:=<b>.P},M''{w:=<b>.P},
           M'{w:=<b>.P}{a:=(x).N{w:=<b>.P}},M''{w:=<b>.P}{a:=(x).N{w:=<b>.P}})")
    apply(erule exE, simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_ImpL)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substc)
    apply(simp add: abs_fresh subst_fresh fresh_atm)
    apply(simp add: fresh_prod subst_fresh fresh_atm abs_fresh)
    apply(simp)
    apply(simp add: trm.inject alpha forget)
    apply(rule trans) apply(rule substc.simps)
    apply(simp add: fresh_prod fresh_atm subst_fresh)
    apply(simp add: fresh_atm subst_fresh)
    apply(simp add: fresh_atm)
    apply(rule exists_fresh'(1)[OF fs_name1])
    done
qed

lemma subst_subst3:
  assumes a: "a\<sharp>(P,N,c)" "c\<sharp>(M,N)" "x\<sharp>(y,P,M)" "y\<sharp>(P,x)" "M\<noteq>Ax y a"
  shows "N{x:=<a>.M}{y:=<c>.P} = N{y:=<c>.P}{x:=<a>.(M{y:=<c>.P})}"
using a
proof(nominal_induct N avoiding: x y a c M P rule: trm.strong_induct)
  case (Ax z c)
  then show ?case
    by(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget)
next
  case (Cut d M' u M'')
  then show ?case
    apply(simp add: fresh_atm fresh_prod trm.inject abs_fresh)
    apply(auto)
    apply(simp add: fresh_atm)
    apply(simp add: trm.inject)
    apply(rule sym) apply(rule
trans) apply(rule better_Cut_substn)
    apply(simp add: abs_fresh subst_fresh)
    apply(simp add: fresh_prod subst_fresh fresh_atm)
    apply(subgoal_tac "P \<noteq> Ax x c")
    apply(simp)
    apply(simp add: forget)
    apply(clarify)
    apply(simp add: fresh_atm)
    apply(case_tac "x\<sharp>M'")
    apply(simp add: forget)
    apply(simp add: not_Ax2)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substn)
    apply(simp add: abs_fresh subst_fresh)
    apply(simp add: fresh_prod subst_fresh fresh_atm)
    apply(simp)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substn)
    apply(simp add: abs_fresh subst_fresh)
    apply(simp add: fresh_prod subst_fresh fresh_atm)
    apply(auto)
    apply(case_tac "y\<sharp>M'")
    apply(simp add: forget)
    apply(simp add: not_Ax2)
    done
next
  case NotR
  then show ?case
    by(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget)
next
  case (NotL d M' u)
  then show ?case
    apply(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget)
    apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(y,P,M,M{y:=<c>.P},M'{x:=<a>.M},M'{y:=<c>.P}{x:=<a>.M{y:=<c>.P}})")
    apply(erule exE, simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_NotL)
    apply(rule trans) apply(rule better_Cut_substn)
    apply(simp add: abs_fresh subst_fresh)
    apply(simp add: fresh_prod subst_fresh fresh_atm)
    apply(simp)
    apply(simp add: trm.inject alpha)
    apply(rule trans) apply(rule substn.simps)
    apply(simp add: fresh_prod fresh_atm)
    apply(simp add: fresh_atm)
    apply(rule exists_fresh'(1)[OF fs_name1])
    apply(subgoal_tac "\<exists>x'::name.
x'\<sharp>(x,y,P,M,M'{y:=<c>.P},M'{y:=<c>.P}{x:=<a>.M{y:=<c>.P}})")
    apply(erule exE, simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_NotL)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substn)
    apply(simp add: abs_fresh subst_fresh)
    apply(simp add: fresh_atm subst_fresh fresh_prod)
    apply(subgoal_tac "P \<noteq> Ax x c")
    apply(simp)
    apply(simp add: forget trm.inject alpha)
    apply(rule trans) apply(rule substn.simps)
    apply(simp add: fresh_atm subst_fresh)
    apply(simp add: fresh_atm)
    apply(clarify)
    apply(simp add: fresh_atm)
    apply(rule exists_fresh'(1)[OF fs_name1])
    done
next
  case AndR
  then show ?case
    by(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget)
next
  case (AndL1 u M' v)
  then show ?case
    apply(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget)
    apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(u,y,v,P,M,M{y:=<c>.P},M'{x:=<a>.M},M'{y:=<c>.P}{x:=<a>.M{y:=<c>.P}})")
    apply(erule exE, simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_AndL1)
    apply(rule trans) apply(rule better_Cut_substn)
    apply(simp add: abs_fresh subst_fresh)
    apply(simp add: fresh_prod subst_fresh fresh_atm)
    apply(simp)
    apply(simp add: trm.inject alpha)
    apply(rule trans) apply(rule substn.simps)
    apply(simp add: fresh_prod fresh_atm)
    apply(simp add: fresh_atm)
    apply(rule exists_fresh'(1)[OF fs_name1])
    apply(subgoal_tac "\<exists>x'::name.
x'\<sharp>(x,y,u,v,P,M,M'{y:=<c>.P},M'{y:=<c>.P}{x:=<a>.M{y:=<c>.P}})")
    apply(erule exE, simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_AndL1)
    apply(rule sym) apply(rule trans) apply(rule better_Cut_substn)
    apply(simp add: abs_fresh subst_fresh)
    apply(simp add: fresh_atm subst_fresh fresh_prod)
    apply(subgoal_tac "P \<noteq> Ax x c")
    apply(simp)
    apply(simp add: forget trm.inject alpha)
    apply(rule trans) apply(rule substn.simps)
    apply(simp add: fresh_atm subst_fresh)
    apply(simp add: fresh_atm)
    apply(clarify)
    apply(simp add: fresh_atm)
    apply(rule exists_fresh'(1)[OF fs_name1])
    done
next
  case (AndL2 u M' v)
  then show ?case
    apply(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget)
    apply(subgoal_tac "\<exists>x'::name. x'\<sharp>(u,y,v,P,M,M{y:=<c>.P},M'{x:=<a>.M},M'{y:=<c>.P}{x:=<a>.M{y:=<c>.P}})")
    apply(erule exE, simp add: fresh_prod) apply(erule conjE)+
    apply(simp add: fresh_fun_simp_AndL2)
    apply(rule trans) apply(rule better_Cut_substn)
    apply(simp add: abs_fresh subst_fresh)
    apply(simp add: fresh_prod subst_fresh fresh_atm)
    apply(simp)
    apply(simp add: trm.inject alpha)
    apply(rule trans) apply(rule substn.simps)
    apply(simp add: fresh_prod fresh_atm)
    apply(simp add: fresh_atm)
    apply(rule exists_fresh'(1)[OF fs_name1])
    apply(subgoal_tac "\<exists>x'::name.
x'\(x,y,u,v,P,M,M'{y:=.P},M'{y:=.P}{x:=.M{y:=.P}})") apply(erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_AndL2) apply(rule sym) apply(rule trans) apply(rule better_Cut_substn) apply(simp add: abs_fresh subst_fresh) apply(simp add: fresh_atm subst_fresh fresh_prod) apply(subgoal_tac "P \ Ax x c") apply(simp) apply(simp add: forget trm.inject alpha) apply(rule trans) apply(rule substn.simps) apply(simp add: fresh_atm subst_fresh) apply(simp add: fresh_atm) apply(clarify) apply(simp add: fresh_atm) apply(rule exists_fresh'(1)[OF fs_name1]) done next case OrR1 then show ?case by(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget) next case OrR2 then show ?case by(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget) next case (OrL x1 M' x2 M'' x3) then show ?case apply(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget) apply(subgoal_tac "\x'::name. x'\(y,P,M,M{y:=.P},M'{x:=.M},M'{y:=.P}{x:=.M{y:=.P}}, x1,x2,x3,M''{x:=.M},M''{y:=.P}{x:=.M{y:=.P}})") apply(erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrL) apply(rule trans) apply(rule better_Cut_substn) apply(simp add: abs_fresh subst_fresh) apply(simp add: fresh_prod subst_fresh fresh_atm) apply(simp) apply(simp add: trm.inject alpha) apply(rule trans) apply(rule substn.simps) apply(simp add: fresh_prod fresh_atm subst_fresh) apply(simp add: fresh_prod fresh_atm subst_fresh) apply(simp add: fresh_atm) apply(simp add: fresh_atm) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. 
x'\(x,y,P,M,M'{y:=.P},M'{y:=.P}{x:=.M{y:=.P}}, x1,x2,x3,M''{y:=.P},M''{y:=.P}{x:=.M{y:=.P}})") apply(erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_OrL) apply(rule sym) apply(rule trans) apply(rule better_Cut_substn) apply(simp add: abs_fresh subst_fresh) apply(simp add: fresh_atm subst_fresh fresh_prod) apply(simp add: fresh_prod fresh_atm) apply(auto) apply(simp add: fresh_atm) apply(simp add: forget trm.inject alpha) apply(rule trans) apply(rule substn.simps) apply(simp add: fresh_atm subst_fresh) apply(simp add: fresh_atm subst_fresh) apply(simp add: fresh_atm) apply(simp add: fresh_atm) apply(rule exists_fresh'(1)[OF fs_name1]) done next case ImpR then show ?case by(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget) next case (ImpL d M' x1 M'' x2) then show ?case apply(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget) apply(subgoal_tac "\x'::name. x'\(y,P,M,M{y:=.P},M'{x2:=.M},M'{y:=.P}{x2:=.M{y:=.P}}, x1,x2,M''{x2:=.M},M''{y:=.P}{x2:=.M{y:=.P}})") apply(erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpL) apply(rule trans) apply(rule better_Cut_substn) apply(simp add: abs_fresh subst_fresh) apply(simp add: fresh_prod subst_fresh fresh_atm) apply(simp) apply(simp add: trm.inject alpha) apply(rule trans) apply(rule substn.simps) apply(simp add: fresh_prod fresh_atm subst_fresh) apply(simp add: fresh_prod fresh_atm subst_fresh) apply(simp add: fresh_atm) apply(rule exists_fresh'(1)[OF fs_name1]) apply(subgoal_tac "\x'::name. 
x'\(x,y,P,M,M'{x2:=.P},M'{x2:=.P}{x:=.M{x2:=.P}}, x1,x2,M''{x2:=.P},M''{x2:=.P}{x:=.M{x2:=.P}})") apply(erule exE, simp add: fresh_prod) apply(erule conjE)+ apply(simp add: fresh_fun_simp_ImpL) apply(rule sym) apply(rule trans) apply(rule better_Cut_substn) apply(simp add: abs_fresh subst_fresh) apply(simp add: fresh_atm subst_fresh fresh_prod) apply(simp add: fresh_prod fresh_atm) apply(auto) apply(simp add: fresh_atm) apply(simp add: forget trm.inject alpha) apply(rule trans) apply(rule substn.simps) apply(simp add: fresh_atm subst_fresh) apply(simp add: fresh_atm subst_fresh) apply(simp add: fresh_atm) apply(rule exists_fresh'(1)[OF fs_name1]) done qed lemma subst_subst4: assumes a: "x\(P,N,y)" "y\(M,N)" "a\(c,P,M)" "c\(P,a)" "M\Ax x c" shows "N{a:=(x).M}{c:=(y).P} = N{c:=(y).P}{a:=(x).(M{c:=(y).P})}" using a proof(nominal_induct N avoiding: x y a c M P rule: trm.strong_induct) case (Ax z c) then show ?case by (auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget) next case (Cut d M' u M'') then show ?case apply(simp add: fresh_atm fresh_prod trm.inject abs_fresh) apply(auto) apply(simp add: fresh_atm) apply(simp add: trm.inject) apply(rule sym) apply(rule trans) apply(rule better_Cut_substc) apply(simp add: abs_fresh subst_fresh fresh_atm) apply(simp add: fresh_prod subst_fresh abs_fresh fresh_atm) apply(subgoal_tac "P \ Ax y a") apply(simp) apply(simp add: forget) apply(clarify) apply(simp add: fresh_atm) apply(case_tac "a\M''") apply(simp add: forget) apply(simp add: not_Ax1) apply(rule sym) apply(rule trans) apply(rule better_Cut_substc) apply(simp add: fresh_prod subst_fresh fresh_atm) apply(simp add: abs_fresh subst_fresh) apply(simp) apply(rule sym) apply(rule trans) apply(rule better_Cut_substc) apply(simp add: fresh_prod subst_fresh fresh_atm) apply(simp add: abs_fresh subst_fresh) apply(auto) apply(case_tac "c\M''") apply(simp add: forget) apply(simp add: not_Ax1) done next case NotL then show ?case by(auto simp add: subst_fresh abs_fresh 
fresh_atm fresh_prod forget) next case (NotR u M' d) then show ?case apply(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget) apply(generate_fresh "coname") apply(fresh_fun_simp) apply(fresh_fun_simp) apply(simp add: abs_fresh subst_fresh) apply(rule trans) apply(rule better_Cut_substc) apply(simp) apply(simp add: abs_fresh) apply(simp) apply(simp add: trm.inject alpha) apply(rule trans) apply(rule substc.simps) apply(simp add: fresh_prod fresh_atm) apply(auto simp add: fresh_atm fresh_prod)[1] apply(generate_fresh "coname") apply(fresh_fun_simp) apply(fresh_fun_simp) apply(rule sym) apply(rule trans) apply(rule better_Cut_substc) apply(simp add: fresh_prod fresh_atm subst_fresh) apply(simp add: abs_fresh subst_fresh) apply(auto simp add: fresh_atm) apply(simp add: trm.inject alpha forget) apply(rule trans) apply(rule substc.simps) apply(simp add: fresh_atm subst_fresh) apply(auto simp add: fresh_prod fresh_atm) done next case AndL1 then show ?case by(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget) next case AndL2 then show ?case by(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget) next case (AndR d M e M' f) then show ?case apply(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget) apply(generate_fresh "coname") apply(fresh_fun_simp) apply(fresh_fun_simp) apply(simp add: abs_fresh subst_fresh) apply(rule trans) apply(rule better_Cut_substc) apply(simp) apply(simp add: abs_fresh) apply(simp) apply(simp add: trm.inject alpha) apply(rule trans) apply(rule substc.simps) apply(auto simp add: fresh_prod fresh_atm subst_fresh)[1] apply(auto simp add: fresh_prod fresh_atm subst_fresh)[1] apply(simp) apply(auto simp add: fresh_atm fresh_prod)[1] apply(generate_fresh "coname") apply(fresh_fun_simp) apply(fresh_fun_simp) apply(rule sym) apply(rule trans) apply(rule better_Cut_substc) apply(simp add: subst_fresh fresh_atm fresh_prod) apply(simp add: abs_fresh subst_fresh) apply(auto simp add: fresh_atm)[1] 
apply(simp add: trm.inject alpha forget) apply(rule trans) apply(rule substc.simps) apply(auto simp add: fresh_prod fresh_atm subst_fresh)[1] apply(auto simp add: fresh_prod fresh_atm subst_fresh)[1] apply(simp) apply(auto simp add: fresh_atm fresh_prod)[1] done next case OrL then show ?case by(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget) next case (OrR1 d M' e) then show ?case apply(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget) apply(generate_fresh "coname") apply(fresh_fun_simp) apply(fresh_fun_simp) apply(simp add: abs_fresh subst_fresh) apply(rule trans) apply(rule better_Cut_substc) apply(simp) apply(simp add: abs_fresh) apply(simp) apply(simp add: trm.inject alpha) apply(rule trans) apply(rule substc.simps) apply(auto simp add: fresh_prod fresh_atm subst_fresh)[1] apply(auto simp add: fresh_prod fresh_atm subst_fresh)[1] apply(generate_fresh "coname") apply(fresh_fun_simp) apply(fresh_fun_simp) apply(rule sym) apply(rule trans) apply(rule better_Cut_substc) apply(simp add: subst_fresh fresh_atm fresh_prod) apply(simp add: abs_fresh subst_fresh) apply(auto simp add: fresh_atm)[1] apply(simp add: trm.inject alpha forget) apply(rule trans) apply(rule substc.simps) apply(auto simp add: fresh_prod fresh_atm subst_fresh)[1] apply(auto simp add: fresh_prod fresh_atm subst_fresh)[1] done next case (OrR2 d M' e) then show ?case apply(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget) apply(generate_fresh "coname") apply(fresh_fun_simp) apply(fresh_fun_simp) apply(simp add: abs_fresh subst_fresh) apply(rule trans) apply(rule better_Cut_substc) apply(simp) apply(simp add: abs_fresh) apply(simp) apply(simp add: trm.inject alpha) apply(rule trans) apply(rule substc.simps) apply(auto simp add: fresh_prod fresh_atm subst_fresh)[1] apply(auto simp add: fresh_prod fresh_atm subst_fresh)[1] apply(generate_fresh "coname") apply(fresh_fun_simp) apply(fresh_fun_simp) apply(rule sym) apply(rule trans) apply(rule 
better_Cut_substc) apply(simp add: subst_fresh fresh_atm fresh_prod) apply(simp add: abs_fresh subst_fresh) apply(auto simp add: fresh_atm)[1] apply(simp add: trm.inject alpha forget) apply(rule trans) apply(rule substc.simps) apply(auto simp add: fresh_prod fresh_atm subst_fresh)[1] apply(auto simp add: fresh_prod fresh_atm subst_fresh)[1] done next case ImpL then show ?case by(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget) next case (ImpR u d M' e) then show ?case apply(auto simp add: subst_fresh abs_fresh fresh_atm fresh_prod forget) apply(generate_fresh "coname") apply(fresh_fun_simp) apply(fresh_fun_simp) apply(simp add: abs_fresh subst_fresh) apply(rule trans) apply(rule better_Cut_substc) apply(simp) apply(simp add: abs_fresh) apply(simp) apply(simp add: trm.inject alpha) apply(rule trans) apply(rule substc.simps) apply(auto simp add: fresh_prod fresh_atm subst_fresh)[1] apply(auto simp add: fresh_prod fresh_atm subst_fresh)[1] apply(auto simp add: fresh_prod fresh_atm subst_fresh abs_fresh abs_supp fin_supp)[1] apply(generate_fresh "coname") apply(fresh_fun_simp) apply(fresh_fun_simp) apply(rule sym) apply(rule trans) apply(rule better_Cut_substc) apply(simp add: subst_fresh fresh_atm fresh_prod) apply(simp add: abs_fresh subst_fresh) apply(auto simp add: fresh_atm)[1] apply(simp add: trm.inject alpha forget) apply(rule trans) apply(rule substc.simps) apply(auto simp add: fresh_prod fresh_atm subst_fresh)[1] apply(auto simp add: fresh_prod fresh_atm subst_fresh)[1] apply(auto simp add: fresh_prod fresh_atm subst_fresh abs_fresh abs_supp fin_supp)[1] apply(auto simp add: fresh_prod fresh_atm subst_fresh abs_fresh abs_supp fin_supp)[1] done qed end diff --git a/src/HOL/Numeral_Simprocs.thy b/src/HOL/Numeral_Simprocs.thy --- a/src/HOL/Numeral_Simprocs.thy +++ b/src/HOL/Numeral_Simprocs.thy @@ -1,312 +1,302 @@ (* Author: Various *) section \Combination and Cancellation Simprocs for Numeral Expressions\ theory Numeral_Simprocs imports Divides 
begin ML_file \~~/src/Provers/Arith/assoc_fold.ML\ ML_file \~~/src/Provers/Arith/cancel_numerals.ML\ ML_file \~~/src/Provers/Arith/combine_numerals.ML\ ML_file \~~/src/Provers/Arith/cancel_numeral_factor.ML\ ML_file \~~/src/Provers/Arith/extract_common_term.ML\ lemmas semiring_norm = Let_def arith_simps diff_nat_numeral rel_simps if_False if_True add_0 add_Suc add_numeral_left add_neg_numeral_left mult_numeral_left numeral_One [symmetric] uminus_numeral_One [symmetric] Suc_eq_plus1 eq_numeral_iff_iszero not_iszero_Numeral1 declare split_div [of _ _ "numeral k", arith_split] for k declare split_mod [of _ _ "numeral k", arith_split] for k text \For \combine_numerals\\ lemma left_add_mult_distrib: "i*u + (j*u + k) = (i+j)*u + (k::nat)" by (simp add: add_mult_distrib) text \For \cancel_numerals\\ lemma nat_diff_add_eq1: "j <= (i::nat) ==> ((i*u + m) - (j*u + n)) = (((i-j)*u + m) - n)" by (simp split: nat_diff_split add: add_mult_distrib) lemma nat_diff_add_eq2: "i <= (j::nat) ==> ((i*u + m) - (j*u + n)) = (m - ((j-i)*u + n))" by (simp split: nat_diff_split add: add_mult_distrib) lemma nat_eq_add_iff1: "j <= (i::nat) ==> (i*u + m = j*u + n) = ((i-j)*u + m = n)" by (auto split: nat_diff_split simp add: add_mult_distrib) lemma nat_eq_add_iff2: "i <= (j::nat) ==> (i*u + m = j*u + n) = (m = (j-i)*u + n)" by (auto split: nat_diff_split simp add: add_mult_distrib) lemma nat_less_add_iff1: "j <= (i::nat) ==> (i*u + m < j*u + n) = ((i-j)*u + m < n)" by (auto split: nat_diff_split simp add: add_mult_distrib) lemma nat_less_add_iff2: "i <= (j::nat) ==> (i*u + m < j*u + n) = (m < (j-i)*u + n)" by (auto split: nat_diff_split simp add: add_mult_distrib) lemma nat_le_add_iff1: "j <= (i::nat) ==> (i*u + m <= j*u + n) = ((i-j)*u + m <= n)" by (auto split: nat_diff_split simp add: add_mult_distrib) lemma nat_le_add_iff2: "i <= (j::nat) ==> (i*u + m <= j*u + n) = (m <= (j-i)*u + n)" by (auto split: nat_diff_split simp add: add_mult_distrib) text \For \cancel_numeral_factors\\ lemma 
nat_mult_le_cancel1: "(0::nat) < k ==> (k*m <= k*n) = (m<=n)" by auto lemma nat_mult_less_cancel1: "(0::nat) < k ==> (k*m < k*n) = (m<n)" by auto lemma nat_mult_eq_cancel1: "(0::nat) < k ==> (k*m = k*n) = (m=n)" by auto lemma nat_mult_div_cancel1: "(0::nat) < k ==> (k*m) div (k*n) = (m div n)" by auto lemma nat_mult_dvd_cancel_disj[simp]: "(k*m) dvd (k*n) = (k=0 \ m dvd (n::nat))" by (auto simp: dvd_eq_mod_eq_0 mod_mult_mult1) lemma nat_mult_dvd_cancel1: "0 < k \ (k*m) dvd (k*n::nat) = (m dvd n)" by(auto) text \For \cancel_factor\\ lemmas nat_mult_le_cancel_disj = mult_le_cancel1 lemmas nat_mult_less_cancel_disj = mult_less_cancel1 lemma nat_mult_eq_cancel_disj: fixes k m n :: nat shows "k * m = k * n \ k = 0 \ m = n" by auto lemma nat_mult_div_cancel_disj [simp]: fixes k m n :: nat shows "(k * m) div (k * n) = (if k = 0 then 0 else m div n)" by (fact div_mult_mult1_if) lemma numeral_times_minus_swap: fixes x:: "'a::comm_ring_1" shows "numeral w * -x = x * - numeral w" by (simp add: mult.commute) ML_file \Tools/numeral_simprocs.ML\ simproc_setup semiring_assoc_fold ("(a::'a::comm_semiring_1_cancel) * b") = \fn phi => Numeral_Simprocs.assoc_fold\ (* TODO: see whether the type class can be generalized further *) simproc_setup int_combine_numerals ("(i::'a::comm_ring_1) + j" | "(i::'a::comm_ring_1) - j") = \fn phi => Numeral_Simprocs.combine_numerals\ simproc_setup field_combine_numerals ("(i::'a::{field,ring_char_0}) + j" |"(i::'a::{field,ring_char_0}) - j") = \fn phi => Numeral_Simprocs.field_combine_numerals\ simproc_setup inteq_cancel_numerals ("(l::'a::comm_ring_1) + m = n" |"(l::'a::comm_ring_1) = m + n" |"(l::'a::comm_ring_1) - m = n" |"(l::'a::comm_ring_1) = m - n" |"(l::'a::comm_ring_1) * m = n" |"(l::'a::comm_ring_1) = m * n" |"- (l::'a::comm_ring_1) = m" |"(l::'a::comm_ring_1) = - m") = \fn phi => Numeral_Simprocs.eq_cancel_numerals\ simproc_setup intless_cancel_numerals ("(l::'a::linordered_idom) + m < n" |"(l::'a::linordered_idom) < m + n" |"(l::'a::linordered_idom) - m < n" |"(l::'a::linordered_idom) < m - n"
|"(l::'a::linordered_idom) * m < n" |"(l::'a::linordered_idom) < m * n" |"- (l::'a::linordered_idom) < m" |"(l::'a::linordered_idom) < - m") = \fn phi => Numeral_Simprocs.less_cancel_numerals\ simproc_setup intle_cancel_numerals ("(l::'a::linordered_idom) + m \ n" |"(l::'a::linordered_idom) \ m + n" |"(l::'a::linordered_idom) - m \ n" |"(l::'a::linordered_idom) \ m - n" |"(l::'a::linordered_idom) * m \ n" |"(l::'a::linordered_idom) \ m * n" |"- (l::'a::linordered_idom) \ m" |"(l::'a::linordered_idom) \ - m") = \fn phi => Numeral_Simprocs.le_cancel_numerals\ simproc_setup ring_eq_cancel_numeral_factor ("(l::'a::{idom,ring_char_0}) * m = n" |"(l::'a::{idom,ring_char_0}) = m * n") = \fn phi => Numeral_Simprocs.eq_cancel_numeral_factor\ simproc_setup ring_less_cancel_numeral_factor ("(l::'a::linordered_idom) * m < n" |"(l::'a::linordered_idom) < m * n") = \fn phi => Numeral_Simprocs.less_cancel_numeral_factor\ simproc_setup ring_le_cancel_numeral_factor ("(l::'a::linordered_idom) * m <= n" |"(l::'a::linordered_idom) <= m * n") = \fn phi => Numeral_Simprocs.le_cancel_numeral_factor\ (* TODO: remove comm_ring_1 constraint if possible *) simproc_setup int_div_cancel_numeral_factors ("((l::'a::{euclidean_semiring_cancel,comm_ring_1,ring_char_0}) * m) div n" |"(l::'a::{euclidean_semiring_cancel,comm_ring_1,ring_char_0}) div (m * n)") = \fn phi => Numeral_Simprocs.div_cancel_numeral_factor\ simproc_setup divide_cancel_numeral_factor ("((l::'a::{field,ring_char_0}) * m) / n" |"(l::'a::{field,ring_char_0}) / (m * n)" |"((numeral v)::'a::{field,ring_char_0}) / (numeral w)") = \fn phi => Numeral_Simprocs.divide_cancel_numeral_factor\ simproc_setup ring_eq_cancel_factor ("(l::'a::idom) * m = n" | "(l::'a::idom) = m * n") = \fn phi => Numeral_Simprocs.eq_cancel_factor\ simproc_setup linordered_ring_le_cancel_factor ("(l::'a::linordered_idom) * m <= n" |"(l::'a::linordered_idom) <= m * n") = \fn phi => Numeral_Simprocs.le_cancel_factor\ simproc_setup 
linordered_ring_less_cancel_factor ("(l::'a::linordered_idom) * m < n" |"(l::'a::linordered_idom) < m * n") = \fn phi => Numeral_Simprocs.less_cancel_factor\ simproc_setup int_div_cancel_factor ("((l::'a::euclidean_semiring_cancel) * m) div n" |"(l::'a::euclidean_semiring_cancel) div (m * n)") = \fn phi => Numeral_Simprocs.div_cancel_factor\ simproc_setup int_mod_cancel_factor ("((l::'a::euclidean_semiring_cancel) * m) mod n" |"(l::'a::euclidean_semiring_cancel) mod (m * n)") = \fn phi => Numeral_Simprocs.mod_cancel_factor\ simproc_setup dvd_cancel_factor ("((l::'a::idom) * m) dvd n" |"(l::'a::idom) dvd (m * n)") = \fn phi => Numeral_Simprocs.dvd_cancel_factor\ simproc_setup divide_cancel_factor ("((l::'a::field) * m) / n" |"(l::'a::field) / (m * n)") = \fn phi => Numeral_Simprocs.divide_cancel_factor\ ML_file \Tools/nat_numeral_simprocs.ML\ simproc_setup nat_combine_numerals ("(i::nat) + j" | "Suc (i + j)") = \fn phi => Nat_Numeral_Simprocs.combine_numerals\ simproc_setup nateq_cancel_numerals ("(l::nat) + m = n" | "(l::nat) = m + n" | "(l::nat) * m = n" | "(l::nat) = m * n" | "Suc m = n" | "m = Suc n") = \fn phi => Nat_Numeral_Simprocs.eq_cancel_numerals\ simproc_setup natless_cancel_numerals ("(l::nat) + m < n" | "(l::nat) < m + n" | "(l::nat) * m < n" | "(l::nat) < m * n" | "Suc m < n" | "m < Suc n") = \fn phi => Nat_Numeral_Simprocs.less_cancel_numerals\ simproc_setup natle_cancel_numerals ("(l::nat) + m \ n" | "(l::nat) \ m + n" | "(l::nat) * m \ n" | "(l::nat) \ m * n" | "Suc m \ n" | "m \ Suc n") = \fn phi => Nat_Numeral_Simprocs.le_cancel_numerals\ simproc_setup natdiff_cancel_numerals ("((l::nat) + m) - n" | "(l::nat) - (m + n)" | "(l::nat) * m - n" | "(l::nat) - m * n" | "Suc m - n" | "m - Suc n") = \fn phi => Nat_Numeral_Simprocs.diff_cancel_numerals\ simproc_setup nat_eq_cancel_numeral_factor ("(l::nat) * m = n" | "(l::nat) = m * n") = \fn phi => Nat_Numeral_Simprocs.eq_cancel_numeral_factor\ simproc_setup nat_less_cancel_numeral_factor ("(l::nat) * m 
< n" | "(l::nat) < m * n") = \fn phi => Nat_Numeral_Simprocs.less_cancel_numeral_factor\ simproc_setup nat_le_cancel_numeral_factor ("(l::nat) * m <= n" | "(l::nat) <= m * n") = \fn phi => Nat_Numeral_Simprocs.le_cancel_numeral_factor\ simproc_setup nat_div_cancel_numeral_factor ("((l::nat) * m) div n" | "(l::nat) div (m * n)") = \fn phi => Nat_Numeral_Simprocs.div_cancel_numeral_factor\ simproc_setup nat_dvd_cancel_numeral_factor ("((l::nat) * m) dvd n" | "(l::nat) dvd (m * n)") = \fn phi => Nat_Numeral_Simprocs.dvd_cancel_numeral_factor\ simproc_setup nat_eq_cancel_factor ("(l::nat) * m = n" | "(l::nat) = m * n") = \fn phi => Nat_Numeral_Simprocs.eq_cancel_factor\ simproc_setup nat_less_cancel_factor ("(l::nat) * m < n" | "(l::nat) < m * n") = \fn phi => Nat_Numeral_Simprocs.less_cancel_factor\ simproc_setup nat_le_cancel_factor ("(l::nat) * m <= n" | "(l::nat) <= m * n") = \fn phi => Nat_Numeral_Simprocs.le_cancel_factor\ simproc_setup nat_div_cancel_factor ("((l::nat) * m) div n" | "(l::nat) div (m * n)") = \fn phi => Nat_Numeral_Simprocs.div_cancel_factor\ simproc_setup nat_dvd_cancel_factor ("((l::nat) * m) dvd n" | "(l::nat) dvd (m * n)") = \fn phi => Nat_Numeral_Simprocs.dvd_cancel_factor\ declaration \ K (Lin_Arith.add_simprocs [\<^simproc>\semiring_assoc_fold\, \<^simproc>\int_combine_numerals\, \<^simproc>\inteq_cancel_numerals\, \<^simproc>\intless_cancel_numerals\, \<^simproc>\intle_cancel_numerals\, \<^simproc>\field_combine_numerals\, \<^simproc>\nat_combine_numerals\, \<^simproc>\nateq_cancel_numerals\, \<^simproc>\natless_cancel_numerals\, \<^simproc>\natle_cancel_numerals\, \<^simproc>\natdiff_cancel_numerals\, Numeral_Simprocs.field_divide_cancel_numeral_factor]) \ -lemma bit_numeral_int_simps [simp]: - \bit (1 :: int) (numeral n) \ bit (0 :: int) (pred_numeral n)\ - \bit (numeral (num.Bit0 w) :: int) (numeral n) \ bit (numeral w :: int) (pred_numeral n)\ - \bit (numeral (num.Bit1 w) :: int) (numeral n) \ bit (numeral w :: int) (pred_numeral n)\ 
- \bit (numeral (Num.BitM w) :: int) (numeral n) \ \ bit (- numeral w :: int) (pred_numeral n)\ - \bit (- numeral (num.Bit0 w) :: int) (numeral n) \ bit (- numeral w :: int) (pred_numeral n)\ - \bit (- numeral (num.Bit1 w) :: int) (numeral n) \ \ bit (numeral w :: int) (pred_numeral n)\ - \bit (- numeral (Num.BitM w) :: int) (numeral n) \ bit (- (numeral w) :: int) (pred_numeral n)\ - by (simp_all add: bit_1_iff numeral_eq_Suc bit_Suc add_One sub_inc_One_eq bit_minus_int_iff) - end diff --git a/src/HOL/Parity.thy b/src/HOL/Parity.thy --- a/src/HOL/Parity.thy +++ b/src/HOL/Parity.thy @@ -1,2032 +1,705 @@ (* Title: HOL/Parity.thy Author: Jeremy Avigad Author: Jacques D. Fleuriot *) section \Parity in rings and semirings\ theory Parity imports Euclidean_Division begin subsection \Ring structures with parity and \even\/\odd\ predicates\ class semiring_parity = comm_semiring_1 + semiring_modulo + assumes even_iff_mod_2_eq_zero: "2 dvd a \ a mod 2 = 0" and odd_iff_mod_2_eq_one: "\ 2 dvd a \ a mod 2 = 1" and odd_one [simp]: "\ 2 dvd 1" begin abbreviation even :: "'a \ bool" where "even a \ 2 dvd a" abbreviation odd :: "'a \ bool" where "odd a \ \ 2 dvd a" lemma parity_cases [case_names even odd]: assumes "even a \ a mod 2 = 0 \ P" assumes "odd a \ a mod 2 = 1 \ P" shows P using assms by (cases "even a") (simp_all add: even_iff_mod_2_eq_zero [symmetric] odd_iff_mod_2_eq_one [symmetric]) lemma odd_of_bool_self [simp]: \odd (of_bool p) \ p\ by (cases p) simp_all lemma not_mod_2_eq_0_eq_1 [simp]: "a mod 2 \ 0 \ a mod 2 = 1" by (cases a rule: parity_cases) simp_all lemma not_mod_2_eq_1_eq_0 [simp]: "a mod 2 \ 1 \ a mod 2 = 0" by (cases a rule: parity_cases) simp_all lemma evenE [elim?]: assumes "even a" obtains b where "a = 2 * b" using assms by (rule dvdE) lemma oddE [elim?]: assumes "odd a" obtains b where "a = 2 * b + 1" proof - have "a = 2 * (a div 2) + a mod 2" by (simp add: mult_div_mod_eq) with assms have "a = 2 * (a div 2) + 1" by (simp add: odd_iff_mod_2_eq_one) then 
show ?thesis .. qed lemma mod_2_eq_odd: "a mod 2 = of_bool (odd a)" by (auto elim: oddE simp add: even_iff_mod_2_eq_zero) lemma of_bool_odd_eq_mod_2: "of_bool (odd a) = a mod 2" by (simp add: mod_2_eq_odd) lemma even_mod_2_iff [simp]: \even (a mod 2) \ even a\ by (simp add: mod_2_eq_odd) lemma mod2_eq_if: "a mod 2 = (if even a then 0 else 1)" by (simp add: mod_2_eq_odd) lemma even_zero [simp]: "even 0" by (fact dvd_0_right) lemma odd_even_add: "even (a + b)" if "odd a" and "odd b" proof - from that obtain c d where "a = 2 * c + 1" and "b = 2 * d + 1" by (blast elim: oddE) then have "a + b = 2 * c + 2 * d + (1 + 1)" by (simp only: ac_simps) also have "\ = 2 * (c + d + 1)" by (simp add: algebra_simps) finally show ?thesis .. qed lemma even_add [simp]: "even (a + b) \ (even a \ even b)" by (auto simp add: dvd_add_right_iff dvd_add_left_iff odd_even_add) lemma odd_add [simp]: "odd (a + b) \ \ (odd a \ odd b)" by simp lemma even_plus_one_iff [simp]: "even (a + 1) \ odd a" by (auto simp add: dvd_add_right_iff intro: odd_even_add) lemma even_mult_iff [simp]: "even (a * b) \ even a \ even b" (is "?P \ ?Q") proof assume ?Q then show ?P by auto next assume ?P show ?Q proof (rule ccontr) assume "\ (even a \ even b)" then have "odd a" and "odd b" by auto then obtain r s where "a = 2 * r + 1" and "b = 2 * s + 1" by (blast elim: oddE) then have "a * b = (2 * r + 1) * (2 * s + 1)" by simp also have "\ = 2 * (2 * r * s + r + s) + 1" by (simp add: algebra_simps) finally have "odd (a * b)" by simp with \?P\ show False by auto qed qed lemma even_numeral [simp]: "even (numeral (Num.Bit0 n))" proof - have "even (2 * numeral n)" unfolding even_mult_iff by simp then have "even (numeral n + numeral n)" unfolding mult_2 . then show ?thesis unfolding numeral.simps . qed lemma odd_numeral [simp]: "odd (numeral (Num.Bit1 n))" proof assume "even (numeral (num.Bit1 n))" then have "even (numeral n + numeral n + 1)" unfolding numeral.simps . then have "even (2 * numeral n + 1)" unfolding mult_2 . 
then have "2 dvd numeral n * 2 + 1" by (simp add: ac_simps) then have "2 dvd 1" using dvd_add_times_triv_left_iff [of 2 "numeral n" 1] by simp then show False by simp qed lemma odd_numeral_BitM [simp]: \odd (numeral (Num.BitM w))\ by (cases w) simp_all lemma even_power [simp]: "even (a ^ n) \ even a \ n > 0" by (induct n) auto lemma mask_eq_sum_exp: \2 ^ n - 1 = (\m\{q. q < n}. 2 ^ m)\ proof - have *: \{q. q < Suc m} = insert m {q. q < m}\ for m by auto have \2 ^ n = (\m\{q. q < n}. 2 ^ m) + 1\ by (induction n) (simp_all add: ac_simps mult_2 *) then have \2 ^ n - 1 = (\m\{q. q < n}. 2 ^ m) + 1 - 1\ by simp then show ?thesis by simp qed end class ring_parity = ring + semiring_parity begin subclass comm_ring_1 .. lemma even_minus: "even (- a) \ even a" by (fact dvd_minus_iff) lemma even_diff [simp]: "even (a - b) \ even (a + b)" using even_add [of a "- b"] by simp end subsection \Special case: euclidean rings containing the natural numbers\ context unique_euclidean_semiring_with_nat begin subclass semiring_parity proof show "2 dvd a \ a mod 2 = 0" for a by (fact dvd_eq_mod_eq_0) show "\ 2 dvd a \ a mod 2 = 1" for a proof assume "a mod 2 = 1" then show "\ 2 dvd a" by auto next assume "\ 2 dvd a" have eucl: "euclidean_size (a mod 2) = 1" proof (rule order_antisym) show "euclidean_size (a mod 2) \ 1" using mod_size_less [of 2 a] by simp show "1 \ euclidean_size (a mod 2)" using \\ 2 dvd a\ by (simp add: Suc_le_eq dvd_eq_mod_eq_0) qed from \\ 2 dvd a\ have "\ of_nat 2 dvd division_segment a * of_nat (euclidean_size a)" by simp then have "\ of_nat 2 dvd of_nat (euclidean_size a)" by (auto simp only: dvd_mult_unit_iff' is_unit_division_segment) then have "\ 2 dvd euclidean_size a" using of_nat_dvd_iff [of 2] by simp then have "euclidean_size a mod 2 = 1" by (simp add: semidom_modulo_class.dvd_eq_mod_eq_0) then have "of_nat (euclidean_size a mod 2) = of_nat 1" by simp then have "of_nat (euclidean_size a) mod 2 = 1" by (simp add: of_nat_mod) from \\ 2 dvd a\ eucl show "a mod 
2 = 1" by (auto intro: division_segment_eq_iff simp add: division_segment_mod) qed show "\ is_unit 2" proof (rule notI) assume "is_unit 2" then have "of_nat 2 dvd of_nat 1" by simp then have "is_unit (2::nat)" by (simp only: of_nat_dvd_iff) then show False by simp qed qed lemma even_of_nat [simp]: "even (of_nat a) \ even a" proof - have "even (of_nat a) \ of_nat 2 dvd of_nat a" by simp also have "\ \ even a" by (simp only: of_nat_dvd_iff) finally show ?thesis . qed lemma even_succ_div_two [simp]: "even a \ (a + 1) div 2 = a div 2" by (cases "a = 0") (auto elim!: evenE dest: mult_not_zero) lemma odd_succ_div_two [simp]: "odd a \ (a + 1) div 2 = a div 2 + 1" by (auto elim!: oddE simp add: add.assoc) lemma even_two_times_div_two: "even a \ 2 * (a div 2) = a" by (fact dvd_mult_div_cancel) lemma odd_two_times_div_two_succ [simp]: "odd a \ 2 * (a div 2) + 1 = a" using mult_div_mod_eq [of 2 a] by (simp add: even_iff_mod_2_eq_zero) lemma coprime_left_2_iff_odd [simp]: "coprime 2 a \ odd a" proof assume "odd a" show "coprime 2 a" proof (rule coprimeI) fix b assume "b dvd 2" "b dvd a" then have "b dvd a mod 2" by (auto intro: dvd_mod) with \odd a\ show "is_unit b" by (simp add: mod_2_eq_odd) qed next assume "coprime 2 a" show "odd a" proof (rule notI) assume "even a" then obtain b where "a = 2 * b" .. with \coprime 2 a\ have "coprime 2 (2 * b)" by simp moreover have "\ coprime 2 (2 * b)" by (rule not_coprimeI [of 2]) simp_all ultimately show False by blast qed qed lemma coprime_right_2_iff_odd [simp]: "coprime a 2 \ odd a" using coprime_left_2_iff_odd [of a] by (simp add: ac_simps) end context unique_euclidean_ring_with_nat begin subclass ring_parity .. 
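The `minus_1_mod_2_eq` and `minus_1_div_2_eq` lemmas that follow pin down the flooring behaviour of integer division in HOL: `div`/`mod` on `int` round toward negative infinity, so `- 1 mod 2 = 1` and `- 1 div 2 = - 1`. As an informal numeric sanity check only (not part of the theory sources), the same convention is what Python's `//` and `%` implement:

```python
# Sanity check of the flooring div/mod convention used by HOL's int
# (and by Python): the sign of the remainder follows the divisor.
assert -1 % 2 == 1     # corresponds to  - 1 mod 2 = 1  (minus_1_mod_2_eq)
assert -1 // 2 == -1   # corresponds to  - 1 div 2 = - 1 (minus_1_div_2_eq)

# The division identity behind div_mult_mod_eq:
#   a = (a div b) * b + (a mod b)
for a in range(-10, 10):
    for b in (2, 3, -2):
        assert a == (a // b) * b + (a % b)
```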
lemma minus_1_mod_2_eq [simp]: "- 1 mod 2 = 1" by (simp add: mod_2_eq_odd) lemma minus_1_div_2_eq [simp]: "- 1 div 2 = - 1" proof - from div_mult_mod_eq [of "- 1" 2] have "- 1 div 2 * 2 = - 1 * 2" using add_implies_diff by fastforce then show ?thesis using mult_right_cancel [of 2 "- 1 div 2" "- 1"] by simp qed end subsection \Instance for \<^typ>\nat\\ instance nat :: unique_euclidean_semiring_with_nat by standard (simp_all add: dvd_eq_mod_eq_0) lemma even_Suc_Suc_iff [simp]: "even (Suc (Suc n)) \ even n" using dvd_add_triv_right_iff [of 2 n] by simp lemma even_Suc [simp]: "even (Suc n) \ odd n" using even_plus_one_iff [of n] by simp lemma even_diff_nat [simp]: "even (m - n) \ m < n \ even (m + n)" for m n :: nat proof (cases "n \ m") case True then have "m - n + n * 2 = m + n" by (simp add: mult_2_right) moreover have "even (m - n) \ even (m - n + n * 2)" by simp ultimately have "even (m - n) \ even (m + n)" by (simp only:) then show ?thesis by auto next case False then show ?thesis by simp qed lemma odd_pos: "odd n \ 0 < n" for n :: nat by (auto elim: oddE) lemma Suc_double_not_eq_double: "Suc (2 * m) \ 2 * n" proof assume "Suc (2 * m) = 2 * n" moreover have "odd (Suc (2 * m))" and "even (2 * n)" by simp_all ultimately show False by simp qed lemma double_not_eq_Suc_double: "2 * m \ Suc (2 * n)" using Suc_double_not_eq_double [of n m] by simp lemma odd_Suc_minus_one [simp]: "odd n \ Suc (n - Suc 0) = n" by (auto elim: oddE) lemma even_Suc_div_two [simp]: "even n \ Suc n div 2 = n div 2" using even_succ_div_two [of n] by simp lemma odd_Suc_div_two [simp]: "odd n \ Suc n div 2 = Suc (n div 2)" using odd_succ_div_two [of n] by simp lemma odd_two_times_div_two_nat [simp]: assumes "odd n" shows "2 * (n div 2) = n - (1 :: nat)" proof - from assms have "2 * (n div 2) + 1 = n" by (rule odd_two_times_div_two_succ) then have "Suc (2 * (n div 2)) - 1 = n - 1" by simp then show ?thesis by simp qed lemma not_mod2_eq_Suc_0_eq_0 [simp]: "n mod 2 \ Suc 0 \ n mod 2 = 0" using 
not_mod_2_eq_1_eq_0 [of n] by simp lemma odd_card_imp_not_empty: \A \ {}\ if \odd (card A)\ using that by auto lemma nat_induct2 [case_names 0 1 step]: assumes "P 0" "P 1" and step: "\n::nat. P n \ P (n + 2)" shows "P n" proof (induct n rule: less_induct) case (less n) show ?case proof (cases "n < Suc (Suc 0)") case True then show ?thesis using assms by (auto simp: less_Suc_eq) next case False then obtain k where k: "n = Suc (Suc k)" by (force simp: not_less nat_le_iff_add) then have "k<n" by simp with less assms have "P k" by blast with step show ?thesis by (simp add: k) qed qed lemma mask_eq_sum_exp_nat: \2 ^ n - Suc 0 = (\m\{q. q < n}. 2 ^ m)\ using mask_eq_sum_exp [where ?'a = nat] by simp context semiring_parity begin lemma even_sum_iff: \even (sum f A) \ even (card {a\A. odd (f a)})\ if \finite A\ using that proof (induction A) case empty then show ?case by simp next case (insert a A) moreover have \{b \ insert a A. odd (f b)} = (if odd (f a) then {a} else {}) \ {b \ A. odd (f b)}\ by auto ultimately show ?case by simp qed lemma even_prod_iff: \even (prod f A) \ (\a\A. even (f a))\ if \finite A\ using that by (induction A) simp_all lemma even_mask_iff [simp]: \even (2 ^ n - 1) \ n = 0\ proof (cases \n = 0\) case True then show ?thesis by simp next case False then have \{a.
a = 0 \ a < n} = {0}\ by auto then show ?thesis by (auto simp add: mask_eq_sum_exp even_sum_iff) qed end subsection \Parity and powers\ context ring_1 begin lemma power_minus_even [simp]: "even n \ (- a) ^ n = a ^ n" by (auto elim: evenE) lemma power_minus_odd [simp]: "odd n \ (- a) ^ n = - (a ^ n)" by (auto elim: oddE) lemma uminus_power_if: "(- a) ^ n = (if even n then a ^ n else - (a ^ n))" by auto lemma neg_one_even_power [simp]: "even n \ (- 1) ^ n = 1" by simp lemma neg_one_odd_power [simp]: "odd n \ (- 1) ^ n = - 1" by simp lemma neg_one_power_add_eq_neg_one_power_diff: "k \ n \ (- 1) ^ (n + k) = (- 1) ^ (n - k)" by (cases "even (n + k)") auto lemma minus_one_power_iff: "(- 1) ^ n = (if even n then 1 else - 1)" by (induct n) auto end context linordered_idom begin lemma zero_le_even_power: "even n \ 0 \ a ^ n" by (auto elim: evenE) lemma zero_le_odd_power: "odd n \ 0 \ a ^ n \ 0 \ a" by (auto simp add: power_even_eq zero_le_mult_iff elim: oddE) lemma zero_le_power_eq: "0 \ a ^ n \ even n \ odd n \ 0 \ a" by (auto simp add: zero_le_even_power zero_le_odd_power) lemma zero_less_power_eq: "0 < a ^ n \ n = 0 \ even n \ a \ 0 \ odd n \ 0 < a" proof - have [simp]: "0 = a ^ n \ a = 0 \ n > 0" unfolding power_eq_0_iff [of a n, symmetric] by blast show ?thesis unfolding less_le zero_le_power_eq by auto qed lemma power_less_zero_eq [simp]: "a ^ n < 0 \ odd n \ a < 0" unfolding not_le [symmetric] zero_le_power_eq by auto lemma power_le_zero_eq: "a ^ n \ 0 \ n > 0 \ (odd n \ a \ 0 \ even n \ a = 0)" unfolding not_less [symmetric] zero_less_power_eq by auto lemma power_even_abs: "even n \ \a\ ^ n = a ^ n" using power_abs [of a n] by (simp add: zero_le_even_power) lemma power_mono_even: assumes "even n" and "\a\ \ \b\" shows "a ^ n \ b ^ n" proof - have "0 \ \a\" by auto with \\a\ \ \b\\ have "\a\ ^ n \ \b\ ^ n" by (rule power_mono) with \even n\ show ?thesis by (simp add: power_even_abs) qed lemma power_mono_odd: assumes "odd n" and "a \ b" shows "a ^ n \ b ^ n" proof 
(cases "b < 0") case True with \a \ b\ have "- b \ - a" and "0 \ - b" by auto then have "(- b) ^ n \ (- a) ^ n" by (rule power_mono) with \odd n\ show ?thesis by simp next case False then have "0 \ b" by auto show ?thesis proof (cases "a < 0") case True then have "n \ 0" and "a \ 0" using \odd n\ [THEN odd_pos] by auto then have "a ^ n \ 0" unfolding power_le_zero_eq using \odd n\ by auto moreover from \0 \ b\ have "0 \ b ^ n" by auto ultimately show ?thesis by auto next case False then have "0 \ a" by auto with \a \ b\ show ?thesis using power_mono by auto qed qed text \Simplify, when the exponent is a numeral\ lemma zero_le_power_eq_numeral [simp]: "0 \ a ^ numeral w \ even (numeral w :: nat) \ odd (numeral w :: nat) \ 0 \ a" by (fact zero_le_power_eq) lemma zero_less_power_eq_numeral [simp]: "0 < a ^ numeral w \ numeral w = (0 :: nat) \ even (numeral w :: nat) \ a \ 0 \ odd (numeral w :: nat) \ 0 < a" by (fact zero_less_power_eq) lemma power_le_zero_eq_numeral [simp]: "a ^ numeral w \ 0 \ (0 :: nat) < numeral w \ (odd (numeral w :: nat) \ a \ 0 \ even (numeral w :: nat) \ a = 0)" by (fact power_le_zero_eq) lemma power_less_zero_eq_numeral [simp]: "a ^ numeral w < 0 \ odd (numeral w :: nat) \ a < 0" by (fact power_less_zero_eq) lemma power_even_abs_numeral [simp]: "even (numeral w :: nat) \ \a\ ^ numeral w = a ^ numeral w" by (fact power_even_abs) end context unique_euclidean_semiring_with_nat begin lemma even_mask_div_iff': \even ((2 ^ m - 1) div 2 ^ n) \ m \ n\ proof - have \even ((2 ^ m - 1) div 2 ^ n) \ even (of_nat ((2 ^ m - Suc 0) div 2 ^ n))\ by (simp only: of_nat_div) (simp add: of_nat_diff) also have \\ \ even ((2 ^ m - Suc 0) div 2 ^ n)\ by simp also have \\ \ m \ n\ proof (cases \m \ n\) case True then show ?thesis by (simp add: Suc_le_lessD) next case False then obtain r where r: \m = n + Suc r\ using less_imp_Suc_add by fastforce from r have \{q. q < m} \ {q. 2 ^ n dvd (2::nat) ^ q} = {q. 
n \ q \ q < m}\ by (auto simp add: dvd_power_iff_le) moreover from r have \{q. q < m} \ {q. \ 2 ^ n dvd (2::nat) ^ q} = {q. q < n}\ by (auto simp add: dvd_power_iff_le) moreover from False have \{q. n \ q \ q < m \ q \ n} = {n}\ by auto then have \odd ((\a\{q. n \ q \ q < m}. 2 ^ a div (2::nat) ^ n) + sum ((^) 2) {q. q < n} div 2 ^ n)\ by (simp_all add: euclidean_semiring_cancel_class.power_diff_power_eq semiring_parity_class.even_sum_iff not_less mask_eq_sum_exp_nat [symmetric]) ultimately have \odd (sum ((^) (2::nat)) {q. q < m} div 2 ^ n)\ by (subst euclidean_semiring_cancel_class.sum_div_partition) simp_all with False show ?thesis by (simp add: mask_eq_sum_exp_nat) qed finally show ?thesis . qed end subsection \Instance for \<^typ>\int\\ lemma even_diff_iff: "even (k - l) \ even (k + l)" for k l :: int by (fact even_diff) lemma even_abs_add_iff: "even (\k\ + l) \ even (k + l)" for k l :: int by simp lemma even_add_abs_iff: "even (k + \l\) \ even (k + l)" for k l :: int by simp lemma even_nat_iff: "0 \ k \ even (nat k) \ even k" by (simp add: even_of_nat [of "nat k", where ?'a = int, symmetric]) lemma zdiv_zmult2_eq: \a div (b * c) = (a div b) div c\ if \c \ 0\ for a b c :: int proof (cases \b \ 0\) case True with that show ?thesis using div_mult2_eq' [of a \nat b\ \nat c\] by simp next case False with that show ?thesis using div_mult2_eq' [of \- a\ \nat (- b)\ \nat c\] by simp qed lemma zmod_zmult2_eq: \a mod (b * c) = b * (a div b mod c) + a mod b\ if \c \ 0\ for a b c :: int proof (cases \b \ 0\) case True with that show ?thesis using mod_mult2_eq' [of a \nat b\ \nat c\] by simp next case False with that show ?thesis using mod_mult2_eq' [of \- a\ \nat (- b)\ \nat c\] by simp qed context assumes "SORT_CONSTRAINT('a::division_ring)" begin lemma power_int_minus_left: "power_int (-a :: 'a) n = (if even n then power_int a n else -power_int a n)" by (auto simp: power_int_def minus_one_power_iff even_nat_iff) lemma power_int_minus_left_even [simp]: "even n \ 
power_int (-a :: 'a) n = power_int a n" by (simp add: power_int_minus_left) lemma power_int_minus_left_odd [simp]: "odd n \ power_int (-a :: 'a) n = -power_int a n" by (simp add: power_int_minus_left) lemma power_int_minus_left_distrib: "NO_MATCH (-1) x \ power_int (-a :: 'a) n = power_int (-1) n * power_int a n" by (simp add: power_int_minus_left) lemma power_int_minus_one_minus: "power_int (-1 :: 'a) (-n) = power_int (-1) n" by (simp add: power_int_minus_left) lemma power_int_minus_one_diff_commute: "power_int (-1 :: 'a) (a - b) = power_int (-1) (b - a)" by (subst power_int_minus_one_minus [symmetric]) auto lemma power_int_minus_one_mult_self [simp]: "power_int (-1 :: 'a) m * power_int (-1) m = 1" by (simp add: power_int_minus_left) lemma power_int_minus_one_mult_self' [simp]: "power_int (-1 :: 'a) m * (power_int (-1) m * b) = b" by (simp add: power_int_minus_left) end - -subsection \Abstract bit structures\ - -class semiring_bits = semiring_parity + - assumes bits_induct [case_names stable rec]: - \(\a. a div 2 = a \ P a) - \ (\a b. 
P a \ (of_bool b + 2 * a) div 2 = a \ P (of_bool b + 2 * a)) - \ P a\ - assumes bits_div_0 [simp]: \0 div a = 0\ - and bits_div_by_1 [simp]: \a div 1 = a\ - and bits_mod_div_trivial [simp]: \a mod b div b = 0\ - and even_succ_div_2 [simp]: \even a \ (1 + a) div 2 = a div 2\ - and even_mask_div_iff: \even ((2 ^ m - 1) div 2 ^ n) \ 2 ^ n = 0 \ m \ n\ - and exp_div_exp_eq: \2 ^ m div 2 ^ n = of_bool (2 ^ m \ 0 \ m \ n) * 2 ^ (m - n)\ - and div_exp_eq: \a div 2 ^ m div 2 ^ n = a div 2 ^ (m + n)\ - and mod_exp_eq: \a mod 2 ^ m mod 2 ^ n = a mod 2 ^ min m n\ - and mult_exp_mod_exp_eq: \m \ n \ (a * 2 ^ m) mod (2 ^ n) = (a mod 2 ^ (n - m)) * 2 ^ m\ - and div_exp_mod_exp_eq: \a div 2 ^ n mod 2 ^ m = a mod (2 ^ (n + m)) div 2 ^ n\ - and even_mult_exp_div_exp_iff: \even (a * 2 ^ m div 2 ^ n) \ m > n \ 2 ^ n = 0 \ (m \ n \ even (a div 2 ^ (n - m)))\ - fixes bit :: \'a \ nat \ bool\ - assumes bit_iff_odd: \bit a n \ odd (a div 2 ^ n)\ -begin - -text \ - Having \<^const>\bit\ as definitional class operation - takes into account that specific instances can be implemented - differently wrt. code generation. 
-\ - -lemma bits_div_by_0 [simp]: - \a div 0 = 0\ - by (metis add_cancel_right_right bits_mod_div_trivial mod_mult_div_eq mult_not_zero) - -lemma bits_1_div_2 [simp]: - \1 div 2 = 0\ - using even_succ_div_2 [of 0] by simp - -lemma bits_1_div_exp [simp]: - \1 div 2 ^ n = of_bool (n = 0)\ - using div_exp_eq [of 1 1] by (cases n) simp_all - -lemma even_succ_div_exp [simp]: - \(1 + a) div 2 ^ n = a div 2 ^ n\ if \even a\ and \n > 0\ -proof (cases n) - case 0 - with that show ?thesis - by simp -next - case (Suc n) - with \even a\ have \(1 + a) div 2 ^ Suc n = a div 2 ^ Suc n\ - proof (induction n) - case 0 - then show ?case - by simp - next - case (Suc n) - then show ?case - using div_exp_eq [of _ 1 \Suc n\, symmetric] - by simp - qed - with Suc show ?thesis - by simp -qed - -lemma even_succ_mod_exp [simp]: - \(1 + a) mod 2 ^ n = 1 + (a mod 2 ^ n)\ if \even a\ and \n > 0\ - using div_mult_mod_eq [of \1 + a\ \2 ^ n\] that - apply simp - by (metis local.add.left_commute local.add_left_cancel local.div_mult_mod_eq) - -lemma bits_mod_by_1 [simp]: - \a mod 1 = 0\ - using div_mult_mod_eq [of a 1] by simp - -lemma bits_mod_0 [simp]: - \0 mod a = 0\ - using div_mult_mod_eq [of 0 a] by simp - -lemma bits_one_mod_two_eq_one [simp]: - \1 mod 2 = 1\ - by (simp add: mod2_eq_if) - -lemma bit_0 [simp]: - \bit a 0 \ odd a\ - by (simp add: bit_iff_odd) - -lemma bit_Suc: - \bit a (Suc n) \ bit (a div 2) n\ - using div_exp_eq [of a 1 n] by (simp add: bit_iff_odd) - -lemma bit_rec: - \bit a n \ (if n = 0 then odd a else bit (a div 2) (n - 1))\ - by (cases n) (simp_all add: bit_Suc) - -lemma bit_0_eq [simp]: - \bit 0 = bot\ - by (simp add: fun_eq_iff bit_iff_odd) - -context - fixes a - assumes stable: \a div 2 = a\ -begin - -lemma bits_stable_imp_add_self: - \a + a mod 2 = 0\ -proof - - have \a div 2 * 2 + a mod 2 = a\ - by (fact div_mult_mod_eq) - then have \a * 2 + a mod 2 = a\ - by (simp add: stable) - then show ?thesis - by (simp add: mult_2_right ac_simps) -qed - -lemma 
stable_imp_bit_iff_odd: - \bit a n \ odd a\ - by (induction n) (simp_all add: stable bit_Suc) - -end - -lemma bit_iff_idd_imp_stable: - \a div 2 = a\ if \\n. bit a n \ odd a\ -using that proof (induction a rule: bits_induct) - case (stable a) - then show ?case - by simp -next - case (rec a b) - from rec.prems [of 1] have [simp]: \b = odd a\ - by (simp add: rec.hyps bit_Suc) - from rec.hyps have hyp: \(of_bool (odd a) + 2 * a) div 2 = a\ - by simp - have \bit a n \ odd a\ for n - using rec.prems [of \Suc n\] by (simp add: hyp bit_Suc) - then have \a div 2 = a\ - by (rule rec.IH) - then have \of_bool (odd a) + 2 * a = 2 * (a div 2) + of_bool (odd a)\ - by (simp add: ac_simps) - also have \\ = a\ - using mult_div_mod_eq [of 2 a] - by (simp add: of_bool_odd_eq_mod_2) - finally show ?case - using \a div 2 = a\ by (simp add: hyp) -qed - -lemma exp_eq_0_imp_not_bit: - \\ bit a n\ if \2 ^ n = 0\ - using that by (simp add: bit_iff_odd) - -lemma bit_eqI: - \a = b\ if \\n. 2 ^ n \ 0 \ bit a n \ bit b n\ -proof - - have \bit a n \ bit b n\ for n - proof (cases \2 ^ n = 0\) - case True - then show ?thesis - by (simp add: exp_eq_0_imp_not_bit) - next - case False - then show ?thesis - by (rule that) - qed - then show ?thesis proof (induction a arbitrary: b rule: bits_induct) - case (stable a) - from stable(2) [of 0] have **: \even b \ even a\ - by simp - have \b div 2 = b\ - proof (rule bit_iff_idd_imp_stable) - fix n - from stable have *: \bit b n \ bit a n\ - by simp - also have \bit a n \ odd a\ - using stable by (simp add: stable_imp_bit_iff_odd) - finally show \bit b n \ odd b\ - by (simp add: **) - qed - from ** have \a mod 2 = b mod 2\ - by (simp add: mod2_eq_if) - then have \a mod 2 + (a + b) = b mod 2 + (a + b)\ - by simp - then have \a + a mod 2 + b = b + b mod 2 + a\ - by (simp add: ac_simps) - with \a div 2 = a\ \b div 2 = b\ show ?case - by (simp add: bits_stable_imp_add_self) - next - case (rec a p) - from rec.prems [of 0] have [simp]: \p = odd b\ - by simp - from 
rec.hyps have \bit a n \ bit (b div 2) n\ for n - using rec.prems [of \Suc n\] by (simp add: bit_Suc) - then have \a = b div 2\ - by (rule rec.IH) - then have \2 * a = 2 * (b div 2)\ - by simp - then have \b mod 2 + 2 * a = b mod 2 + 2 * (b div 2)\ - by simp - also have \\ = b\ - by (fact mod_mult_div_eq) - finally show ?case - by (auto simp add: mod2_eq_if) - qed -qed - -lemma bit_eq_iff: - \a = b \ (\n. bit a n \ bit b n)\ - by (auto intro: bit_eqI) - -named_theorems bit_simps \Simplification rules for \<^const>\bit\\ - -lemma bit_exp_iff [bit_simps]: - \bit (2 ^ m) n \ 2 ^ m \ 0 \ m = n\ - by (auto simp add: bit_iff_odd exp_div_exp_eq) - -lemma bit_1_iff [bit_simps]: - \bit 1 n \ 1 \ 0 \ n = 0\ - using bit_exp_iff [of 0 n] by simp - -lemma bit_2_iff [bit_simps]: - \bit 2 n \ 2 \ 0 \ n = 1\ - using bit_exp_iff [of 1 n] by auto - -lemma even_bit_succ_iff: - \bit (1 + a) n \ bit a n \ n = 0\ if \even a\ - using that by (cases \n = 0\) (simp_all add: bit_iff_odd) - -lemma odd_bit_iff_bit_pred: - \bit a n \ bit (a - 1) n \ n = 0\ if \odd a\ -proof - - from \odd a\ obtain b where \a = 2 * b + 1\ .. 
- moreover have \bit (2 * b) n \ n = 0 \ bit (1 + 2 * b) n\ - using even_bit_succ_iff by simp - ultimately show ?thesis by (simp add: ac_simps) -qed - -lemma bit_double_iff [bit_simps]: - \bit (2 * a) n \ bit a (n - 1) \ n \ 0 \ 2 ^ n \ 0\ - using even_mult_exp_div_exp_iff [of a 1 n] - by (cases n, auto simp add: bit_iff_odd ac_simps) - -lemma bit_eq_rec: - \a = b \ (even a \ even b) \ a div 2 = b div 2\ (is \?P = ?Q\) -proof - assume ?P - then show ?Q - by simp -next - assume ?Q - then have \even a \ even b\ and \a div 2 = b div 2\ - by simp_all - show ?P - proof (rule bit_eqI) - fix n - show \bit a n \ bit b n\ - proof (cases n) - case 0 - with \even a \ even b\ show ?thesis - by simp - next - case (Suc n) - moreover from \a div 2 = b div 2\ have \bit (a div 2) n = bit (b div 2) n\ - by simp - ultimately show ?thesis - by (simp add: bit_Suc) - qed - qed -qed - -lemma bit_mod_2_iff [simp]: - \bit (a mod 2) n \ n = 0 \ odd a\ - by (cases a rule: parity_cases) (simp_all add: bit_iff_odd) - -lemma bit_mask_iff: - \bit (2 ^ m - 1) n \ 2 ^ n \ 0 \ n < m\ - by (simp add: bit_iff_odd even_mask_div_iff not_le) - -lemma bit_Numeral1_iff [simp]: - \bit (numeral Num.One) n \ n = 0\ - by (simp add: bit_rec) - -lemma exp_add_not_zero_imp: - \2 ^ m \ 0\ and \2 ^ n \ 0\ if \2 ^ (m + n) \ 0\ -proof - - have \\ (2 ^ m = 0 \ 2 ^ n = 0)\ - proof (rule notI) - assume \2 ^ m = 0 \ 2 ^ n = 0\ - then have \2 ^ (m + n) = 0\ - by (rule disjE) (simp_all add: power_add) - with that show False .. - qed - then show \2 ^ m \ 0\ and \2 ^ n \ 0\ - by simp_all -qed - -lemma bit_disjunctive_add_iff: - \bit (a + b) n \ bit a n \ bit b n\ - if \\n. 
\ bit a n \ \ bit b n\ -proof (cases \2 ^ n = 0\) - case True - then show ?thesis - by (simp add: exp_eq_0_imp_not_bit) -next - case False - with that show ?thesis proof (induction n arbitrary: a b) - case 0 - from "0.prems"(1) [of 0] show ?case - by auto - next - case (Suc n) - from Suc.prems(1) [of 0] have even: \even a \ even b\ - by auto - have bit: \\ bit (a div 2) n \ \ bit (b div 2) n\ for n - using Suc.prems(1) [of \Suc n\] by (simp add: bit_Suc) - from Suc.prems(2) have \2 * 2 ^ n \ 0\ \2 ^ n \ 0\ - by (auto simp add: mult_2) - have \a + b = (a div 2 * 2 + a mod 2) + (b div 2 * 2 + b mod 2)\ - using div_mult_mod_eq [of a 2] div_mult_mod_eq [of b 2] by simp - also have \\ = of_bool (odd a \ odd b) + 2 * (a div 2 + b div 2)\ - using even by (auto simp add: algebra_simps mod2_eq_if) - finally have \bit ((a + b) div 2) n \ bit (a div 2 + b div 2) n\ - using \2 * 2 ^ n \ 0\ by simp (simp_all flip: bit_Suc add: bit_double_iff) - also have \\ \ bit (a div 2) n \ bit (b div 2) n\ - using bit \2 ^ n \ 0\ by (rule Suc.IH) - finally show ?case - by (simp add: bit_Suc) - qed -qed - -lemma - exp_add_not_zero_imp_left: \2 ^ m \ 0\ - and exp_add_not_zero_imp_right: \2 ^ n \ 0\ - if \2 ^ (m + n) \ 0\ -proof - - have \\ (2 ^ m = 0 \ 2 ^ n = 0)\ - proof (rule notI) - assume \2 ^ m = 0 \ 2 ^ n = 0\ - then have \2 ^ (m + n) = 0\ - by (rule disjE) (simp_all add: power_add) - with that show False .. - qed - then show \2 ^ m \ 0\ and \2 ^ n \ 0\ - by simp_all -qed - -lemma exp_not_zero_imp_exp_diff_not_zero: - \2 ^ (n - m) \ 0\ if \2 ^ n \ 0\ -proof (cases \m \ n\) - case True - moreover define q where \q = n - m\ - ultimately have \n = m + q\ - by simp - with that show ?thesis - by (simp add: exp_add_not_zero_imp_right) -next - case False - with that show ?thesis - by simp -qed - -end - -lemma nat_bit_induct [case_names zero even odd]: - "P n" if zero: "P 0" - and even: "\n. P n \ n > 0 \ P (2 * n)" - and odd: "\n. 
P n \ P (Suc (2 * n))" -proof (induction n rule: less_induct) - case (less n) - show "P n" - proof (cases "n = 0") - case True with zero show ?thesis by simp - next - case False - with less have hyp: "P (n div 2)" by simp - show ?thesis - proof (cases "even n") - case True - then have "n \ 1" - by auto - with \n \ 0\ have "n div 2 > 0" - by simp - with \even n\ hyp even [of "n div 2"] show ?thesis - by simp - next - case False - with hyp odd [of "n div 2"] show ?thesis - by simp - qed - qed -qed - -instantiation nat :: semiring_bits -begin - -definition bit_nat :: \nat \ nat \ bool\ - where \bit_nat m n \ odd (m div 2 ^ n)\ - -instance -proof - show \P n\ if stable: \\n. n div 2 = n \ P n\ - and rec: \\n b. P n \ (of_bool b + 2 * n) div 2 = n \ P (of_bool b + 2 * n)\ - for P and n :: nat - proof (induction n rule: nat_bit_induct) - case zero - from stable [of 0] show ?case - by simp - next - case (even n) - with rec [of n False] show ?case - by simp - next - case (odd n) - with rec [of n True] show ?case - by simp - qed - show \q mod 2 ^ m mod 2 ^ n = q mod 2 ^ min m n\ - for q m n :: nat - apply (auto simp add: less_iff_Suc_add power_add mod_mod_cancel split: split_min_lin) - apply (metis div_mult2_eq mod_div_trivial mod_eq_self_iff_div_eq_0 mod_mult_self2_is_0 power_commutes) - done - show \(q * 2 ^ m) mod (2 ^ n) = (q mod 2 ^ (n - m)) * 2 ^ m\ if \m \ n\ - for q m n :: nat - using that - apply (auto simp add: mod_mod_cancel div_mult2_eq power_add mod_mult2_eq le_iff_add split: split_min_lin) - apply (simp add: mult.commute) - done - show \even ((2 ^ m - (1::nat)) div 2 ^ n) \ 2 ^ n = (0::nat) \ m \ n\ - for m n :: nat - using even_mask_div_iff' [where ?'a = nat, of m n] by simp - show \even (q * 2 ^ m div 2 ^ n) \ n < m \ (2::nat) ^ n = 0 \ m \ n \ even (q div 2 ^ (n - m))\ - for m n q r :: nat - apply (auto simp add: not_less power_add ac_simps dest!: le_Suc_ex) - apply (metis (full_types) dvd_mult dvd_mult_imp_div dvd_power_iff_le not_less not_less_eq 
order_refl power_Suc) - done -qed (auto simp add: div_mult2_eq mod_mult2_eq power_add power_diff bit_nat_def) - -end - -lemma int_bit_induct [case_names zero minus even odd]: - "P k" if zero_int: "P 0" - and minus_int: "P (- 1)" - and even_int: "\k. P k \ k \ 0 \ P (k * 2)" - and odd_int: "\k. P k \ k \ - 1 \ P (1 + (k * 2))" for k :: int -proof (cases "k \ 0") - case True - define n where "n = nat k" - with True have "k = int n" - by simp - then show "P k" - proof (induction n arbitrary: k rule: nat_bit_induct) - case zero - then show ?case - by (simp add: zero_int) - next - case (even n) - have "P (int n * 2)" - by (rule even_int) (use even in simp_all) - with even show ?case - by (simp add: ac_simps) - next - case (odd n) - have "P (1 + (int n * 2))" - by (rule odd_int) (use odd in simp_all) - with odd show ?case - by (simp add: ac_simps) - qed -next - case False - define n where "n = nat (- k - 1)" - with False have "k = - int n - 1" - by simp - then show "P k" - proof (induction n arbitrary: k rule: nat_bit_induct) - case zero - then show ?case - by (simp add: minus_int) - next - case (even n) - have "P (1 + (- int (Suc n) * 2))" - by (rule odd_int) (use even in \simp_all add: algebra_simps\) - also have "\ = - int (2 * n) - 1" - by (simp add: algebra_simps) - finally show ?case - using even.prems by simp - next - case (odd n) - have "P (- int (Suc n) * 2)" - by (rule even_int) (use odd in \simp_all add: algebra_simps\) - also have "\ = - int (Suc (2 * n)) - 1" - by (simp add: algebra_simps) - finally show ?case - using odd.prems by simp - qed -qed - -context semiring_bits -begin - -lemma bit_of_bool_iff [bit_simps]: - \bit (of_bool b) n \ b \ n = 0\ - by (simp add: bit_1_iff) - -lemma even_of_nat_iff: - \even (of_nat n) \ even n\ - by (induction n rule: nat_bit_induct) simp_all - -lemma bit_of_nat_iff [bit_simps]: - \bit (of_nat m) n \ (2::'a) ^ n \ 0 \ bit m n\ -proof (cases \(2::'a) ^ n = 0\) - case True - then show ?thesis - by (simp add: 
exp_eq_0_imp_not_bit) -next - case False - then have \bit (of_nat m) n \ bit m n\ - proof (induction m arbitrary: n rule: nat_bit_induct) - case zero - then show ?case - by simp - next - case (even m) - then show ?case - by (cases n) - (auto simp add: bit_double_iff Parity.bit_double_iff dest: mult_not_zero) - next - case (odd m) - then show ?case - by (cases n) - (auto simp add: bit_double_iff even_bit_succ_iff Parity.bit_Suc dest: mult_not_zero) - qed - with False show ?thesis - by simp -qed - -end - -instantiation int :: semiring_bits -begin - -definition bit_int :: \int \ nat \ bool\ - where \bit_int k n \ odd (k div 2 ^ n)\ - -instance -proof - show \P k\ if stable: \\k. k div 2 = k \ P k\ - and rec: \\k b. P k \ (of_bool b + 2 * k) div 2 = k \ P (of_bool b + 2 * k)\ - for P and k :: int - proof (induction k rule: int_bit_induct) - case zero - from stable [of 0] show ?case - by simp - next - case minus - from stable [of \- 1\] show ?case - by simp - next - case (even k) - with rec [of k False] show ?case - by (simp add: ac_simps) - next - case (odd k) - with rec [of k True] show ?case - by (simp add: ac_simps) - qed - show \(2::int) ^ m div 2 ^ n = of_bool ((2::int) ^ m \ 0 \ n \ m) * 2 ^ (m - n)\ - for m n :: nat - proof (cases \m < n\) - case True - then have \n = m + (n - m)\ - by simp - then have \(2::int) ^ m div 2 ^ n = (2::int) ^ m div 2 ^ (m + (n - m))\ - by simp - also have \\ = (2::int) ^ m div (2 ^ m * 2 ^ (n - m))\ - by (simp add: power_add) - also have \\ = (2::int) ^ m div 2 ^ m div 2 ^ (n - m)\ - by (simp add: zdiv_zmult2_eq) - finally show ?thesis using \m < n\ by simp - next - case False - then show ?thesis - by (simp add: power_diff) - qed - show \k mod 2 ^ m mod 2 ^ n = k mod 2 ^ min m n\ - for m n :: nat and k :: int - using mod_exp_eq [of \nat k\ m n] - apply (auto simp add: mod_mod_cancel zdiv_zmult2_eq power_add zmod_zmult2_eq le_iff_add split: split_min_lin) - apply (auto simp add: less_iff_Suc_add mod_mod_cancel power_add) - apply 
(simp only: flip: mult.left_commute [of \2 ^ m\]) - apply (subst zmod_zmult2_eq) apply simp_all - done - show \(k * 2 ^ m) mod (2 ^ n) = (k mod 2 ^ (n - m)) * 2 ^ m\ - if \m \ n\ for m n :: nat and k :: int - using that - apply (auto simp add: power_add zmod_zmult2_eq le_iff_add split: split_min_lin) - apply (simp add: ac_simps) - done - show \even ((2 ^ m - (1::int)) div 2 ^ n) \ 2 ^ n = (0::int) \ m \ n\ - for m n :: nat - using even_mask_div_iff' [where ?'a = int, of m n] by simp - show \even (k * 2 ^ m div 2 ^ n) \ n < m \ (2::int) ^ n = 0 \ m \ n \ even (k div 2 ^ (n - m))\ - for m n :: nat and k l :: int - apply (auto simp add: not_less power_add ac_simps dest!: le_Suc_ex) - apply (metis Suc_leI dvd_mult dvd_mult_imp_div dvd_power_le dvd_refl power.simps(2)) - done -qed (auto simp add: zdiv_zmult2_eq zmod_zmult2_eq power_add power_diff not_le bit_int_def) - -end - -class semiring_bit_shifts = semiring_bits + - fixes push_bit :: \nat \ 'a \ 'a\ - assumes push_bit_eq_mult: \push_bit n a = a * 2 ^ n\ - fixes drop_bit :: \nat \ 'a \ 'a\ - assumes drop_bit_eq_div: \drop_bit n a = a div 2 ^ n\ - fixes take_bit :: \nat \ 'a \ 'a\ - assumes take_bit_eq_mod: \take_bit n a = a mod 2 ^ n\ -begin - -text \ - Logically, \<^const>\push_bit\, - \<^const>\drop_bit\ and \<^const>\take_bit\ are just aliases; having them - as separate operations makes proofs easier, otherwise proof automation - would fiddle with concrete expressions \<^term>\2 ^ n\ in a way obfuscating the basic - algebraic relationships between those operations. - Having - them as definitional class operations - takes into account that specific instances of these can be implemented - differently wrt. code generation. 
-\ - -lemma bit_iff_odd_drop_bit: - \bit a n \ odd (drop_bit n a)\ - by (simp add: bit_iff_odd drop_bit_eq_div) - -lemma even_drop_bit_iff_not_bit: - \even (drop_bit n a) \ \ bit a n\ - by (simp add: bit_iff_odd_drop_bit) - -lemma div_push_bit_of_1_eq_drop_bit: - \a div push_bit n 1 = drop_bit n a\ - by (simp add: push_bit_eq_mult drop_bit_eq_div) - -lemma bits_ident: - "push_bit n (drop_bit n a) + take_bit n a = a" - using div_mult_mod_eq by (simp add: push_bit_eq_mult take_bit_eq_mod drop_bit_eq_div) - -lemma push_bit_push_bit [simp]: - "push_bit m (push_bit n a) = push_bit (m + n) a" - by (simp add: push_bit_eq_mult power_add ac_simps) - -lemma push_bit_0_id [simp]: - "push_bit 0 = id" - by (simp add: fun_eq_iff push_bit_eq_mult) - -lemma push_bit_of_0 [simp]: - "push_bit n 0 = 0" - by (simp add: push_bit_eq_mult) - -lemma push_bit_of_1: - "push_bit n 1 = 2 ^ n" - by (simp add: push_bit_eq_mult) - -lemma push_bit_Suc [simp]: - "push_bit (Suc n) a = push_bit n (a * 2)" - by (simp add: push_bit_eq_mult ac_simps) - -lemma push_bit_double: - "push_bit n (a * 2) = push_bit n a * 2" - by (simp add: push_bit_eq_mult ac_simps) - -lemma push_bit_add: - "push_bit n (a + b) = push_bit n a + push_bit n b" - by (simp add: push_bit_eq_mult algebra_simps) - -lemma push_bit_numeral [simp]: - \push_bit (numeral l) (numeral k) = push_bit (pred_numeral l) (numeral (Num.Bit0 k))\ - by (simp add: numeral_eq_Suc mult_2_right) (simp add: numeral_Bit0) - -lemma take_bit_0 [simp]: - "take_bit 0 a = 0" - by (simp add: take_bit_eq_mod) - -lemma take_bit_Suc: - \take_bit (Suc n) a = take_bit n (a div 2) * 2 + a mod 2\ -proof - - have \take_bit (Suc n) (a div 2 * 2 + of_bool (odd a)) = take_bit n (a div 2) * 2 + of_bool (odd a)\ - using even_succ_mod_exp [of \2 * (a div 2)\ \Suc n\] - mult_exp_mod_exp_eq [of 1 \Suc n\ \a div 2\] - by (auto simp add: take_bit_eq_mod ac_simps) - then show ?thesis - using div_mult_mod_eq [of a 2] by (simp add: mod_2_eq_odd) -qed - -lemma take_bit_rec: - 
\take_bit n a = (if n = 0 then 0 else take_bit (n - 1) (a div 2) * 2 + a mod 2)\ - by (cases n) (simp_all add: take_bit_Suc) - -lemma take_bit_Suc_0 [simp]: - \take_bit (Suc 0) a = a mod 2\ - by (simp add: take_bit_eq_mod) - -lemma take_bit_of_0 [simp]: - "take_bit n 0 = 0" - by (simp add: take_bit_eq_mod) - -lemma take_bit_of_1 [simp]: - "take_bit n 1 = of_bool (n > 0)" - by (cases n) (simp_all add: take_bit_Suc) - -lemma drop_bit_of_0 [simp]: - "drop_bit n 0 = 0" - by (simp add: drop_bit_eq_div) - -lemma drop_bit_of_1 [simp]: - "drop_bit n 1 = of_bool (n = 0)" - by (simp add: drop_bit_eq_div) - -lemma drop_bit_0 [simp]: - "drop_bit 0 = id" - by (simp add: fun_eq_iff drop_bit_eq_div) - -lemma drop_bit_Suc: - "drop_bit (Suc n) a = drop_bit n (a div 2)" - using div_exp_eq [of a 1] by (simp add: drop_bit_eq_div) - -lemma drop_bit_rec: - "drop_bit n a = (if n = 0 then a else drop_bit (n - 1) (a div 2))" - by (cases n) (simp_all add: drop_bit_Suc) - -lemma drop_bit_half: - "drop_bit n (a div 2) = drop_bit n a div 2" - by (induction n arbitrary: a) (simp_all add: drop_bit_Suc) - -lemma drop_bit_of_bool [simp]: - "drop_bit n (of_bool b) = of_bool (n = 0 \ b)" - by (cases n) simp_all - -lemma even_take_bit_eq [simp]: - \even (take_bit n a) \ n = 0 \ even a\ - by (simp add: take_bit_rec [of n a]) - -lemma take_bit_take_bit [simp]: - "take_bit m (take_bit n a) = take_bit (min m n) a" - by (simp add: take_bit_eq_mod mod_exp_eq ac_simps) - -lemma drop_bit_drop_bit [simp]: - "drop_bit m (drop_bit n a) = drop_bit (m + n) a" - by (simp add: drop_bit_eq_div power_add div_exp_eq ac_simps) - -lemma push_bit_take_bit: - "push_bit m (take_bit n a) = take_bit (m + n) (push_bit m a)" - apply (simp add: push_bit_eq_mult take_bit_eq_mod power_add ac_simps) - using mult_exp_mod_exp_eq [of m \m + n\ a] apply (simp add: ac_simps power_add) - done - -lemma take_bit_push_bit: - "take_bit m (push_bit n a) = push_bit n (take_bit (m - n) a)" -proof (cases "m \ n") - case True - then show ?thesis 
- apply (simp add:) - apply (simp_all add: push_bit_eq_mult take_bit_eq_mod) - apply (auto dest!: le_Suc_ex simp add: power_add ac_simps) - using mult_exp_mod_exp_eq [of m m \a * 2 ^ n\ for n] - apply (simp add: ac_simps) - done -next - case False - then show ?thesis - using push_bit_take_bit [of n "m - n" a] - by simp -qed - -lemma take_bit_drop_bit: - "take_bit m (drop_bit n a) = drop_bit n (take_bit (m + n) a)" - by (simp add: drop_bit_eq_div take_bit_eq_mod ac_simps div_exp_mod_exp_eq) - -lemma drop_bit_take_bit: - "drop_bit m (take_bit n a) = take_bit (n - m) (drop_bit m a)" -proof (cases "m \ n") - case True - then show ?thesis - using take_bit_drop_bit [of "n - m" m a] by simp -next - case False - then obtain q where \m = n + q\ - by (auto simp add: not_le dest: less_imp_Suc_add) - then have \drop_bit m (take_bit n a) = 0\ - using div_exp_eq [of \a mod 2 ^ n\ n q] - by (simp add: take_bit_eq_mod drop_bit_eq_div) - with False show ?thesis - by simp -qed - -lemma even_push_bit_iff [simp]: - \even (push_bit n a) \ n \ 0 \ even a\ - by (simp add: push_bit_eq_mult) auto - -lemma bit_push_bit_iff [bit_simps]: - \bit (push_bit m a) n \ m \ n \ 2 ^ n \ 0 \ bit a (n - m)\ - by (auto simp add: bit_iff_odd push_bit_eq_mult even_mult_exp_div_exp_iff) - -lemma bit_drop_bit_eq [bit_simps]: - \bit (drop_bit n a) = bit a \ (+) n\ - by (simp add: bit_iff_odd fun_eq_iff ac_simps flip: drop_bit_eq_div) - -lemma bit_take_bit_iff [bit_simps]: - \bit (take_bit m a) n \ n < m \ bit a n\ - by (simp add: bit_iff_odd drop_bit_take_bit not_le flip: drop_bit_eq_div) - -lemma stable_imp_drop_bit_eq: - \drop_bit n a = a\ - if \a div 2 = a\ - by (induction n) (simp_all add: that drop_bit_Suc) - -lemma stable_imp_take_bit_eq: - \take_bit n a = (if even a then 0 else 2 ^ n - 1)\ - if \a div 2 = a\ -proof (rule bit_eqI) - fix m - assume \2 ^ m \ 0\ - with that show \bit (take_bit n a) m \ bit (if even a then 0 else 2 ^ n - 1) m\ - by (simp add: bit_take_bit_iff bit_mask_iff 
stable_imp_bit_iff_odd) -qed - -lemma exp_dvdE: - assumes \2 ^ n dvd a\ - obtains b where \a = push_bit n b\ -proof - - from assms obtain b where \a = 2 ^ n * b\ .. - then have \a = push_bit n b\ - by (simp add: push_bit_eq_mult ac_simps) - with that show thesis . -qed - -lemma take_bit_eq_0_iff: - \take_bit n a = 0 \ 2 ^ n dvd a\ (is \?P \ ?Q\) -proof - assume ?P - then show ?Q - by (simp add: take_bit_eq_mod mod_0_imp_dvd) -next - assume ?Q - then obtain b where \a = push_bit n b\ - by (rule exp_dvdE) - then show ?P - by (simp add: take_bit_push_bit) -qed - -lemma take_bit_tightened: - \take_bit m a = take_bit m b\ if \take_bit n a = take_bit n b\ and \m \ n\ -proof - - from that have \take_bit m (take_bit n a) = take_bit m (take_bit n b)\ - by simp - then have \take_bit (min m n) a = take_bit (min m n) b\ - by simp - with that show ?thesis - by (simp add: min_def) -qed - -lemma take_bit_eq_self_iff_drop_bit_eq_0: - \take_bit n a = a \ drop_bit n a = 0\ (is \?P \ ?Q\) -proof - assume ?P - show ?Q - proof (rule bit_eqI) - fix m - from \?P\ have \a = take_bit n a\ .. 
- also have \\ bit (take_bit n a) (n + m)\ - unfolding bit_simps - by (simp add: bit_simps) - finally show \bit (drop_bit n a) m \ bit 0 m\ - by (simp add: bit_simps) - qed -next - assume ?Q - show ?P - proof (rule bit_eqI) - fix m - from \?Q\ have \\ bit (drop_bit n a) (m - n)\ - by simp - then have \ \ bit a (n + (m - n))\ - by (simp add: bit_simps) - then show \bit (take_bit n a) m \ bit a m\ - by (cases \m < n\) (auto simp add: bit_simps) - qed -qed - -lemma drop_bit_exp_eq: - \drop_bit m (2 ^ n) = of_bool (m \ n \ 2 ^ n \ 0) * 2 ^ (n - m)\ - by (rule bit_eqI) (auto simp add: bit_simps) - -end - -instantiation nat :: semiring_bit_shifts -begin - -definition push_bit_nat :: \nat \ nat \ nat\ - where \push_bit_nat n m = m * 2 ^ n\ - -definition drop_bit_nat :: \nat \ nat \ nat\ - where \drop_bit_nat n m = m div 2 ^ n\ - -definition take_bit_nat :: \nat \ nat \ nat\ - where \take_bit_nat n m = m mod 2 ^ n\ - -instance - by standard (simp_all add: push_bit_nat_def drop_bit_nat_def take_bit_nat_def) - -end - -context semiring_bit_shifts -begin - -lemma push_bit_of_nat: - \push_bit n (of_nat m) = of_nat (push_bit n m)\ - by (simp add: push_bit_eq_mult semiring_bit_shifts_class.push_bit_eq_mult) - -lemma of_nat_push_bit: - \of_nat (push_bit m n) = push_bit m (of_nat n)\ - by (simp add: push_bit_eq_mult semiring_bit_shifts_class.push_bit_eq_mult) - -lemma take_bit_of_nat: - \take_bit n (of_nat m) = of_nat (take_bit n m)\ - by (rule bit_eqI) (simp add: bit_take_bit_iff Parity.bit_take_bit_iff bit_of_nat_iff) - -lemma of_nat_take_bit: - \of_nat (take_bit n m) = take_bit n (of_nat m)\ - by (rule bit_eqI) (simp add: bit_take_bit_iff Parity.bit_take_bit_iff bit_of_nat_iff) - -end - -instantiation int :: semiring_bit_shifts -begin - -definition push_bit_int :: \nat \ int \ int\ - where \push_bit_int n k = k * 2 ^ n\ - -definition drop_bit_int :: \nat \ int \ int\ - where \drop_bit_int n k = k div 2 ^ n\ - -definition take_bit_int :: \nat \ int \ int\ - where \take_bit_int n 
k = k mod 2 ^ n\ - -instance - by standard (simp_all add: push_bit_int_def drop_bit_int_def take_bit_int_def) - -end - -lemma bit_push_bit_iff_nat: - \bit (push_bit m q) n \ m \ n \ bit q (n - m)\ for q :: nat - by (auto simp add: bit_push_bit_iff) - -lemma bit_push_bit_iff_int: - \bit (push_bit m k) n \ m \ n \ bit k (n - m)\ for k :: int - by (auto simp add: bit_push_bit_iff) - -lemma take_bit_nat_less_exp [simp]: - \take_bit n m < 2 ^ n\ for n m ::nat - by (simp add: take_bit_eq_mod) - -lemma take_bit_nonnegative [simp]: - \take_bit n k \ 0\ for k :: int - by (simp add: take_bit_eq_mod) - -lemma not_take_bit_negative [simp]: - \\ take_bit n k < 0\ for k :: int - by (simp add: not_less) - -lemma take_bit_int_less_exp [simp]: - \take_bit n k < 2 ^ n\ for k :: int - by (simp add: take_bit_eq_mod) - -lemma take_bit_nat_eq_self_iff: - \take_bit n m = m \ m < 2 ^ n\ (is \?P \ ?Q\) - for n m :: nat -proof - assume ?P - moreover note take_bit_nat_less_exp [of n m] - ultimately show ?Q - by simp -next - assume ?Q - then show ?P - by (simp add: take_bit_eq_mod) -qed - -lemma take_bit_nat_eq_self: - \take_bit n m = m\ if \m < 2 ^ n\ for m n :: nat - using that by (simp add: take_bit_nat_eq_self_iff) - -lemma take_bit_int_eq_self_iff: - \take_bit n k = k \ 0 \ k \ k < 2 ^ n\ (is \?P \ ?Q\) - for k :: int -proof - assume ?P - moreover note take_bit_int_less_exp [of n k] take_bit_nonnegative [of n k] - ultimately show ?Q - by simp -next - assume ?Q - then show ?P - by (simp add: take_bit_eq_mod) -qed - -lemma take_bit_int_eq_self: - \take_bit n k = k\ if \0 \ k\ \k < 2 ^ n\ for k :: int - using that by (simp add: take_bit_int_eq_self_iff) - -lemma take_bit_nat_less_eq_self [simp]: - \take_bit n m \ m\ for n m :: nat - by (simp add: take_bit_eq_mod) - -lemma take_bit_nat_less_self_iff: - \take_bit n m < m \ 2 ^ n \ m\ (is \?P \ ?Q\) - for m n :: nat -proof - assume ?P - then have \take_bit n m \ m\ - by simp - then show \?Q\ - by (simp add: take_bit_nat_eq_self_iff) -next - 
have \take_bit n m < 2 ^ n\ - by (fact take_bit_nat_less_exp) - also assume ?Q - finally show ?P . -qed - -class unique_euclidean_semiring_with_bit_shifts = - unique_euclidean_semiring_with_nat + semiring_bit_shifts -begin - -lemma take_bit_of_exp [simp]: - \take_bit m (2 ^ n) = of_bool (n < m) * 2 ^ n\ - by (simp add: take_bit_eq_mod exp_mod_exp) - -lemma take_bit_of_2 [simp]: - \take_bit n 2 = of_bool (2 \ n) * 2\ - using take_bit_of_exp [of n 1] by simp - -lemma take_bit_of_mask: - \take_bit m (2 ^ n - 1) = 2 ^ min m n - 1\ - by (simp add: take_bit_eq_mod mask_mod_exp) - -lemma push_bit_eq_0_iff [simp]: - "push_bit n a = 0 \ a = 0" - by (simp add: push_bit_eq_mult) - -lemma take_bit_add: - "take_bit n (take_bit n a + take_bit n b) = take_bit n (a + b)" - by (simp add: take_bit_eq_mod mod_simps) - -lemma take_bit_of_1_eq_0_iff [simp]: - "take_bit n 1 = 0 \ n = 0" - by (simp add: take_bit_eq_mod) - -lemma take_bit_Suc_1 [simp]: - \take_bit (Suc n) 1 = 1\ - by (simp add: take_bit_Suc) - -lemma take_bit_Suc_bit0 [simp]: - \take_bit (Suc n) (numeral (Num.Bit0 k)) = take_bit n (numeral k) * 2\ - by (simp add: take_bit_Suc numeral_Bit0_div_2) - -lemma take_bit_Suc_bit1 [simp]: - \take_bit (Suc n) (numeral (Num.Bit1 k)) = take_bit n (numeral k) * 2 + 1\ - by (simp add: take_bit_Suc numeral_Bit1_div_2 mod_2_eq_odd) - -lemma take_bit_numeral_1 [simp]: - \take_bit (numeral l) 1 = 1\ - by (simp add: take_bit_rec [of \numeral l\ 1]) - -lemma take_bit_numeral_bit0 [simp]: - \take_bit (numeral l) (numeral (Num.Bit0 k)) = take_bit (pred_numeral l) (numeral k) * 2\ - by (simp add: take_bit_rec numeral_Bit0_div_2) - -lemma take_bit_numeral_bit1 [simp]: - \take_bit (numeral l) (numeral (Num.Bit1 k)) = take_bit (pred_numeral l) (numeral k) * 2 + 1\ - by (simp add: take_bit_rec numeral_Bit1_div_2 mod_2_eq_odd) - -lemma drop_bit_Suc_bit0 [simp]: - \drop_bit (Suc n) (numeral (Num.Bit0 k)) = drop_bit n (numeral k)\ - by (simp add: drop_bit_Suc numeral_Bit0_div_2) - -lemma 
drop_bit_Suc_bit1 [simp]: - \drop_bit (Suc n) (numeral (Num.Bit1 k)) = drop_bit n (numeral k)\ - by (simp add: drop_bit_Suc numeral_Bit1_div_2) - -lemma drop_bit_numeral_bit0 [simp]: - \drop_bit (numeral l) (numeral (Num.Bit0 k)) = drop_bit (pred_numeral l) (numeral k)\ - by (simp add: drop_bit_rec numeral_Bit0_div_2) - -lemma drop_bit_numeral_bit1 [simp]: - \drop_bit (numeral l) (numeral (Num.Bit1 k)) = drop_bit (pred_numeral l) (numeral k)\ - by (simp add: drop_bit_rec numeral_Bit1_div_2) - -lemma drop_bit_of_nat: - "drop_bit n (of_nat m) = of_nat (drop_bit n m)" - by (simp add: drop_bit_eq_div Parity.drop_bit_eq_div of_nat_div [of m "2 ^ n"]) - -lemma bit_of_nat_iff_bit [bit_simps]: - \bit (of_nat m) n \ bit m n\ -proof - - have \even (m div 2 ^ n) \ even (of_nat (m div 2 ^ n))\ - by simp - also have \of_nat (m div 2 ^ n) = of_nat m div of_nat (2 ^ n)\ - by (simp add: of_nat_div) - finally show ?thesis - by (simp add: bit_iff_odd semiring_bits_class.bit_iff_odd) -qed - -lemma of_nat_drop_bit: - \of_nat (drop_bit m n) = drop_bit m (of_nat n)\ - by (simp add: drop_bit_eq_div semiring_bit_shifts_class.drop_bit_eq_div of_nat_div) - -lemma bit_push_bit_iff_of_nat_iff [bit_simps]: - \bit (push_bit m (of_nat r)) n \ m \ n \ bit (of_nat r) (n - m)\ - by (auto simp add: bit_push_bit_iff) - -end - -instance nat :: unique_euclidean_semiring_with_bit_shifts .. - -instance int :: unique_euclidean_semiring_with_bit_shifts .. 
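The two instances just declared give the shift operations on nat and int a purely arithmetic reading: push_bit multiplies by 2 ^ n, drop_bit divides by it, and take_bit takes the remainder. As a minimal cross-check outside Isabelle, the same definitions can be sketched in Python, whose floor division and nonnegative remainder agree with HOL's int div and mod for power-of-two divisors (the function names below merely mirror the HOL constants; this sketch is not part of the sources):

```python
def push_bit(n, k):
    """push_bit n k = k * 2 ^ n  -- shift left by n bits."""
    return k * 2 ** n

def drop_bit(n, k):
    """drop_bit n k = k div 2 ^ n  -- shift right by n bits.

    Python's // is floor division, matching HOL's int division
    for positive divisors (so drop_bit n (-1) = -1, as in
    minus_1_div_exp_eq_int above)."""
    return k // 2 ** n

def take_bit(n, k):
    """take_bit n k = k mod 2 ^ n  -- keep the n least significant bits.

    Python's % yields a result in [0, 2^n) even for negative k,
    matching take_bit_nonnegative / take_bit_int_less_exp above."""
    return k % 2 ** n

def bit(k, n):
    """bit k n <-> odd (k div 2 ^ n)  -- test bit n of k."""
    return drop_bit(n, k) % 2 == 1
```

Under this reading, e.g. take_bit_int_eq_self says take_bit(n, k) == k exactly when 0 <= k < 2 ** n, and drop_bit_minus_one says drop_bit(n, -1) == -1 for every n.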
- -lemma bit_numeral_int_iff [bit_simps]: - \bit (numeral m :: int) n \ bit (numeral m :: nat) n\ - using bit_of_nat_iff_bit [of \numeral m\ n] by simp - -lemma bit_not_int_iff': - \bit (- k - 1) n \ \ bit k n\ - for k :: int -proof (induction n arbitrary: k) - case 0 - show ?case - by simp -next - case (Suc n) - have \- k - 1 = - (k + 2) + 1\ - by simp - also have \(- (k + 2) + 1) div 2 = - (k div 2) - 1\ - proof (cases \even k\) - case True - then have \- k div 2 = - (k div 2)\ - by rule (simp flip: mult_minus_right) - with True show ?thesis - by simp - next - case False - have \4 = 2 * (2::int)\ - by simp - also have \2 * 2 div 2 = (2::int)\ - by (simp only: nonzero_mult_div_cancel_left) - finally have *: \4 div 2 = (2::int)\ . - from False obtain l where k: \k = 2 * l + 1\ .. - then have \- k - 2 = 2 * - (l + 2) + 1\ - by simp - then have \(- k - 2) div 2 + 1 = - (k div 2) - 1\ - by (simp flip: mult_minus_right add: *) (simp add: k) - with False show ?thesis - by simp - qed - finally have \(- k - 1) div 2 = - (k div 2) - 1\ . 
- with Suc show ?case - by (simp add: bit_Suc) -qed - -lemma bit_minus_int_iff [bit_simps]: - \bit (- k) n \ \ bit (k - 1) n\ - for k :: int - using bit_not_int_iff' [of \k - 1\] by simp - -lemma bit_nat_iff [bit_simps]: - \bit (nat k) n \ k \ 0 \ bit k n\ -proof (cases \k \ 0\) - case True - moreover define m where \m = nat k\ - ultimately have \k = int m\ - by simp - then show ?thesis - by (simp add: bit_simps) -next - case False - then show ?thesis - by simp -qed - -lemma push_bit_nat_eq: - \push_bit n (nat k) = nat (push_bit n k)\ - by (cases \k \ 0\) (simp_all add: push_bit_eq_mult nat_mult_distrib not_le mult_nonneg_nonpos2) - -lemma drop_bit_nat_eq: - \drop_bit n (nat k) = nat (drop_bit n k)\ - apply (cases \k \ 0\) - apply (simp_all add: drop_bit_eq_div nat_div_distrib nat_power_eq not_le) - apply (simp add: divide_int_def) - done - -lemma take_bit_nat_eq: - \take_bit n (nat k) = nat (take_bit n k)\ if \k \ 0\ - using that by (simp add: take_bit_eq_mod nat_mod_distrib nat_power_eq) - -lemma nat_take_bit_eq: - \nat (take_bit n k) = take_bit n (nat k)\ - if \k \ 0\ - using that by (simp add: take_bit_eq_mod nat_mod_distrib nat_power_eq) - -lemma not_exp_less_eq_0_int [simp]: - \\ 2 ^ n \ (0::int)\ - by (simp add: power_le_zero_eq) - -lemma half_nonnegative_int_iff [simp]: - \k div 2 \ 0 \ k \ 0\ for k :: int -proof (cases \k \ 0\) - case True - then show ?thesis - by (auto simp add: divide_int_def sgn_1_pos) -next - case False - then show ?thesis - apply (auto simp add: divide_int_def not_le elim!: evenE) - apply (simp only: minus_mult_right) - apply (subst (asm) nat_mult_distrib) - apply simp_all - done -qed - -lemma half_negative_int_iff [simp]: - \k div 2 < 0 \ k < 0\ for k :: int - by (subst Not_eq_iff [symmetric]) (simp add: not_less) - -lemma push_bit_of_Suc_0 [simp]: - "push_bit n (Suc 0) = 2 ^ n" - using push_bit_of_1 [where ?'a = nat] by simp - -lemma take_bit_of_Suc_0 [simp]: - "take_bit n (Suc 0) = of_bool (0 < n)" - using take_bit_of_1 [where ?'a 
= nat] by simp - -lemma drop_bit_of_Suc_0 [simp]: - "drop_bit n (Suc 0) = of_bool (n = 0)" - using drop_bit_of_1 [where ?'a = nat] by simp - -lemma push_bit_minus_one: - "push_bit n (- 1 :: int) = - (2 ^ n)" - by (simp add: push_bit_eq_mult) - -lemma minus_1_div_exp_eq_int: - \- 1 div (2 :: int) ^ n = - 1\ - by (induction n) (use div_exp_eq [symmetric, of \- 1 :: int\ 1] in \simp_all add: ac_simps\) - -lemma drop_bit_minus_one [simp]: - \drop_bit n (- 1 :: int) = - 1\ - by (simp add: drop_bit_eq_div minus_1_div_exp_eq_int) - -lemma take_bit_Suc_from_most: - \take_bit (Suc n) k = 2 ^ n * of_bool (bit k n) + take_bit n k\ for k :: int - by (simp only: take_bit_eq_mod power_Suc2) (simp_all add: bit_iff_odd odd_iff_mod_2_eq_one zmod_zmult2_eq) - -lemma take_bit_minus: - \take_bit n (- take_bit n k) = take_bit n (- k)\ - for k :: int - by (simp add: take_bit_eq_mod mod_minus_eq) - -lemma take_bit_diff: - \take_bit n (take_bit n k - take_bit n l) = take_bit n (k - l)\ - for k l :: int - by (simp add: take_bit_eq_mod mod_diff_eq) - -lemma bit_imp_take_bit_positive: - \0 < take_bit m k\ if \n < m\ and \bit k n\ for k :: int -proof (rule ccontr) - assume \\ 0 < take_bit m k\ - then have \take_bit m k = 0\ - by (auto simp add: not_less intro: order_antisym) - then have \bit (take_bit m k) n = bit 0 n\ - by simp - with that show False - by (simp add: bit_take_bit_iff) -qed - -lemma take_bit_mult: - \take_bit n (take_bit n k * take_bit n l) = take_bit n (k * l)\ - for k l :: int - by (simp add: take_bit_eq_mod mod_mult_eq) - -lemma (in ring_1) of_nat_nat_take_bit_eq [simp]: - \of_nat (nat (take_bit n k)) = of_int (take_bit n k)\ - by simp - -lemma take_bit_minus_small_eq: - \take_bit n (- k) = 2 ^ n - k\ if \0 < k\ \k \ 2 ^ n\ for k :: int -proof - - define m where \m = nat k\ - with that have \k = int m\ and \0 < m\ and \m \ 2 ^ n\ - by simp_all - have \(2 ^ n - m) mod 2 ^ n = 2 ^ n - m\ - using \0 < m\ by simp - then have \int ((2 ^ n - m) mod 2 ^ n) = int (2 ^ n - m)\ - by 
simp - then have \(2 ^ n - int m) mod 2 ^ n = 2 ^ n - int m\ - using \m \ 2 ^ n\ by (simp only: of_nat_mod of_nat_diff) simp - with \k = int m\ have \(2 ^ n - k) mod 2 ^ n = 2 ^ n - k\ - by simp - then show ?thesis - by (simp add: take_bit_eq_mod) -qed - -lemma drop_bit_push_bit_int: - \drop_bit m (push_bit n k) = drop_bit (m - n) (push_bit (n - m) k)\ for k :: int - by (cases \m \ n\) (auto simp add: mult.left_commute [of _ \2 ^ n\] mult.commute [of _ \2 ^ n\] mult.assoc - mult.commute [of k] drop_bit_eq_div push_bit_eq_mult not_le power_add dest!: le_Suc_ex less_imp_Suc_add) - -lemma push_bit_nonnegative_int_iff [simp]: - \push_bit n k \ 0 \ k \ 0\ for k :: int - by (simp add: push_bit_eq_mult zero_le_mult_iff) - -lemma push_bit_negative_int_iff [simp]: - \push_bit n k < 0 \ k < 0\ for k :: int - by (subst Not_eq_iff [symmetric]) (simp add: not_less) - -lemma drop_bit_nonnegative_int_iff [simp]: - \drop_bit n k \ 0 \ k \ 0\ for k :: int - by (induction n) (simp_all add: drop_bit_Suc drop_bit_half) - -lemma drop_bit_negative_int_iff [simp]: - \drop_bit n k < 0 \ k < 0\ for k :: int - by (subst Not_eq_iff [symmetric]) (simp add: not_less) - code_identifier code_module Parity \ (SML) Arith and (OCaml) Arith and (Haskell) Arith end diff --git a/src/HOL/Random.thy b/src/HOL/Random.thy --- a/src/HOL/Random.thy +++ b/src/HOL/Random.thy @@ -1,188 +1,188 @@ (* Author: Florian Haftmann, TU Muenchen *) section \A HOL random engine\ theory Random -imports List Groups_List +imports List Groups_List Code_Numeral begin subsection \Auxiliary functions\ fun log :: "natural \ natural \ natural" where "log b i = (if b \ 1 \ i < b then 1 else 1 + log b (i div b))" definition inc_shift :: "natural \ natural \ natural" where "inc_shift v k = (if v = k then 1 else k + 1)" definition minus_shift :: "natural \ natural \ natural \ natural" where "minus_shift r k l = (if k < l then r + k - l else k - l)" subsection \Random seeds\ type_synonym seed = "natural \ natural" primrec "next" :: 
"seed \ natural \ seed" where "next (v, w) = (let k = v div 53668; v' = minus_shift 2147483563 ((v mod 53668) * 40014) (k * 12211); l = w div 52774; w' = minus_shift 2147483399 ((w mod 52774) * 40692) (l * 3791); z = minus_shift 2147483562 v' (w' + 1) + 1 in (z, (v', w')))" definition split_seed :: "seed \ seed \ seed" where "split_seed s = (let (v, w) = s; (v', w') = snd (next s); v'' = inc_shift 2147483562 v; w'' = inc_shift 2147483398 w in ((v'', w'), (v', w'')))" subsection \Base selectors\ context includes state_combinator_syntax begin fun iterate :: "natural \ ('b \ 'a \ 'b \ 'a) \ 'b \ 'a \ 'b \ 'a" where "iterate k f x = (if k = 0 then Pair x else f x \\ iterate (k - 1) f)" definition range :: "natural \ seed \ natural \ seed" where "range k = iterate (log 2147483561 k) (\l. next \\ (\v. Pair (v + l * 2147483561))) 1 \\ (\v. Pair (v mod k))" lemma range: "k > 0 \ fst (range k s) < k" by (simp add: range_def split_def less_natural_def del: log.simps iterate.simps) definition select :: "'a list \ seed \ 'a \ seed" where "select xs = range (natural_of_nat (length xs)) \\ (\k. Pair (nth xs (nat_of_natural k)))" lemma select: assumes "xs \ []" shows "fst (select xs s) \ set xs" proof - from assms have "natural_of_nat (length xs) > 0" by (simp add: less_natural_def) with range have "fst (range (natural_of_nat (length xs)) s) < natural_of_nat (length xs)" by best then have "nat_of_natural (fst (range (natural_of_nat (length xs)) s)) < length xs" by (simp add: less_natural_def) then show ?thesis by (simp add: split_beta select_def) qed primrec pick :: "(natural \ 'a) list \ natural \ 'a" where "pick (x # xs) i = (if i < fst x then snd x else pick xs (i - fst x))" lemma pick_member: "i < sum_list (map fst xs) \ pick xs i \ set (map snd xs)" by (induct xs arbitrary: i) (simp_all add: less_natural_def) lemma pick_drop_zero: "pick (filter (\(k, _). 
k > 0) xs) = pick xs" by (induct xs) (auto simp add: fun_eq_iff less_natural_def minus_natural_def) lemma pick_same: "l < length xs \ Random.pick (map (Pair 1) xs) (natural_of_nat l) = nth xs l" proof (induct xs arbitrary: l) case Nil then show ?case by simp next case (Cons x xs) then show ?case by (cases l) (simp_all add: less_natural_def) qed definition select_weight :: "(natural \ 'a) list \ seed \ 'a \ seed" where "select_weight xs = range (sum_list (map fst xs)) \\ (\k. Pair (pick xs k))" lemma select_weight_member: assumes "0 < sum_list (map fst xs)" shows "fst (select_weight xs s) \ set (map snd xs)" proof - from range assms have "fst (range (sum_list (map fst xs)) s) < sum_list (map fst xs)" . with pick_member have "pick xs (fst (range (sum_list (map fst xs)) s)) \ set (map snd xs)" . then show ?thesis by (simp add: select_weight_def scomp_def split_def) qed lemma select_weight_cons_zero: "select_weight ((0, x) # xs) = select_weight xs" by (simp add: select_weight_def less_natural_def) lemma select_weight_drop_zero: "select_weight (filter (\(k, _). k > 0) xs) = select_weight xs" proof - have "sum_list (map fst [(k, _)\xs . 0 < k]) = sum_list (map fst xs)" by (induct xs) (auto simp add: less_natural_def natural_eq_iff) then show ?thesis by (simp only: select_weight_def pick_drop_zero) qed lemma select_weight_select: assumes "xs \ []" shows "select_weight (map (Pair 1) xs) = select xs" proof - have less: "\s. 
fst (range (natural_of_nat (length xs)) s) < natural_of_nat (length xs)" using assms by (intro range) (simp add: less_natural_def) moreover have "sum_list (map fst (map (Pair 1) xs)) = natural_of_nat (length xs)" by (induct xs) simp_all ultimately show ?thesis by (auto simp add: select_weight_def select_def scomp_def split_def fun_eq_iff pick_same [symmetric] less_natural_def) qed end subsection \\ML\ interface\ code_reflect Random_Engine functions range select select_weight ML \ structure Random_Engine = struct open Random_Engine; type seed = Code_Numeral.natural * Code_Numeral.natural; local val seed = Unsynchronized.ref (let val now = Time.toMilliseconds (Time.now ()); val (q, s1) = IntInf.divMod (now, 2147483562); val s2 = q mod 2147483398; in apply2 Code_Numeral.natural_of_integer (s1 + 1, s2 + 1) end); in fun next_seed () = let val (seed1, seed') = @{code split_seed} (! seed) val _ = seed := seed' in seed1 end fun run f = let val (x, seed') = f (! seed); val _ = seed := seed' in x end; end; end; \ hide_type (open) seed hide_const (open) inc_shift minus_shift log "next" split_seed iterate range select pick select_weight hide_fact (open) range_def end diff --git a/src/HOL/Set.thy b/src/HOL/Set.thy --- a/src/HOL/Set.thy +++ b/src/HOL/Set.thy @@ -1,2041 +1,2041 @@ (* Title: HOL/Set.thy Author: Tobias Nipkow Author: Lawrence C Paulson Author: Markus Wenzel *) section \Set theory for higher-order logic\ theory Set - imports Lattices + imports Lattices Boolean_Algebra begin subsection \Sets as predicates\ typedecl 'a set axiomatization Collect :: "('a \ bool) \ 'a set" \ \comprehension\ and member :: "'a \ 'a set \ bool" \ \membership\ where mem_Collect_eq [iff, code_unfold]: "member a (Collect P) = P a" and Collect_mem_eq [simp]: "Collect (\x. 
member x A) = A" notation member ("'(\')") and member ("(_/ \ _)" [51, 51] 50) abbreviation not_member where "not_member x A \ \ (x \ A)" \ \non-membership\ notation not_member ("'(\')") and not_member ("(_/ \ _)" [51, 51] 50) notation (ASCII) member ("'(:')") and member ("(_/ : _)" [51, 51] 50) and not_member ("'(~:')") and not_member ("(_/ ~: _)" [51, 51] 50) text \Set comprehensions\ syntax "_Coll" :: "pttrn \ bool \ 'a set" ("(1{_./ _})") translations "{x. P}" \ "CONST Collect (\x. P)" syntax (ASCII) "_Collect" :: "pttrn \ 'a set \ bool \ 'a set" ("(1{(_/: _)./ _})") syntax "_Collect" :: "pttrn \ 'a set \ bool \ 'a set" ("(1{(_/ \ _)./ _})") translations "{p:A. P}" \ "CONST Collect (\p. p \ A \ P)" lemma CollectI: "P a \ a \ {x. P x}" by simp lemma CollectD: "a \ {x. P x} \ P a" by simp lemma Collect_cong: "(\x. P x = Q x) \ {x. P x} = {x. Q x}" by simp text \ Simproc for pulling \x = t\ in \{x. \ \ x = t \ \}\ to the front (and similarly for \t = x\): \ simproc_setup defined_Collect ("{x. P x \ Q x}") = \ fn _ => Quantifier1.rearrange_Collect (fn ctxt => resolve_tac ctxt @{thms Collect_cong} 1 THEN resolve_tac ctxt @{thms iffI} 1 THEN ALLGOALS (EVERY' [REPEAT_DETERM o eresolve_tac ctxt @{thms conjE}, DEPTH_SOLVE_1 o (assume_tac ctxt ORELSE' resolve_tac ctxt @{thms conjI})])) \ lemmas CollectE = CollectD [elim_format] lemma set_eqI: assumes "\x. x \ A \ x \ B" shows "A = B" proof - from assms have "{x. x \ A} = {x. x \ B}" by simp then show ?thesis by simp qed lemma set_eq_iff: "A = B \ (\x. x \ A \ x \ B)" by (auto intro:set_eqI) lemma Collect_eqI: assumes "\x. P x = Q x" shows "Collect P = Collect Q" using assms by (auto intro: set_eqI) text \Lifting of predicate class instances\ instantiation set :: (type) boolean_algebra begin definition less_eq_set where "A \ B \ (\x. member x A) \ (\x. member x B)" definition less_set where "A < B \ (\x. member x A) < (\x. member x B)" definition inf_set where "A \ B = Collect ((\x. member x A) \ (\x. 
member x B))" definition sup_set where "A \ B = Collect ((\x. member x A) \ (\x. member x B))" definition bot_set where "\ = Collect \" definition top_set where "\ = Collect \" definition uminus_set where "- A = Collect (- (\x. member x A))" definition minus_set where "A - B = Collect ((\x. member x A) - (\x. member x B))" instance by standard (simp_all add: less_eq_set_def less_set_def inf_set_def sup_set_def bot_set_def top_set_def uminus_set_def minus_set_def less_le_not_le sup_inf_distrib1 diff_eq set_eqI fun_eq_iff del: inf_apply sup_apply bot_apply top_apply minus_apply uminus_apply) end text \Set enumerations\ abbreviation empty :: "'a set" ("{}") where "{} \ bot" definition insert :: "'a \ 'a set \ 'a set" where insert_compr: "insert a B = {x. x = a \ x \ B}" syntax "_Finset" :: "args \ 'a set" ("{(_)}") translations "{x, xs}" \ "CONST insert x {xs}" "{x}" \ "CONST insert x {}" subsection \Subsets and bounded quantifiers\ abbreviation subset :: "'a set \ 'a set \ bool" where "subset \ less" abbreviation subset_eq :: "'a set \ 'a set \ bool" where "subset_eq \ less_eq" notation subset ("'(\')") and subset ("(_/ \ _)" [51, 51] 50) and subset_eq ("'(\')") and subset_eq ("(_/ \ _)" [51, 51] 50) abbreviation (input) supset :: "'a set \ 'a set \ bool" where "supset \ greater" abbreviation (input) supset_eq :: "'a set \ 'a set \ bool" where "supset_eq \ greater_eq" notation supset ("'(\')") and supset ("(_/ \ _)" [51, 51] 50) and supset_eq ("'(\')") and supset_eq ("(_/ \ _)" [51, 51] 50) notation (ASCII output) subset ("'(<')") and subset ("(_/ < _)" [51, 51] 50) and subset_eq ("'(<=')") and subset_eq ("(_/ <= _)" [51, 51] 50) definition Ball :: "'a set \ ('a \ bool) \ bool" where "Ball A P \ (\x. x \ A \ P x)" \ \bounded universal quantifiers\ definition Bex :: "'a set \ ('a \ bool) \ bool" where "Bex A P \ (\x. 
x \ A \ P x)" \ \bounded existential quantifiers\ syntax (ASCII) "_Ball" :: "pttrn \ 'a set \ bool \ bool" ("(3ALL (_/:_)./ _)" [0, 0, 10] 10) "_Bex" :: "pttrn \ 'a set \ bool \ bool" ("(3EX (_/:_)./ _)" [0, 0, 10] 10) "_Bex1" :: "pttrn \ 'a set \ bool \ bool" ("(3EX! (_/:_)./ _)" [0, 0, 10] 10) "_Bleast" :: "id \ 'a set \ bool \ 'a" ("(3LEAST (_/:_)./ _)" [0, 0, 10] 10) syntax (input) "_Ball" :: "pttrn \ 'a set \ bool \ bool" ("(3! (_/:_)./ _)" [0, 0, 10] 10) "_Bex" :: "pttrn \ 'a set \ bool \ bool" ("(3? (_/:_)./ _)" [0, 0, 10] 10) "_Bex1" :: "pttrn \ 'a set \ bool \ bool" ("(3?! (_/:_)./ _)" [0, 0, 10] 10) syntax "_Ball" :: "pttrn \ 'a set \ bool \ bool" ("(3\(_/\_)./ _)" [0, 0, 10] 10) "_Bex" :: "pttrn \ 'a set \ bool \ bool" ("(3\(_/\_)./ _)" [0, 0, 10] 10) "_Bex1" :: "pttrn \ 'a set \ bool \ bool" ("(3\!(_/\_)./ _)" [0, 0, 10] 10) "_Bleast" :: "id \ 'a set \ bool \ 'a" ("(3LEAST(_/\_)./ _)" [0, 0, 10] 10) translations "\x\A. P" \ "CONST Ball A (\x. P)" "\x\A. P" \ "CONST Bex A (\x. P)" "\!x\A. P" \ "\!x. x \ A \ P" "LEAST x:A. P" \ "LEAST x. x \ A \ P" syntax (ASCII output) "_setlessAll" :: "[idt, 'a, bool] \ bool" ("(3ALL _<_./ _)" [0, 0, 10] 10) "_setlessEx" :: "[idt, 'a, bool] \ bool" ("(3EX _<_./ _)" [0, 0, 10] 10) "_setleAll" :: "[idt, 'a, bool] \ bool" ("(3ALL _<=_./ _)" [0, 0, 10] 10) "_setleEx" :: "[idt, 'a, bool] \ bool" ("(3EX _<=_./ _)" [0, 0, 10] 10) "_setleEx1" :: "[idt, 'a, bool] \ bool" ("(3EX! _<=_./ _)" [0, 0, 10] 10) syntax "_setlessAll" :: "[idt, 'a, bool] \ bool" ("(3\_\_./ _)" [0, 0, 10] 10) "_setlessEx" :: "[idt, 'a, bool] \ bool" ("(3\_\_./ _)" [0, 0, 10] 10) "_setleAll" :: "[idt, 'a, bool] \ bool" ("(3\_\_./ _)" [0, 0, 10] 10) "_setleEx" :: "[idt, 'a, bool] \ bool" ("(3\_\_./ _)" [0, 0, 10] 10) "_setleEx1" :: "[idt, 'a, bool] \ bool" ("(3\!_\_./ _)" [0, 0, 10] 10) translations "\A\B. P" \ "\A. A \ B \ P" "\A\B. P" \ "\A. A \ B \ P" "\A\B. P" \ "\A. A \ B \ P" "\A\B. P" \ "\A. A \ B \ P" "\!A\B. P" \ "\!A. 
A \ B \ P" print_translation \ let val All_binder = Mixfix.binder_name \<^const_syntax>\All\; val Ex_binder = Mixfix.binder_name \<^const_syntax>\Ex\; val impl = \<^const_syntax>\HOL.implies\; val conj = \<^const_syntax>\HOL.conj\; val sbset = \<^const_syntax>\subset\; val sbset_eq = \<^const_syntax>\subset_eq\; val trans = [((All_binder, impl, sbset), \<^syntax_const>\_setlessAll\), ((All_binder, impl, sbset_eq), \<^syntax_const>\_setleAll\), ((Ex_binder, conj, sbset), \<^syntax_const>\_setlessEx\), ((Ex_binder, conj, sbset_eq), \<^syntax_const>\_setleEx\)]; fun mk v (v', T) c n P = if v = v' andalso not (Term.exists_subterm (fn Free (x, _) => x = v | _ => false) n) then Syntax.const c $ Syntax_Trans.mark_bound_body (v', T) $ n $ P else raise Match; fun tr' q = (q, fn _ => (fn [Const (\<^syntax_const>\_bound\, _) $ Free (v, Type (\<^type_name>\set\, _)), Const (c, _) $ (Const (d, _) $ (Const (\<^syntax_const>\_bound\, _) $ Free (v', T)) $ n) $ P] => (case AList.lookup (=) trans (q, c, d) of NONE => raise Match | SOME l => mk v (v', T) l n P) | _ => raise Match)); in [tr' All_binder, tr' Ex_binder] end \ text \ \<^medskip> Translate between \{e | x1\xn. P}\ and \{u. \x1\xn. u = e \ P}\; \{y. \x1\xn. y = e \ P}\ is only translated if \[0..n] \ bvs e\. 
\ syntax "_Setcompr" :: "'a \ idts \ bool \ 'a set" ("(1{_ |/_./ _})") parse_translation \ let val ex_tr = snd (Syntax_Trans.mk_binder_tr ("EX ", \<^const_syntax>\Ex\)); fun nvars (Const (\<^syntax_const>\_idts\, _) $ _ $ idts) = nvars idts + 1 | nvars _ = 1; fun setcompr_tr ctxt [e, idts, b] = let val eq = Syntax.const \<^const_syntax>\HOL.eq\ $ Bound (nvars idts) $ e; val P = Syntax.const \<^const_syntax>\HOL.conj\ $ eq $ b; val exP = ex_tr ctxt [idts, P]; in Syntax.const \<^const_syntax>\Collect\ $ absdummy dummyT exP end; in [(\<^syntax_const>\_Setcompr\, setcompr_tr)] end \ print_translation \ [Syntax_Trans.preserve_binder_abs2_tr' \<^const_syntax>\Ball\ \<^syntax_const>\_Ball\, Syntax_Trans.preserve_binder_abs2_tr' \<^const_syntax>\Bex\ \<^syntax_const>\_Bex\] \ \ \to avoid eta-contraction of body\ print_translation \ let val ex_tr' = snd (Syntax_Trans.mk_binder_tr' (\<^const_syntax>\Ex\, "DUMMY")); fun setcompr_tr' ctxt [Abs (abs as (_, _, P))] = let fun check (Const (\<^const_syntax>\Ex\, _) $ Abs (_, _, P), n) = check (P, n + 1) | check (Const (\<^const_syntax>\HOL.conj\, _) $ (Const (\<^const_syntax>\HOL.eq\, _) $ Bound m $ e) $ P, n) = n > 0 andalso m = n andalso not (loose_bvar1 (P, n)) andalso subset (=) (0 upto (n - 1), add_loose_bnos (e, 0, [])) | check _ = false; fun tr' (_ $ abs) = let val _ $ idts $ (_ $ (_ $ _ $ e) $ Q) = ex_tr' ctxt [abs] in Syntax.const \<^syntax_const>\_Setcompr\ $ e $ idts $ Q end; in if check (P, 0) then tr' P else let val (x as _ $ Free(xN, _), t) = Syntax_Trans.atomic_abs_tr' abs; val M = Syntax.const \<^syntax_const>\_Coll\ $ x $ t; in case t of Const (\<^const_syntax>\HOL.conj\, _) $ (Const (\<^const_syntax>\Set.member\, _) $ (Const (\<^syntax_const>\_bound\, _) $ Free (yN, _)) $ A) $ P => if xN = yN then Syntax.const \<^syntax_const>\_Collect\ $ x $ A $ P else M | _ => M end end; in [(\<^const_syntax>\Collect\, setcompr_tr')] end \ simproc_setup defined_Bex ("\x\A. 
P x \ Q x") = \ fn _ => Quantifier1.rearrange_Bex (fn ctxt => unfold_tac ctxt @{thms Bex_def}) \ simproc_setup defined_All ("\x\A. P x \ Q x") = \ fn _ => Quantifier1.rearrange_Ball (fn ctxt => unfold_tac ctxt @{thms Ball_def}) \ lemma ballI [intro!]: "(\x. x \ A \ P x) \ \x\A. P x" by (simp add: Ball_def) lemmas strip = impI allI ballI lemma bspec [dest?]: "\x\A. P x \ x \ A \ P x" by (simp add: Ball_def) text \Gives better instantiation for bound:\ setup \ map_theory_claset (fn ctxt => ctxt addbefore ("bspec", fn ctxt' => dresolve_tac ctxt' @{thms bspec} THEN' assume_tac ctxt')) \ ML \ structure Simpdata = struct open Simpdata; val mksimps_pairs = [(\<^const_name>\Ball\, @{thms bspec})] @ mksimps_pairs; end; open Simpdata; \ declaration \fn _ => Simplifier.map_ss (Simplifier.set_mksimps (mksimps mksimps_pairs))\ lemma ballE [elim]: "\x\A. P x \ (P x \ Q) \ (x \ A \ Q) \ Q" unfolding Ball_def by blast lemma bexI [intro]: "P x \ x \ A \ \x\A. P x" \ \Normally the best argument order: \P x\ constrains the choice of \x \ A\.\ unfolding Bex_def by blast lemma rev_bexI [intro?]: "x \ A \ P x \ \x\A. P x" \ \The best argument order when there is only one \x \ A\.\ unfolding Bex_def by blast lemma bexCI: "(\x\A. \ P x \ P a) \ a \ A \ \x\A. P x" unfolding Bex_def by blast lemma bexE [elim!]: "\x\A. P x \ (\x. x \ A \ P x \ Q) \ Q" unfolding Bex_def by blast lemma ball_triv [simp]: "(\x\A. P) \ ((\x. x \ A) \ P)" \ \trivial rewrite rule.\ by (simp add: Ball_def) lemma bex_triv [simp]: "(\x\A. P) \ ((\x. x \ A) \ P)" \ \Dual form for existentials.\ by (simp add: Bex_def) lemma bex_triv_one_point1 [simp]: "(\x\A. x = a) \ a \ A" by blast lemma bex_triv_one_point2 [simp]: "(\x\A. a = x) \ a \ A" by blast lemma bex_one_point1 [simp]: "(\x\A. x = a \ P x) \ a \ A \ P a" by blast lemma bex_one_point2 [simp]: "(\x\A. a = x \ P x) \ a \ A \ P a" by blast lemma ball_one_point1 [simp]: "(\x\A. x = a \ P x) \ (a \ A \ P a)" by blast lemma ball_one_point2 [simp]: "(\x\A. 
a = x \ P x) \ (a \ A \ P a)" by blast lemma ball_conj_distrib: "(\x\A. P x \ Q x) \ (\x\A. P x) \ (\x\A. Q x)" by blast lemma bex_disj_distrib: "(\x\A. P x \ Q x) \ (\x\A. P x) \ (\x\A. Q x)" by blast text \Congruence rules\ lemma ball_cong: "\ A = B; \x. x \ B \ P x \ Q x \ \ (\x\A. P x) \ (\x\B. Q x)" by (simp add: Ball_def) lemma ball_cong_simp [cong]: "\ A = B; \x. x \ B =simp=> P x \ Q x \ \ (\x\A. P x) \ (\x\B. Q x)" by (simp add: simp_implies_def Ball_def) lemma bex_cong: "\ A = B; \x. x \ B \ P x \ Q x \ \ (\x\A. P x) \ (\x\B. Q x)" by (simp add: Bex_def cong: conj_cong) lemma bex_cong_simp [cong]: "\ A = B; \x. x \ B =simp=> P x \ Q x \ \ (\x\A. P x) \ (\x\B. Q x)" by (simp add: simp_implies_def Bex_def cong: conj_cong) lemma bex1_def: "(\!x\X. P x) \ (\x\X. P x) \ (\x\X. \y\X. P x \ P y \ x = y)" by auto subsection \Basic operations\ subsubsection \Subsets\ lemma subsetI [intro!]: "(\x. x \ A \ x \ B) \ A \ B" by (simp add: less_eq_set_def le_fun_def) text \ \<^medskip> Map the type \'a set \ anything\ to just \'a\; for overloading constants whose first argument has type \'a set\. \ lemma subsetD [elim, intro?]: "A \ B \ c \ A \ c \ B" by (simp add: less_eq_set_def le_fun_def) \ \Rule in Modus Ponens style.\ lemma rev_subsetD [intro?,no_atp]: "c \ A \ A \ B \ c \ B" \ \The same, with reversed premises for use with @{method erule} -- cf. @{thm rev_mp}.\ by (rule subsetD) lemma subsetCE [elim,no_atp]: "A \ B \ (c \ A \ P) \ (c \ B \ P) \ P" \ \Classical elimination rule.\ by (auto simp add: less_eq_set_def le_fun_def) lemma subset_eq: "A \ B \ (\x\A. 
x \ B)" by blast lemma contra_subsetD [no_atp]: "A \ B \ c \ B \ c \ A" by blast lemma subset_refl: "A \ A" by (fact order_refl) (* already [iff] *) lemma subset_trans: "A \ B \ B \ C \ A \ C" by (fact order_trans) lemma subset_not_subset_eq [code]: "A \ B \ A \ B \ \ B \ A" by (fact less_le_not_le) lemma eq_mem_trans: "a = b \ b \ A \ a \ A" by simp lemmas basic_trans_rules [trans] = order_trans_rules rev_subsetD subsetD eq_mem_trans subsubsection \Equality\ lemma subset_antisym [intro!]: "A \ B \ B \ A \ A = B" \ \Anti-symmetry of the subset relation.\ by (iprover intro: set_eqI subsetD) text \\<^medskip> Equality rules from ZF set theory -- are they appropriate here?\ lemma equalityD1: "A = B \ A \ B" by simp lemma equalityD2: "A = B \ B \ A" by simp text \ \<^medskip> Be careful when adding this to the claset as \subset_empty\ is in the simpset: \<^prop>\A = {}\ goes to \<^prop>\{} \ A\ and \<^prop>\A \ {}\ and then back to \<^prop>\A = {}\! \ lemma equalityE: "A = B \ (A \ B \ B \ A \ P) \ P" by simp lemma equalityCE [elim]: "A = B \ (c \ A \ c \ B \ P) \ (c \ A \ c \ B \ P) \ P" by blast lemma eqset_imp_iff: "A = B \ x \ A \ x \ B" by simp lemma eqelem_imp_iff: "x = y \ x \ A \ y \ A" by simp subsubsection \The empty set\ lemma empty_def: "{} = {x. False}" by (simp add: bot_set_def bot_fun_def) lemma empty_iff [simp]: "c \ {} \ False" by (simp add: empty_def) lemma emptyE [elim!]: "a \ {} \ P" by simp lemma empty_subsetI [iff]: "{} \ A" \ \One effect is to delete the ASSUMPTION \<^prop>\{} \ A\\ by blast lemma equals0I: "(\y. y \ A \ False) \ A = {}" by blast lemma equals0D: "A = {} \ a \ A" \ \Use for reasoning about disjointness: \A \ B = {}\\ by blast lemma ball_empty [simp]: "Ball {} P \ True" by (simp add: Ball_def) lemma bex_empty [simp]: "Bex {} P \ False" by (simp add: Bex_def) subsubsection \The universal set -- UNIV\ abbreviation UNIV :: "'a set" where "UNIV \ top" lemma UNIV_def: "UNIV = {x. 
True}" by (simp add: top_set_def top_fun_def) lemma UNIV_I [simp]: "x \ UNIV" by (simp add: UNIV_def) declare UNIV_I [intro] \ \unsafe makes it less likely to cause problems\ lemma UNIV_witness [intro?]: "\x. x \ UNIV" by simp lemma subset_UNIV: "A \ UNIV" by (fact top_greatest) (* already simp *) text \ \<^medskip> Eta-contracting these two rules (to remove \P\) causes them to be ignored because of their interaction with congruence rules. \ lemma ball_UNIV [simp]: "Ball UNIV P \ All P" by (simp add: Ball_def) lemma bex_UNIV [simp]: "Bex UNIV P \ Ex P" by (simp add: Bex_def) lemma UNIV_eq_I: "(\x. x \ A) \ UNIV = A" by auto lemma UNIV_not_empty [iff]: "UNIV \ {}" by (blast elim: equalityE) lemma empty_not_UNIV[simp]: "{} \ UNIV" by blast subsubsection \The Powerset operator -- Pow\ definition Pow :: "'a set \ 'a set set" where Pow_def: "Pow A = {B. B \ A}" lemma Pow_iff [iff]: "A \ Pow B \ A \ B" by (simp add: Pow_def) lemma PowI: "A \ B \ A \ Pow B" by (simp add: Pow_def) lemma PowD: "A \ Pow B \ A \ B" by (simp add: Pow_def) lemma Pow_bottom: "{} \ Pow B" by simp lemma Pow_top: "A \ Pow A" by simp lemma Pow_not_empty: "Pow A \ {}" using Pow_top by blast subsubsection \Set complement\ lemma Compl_iff [simp]: "c \ - A \ c \ A" by (simp add: fun_Compl_def uminus_set_def) lemma ComplI [intro!]: "(c \ A \ False) \ c \ - A" by (simp add: fun_Compl_def uminus_set_def) blast text \ \<^medskip> This form, with negated conclusion, works well with the Classical prover. Negated assumptions behave like formulae on the right side of the notional turnstile \dots \ lemma ComplD [dest!]: "c \ - A \ c \ A" by simp lemmas ComplE = ComplD [elim_format] lemma Compl_eq: "- A = {x. \ x \ A}" by blast subsubsection \Binary intersection\ abbreviation inter :: "'a set \ 'a set \ 'a set" (infixl "\" 70) where "(\) \ inf" notation (ASCII) inter (infixl "Int" 70) lemma Int_def: "A \ B = {x. 
x \ A \ x \ B}" by (simp add: inf_set_def inf_fun_def) lemma Int_iff [simp]: "c \ A \ B \ c \ A \ c \ B" unfolding Int_def by blast lemma IntI [intro!]: "c \ A \ c \ B \ c \ A \ B" by simp lemma IntD1: "c \ A \ B \ c \ A" by simp lemma IntD2: "c \ A \ B \ c \ B" by simp lemma IntE [elim!]: "c \ A \ B \ (c \ A \ c \ B \ P) \ P" by simp lemma mono_Int: "mono f \ f (A \ B) \ f A \ f B" by (fact mono_inf) subsubsection \Binary union\ abbreviation union :: "'a set \ 'a set \ 'a set" (infixl "\" 65) where "union \ sup" notation (ASCII) union (infixl "Un" 65) lemma Un_def: "A \ B = {x. x \ A \ x \ B}" by (simp add: sup_set_def sup_fun_def) lemma Un_iff [simp]: "c \ A \ B \ c \ A \ c \ B" unfolding Un_def by blast lemma UnI1 [elim?]: "c \ A \ c \ A \ B" by simp lemma UnI2 [elim?]: "c \ B \ c \ A \ B" by simp text \\<^medskip> Classical introduction rule: no commitment to \A\ vs. \B\.\ lemma UnCI [intro!]: "(c \ B \ c \ A) \ c \ A \ B" by auto lemma UnE [elim!]: "c \ A \ B \ (c \ A \ P) \ (c \ B \ P) \ P" unfolding Un_def by blast lemma insert_def: "insert a B = {x. x = a} \ B" by (simp add: insert_compr Un_def) lemma mono_Un: "mono f \ f A \ f B \ f (A \ B)" by (fact mono_sup) subsubsection \Set difference\ lemma Diff_iff [simp]: "c \ A - B \ c \ A \ c \ B" by (simp add: minus_set_def fun_diff_def) lemma DiffI [intro!]: "c \ A \ c \ B \ c \ A - B" by simp lemma DiffD1: "c \ A - B \ c \ A" by simp lemma DiffD2: "c \ A - B \ c \ B \ P" by simp lemma DiffE [elim!]: "c \ A - B \ (c \ A \ c \ B \ P) \ P" by simp lemma set_diff_eq: "A - B = {x. 
x \<in> A \<and> x \<notin> B}"
  by blast

lemma Compl_eq_Diff_UNIV: "- A = (UNIV - A)"
  by blast


subsubsection \<open>Augmenting a set -- \<^const>\<open>insert\<close>\<close>

lemma insert_iff [simp]: "a \<in> insert b A \<longleftrightarrow> a = b \<or> a \<in> A"
  unfolding insert_def by blast

lemma insertI1: "a \<in> insert a B"
  by simp

lemma insertI2: "a \<in> B \<Longrightarrow> a \<in> insert b B"
  by simp

lemma insertE [elim!]: "a \<in> insert b A \<Longrightarrow> (a = b \<Longrightarrow> P) \<Longrightarrow> (a \<in> A \<Longrightarrow> P) \<Longrightarrow> P"
  unfolding insert_def by blast

lemma insertCI [intro!]: "(a \<notin> B \<Longrightarrow> a = b) \<Longrightarrow> a \<in> insert b B"
  \<comment> \<open>Classical introduction rule.\<close>
  by auto

lemma subset_insert_iff: "A \<subseteq> insert x B \<longleftrightarrow> (if x \<in> A then A - {x} \<subseteq> B else A \<subseteq> B)"
  by auto

lemma set_insert:
  assumes "x \<in> A"
  obtains B where "A = insert x B" and "x \<notin> B"
proof
  show "A = insert x (A - {x})" using assms by blast
  show "x \<notin> A - {x}" by blast
qed

lemma insert_ident: "x \<notin> A \<Longrightarrow> x \<notin> B \<Longrightarrow> insert x A = insert x B \<longleftrightarrow> A = B"
  by auto

lemma insert_eq_iff:
  assumes "a \<notin> A" "b \<notin> B"
  shows "insert a A = insert b B \<longleftrightarrow>
    (if a = b then A = B else \<exists>C. A = insert b C \<and> b \<notin> C \<and> B = insert a C \<and> a \<notin> C)"
    (is "?L \<longleftrightarrow> ?R")
proof
  show ?R if ?L
  proof (cases "a = b")
    case True
    with assms \<open>?L\<close> show ?R
      by (simp add: insert_ident)
  next
    case False
    let ?C = "A - {b}"
    have "A = insert b ?C \<and> b \<notin> ?C \<and> B = insert a ?C \<and> a \<notin> ?C"
      using assms \<open>?L\<close> \<open>a \<noteq> b\<close> by auto
    then show ?R using \<open>a \<noteq> b\<close> by auto
  qed
  show ?L if ?R
    using that by (auto split: if_splits)
qed

lemma insert_UNIV: "insert x UNIV = UNIV"
  by auto


subsubsection \<open>Singletons, using insert\<close>

lemma singletonI [intro!]: "a \<in> {a}"
  \<comment> \<open>Redundant?
But unlike \<open>insertCI\<close>, it proves the subgoal immediately!\<close>
  by (rule insertI1)

lemma singletonD [dest!]: "b \<in> {a} \<Longrightarrow> b = a"
  by blast

lemmas singletonE = singletonD [elim_format]

lemma singleton_iff: "b \<in> {a} \<longleftrightarrow> b = a"
  by blast

lemma singleton_inject [dest!]: "{a} = {b} \<Longrightarrow> a = b"
  by blast

lemma singleton_insert_inj_eq [iff]: "{b} = insert a A \<longleftrightarrow> a = b \<and> A \<subseteq> {b}"
  by blast

lemma singleton_insert_inj_eq' [iff]: "insert a A = {b} \<longleftrightarrow> a = b \<and> A \<subseteq> {b}"
  by blast

lemma subset_singletonD: "A \<subseteq> {x} \<Longrightarrow> A = {} \<or> A = {x}"
  by fast

lemma subset_singleton_iff: "X \<subseteq> {a} \<longleftrightarrow> X = {} \<or> X = {a}"
  by blast

lemma subset_singleton_iff_Uniq: "(\<exists>a. A \<subseteq> {a}) \<longleftrightarrow> (\<exists>\<^sub>\<le>\<^sub>1x. x \<in> A)"
  unfolding Uniq_def by blast

lemma singleton_conv [simp]: "{x. x = a} = {a}"
  by blast

lemma singleton_conv2 [simp]: "{x. a = x} = {a}"
  by blast

lemma Diff_single_insert: "A - {x} \<subseteq> B \<Longrightarrow> A \<subseteq> insert x B"
  by blast

lemma subset_Diff_insert: "A \<subseteq> B - insert x C \<longleftrightarrow> A \<subseteq> B - C \<and> x \<notin> A"
  by blast

lemma doubleton_eq_iff: "{a, b} = {c, d} \<longleftrightarrow> a = c \<and> b = d \<or> a = d \<and> b = c"
  by (blast elim: equalityE)

lemma Un_singleton_iff: "A \<union> B = {x} \<longleftrightarrow> A = {} \<and> B = {x} \<or> A = {x} \<and> B = {} \<or> A = {x} \<and> B = {x}"
  by auto

lemma singleton_Un_iff: "{x} = A \<union> B \<longleftrightarrow> A = {} \<and> B = {x} \<or> A = {x} \<and> B = {} \<or> A = {x} \<and> B = {x}"
  by auto


subsubsection \<open>Image of a set under a function\<close>

text \<open>Frequently \<open>b\<close> does not have the syntactic form of \<open>f x\<close>.\<close>

definition image :: "('a \<Rightarrow> 'b) \<Rightarrow> 'a set \<Rightarrow> 'b set"    (infixr "`" 90)
  where "f ` A = {y. \<exists>x\<in>A. y = f x}"

lemma image_eqI [simp, intro]: "b = f x \<Longrightarrow> x \<in> A \<Longrightarrow> b \<in> f ` A"
  unfolding image_def by blast

lemma imageI: "x \<in> A \<Longrightarrow> f x \<in> f ` A"
  by (rule image_eqI) (rule refl)

lemma rev_image_eqI: "x \<in> A \<Longrightarrow> b = f x \<Longrightarrow> b \<in> f ` A"
  \<comment> \<open>This version's more effective when we already have the required \<open>x\<close>.\<close>
  by (rule image_eqI)

lemma imageE [elim!]:
  assumes "b \<in> (\<lambda>x. f x) ` A"  \<comment> \<open>The eta-expansion gives variable-name preservation.\<close>
  obtains x where "b = f x" and "x \<in> A"
  using assms unfolding image_def by blast

lemma Compr_image_eq: "{x \<in> f ` A.
P x} = f ` {x \<in> A. P (f x)}"
  by auto

lemma image_Un: "f ` (A \<union> B) = f ` A \<union> f ` B"
  by blast

lemma image_iff: "z \<in> f ` A \<longleftrightarrow> (\<exists>x\<in>A. z = f x)"
  by blast

lemma image_subsetI: "(\<And>x. x \<in> A \<Longrightarrow> f x \<in> B) \<Longrightarrow> f ` A \<subseteq> B"
  \<comment> \<open>Replaces the three steps \<open>subsetI\<close>, \<open>imageE\<close>, \<open>hypsubst\<close>,
    but breaks too many existing proofs.\<close>
  by blast

lemma image_subset_iff: "f ` A \<subseteq> B \<longleftrightarrow> (\<forall>x\<in>A. f x \<in> B)"
  \<comment> \<open>This rewrite rule would confuse users if made default.\<close>
  by blast

lemma subset_imageE:
  assumes "B \<subseteq> f ` A"
  obtains C where "C \<subseteq> A" and "B = f ` C"
proof -
  from assms have "B = f ` {a \<in> A. f a \<in> B}" by fast
  moreover have "{a \<in> A. f a \<in> B} \<subseteq> A" by blast
  ultimately show thesis by (blast intro: that)
qed

lemma subset_image_iff: "B \<subseteq> f ` A \<longleftrightarrow> (\<exists>AA\<subseteq>A. B = f ` AA)"
  by (blast elim: subset_imageE)

lemma image_ident [simp]: "(\<lambda>x. x) ` Y = Y"
  by blast

lemma image_empty [simp]: "f ` {} = {}"
  by blast

lemma image_insert [simp]: "f ` insert a B = insert (f a) (f ` B)"
  by blast

lemma image_constant: "x \<in> A \<Longrightarrow> (\<lambda>x. c) ` A = {c}"
  by auto

lemma image_constant_conv: "(\<lambda>x. c) ` A = (if A = {} then {} else {c})"
  by auto

lemma image_image: "f ` (g ` A) = (\<lambda>x. f (g x)) ` A"
  by blast

lemma insert_image [simp]: "x \<in> A \<Longrightarrow> insert (f x) (f ` A) = f ` A"
  by blast

lemma image_is_empty [iff]: "f ` A = {} \<longleftrightarrow> A = {}"
  by blast

lemma empty_is_image [iff]: "{} = f ` A \<longleftrightarrow> A = {}"
  by blast

lemma image_Collect: "f ` {x. P x} = {f x | x. P x}"
  \<comment> \<open>NOT suitable as a default simp rule: the RHS isn't simpler than the LHS,
      with its implicit quantifier and conjunction.  Also image enjoys better
      equational properties than does the RHS.\<close>
  by blast

lemma if_image_distrib [simp]:
  "(\<lambda>x. if P x then f x else g x) ` S = f ` (S \<inter> {x. P x}) \<union> g ` (S \<inter> {x. \<not> P x})"
  by auto

lemma image_cong: "f ` M = g ` N" if "M = N" "\<And>x. x \<in> N \<Longrightarrow> f x = g x"
  using that by (simp add: image_def)

lemma image_cong_simp [cong]: "f ` M = g ` N" if "M = N" "\<And>x.
x \<in> N =simp=> f x = g x"
  using that image_cong [of M N f g] by (simp add: simp_implies_def)

lemma image_Int_subset: "f ` (A \<inter> B) \<subseteq> f ` A \<inter> f ` B"
  by blast

lemma image_diff_subset: "f ` A - f ` B \<subseteq> f ` (A - B)"
  by blast

lemma Setcompr_eq_image: "{f x |x. x \<in> A} = f ` A"
  by blast

lemma setcompr_eq_image: "{f x |x. P x} = f ` {x. P x}"
  by auto

lemma ball_imageD: "\<forall>x\<in>f ` A. P x \<Longrightarrow> \<forall>x\<in>A. P (f x)"
  by simp

lemma bex_imageD: "\<exists>x\<in>f ` A. P x \<Longrightarrow> \<exists>x\<in>A. P (f x)"
  by auto

lemma image_add_0 [simp]: "(+) (0::'a::comm_monoid_add) ` S = S"
  by auto


text \<open>\<^medskip> Range of a function -- just an abbreviation for image!\<close>

abbreviation range :: "('a \<Rightarrow> 'b) \<Rightarrow> 'b set"  \<comment> \<open>of function\<close>
  where "range f \<equiv> f ` UNIV"

lemma range_eqI: "b = f x \<Longrightarrow> b \<in> range f"
  by simp

lemma rangeI: "f x \<in> range f"
  by simp

lemma rangeE [elim?]: "b \<in> range (\<lambda>x. f x) \<Longrightarrow> (\<And>x. b = f x \<Longrightarrow> P) \<Longrightarrow> P"
  by (rule imageE)

lemma full_SetCompr_eq: "{u. \<exists>x. u = f x} = range f"
  by auto

lemma range_composition: "range (\<lambda>x. f (g x)) = f ` range g"
  by auto

lemma range_constant [simp]: "range (\<lambda>_. x) = {x}"
  by (simp add: image_constant)

lemma range_eq_singletonD: "range f = {a} \<Longrightarrow> f x = a"
  by auto


subsubsection \<open>Some rules with \<open>if\<close>\<close>

text \<open>Elimination of \<open>{x. \<dots> \<and> x = t \<and> \<dots>}\<close>.\<close>

lemma Collect_conv_if: "{x. x = a \<and> P x} = (if P a then {a} else {})"
  by auto

lemma Collect_conv_if2: "{x. a = x \<and> P x} = (if P a then {a} else {})"
  by auto

text \<open>
  Rewrite rules for boolean case-splitting: faster than \<open>if_split [split]\<close>.
\<close>

lemma if_split_eq1: "(if Q then x else y) = b \<longleftrightarrow> (Q \<longrightarrow> x = b) \<and> (\<not> Q \<longrightarrow> y = b)"
  by (rule if_split)

lemma if_split_eq2: "a = (if Q then x else y) \<longleftrightarrow> (Q \<longrightarrow> a = x) \<and> (\<not> Q \<longrightarrow> a = y)"
  by (rule if_split)

text \<open>
  Split ifs on either side of the membership relation.
  Not for \<open>[simp]\<close> -- can cause goals to blow up!
\<close>

lemma if_split_mem1: "(if Q then x else y) \<in> b \<longleftrightarrow> (Q \<longrightarrow> x \<in> b) \<and> (\<not> Q \<longrightarrow> y \<in> b)"
  by (rule if_split)

lemma if_split_mem2: "(a \<in> (if Q then x else y)) \<longleftrightarrow> (Q \<longrightarrow> a \<in> x) \<and> (\<not> Q \<longrightarrow> a \<in> y)"
  by (rule if_split [where P = "\<lambda>S.
a \<in> S"])

lemmas split_ifs = if_bool_eq_conj if_split_eq1 if_split_eq2 if_split_mem1 if_split_mem2

(*Would like to add these, but the existing code only searches for the
  outer-level constant, which in this case is just Set.member; we instead need
  to use term-nets to associate patterns with rules.  Also, if a rule fails to
  apply, then the formula should be kept.
  [("uminus", Compl_iff RS iffD1), ("minus", [Diff_iff RS iffD1]),
   ("Int", [IntD1,IntD2]), ("Collect", [CollectD]), ("Inter", [InterD]),
   ("INTER", [INT_D])]
 *)


subsection \<open>Further operations and lemmas\<close>

subsubsection \<open>The ``proper subset'' relation\<close>

lemma psubsetI [intro!]: "A \<subseteq> B \<Longrightarrow> A \<noteq> B \<Longrightarrow> A \<subset> B"
  unfolding less_le by blast

lemma psubsetE [elim!]: "A \<subset> B \<Longrightarrow> (A \<subseteq> B \<Longrightarrow> \<not> B \<subseteq> A \<Longrightarrow> R) \<Longrightarrow> R"
  unfolding less_le by blast

lemma psubset_insert_iff:
  "A \<subset> insert x B \<longleftrightarrow> (if x \<in> B then A \<subset> B else if x \<in> A then A - {x} \<subset> B else A \<subseteq> B)"
  by (auto simp add: less_le subset_insert_iff)

lemma psubset_eq: "A \<subset> B \<longleftrightarrow> A \<subseteq> B \<and> A \<noteq> B"
  by (simp only: less_le)

lemma psubset_imp_subset: "A \<subset> B \<Longrightarrow> A \<subseteq> B"
  by (simp add: psubset_eq)

lemma psubset_trans: "A \<subset> B \<Longrightarrow> B \<subset> C \<Longrightarrow> A \<subset> C"
  unfolding less_le by (auto dest: subset_antisym)

lemma psubsetD: "A \<subset> B \<Longrightarrow> c \<in> A \<Longrightarrow> c \<in> B"
  unfolding less_le by (auto dest: subsetD)

lemma psubset_subset_trans: "A \<subset> B \<Longrightarrow> B \<subseteq> C \<Longrightarrow> A \<subset> C"
  by (auto simp add: psubset_eq)

lemma subset_psubset_trans: "A \<subseteq> B \<Longrightarrow> B \<subset> C \<Longrightarrow> A \<subset> C"
  by (auto simp add: psubset_eq)

lemma psubset_imp_ex_mem: "A \<subset> B \<Longrightarrow> \<exists>b. b \<in> B - A"
  unfolding less_le by blast

lemma atomize_ball: "(\<And>x. x \<in> A \<Longrightarrow> P x) \<equiv> Trueprop (\<forall>x\<in>A.
P x)" by (simp only: Ball_def atomize_all atomize_imp) lemmas [symmetric, rulify] = atomize_ball and [symmetric, defn] = atomize_ball lemma image_Pow_mono: "f ` A \ B \ image f ` Pow A \ Pow B" by blast lemma image_Pow_surj: "f ` A = B \ image f ` Pow A = Pow B" by (blast elim: subset_imageE) subsubsection \Derived rules involving subsets.\ text \\insert\.\ lemma subset_insertI: "B \ insert a B" by (rule subsetI) (erule insertI2) lemma subset_insertI2: "A \ B \ A \ insert b B" by blast lemma subset_insert: "x \ A \ A \ insert x B \ A \ B" by blast text \\<^medskip> Finite Union -- the least upper bound of two sets.\ lemma Un_upper1: "A \ A \ B" by (fact sup_ge1) lemma Un_upper2: "B \ A \ B" by (fact sup_ge2) lemma Un_least: "A \ C \ B \ C \ A \ B \ C" by (fact sup_least) text \\<^medskip> Finite Intersection -- the greatest lower bound of two sets.\ lemma Int_lower1: "A \ B \ A" by (fact inf_le1) lemma Int_lower2: "A \ B \ B" by (fact inf_le2) lemma Int_greatest: "C \ A \ C \ B \ C \ A \ B" by (fact inf_greatest) text \\<^medskip> Set difference.\ lemma Diff_subset[simp]: "A - B \ A" by blast lemma Diff_subset_conv: "A - B \ C \ A \ B \ C" by blast subsubsection \Equalities involving union, intersection, inclusion, etc.\ text \\{}\.\ lemma Collect_const [simp]: "{s. P} = (if P then UNIV else {})" \ \supersedes \Collect_False_empty\\ by auto lemma subset_empty [simp]: "A \ {} \ A = {}" by (fact bot_unique) lemma not_psubset_empty [iff]: "\ (A < {})" by (fact not_less_bot) (* FIXME: already simp *) lemma Collect_subset [simp]: "{x\A. P x} \ A" by auto lemma Collect_empty_eq [simp]: "Collect P = {} \ (\x. \ P x)" by blast lemma empty_Collect_eq [simp]: "{} = Collect P \ (\x. \ P x)" by blast lemma Collect_neg_eq: "{x. \ P x} = - {x. P x}" by blast lemma Collect_disj_eq: "{x. P x \ Q x} = {x. P x} \ {x. Q x}" by blast lemma Collect_imp_eq: "{x. P x \ Q x} = - {x. P x} \ {x. Q x}" by blast lemma Collect_conj_eq: "{x. P x \ Q x} = {x. P x} \ {x. 
Q x}" by blast lemma Collect_mono_iff: "Collect P \ Collect Q \ (\x. P x \ Q x)" by blast text \\<^medskip> \insert\.\ lemma insert_is_Un: "insert a A = {a} \ A" \ \NOT SUITABLE FOR REWRITING since \{a} \ insert a {}\\ by blast lemma insert_not_empty [simp]: "insert a A \ {}" and empty_not_insert [simp]: "{} \ insert a A" by blast+ lemma insert_absorb: "a \ A \ insert a A = A" \ \\[simp]\ causes recursive calls when there are nested inserts\ \ \with \<^emph>\quadratic\ running time\ by blast lemma insert_absorb2 [simp]: "insert x (insert x A) = insert x A" by blast lemma insert_commute: "insert x (insert y A) = insert y (insert x A)" by blast lemma insert_subset [simp]: "insert x A \ B \ x \ B \ A \ B" by blast lemma mk_disjoint_insert: "a \ A \ \B. A = insert a B \ a \ B" \ \use new \B\ rather than \A - {a}\ to avoid infinite unfolding\ by (rule exI [where x = "A - {a}"]) blast lemma insert_Collect: "insert a (Collect P) = {u. u \ a \ P u}" by auto lemma insert_inter_insert [simp]: "insert a A \ insert a B = insert a (A \ B)" by blast lemma insert_disjoint [simp]: "insert a A \ B = {} \ a \ B \ A \ B = {}" "{} = insert a A \ B \ a \ B \ {} = A \ B" by auto lemma disjoint_insert [simp]: "B \ insert a A = {} \ a \ B \ B \ A = {}" "{} = A \ insert b B \ b \ A \ {} = A \ B" by auto text \\<^medskip> \Int\\ lemma Int_absorb: "A \ A = A" by (fact inf_idem) (* already simp *) lemma Int_left_absorb: "A \ (A \ B) = A \ B" by (fact inf_left_idem) lemma Int_commute: "A \ B = B \ A" by (fact inf_commute) lemma Int_left_commute: "A \ (B \ C) = B \ (A \ C)" by (fact inf_left_commute) lemma Int_assoc: "(A \ B) \ C = A \ (B \ C)" by (fact inf_assoc) lemmas Int_ac = Int_assoc Int_left_absorb Int_commute Int_left_commute \ \Intersection is an AC-operator\ lemma Int_absorb1: "B \ A \ A \ B = B" by (fact inf_absorb2) lemma Int_absorb2: "A \ B \ A \ B = A" by (fact inf_absorb1) lemma Int_empty_left: "{} \ B = {}" by (fact inf_bot_left) (* already simp *) lemma Int_empty_right: "A \ {} 
= {}" by (fact inf_bot_right) (* already simp *) lemma disjoint_eq_subset_Compl: "A \ B = {} \ A \ - B" by blast lemma disjoint_iff: "A \ B = {} \ (\x. x\A \ x \ B)" by blast lemma disjoint_iff_not_equal: "A \ B = {} \ (\x\A. \y\B. x \ y)" by blast lemma Int_UNIV_left: "UNIV \ B = B" by (fact inf_top_left) (* already simp *) lemma Int_UNIV_right: "A \ UNIV = A" by (fact inf_top_right) (* already simp *) lemma Int_Un_distrib: "A \ (B \ C) = (A \ B) \ (A \ C)" by (fact inf_sup_distrib1) lemma Int_Un_distrib2: "(B \ C) \ A = (B \ A) \ (C \ A)" by (fact inf_sup_distrib2) lemma Int_UNIV [simp]: "A \ B = UNIV \ A = UNIV \ B = UNIV" by (fact inf_eq_top_iff) (* already simp *) lemma Int_subset_iff [simp]: "C \ A \ B \ C \ A \ C \ B" by (fact le_inf_iff) lemma Int_Collect: "x \ A \ {x. P x} \ x \ A \ P x" by blast text \\<^medskip> \Un\.\ lemma Un_absorb: "A \ A = A" by (fact sup_idem) (* already simp *) lemma Un_left_absorb: "A \ (A \ B) = A \ B" by (fact sup_left_idem) lemma Un_commute: "A \ B = B \ A" by (fact sup_commute) lemma Un_left_commute: "A \ (B \ C) = B \ (A \ C)" by (fact sup_left_commute) lemma Un_assoc: "(A \ B) \ C = A \ (B \ C)" by (fact sup_assoc) lemmas Un_ac = Un_assoc Un_left_absorb Un_commute Un_left_commute \ \Union is an AC-operator\ lemma Un_absorb1: "A \ B \ A \ B = B" by (fact sup_absorb2) lemma Un_absorb2: "B \ A \ A \ B = A" by (fact sup_absorb1) lemma Un_empty_left: "{} \ B = B" by (fact sup_bot_left) (* already simp *) lemma Un_empty_right: "A \ {} = A" by (fact sup_bot_right) (* already simp *) lemma Un_UNIV_left: "UNIV \ B = UNIV" by (fact sup_top_left) (* already simp *) lemma Un_UNIV_right: "A \ UNIV = UNIV" by (fact sup_top_right) (* already simp *) lemma Un_insert_left [simp]: "(insert a B) \ C = insert a (B \ C)" by blast lemma Un_insert_right [simp]: "A \ (insert a B) = insert a (A \ B)" by blast lemma Int_insert_left: "(insert a B) \ C = (if a \ C then insert a (B \ C) else B \ C)" by auto lemma Int_insert_left_if0 [simp]: "a \ C \ 
(insert a B) \ C = B \ C" by auto lemma Int_insert_left_if1 [simp]: "a \ C \ (insert a B) \ C = insert a (B \ C)" by auto lemma Int_insert_right: "A \ (insert a B) = (if a \ A then insert a (A \ B) else A \ B)" by auto lemma Int_insert_right_if0 [simp]: "a \ A \ A \ (insert a B) = A \ B" by auto lemma Int_insert_right_if1 [simp]: "a \ A \ A \ (insert a B) = insert a (A \ B)" by auto lemma Un_Int_distrib: "A \ (B \ C) = (A \ B) \ (A \ C)" by (fact sup_inf_distrib1) lemma Un_Int_distrib2: "(B \ C) \ A = (B \ A) \ (C \ A)" by (fact sup_inf_distrib2) lemma Un_Int_crazy: "(A \ B) \ (B \ C) \ (C \ A) = (A \ B) \ (B \ C) \ (C \ A)" by blast lemma subset_Un_eq: "A \ B \ A \ B = B" by (fact le_iff_sup) lemma Un_empty [iff]: "A \ B = {} \ A = {} \ B = {}" by (fact sup_eq_bot_iff) (* FIXME: already simp *) lemma Un_subset_iff [simp]: "A \ B \ C \ A \ C \ B \ C" by (fact le_sup_iff) lemma Un_Diff_Int: "(A - B) \ (A \ B) = A" by blast lemma Diff_Int2: "A \ C - B \ C = A \ C - B" by blast lemma subset_UnE: assumes "C \ A \ B" obtains A' B' where "A' \ A" "B' \ B" "C = A' \ B'" proof show "C \ A \ A" "C \ B \ B" "C = (C \ A) \ (C \ B)" using assms by blast+ qed lemma Un_Int_eq [simp]: "(S \ T) \ S = S" "(S \ T) \ T = T" "S \ (S \ T) = S" "T \ (S \ T) = T" by auto lemma Int_Un_eq [simp]: "(S \ T) \ S = S" "(S \ T) \ T = T" "S \ (S \ T) = S" "T \ (S \ T) = T" by auto text \\<^medskip> Set complement\ lemma Compl_disjoint [simp]: "A \ - A = {}" by (fact inf_compl_bot) lemma Compl_disjoint2 [simp]: "- A \ A = {}" by (fact compl_inf_bot) lemma Compl_partition: "A \ - A = UNIV" by (fact sup_compl_top) lemma Compl_partition2: "- A \ A = UNIV" by (fact compl_sup_top) lemma double_complement: "- (-A) = A" for A :: "'a set" by (fact double_compl) (* already simp *) lemma Compl_Un: "- (A \ B) = (- A) \ (- B)" by (fact compl_sup) (* already simp *) lemma Compl_Int: "- (A \ B) = (- A) \ (- B)" by (fact compl_inf) (* already simp *) lemma subset_Compl_self_eq: "A \ - A \ A = {}" by blast lemma 
Un_Int_assoc_eq: "(A \ B) \ C = A \ (B \ C) \ C \ A" \ \Halmos, Naive Set Theory, page 16.\ by blast lemma Compl_UNIV_eq: "- UNIV = {}" by (fact compl_top_eq) (* already simp *) lemma Compl_empty_eq: "- {} = UNIV" by (fact compl_bot_eq) (* already simp *) lemma Compl_subset_Compl_iff [iff]: "- A \ - B \ B \ A" by (fact compl_le_compl_iff) (* FIXME: already simp *) lemma Compl_eq_Compl_iff [iff]: "- A = - B \ A = B" for A B :: "'a set" by (fact compl_eq_compl_iff) (* FIXME: already simp *) lemma Compl_insert: "- insert x A = (- A) - {x}" by blast text \\<^medskip> Bounded quantifiers. The following are not added to the default simpset because (a) they duplicate the body and (b) there are no similar rules for \Int\. \ lemma ball_Un: "(\x \ A \ B. P x) \ (\x\A. P x) \ (\x\B. P x)" by blast lemma bex_Un: "(\x \ A \ B. P x) \ (\x\A. P x) \ (\x\B. P x)" by blast text \\<^medskip> Set difference.\ lemma Diff_eq: "A - B = A \ (- B)" by blast lemma Diff_eq_empty_iff [simp]: "A - B = {} \ A \ B" by blast lemma Diff_cancel [simp]: "A - A = {}" by blast lemma Diff_idemp [simp]: "(A - B) - B = A - B" for A B :: "'a set" by blast lemma Diff_triv: "A \ B = {} \ A - B = A" by (blast elim: equalityE) lemma empty_Diff [simp]: "{} - A = {}" by blast lemma Diff_empty [simp]: "A - {} = A" by blast lemma Diff_UNIV [simp]: "A - UNIV = {}" by blast lemma Diff_insert0 [simp]: "x \ A \ A - insert x B = A - B" by blast lemma Diff_insert: "A - insert a B = A - B - {a}" \ \NOT SUITABLE FOR REWRITING since \{a} \ insert a 0\\ by blast lemma Diff_insert2: "A - insert a B = A - {a} - B" \ \NOT SUITABLE FOR REWRITING since \{a} \ insert a 0\\ by blast lemma insert_Diff_if: "insert x A - B = (if x \ B then A - B else insert x (A - B))" by auto lemma insert_Diff1 [simp]: "x \ B \ insert x A - B = A - B" by blast lemma insert_Diff_single[simp]: "insert a (A - {a}) = insert a A" by blast lemma insert_Diff: "a \ A \ insert a (A - {a}) = A" by blast lemma Diff_insert_absorb: "x \ A \ (insert x A) - {x} 
= A" by auto lemma Diff_disjoint [simp]: "A \ (B - A) = {}" by blast lemma Diff_partition: "A \ B \ A \ (B - A) = B" by blast lemma double_diff: "A \ B \ B \ C \ B - (C - A) = A" by blast lemma Un_Diff_cancel [simp]: "A \ (B - A) = A \ B" by blast lemma Un_Diff_cancel2 [simp]: "(B - A) \ A = B \ A" by blast lemma Diff_Un: "A - (B \ C) = (A - B) \ (A - C)" by blast lemma Diff_Int: "A - (B \ C) = (A - B) \ (A - C)" by blast lemma Diff_Diff_Int: "A - (A - B) = A \ B" by blast lemma Un_Diff: "(A \ B) - C = (A - C) \ (B - C)" by blast lemma Int_Diff: "(A \ B) - C = A \ (B - C)" by blast lemma Diff_Int_distrib: "C \ (A - B) = (C \ A) - (C \ B)" by blast lemma Diff_Int_distrib2: "(A - B) \ C = (A \ C) - (B \ C)" by blast lemma Diff_Compl [simp]: "A - (- B) = A \ B" by auto lemma Compl_Diff_eq [simp]: "- (A - B) = - A \ B" by blast lemma subset_Compl_singleton [simp]: "A \ - {b} \ b \ A" by blast text \\<^medskip> Quantification over type \<^typ>\bool\.\ lemma bool_induct: "P True \ P False \ P x" by (cases x) auto lemma all_bool_eq: "(\b. P b) \ P True \ P False" by (auto intro: bool_induct) lemma bool_contrapos: "P x \ \ P False \ P True" by (cases x) auto lemma ex_bool_eq: "(\b. P b) \ P True \ P False" by (auto intro: bool_contrapos) lemma UNIV_bool: "UNIV = {False, True}" by (auto intro: bool_induct) text \\<^medskip> \Pow\\ lemma Pow_empty [simp]: "Pow {} = {{}}" by (auto simp add: Pow_def) lemma Pow_singleton_iff [simp]: "Pow X = {Y} \ X = {} \ Y = {}" by blast (* somewhat slow *) lemma Pow_insert: "Pow (insert a A) = Pow A \ (insert a ` Pow A)" by (blast intro: image_eqI [where ?x = "u - {a}" for u]) lemma Pow_Compl: "Pow (- A) = {- B | B. 
A \ Pow B}" by (blast intro: exI [where ?x = "- u" for u]) lemma Pow_UNIV [simp]: "Pow UNIV = UNIV" by blast lemma Un_Pow_subset: "Pow A \ Pow B \ Pow (A \ B)" by blast lemma Pow_Int_eq [simp]: "Pow (A \ B) = Pow A \ Pow B" by blast text \\<^medskip> Miscellany.\ lemma set_eq_subset: "A = B \ A \ B \ B \ A" by blast lemma subset_iff: "A \ B \ (\t. t \ A \ t \ B)" by blast lemma subset_iff_psubset_eq: "A \ B \ A \ B \ A = B" unfolding less_le by blast lemma all_not_in_conv [simp]: "(\x. x \ A) \ A = {}" by blast lemma ex_in_conv: "(\x. x \ A) \ A \ {}" by blast lemma ball_simps [simp, no_atp]: "\A P Q. (\x\A. P x \ Q) \ ((\x\A. P x) \ Q)" "\A P Q. (\x\A. P \ Q x) \ (P \ (\x\A. Q x))" "\A P Q. (\x\A. P \ Q x) \ (P \ (\x\A. Q x))" "\A P Q. (\x\A. P x \ Q) \ ((\x\A. P x) \ Q)" "\P. (\x\{}. P x) \ True" "\P. (\x\UNIV. P x) \ (\x. P x)" "\a B P. (\x\insert a B. P x) \ (P a \ (\x\B. P x))" "\P Q. (\x\Collect Q. P x) \ (\x. Q x \ P x)" "\A P f. (\x\f`A. P x) \ (\x\A. P (f x))" "\A P. (\ (\x\A. P x)) \ (\x\A. \ P x)" by auto lemma bex_simps [simp, no_atp]: "\A P Q. (\x\A. P x \ Q) \ ((\x\A. P x) \ Q)" "\A P Q. (\x\A. P \ Q x) \ (P \ (\x\A. Q x))" "\P. (\x\{}. P x) \ False" "\P. (\x\UNIV. P x) \ (\x. P x)" "\a B P. (\x\insert a B. P x) \ (P a \ (\x\B. P x))" "\P Q. (\x\Collect Q. P x) \ (\x. Q x \ P x)" "\A P f. (\x\f`A. P x) \ (\x\A. P (f x))" "\A P. (\(\x\A. P x)) \ (\x\A. \ P x)" by auto lemma ex_image_cong_iff [simp, no_atp]: "(\x. x\f`A) \ A \ {}" "(\x. x\f`A \ P x) \ (\x\A. 
P (f x))" by auto subsubsection \Monotonicity of various operations\ lemma image_mono: "A \ B \ f ` A \ f ` B" by blast lemma Pow_mono: "A \ B \ Pow A \ Pow B" by blast lemma insert_mono: "C \ D \ insert a C \ insert a D" by blast lemma Un_mono: "A \ C \ B \ D \ A \ B \ C \ D" by (fact sup_mono) lemma Int_mono: "A \ C \ B \ D \ A \ B \ C \ D" by (fact inf_mono) lemma Diff_mono: "A \ C \ D \ B \ A - B \ C - D" by blast lemma Compl_anti_mono: "A \ B \ - B \ - A" by (fact compl_mono) text \\<^medskip> Monotonicity of implications.\ lemma in_mono: "A \ B \ x \ A \ x \ B" by (rule impI) (erule subsetD) lemma conj_mono: "P1 \ Q1 \ P2 \ Q2 \ (P1 \ P2) \ (Q1 \ Q2)" by iprover lemma disj_mono: "P1 \ Q1 \ P2 \ Q2 \ (P1 \ P2) \ (Q1 \ Q2)" by iprover lemma imp_mono: "Q1 \ P1 \ P2 \ Q2 \ (P1 \ P2) \ (Q1 \ Q2)" by iprover lemma imp_refl: "P \ P" .. lemma not_mono: "Q \ P \ \ P \ \ Q" by iprover lemma ex_mono: "(\x. P x \ Q x) \ (\x. P x) \ (\x. Q x)" by iprover lemma all_mono: "(\x. P x \ Q x) \ (\x. P x) \ (\x. Q x)" by iprover lemma Collect_mono: "(\x. P x \ Q x) \ Collect P \ Collect Q" by blast lemma Int_Collect_mono: "A \ B \ (\x. x \ A \ P x \ Q x) \ A \ Collect P \ B \ Collect Q" by blast lemmas basic_monos = subset_refl imp_refl disj_mono conj_mono ex_mono Collect_mono in_mono lemma eq_to_mono: "a = b \ c = d \ b \ d \ a \ c" by iprover subsubsection \Inverse image of a function\ definition vimage :: "('a \ 'b) \ 'b set \ 'a set" (infixr "-`" 90) where "f -` B \ {x. f x \ B}" lemma vimage_eq [simp]: "a \ f -` B \ f a \ B" unfolding vimage_def by blast lemma vimage_singleton_eq: "a \ f -` {b} \ f a = b" by simp lemma vimageI [intro]: "f a = b \ b \ B \ a \ f -` B" unfolding vimage_def by blast lemma vimageI2: "f a \ A \ a \ f -` A" unfolding vimage_def by fast lemma vimageE [elim!]: "a \ f -` B \ (\x. 
f a = x \ x \ B \ P) \ P" unfolding vimage_def by blast lemma vimageD: "a \ f -` A \ f a \ A" unfolding vimage_def by fast lemma vimage_empty [simp]: "f -` {} = {}" by blast lemma vimage_Compl: "f -` (- A) = - (f -` A)" by blast lemma vimage_Un [simp]: "f -` (A \ B) = (f -` A) \ (f -` B)" by blast lemma vimage_Int [simp]: "f -` (A \ B) = (f -` A) \ (f -` B)" by fast lemma vimage_Collect_eq [simp]: "f -` Collect P = {y. P (f y)}" by blast lemma vimage_Collect: "(\x. P (f x) = Q x) \ f -` (Collect P) = Collect Q" by blast lemma vimage_insert: "f -` (insert a B) = (f -` {a}) \ (f -` B)" \ \NOT suitable for rewriting because of the recurrence of \{a}\.\ by blast lemma vimage_Diff: "f -` (A - B) = (f -` A) - (f -` B)" by blast lemma vimage_UNIV [simp]: "f -` UNIV = UNIV" by blast lemma vimage_mono: "A \ B \ f -` A \ f -` B" \ \monotonicity\ by blast lemma vimage_image_eq: "f -` (f ` A) = {y. \x\A. f x = f y}" by (blast intro: sym) lemma image_vimage_subset: "f ` (f -` A) \ A" by blast lemma image_vimage_eq [simp]: "f ` (f -` A) = A \ range f" by blast lemma image_subset_iff_subset_vimage: "f ` A \ B \ A \ f -` B" by blast lemma subset_vimage_iff: "A \ f -` B \ (\x\A. f x \ B)" by auto lemma vimage_const [simp]: "((\x. c) -` A) = (if c \ A then UNIV else {})" by auto lemma vimage_if [simp]: "((\x. if x \ B then c else d) -` A) = (if c \ A then (if d \ A then UNIV else B) else if d \ A then - B else {})" by (auto simp add: vimage_def) lemma vimage_inter_cong: "(\ w. w \ S \ f w = g w) \ f -` y \ S = g -` y \ S" by auto lemma vimage_ident [simp]: "(\x. x) -` Y = Y" by blast subsubsection \Singleton sets\ definition is_singleton :: "'a set \ bool" where "is_singleton A \ (\x. A = {x})" lemma is_singletonI [simp, intro!]: "is_singleton {x}" unfolding is_singleton_def by simp lemma is_singletonI': "A \ {} \ (\x y. x \ A \ y \ A \ x = y) \ is_singleton A" unfolding is_singleton_def by blast lemma is_singletonE: "is_singleton A \ (\x. 
A = {x} \ P) \ P" unfolding is_singleton_def by blast subsubsection \Getting the contents of a singleton set\ definition the_elem :: "'a set \ 'a" where "the_elem X = (THE x. X = {x})" lemma the_elem_eq [simp]: "the_elem {x} = x" by (simp add: the_elem_def) lemma is_singleton_the_elem: "is_singleton A \ A = {the_elem A}" by (auto simp: is_singleton_def) lemma the_elem_image_unique: assumes "A \ {}" and *: "\y. y \ A \ f y = f x" shows "the_elem (f ` A) = f x" unfolding the_elem_def proof (rule the1_equality) from \A \ {}\ obtain y where "y \ A" by auto with * have "f x = f y" by simp with \y \ A\ have "f x \ f ` A" by blast with * show "f ` A = {f x}" by auto then show "\!x. f ` A = {x}" by auto qed subsubsection \Least value operator\ lemma Least_mono: "mono f \ \x\S. \y\S. x \ y \ (LEAST y. y \ f ` S) = f (LEAST x. x \ S)" for f :: "'a::order \ 'b::order" \ \Courtesy of Stephan Merz\ apply clarify apply (erule_tac P = "\x. x \ S" in LeastI2_order) apply fast apply (rule LeastI2_order) apply (auto elim: monoD intro!: order_antisym) done subsubsection \Monad operation\ definition bind :: "'a set \ ('a \ 'b set) \ 'b set" where "bind A f = {x. \B \ f`A. x \ B}" hide_const (open) bind lemma bind_bind: "Set.bind (Set.bind A B) C = Set.bind A (\x. Set.bind (B x) C)" for A :: "'a set" by (auto simp: bind_def) lemma empty_bind [simp]: "Set.bind {} f = {}" by (simp add: bind_def) lemma nonempty_bind_const: "A \ {} \ Set.bind A (\_. B) = B" by (auto simp: bind_def) lemma bind_const: "Set.bind A (\_. B) = (if A = {} then {} else B)" by (auto simp: bind_def) lemma bind_singleton_conv_image: "Set.bind A (\x. 
{f x}) = f ` A" by (auto simp: bind_def) subsubsection \Operations for execution\ definition is_empty :: "'a set \ bool" where [code_abbrev]: "is_empty A \ A = {}" hide_const (open) is_empty definition remove :: "'a \ 'a set \ 'a set" where [code_abbrev]: "remove x A = A - {x}" hide_const (open) remove lemma member_remove [simp]: "x \ Set.remove y A \ x \ A \ x \ y" by (simp add: remove_def) definition filter :: "('a \ bool) \ 'a set \ 'a set" where [code_abbrev]: "filter P A = {a \ A. P a}" hide_const (open) filter lemma member_filter [simp]: "x \ Set.filter P A \ x \ A \ P x" by (simp add: filter_def) instantiation set :: (equal) equal begin definition "HOL.equal A B \ A \ B \ B \ A" instance by standard (auto simp add: equal_set_def) end text \Misc\ definition pairwise :: "('a \ 'a \ bool) \ 'a set \ bool" where "pairwise R S \ (\x \ S. \y \ S. x \ y \ R x y)" lemma pairwise_alt: "pairwise R S \ (\x\S. \y\S-{x}. R x y)" by (auto simp add: pairwise_def) lemma pairwise_trivial [simp]: "pairwise (\i j. j \ i) I" by (auto simp: pairwise_def) lemma pairwiseI [intro?]: "pairwise R S" if "\x y. x \ S \ y \ S \ x \ y \ R x y" using that by (simp add: pairwise_def) lemma pairwiseD: "R x y" and "R y x" if "pairwise R S" "x \ S" and "y \ S" and "x \ y" using that by (simp_all add: pairwise_def) lemma pairwise_empty [simp]: "pairwise P {}" by (simp add: pairwise_def) lemma pairwise_singleton [simp]: "pairwise P {A}" by (simp add: pairwise_def) lemma pairwise_insert: "pairwise r (insert x s) \ (\y. y \ s \ y \ x \ r x y \ r y x) \ pairwise r s" by (force simp: pairwise_def) lemma pairwise_subset: "pairwise P S \ T \ S \ pairwise P T" by (force simp: pairwise_def) lemma pairwise_mono: "\pairwise P A; \x y. P x y \ Q x y; B \ A\ \ pairwise Q B" by (fastforce simp: pairwise_def) lemma pairwise_imageI: "pairwise P (f ` A)" if "\x y. x \ A \ y \ A \ x \ y \ f x \ f y \ P (f x) (f y)" using that by (auto intro: pairwiseI) lemma pairwise_image: "pairwise r (f ` s) \ pairwise (\x y. 
(f x \ f y) \ r (f x) (f y)) s" by (force simp: pairwise_def) definition disjnt :: "'a set \ 'a set \ bool" where "disjnt A B \ A \ B = {}" lemma disjnt_self_iff_empty [simp]: "disjnt S S \ S = {}" by (auto simp: disjnt_def) lemma disjnt_iff: "disjnt A B \ (\x. \ (x \ A \ x \ B))" by (force simp: disjnt_def) lemma disjnt_sym: "disjnt A B \ disjnt B A" using disjnt_iff by blast lemma disjnt_empty1 [simp]: "disjnt {} A" and disjnt_empty2 [simp]: "disjnt A {}" by (auto simp: disjnt_def) lemma disjnt_insert1 [simp]: "disjnt (insert a X) Y \ a \ Y \ disjnt X Y" by (simp add: disjnt_def) lemma disjnt_insert2 [simp]: "disjnt Y (insert a X) \ a \ Y \ disjnt Y X" by (simp add: disjnt_def) lemma disjnt_subset1 : "\disjnt X Y; Z \ X\ \ disjnt Z Y" by (auto simp: disjnt_def) lemma disjnt_subset2 : "\disjnt X Y; Z \ Y\ \ disjnt X Z" by (auto simp: disjnt_def) lemma disjnt_Un1 [simp]: "disjnt (A \ B) C \ disjnt A C \ disjnt B C" by (auto simp: disjnt_def) lemma disjnt_Un2 [simp]: "disjnt C (A \ B) \ disjnt C A \ disjnt C B" by (auto simp: disjnt_def) lemma disjoint_image_subset: "\pairwise disjnt \; \X. X \ \ \ f X \ X\ \ pairwise disjnt (f `\)" unfolding disjnt_def pairwise_def by fast lemma pairwise_disjnt_iff: "pairwise disjnt \ \ (\x. \\<^sub>\\<^sub>1 X. X \ \ \ x \ X)" by (auto simp: Uniq_def disjnt_iff pairwise_def) lemma disjnt_insert: \<^marker>\contributor \Lars Hupel\\ \disjnt (insert x M) N\ if \x \ N\ \disjnt M N\ using that by (simp add: disjnt_def) lemma Int_emptyI: "(\x. x \ A \ x \ B \ False) \ A \ B = {}" by blast lemma in_image_insert_iff: assumes "\C. C \ B \ x \ C" shows "A \ insert x ` B \ x \ A \ A - {x} \ B" (is "?P \ ?Q") proof assume ?P then show ?Q using assms by auto next assume ?Q then have "x \ A" and "A - {x} \ B" by simp_all from \A - {x} \ B\ have "insert x (A - {x}) \ insert x ` B" by (rule imageI) also from \x \ A\ have "insert x (A - {x}) = A" by auto finally show ?P . 
qed

hide_const (open) member not_member

lemmas equalityI = subset_antisym
lemmas set_mp = subsetD
lemmas set_rev_mp = rev_subsetD

ML \<open>
val Ball_def = @{thm Ball_def}
val Bex_def = @{thm Bex_def}
val CollectD = @{thm CollectD}
val CollectE = @{thm CollectE}
val CollectI = @{thm CollectI}
val Collect_conj_eq = @{thm Collect_conj_eq}
val Collect_mem_eq = @{thm Collect_mem_eq}
val IntD1 = @{thm IntD1}
val IntD2 = @{thm IntD2}
val IntE = @{thm IntE}
val IntI = @{thm IntI}
val Int_Collect = @{thm Int_Collect}
val UNIV_I = @{thm UNIV_I}
val UNIV_witness = @{thm UNIV_witness}
val UnE = @{thm UnE}
val UnI1 = @{thm UnI1}
val UnI2 = @{thm UnI2}
val ballE = @{thm ballE}
val ballI = @{thm ballI}
val bexCI = @{thm bexCI}
val bexE = @{thm bexE}
val bexI = @{thm bexI}
val bex_triv = @{thm bex_triv}
val bspec = @{thm bspec}
val contra_subsetD = @{thm contra_subsetD}
val equalityCE = @{thm equalityCE}
val equalityD1 = @{thm equalityD1}
val equalityD2 = @{thm equalityD2}
val equalityE = @{thm equalityE}
val equalityI = @{thm equalityI}
val imageE = @{thm imageE}
val imageI = @{thm imageI}
val image_Un = @{thm image_Un}
val image_insert = @{thm image_insert}
val insert_commute = @{thm insert_commute}
val insert_iff = @{thm insert_iff}
val mem_Collect_eq = @{thm mem_Collect_eq}
val rangeE = @{thm rangeE}
val rangeI = @{thm rangeI}
val range_eqI = @{thm range_eqI}
val subsetCE = @{thm subsetCE}
val subsetD = @{thm subsetD}
val subsetI = @{thm subsetI}
val subset_refl = @{thm subset_refl}
val subset_trans = @{thm subset_trans}
val vimageD = @{thm vimageD}
val vimageE = @{thm vimageE}
val vimageI = @{thm vimageI}
val vimageI2 = @{thm vimageI2}
val vimage_Collect = @{thm vimage_Collect}
val vimage_Int = @{thm vimage_Int}
val vimage_Un = @{thm vimage_Un}
\<close>

end
diff --git a/src/HOL/Set_Interval.thy b/src/HOL/Set_Interval.thy
--- a/src/HOL/Set_Interval.thy
+++ b/src/HOL/Set_Interval.thy
@@ -1,2589 +1,2564 @@
(*  Title:      HOL/Set_Interval.thy
    Author:     Tobias Nipkow, Clemens Ballarin, Jeremy Avigad
lessThan, greaterThan, atLeast, atMost and two-sided intervals Modern convention: Ixy stands for an interval where x and y describe the lower and upper bound and x,y : {c,o,i} where c = closed, o = open, i = infinite. Examples: Ico = {_ ..< _} and Ici = {_ ..} *) section \Set intervals\ theory Set_Interval imports Divides begin (* Belongs in Finite_Set but 2 is not available there *) lemma card_2_iff: "card S = 2 \ (\x y. S = {x,y} \ x \ y)" by (auto simp: card_Suc_eq numeral_eq_Suc) lemma card_2_iff': "card S = 2 \ (\x\S. \y\S. x \ y \ (\z\S. z = x \ z = y))" by (auto simp: card_Suc_eq numeral_eq_Suc) context ord begin definition lessThan :: "'a => 'a set" ("(1{..<_})") where "{.. 'a set" ("(1{.._})") where "{..u} == {x. x \ u}" definition greaterThan :: "'a => 'a set" ("(1{_<..})") where "{l<..} == {x. l 'a set" ("(1{_..})") where "{l..} == {x. l\x}" definition greaterThanLessThan :: "'a => 'a => 'a set" ("(1{_<..<_})") where "{l<.. 'a => 'a set" ("(1{_..<_})") where "{l.. 'a => 'a set" ("(1{_<.._})") where "{l<..u} == {l<..} Int {..u}" definition atLeastAtMost :: "'a => 'a => 'a set" ("(1{_.._})") where "{l..u} == {l..} Int {..u}" end text\A note of warning when using \<^term>\{.. on type \<^typ>\nat\: it is equivalent to \<^term>\{0::nat.. but some lemmas involving \<^term>\{m.. 
may not exist in \<^term>\{..-form as well.\ syntax (ASCII) "_UNION_le" :: "'a => 'a => 'b set => 'b set" ("(3UN _<=_./ _)" [0, 0, 10] 10) "_UNION_less" :: "'a => 'a => 'b set => 'b set" ("(3UN _<_./ _)" [0, 0, 10] 10) "_INTER_le" :: "'a => 'a => 'b set => 'b set" ("(3INT _<=_./ _)" [0, 0, 10] 10) "_INTER_less" :: "'a => 'a => 'b set => 'b set" ("(3INT _<_./ _)" [0, 0, 10] 10) syntax (latex output) "_UNION_le" :: "'a \ 'a => 'b set => 'b set" ("(3\(\unbreakable\_ \ _)/ _)" [0, 0, 10] 10) "_UNION_less" :: "'a \ 'a => 'b set => 'b set" ("(3\(\unbreakable\_ < _)/ _)" [0, 0, 10] 10) "_INTER_le" :: "'a \ 'a => 'b set => 'b set" ("(3\(\unbreakable\_ \ _)/ _)" [0, 0, 10] 10) "_INTER_less" :: "'a \ 'a => 'b set => 'b set" ("(3\(\unbreakable\_ < _)/ _)" [0, 0, 10] 10) syntax "_UNION_le" :: "'a => 'a => 'b set => 'b set" ("(3\_\_./ _)" [0, 0, 10] 10) "_UNION_less" :: "'a => 'a => 'b set => 'b set" ("(3\_<_./ _)" [0, 0, 10] 10) "_INTER_le" :: "'a => 'a => 'b set => 'b set" ("(3\_\_./ _)" [0, 0, 10] 10) "_INTER_less" :: "'a => 'a => 'b set => 'b set" ("(3\_<_./ _)" [0, 0, 10] 10) translations "\i\n. A" \ "\i\{..n}. A" "\i "\i\{..i\n. A" \ "\i\{..n}. A" "\i "\i\{..Various equivalences\ lemma (in ord) lessThan_iff [iff]: "(i \ lessThan k) = (i greaterThan k) = (k atLeast k) = (k<=i)" by (simp add: atLeast_def) lemma Compl_atLeast [simp]: "!!k:: 'a::linorder. -atLeast k = lessThan k" by (auto simp add: lessThan_def atLeast_def) lemma (in ord) atMost_iff [iff]: "(i \ atMost k) = (i<=k)" by (simp add: atMost_def) lemma atMost_Int_atLeast: "!!n:: 'a::order. 
atMost n Int atLeast n = {n}" by (blast intro: order_antisym) lemma (in linorder) lessThan_Int_lessThan: "{ a <..} \ { b <..} = { max a b <..}" by auto lemma (in linorder) greaterThan_Int_greaterThan: "{..< a} \ {..< b} = {..< min a b}" by auto subsection \Logical Equivalences for Set Inclusion and Equality\ lemma atLeast_empty_triv [simp]: "{{}..} = UNIV" by auto lemma atMost_UNIV_triv [simp]: "{..UNIV} = UNIV" by auto lemma atLeast_subset_iff [iff]: "(atLeast x \ atLeast y) = (y \ (x::'a::preorder))" by (blast intro: order_trans) lemma atLeast_eq_iff [iff]: "(atLeast x = atLeast y) = (x = (y::'a::order))" by (blast intro: order_antisym order_trans) lemma greaterThan_subset_iff [iff]: "(greaterThan x \ greaterThan y) = (y \ (x::'a::linorder))" unfolding greaterThan_def by (auto simp: linorder_not_less [symmetric]) lemma greaterThan_eq_iff [iff]: "(greaterThan x = greaterThan y) = (x = (y::'a::linorder))" by (auto simp: elim!: equalityE) lemma atMost_subset_iff [iff]: "(atMost x \ atMost y) = (x \ (y::'a::preorder))" by (blast intro: order_trans) lemma atMost_eq_iff [iff]: "(atMost x = atMost y) = (x = (y::'a::order))" by (blast intro: order_antisym order_trans) lemma lessThan_subset_iff [iff]: "(lessThan x \ lessThan y) = (x \ (y::'a::linorder))" unfolding lessThan_def by (auto simp: linorder_not_less [symmetric]) lemma lessThan_eq_iff [iff]: "(lessThan x = lessThan y) = (x = (y::'a::linorder))" by (auto simp: elim!: equalityE) lemma lessThan_strict_subset_iff: fixes m n :: "'a::linorder" shows "{.. m < n" by (metis leD lessThan_subset_iff linorder_linear not_less_iff_gr_or_eq psubset_eq) lemma (in linorder) Ici_subset_Ioi_iff: "{a ..} \ {b <..} \ b < a" by auto lemma (in linorder) Iic_subset_Iio_iff: "{.. a} \ {..< b} \ a < b" by auto lemma (in preorder) Ioi_le_Ico: "{a <..} \ {a ..}" by (auto intro: less_imp_le) subsection \Two-sided intervals\ context ord begin lemma greaterThanLessThan_iff [simp]: "(i \ {l<.. 
i < u)" by (simp add: greaterThanLessThan_def) lemma atLeastLessThan_iff [simp]: "(i \ {l.. i \ i < u)" by (simp add: atLeastLessThan_def) lemma greaterThanAtMost_iff [simp]: "(i \ {l<..u}) = (l < i \ i \ u)" by (simp add: greaterThanAtMost_def) lemma atLeastAtMost_iff [simp]: "(i \ {l..u}) = (l \ i \ i \ u)" by (simp add: atLeastAtMost_def) text \The above four lemmas could be declared as iffs. Unfortunately this breaks many proofs. Since it only helps blast, it is better to leave them alone.\ lemma greaterThanLessThan_eq: "{ a <..< b} = { a <..} \ {..< b }" by auto lemma (in order) atLeastLessThan_eq_atLeastAtMost_diff: "{a..Emptiness, singletons, subset\ context preorder begin lemma atLeastatMost_empty_iff[simp]: "{a..b} = {} \ (\ a \ b)" by auto (blast intro: order_trans) lemma atLeastatMost_empty_iff2[simp]: "{} = {a..b} \ (\ a \ b)" by auto (blast intro: order_trans) lemma atLeastLessThan_empty_iff[simp]: "{a.. (\ a < b)" by auto (blast intro: le_less_trans) lemma atLeastLessThan_empty_iff2[simp]: "{} = {a..
(\ a < b)" by auto (blast intro: le_less_trans) lemma greaterThanAtMost_empty_iff[simp]: "{k<..l} = {} \ \ k < l" by auto (blast intro: less_le_trans) lemma greaterThanAtMost_empty_iff2[simp]: "{} = {k<..l} \ \ k < l" by auto (blast intro: less_le_trans) lemma atLeastatMost_subset_iff[simp]: "{a..b} \ {c..d} \ (\ a \ b) \ c \ a \ b \ d" unfolding atLeastAtMost_def atLeast_def atMost_def by (blast intro: order_trans) lemma atLeastatMost_psubset_iff: "{a..b} < {c..d} \ ((\ a \ b) \ c \ a \ b \ d \ (c < a \ b < d)) \ c \ d" by(simp add: psubset_eq set_eq_iff less_le_not_le)(blast intro: order_trans) lemma atLeastAtMost_subseteq_atLeastLessThan_iff: "{a..b} \ {c ..< d} \ (a \ b \ c \ a \ b < d)" by auto (blast intro: local.order_trans local.le_less_trans elim: )+ lemma Icc_subset_Ici_iff[simp]: "{l..h} \ {l'..} = (\ l\h \ l\l')" by(auto simp: subset_eq intro: order_trans) lemma Icc_subset_Iic_iff[simp]: "{l..h} \ {..h'} = (\ l\h \ h\h')" by(auto simp: subset_eq intro: order_trans) lemma not_Ici_eq_empty[simp]: "{l..} \ {}" by(auto simp: set_eq_iff) lemma not_Iic_eq_empty[simp]: "{..h} \ {}" by(auto simp: set_eq_iff) lemmas not_empty_eq_Ici_eq_empty[simp] = not_Ici_eq_empty[symmetric] lemmas not_empty_eq_Iic_eq_empty[simp] = not_Iic_eq_empty[symmetric] end context order begin lemma atLeastatMost_empty[simp]: "b < a \ {a..b} = {}" by(auto simp: atLeastAtMost_def atLeast_def atMost_def) lemma atLeastLessThan_empty[simp]: "b \ a \ {a.. k ==> {k<..l} = {}" by(auto simp:greaterThanAtMost_def greaterThan_def atMost_def) lemma greaterThanLessThan_empty[simp]:"l \ k ==> {k<.. {a .. b} = {a}" by simp lemma Icc_eq_Icc[simp]: "{l..h} = {l'..h'} = (l=l' \ h=h' \ \ l\h \ \ l'\h')" by (simp add: order_class.order.eq_iff) (auto intro: order_trans) lemma atLeastAtMost_singleton_iff[simp]: "{a .. 
b} = {c} \ a = b \ b = c" proof assume "{a..b} = {c}" hence *: "\ (\ a \ b)" unfolding atLeastatMost_empty_iff[symmetric] by simp with \{a..b} = {c}\ have "c \ a \ b \ c" by auto with * show "a = b \ b = c" by auto qed simp end context no_top begin (* also holds for no_bot but no_top should suffice *) lemma not_UNIV_le_Icc[simp]: "\ UNIV \ {l..h}" using gt_ex[of h] by(auto simp: subset_eq less_le_not_le) lemma not_UNIV_le_Iic[simp]: "\ UNIV \ {..h}" using gt_ex[of h] by(auto simp: subset_eq less_le_not_le) lemma not_Ici_le_Icc[simp]: "\ {l..} \ {l'..h'}" using gt_ex[of h'] by(auto simp: subset_eq less_le)(blast dest:antisym_conv intro: order_trans) lemma not_Ici_le_Iic[simp]: "\ {l..} \ {..h'}" using gt_ex[of h'] by(auto simp: subset_eq less_le)(blast dest:antisym_conv intro: order_trans) end context no_bot begin lemma not_UNIV_le_Ici[simp]: "\ UNIV \ {l..}" using lt_ex[of l] by(auto simp: subset_eq less_le_not_le) lemma not_Iic_le_Icc[simp]: "\ {..h} \ {l'..h'}" using lt_ex[of l'] by(auto simp: subset_eq less_le)(blast dest:antisym_conv intro: order_trans) lemma not_Iic_le_Ici[simp]: "\ {..h} \ {l'..}" using lt_ex[of l'] by(auto simp: subset_eq less_le)(blast dest:antisym_conv intro: order_trans) end context no_top begin (* also holds for no_bot but no_top should suffice *) lemma not_UNIV_eq_Icc[simp]: "\ UNIV = {l'..h'}" using gt_ex[of h'] by(auto simp: set_eq_iff less_le_not_le) lemmas not_Icc_eq_UNIV[simp] = not_UNIV_eq_Icc[symmetric] lemma not_UNIV_eq_Iic[simp]: "\ UNIV = {..h'}" using gt_ex[of h'] by(auto simp: set_eq_iff less_le_not_le) lemmas not_Iic_eq_UNIV[simp] = not_UNIV_eq_Iic[symmetric] lemma not_Icc_eq_Ici[simp]: "\ {l..h} = {l'..}" unfolding atLeastAtMost_def using not_Ici_le_Iic[of l'] by blast lemmas not_Ici_eq_Icc[simp] = not_Icc_eq_Ici[symmetric] (* also holds for no_bot but no_top should suffice *) lemma not_Iic_eq_Ici[simp]: "\ {..h} = {l'..}" using not_Ici_le_Iic[of l' h] by blast lemmas not_Ici_eq_Iic[simp] = not_Iic_eq_Ici[symmetric] end 
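(* The emptiness and subset characterisations proved above, e.g.
   atLeastatMost_empty_iff and atLeastatMost_subset_iff, can also be
   sanity-checked extra-logically by brute force over a small finite model
   of a linear order. The Python sketch below is purely illustrative and
   not part of this theory; the helper names (Icc, MODEL, bounds) are ours. *)

```python
# Brute-force check (outside Isabelle) of two HOL lemmas over a finite
# slice of the integers; all helper names here are ours, not Isabelle's.
#
#   atLeastatMost_empty_iff:  {a..b} = {}       <->  ~(a <= b)
#   atLeastatMost_subset_iff: {a..b} <= {c..d}  <->  ~(a <= b) | (c <= a & b <= d)

MODEL = range(-5, 6)  # finite model universe


def Icc(a, b):
    """Closed interval {a..b} as a Python set, relative to MODEL."""
    return {x for x in MODEL if a <= x <= b}


bounds = range(-3, 4)  # keep interval endpoints strictly inside MODEL

for a in bounds:
    for b in bounds:
        # emptiness characterisation
        assert (Icc(a, b) == set()) == (not a <= b)
        for c in bounds:
            for d in bounds:
                # subset characterisation
                lhs = Icc(a, b) <= Icc(c, d)
                rhs = (not a <= b) or (c <= a and b <= d)
                assert lhs == rhs
```

(* Such a finite-model check is of course no substitute for the proofs in
   this theory -- the lemmas hold for arbitrary preorders -- but it is a
   cheap way to catch a mis-stated side condition. *)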
context no_bot begin lemma not_UNIV_eq_Ici[simp]: "\ UNIV = {l'..}" using lt_ex[of l'] by(auto simp: set_eq_iff less_le_not_le) lemmas not_Ici_eq_UNIV[simp] = not_UNIV_eq_Ici[symmetric] lemma not_Icc_eq_Iic[simp]: "\ {l..h} = {..h'}" unfolding atLeastAtMost_def using not_Iic_le_Ici[of h'] by blast lemmas not_Iic_eq_Icc[simp] = not_Icc_eq_Iic[symmetric] end context dense_linorder begin lemma greaterThanLessThan_empty_iff[simp]: "{ a <..< b } = {} \ b \ a" using dense[of a b] by (cases "a < b") auto lemma greaterThanLessThan_empty_iff2[simp]: "{} = { a <..< b } \ b \ a" using dense[of a b] by (cases "a < b") auto lemma atLeastLessThan_subseteq_atLeastAtMost_iff: "{a ..< b} \ { c .. d } \ (a < b \ c \ a \ b \ d)" using dense[of "max a d" "b"] by (force simp: subset_eq Ball_def not_less[symmetric]) lemma greaterThanAtMost_subseteq_atLeastAtMost_iff: "{a <.. b} \ { c .. d } \ (a < b \ c \ a \ b \ d)" using dense[of "a" "min c b"] by (force simp: subset_eq Ball_def not_less[symmetric]) lemma greaterThanLessThan_subseteq_atLeastAtMost_iff: "{a <..< b} \ { c .. d } \ (a < b \ c \ a \ b \ d)" using dense[of "a" "min c b"] dense[of "max a d" "b"] by (force simp: subset_eq Ball_def not_less[symmetric]) lemma greaterThanLessThan_subseteq_greaterThanLessThan: "{a <..< b} \ {c <..< d} \ (a < b \ a \ c \ b \ d)" using dense[of "a" "min c b"] dense[of "max a d" "b"] by (force simp: subset_eq Ball_def not_less[symmetric]) lemma greaterThanAtMost_subseteq_atLeastLessThan_iff: "{a <.. b} \ { c ..< d } \ (a < b \ c \ a \ b < d)" using dense[of "a" "min c b"] by (force simp: subset_eq Ball_def not_less[symmetric]) lemma greaterThanLessThan_subseteq_atLeastLessThan_iff: "{a <..< b} \ { c ..< d } \ (a < b \ c \ a \ b \ d)" using dense[of "a" "min c b"] dense[of "max a d" "b"] by (force simp: subset_eq Ball_def not_less[symmetric]) lemma greaterThanLessThan_subseteq_greaterThanAtMost_iff: "{a <..< b} \ { c <.. 
d } \ (a < b \ c \ a \ b \ d)" using dense[of "a" "min c b"] dense[of "max a d" "b"] by (force simp: subset_eq Ball_def not_less[symmetric]) end context no_top begin lemma greaterThan_non_empty[simp]: "{x <..} \ {}" using gt_ex[of x] by auto end context no_bot begin lemma lessThan_non_empty[simp]: "{..< x} \ {}" using lt_ex[of x] by auto end lemma (in linorder) atLeastLessThan_subset_iff: "{a.. {c.. b \ a \ c\a \ b\d" apply (auto simp:subset_eq Ball_def not_le) apply(frule_tac x=a in spec) apply(erule_tac x=d in allE) apply (auto simp: ) done lemma atLeastLessThan_inj: fixes a b c d :: "'a::linorder" assumes eq: "{a ..< b} = {c ..< d}" and "a < b" "c < d" shows "a = c" "b = d" using assms by (metis atLeastLessThan_subset_iff eq less_le_not_le antisym_conv2 subset_refl)+ lemma atLeastLessThan_eq_iff: fixes a b c d :: "'a::linorder" assumes "a < b" "c < d" shows "{a ..< b} = {c ..< d} \ a = c \ b = d" using atLeastLessThan_inj assms by auto lemma (in linorder) Ioc_inj: \{a <.. b} = {c <.. d} \ (b \ a \ d \ c) \ a = c \ b = d\ (is \?P \ ?Q\) proof assume ?Q then show ?P by auto next assume ?P then have \a < x \ x \ b \ c < x \ x \ d\ for x by (simp add: set_eq_iff) from this [of a] this [of b] this [of c] this [of d] show ?Q by auto qed lemma (in order) Iio_Int_singleton: "{.. 
{x} = (if x < k then {x} else {})" by auto lemma (in linorder) Ioc_subset_iff: "{a<..b} \ {c<..d} \ (b \ a \ c \ a \ b \ d)" by (auto simp: subset_eq Ball_def) (metis less_le not_less) lemma (in order_bot) atLeast_eq_UNIV_iff: "{x..} = UNIV \ x = bot" by (auto simp: set_eq_iff intro: le_bot) lemma (in order_top) atMost_eq_UNIV_iff: "{..x} = UNIV \ x = top" by (auto simp: set_eq_iff intro: top_le) lemma (in bounded_lattice) atLeastAtMost_eq_UNIV_iff: "{x..y} = UNIV \ (x = bot \ y = top)" by (auto simp: set_eq_iff intro: top_le le_bot) lemma Iio_eq_empty_iff: "{..< n::'a::{linorder, order_bot}} = {} \ n = bot" by (auto simp: set_eq_iff not_less le_bot) lemma lessThan_empty_iff: "{..< n::nat} = {} \ n = 0" by (simp add: Iio_eq_empty_iff bot_nat_def) lemma mono_image_least: assumes f_mono: "mono f" and f_img: "f ` {m ..< n} = {m' ..< n'}" "m < n" shows "f m = m'" proof - from f_img have "{m' ..< n'} \ {}" by (metis atLeastLessThan_empty_iff image_is_empty) with f_img have "m' \ f ` {m ..< n}" by auto then obtain k where "f k = m'" "m \ k" by auto moreover have "m' \ f m" using f_img by auto ultimately show "f m = m'" using f_mono by (auto elim: monoE[where x=m and y=k]) qed subsection \Infinite intervals\ context dense_linorder begin lemma infinite_Ioo: assumes "a < b" shows "\ finite {a<.. {}" using \a < b\ by auto ultimately have "a < Max {a <..< b}" "Max {a <..< b} < b" using Max_in[of "{a <..< b}"] by auto then obtain x where "Max {a <..< b} < x" "x < b" using dense[of "Max {a<.. {a <..< b}" using \a < Max {a <..< b}\ by auto then have "x \ Max {a <..< b}" using fin by auto with \Max {a <..< b} < x\ show False by auto qed lemma infinite_Icc: "a < b \ \ finite {a .. 
b}" using greaterThanLessThan_subseteq_atLeastAtMost_iff[of a b a b] infinite_Ioo[of a b] by (auto dest: finite_subset) lemma infinite_Ico: "a < b \ \ finite {a ..< b}" using greaterThanLessThan_subseteq_atLeastLessThan_iff[of a b a b] infinite_Ioo[of a b] by (auto dest: finite_subset) lemma infinite_Ioc: "a < b \ \ finite {a <.. b}" using greaterThanLessThan_subseteq_greaterThanAtMost_iff[of a b a b] infinite_Ioo[of a b] by (auto dest: finite_subset) lemma infinite_Ioo_iff [simp]: "infinite {a<.. a < b" using not_less_iff_gr_or_eq by (fastforce simp: infinite_Ioo) lemma infinite_Icc_iff [simp]: "infinite {a .. b} \ a < b" using not_less_iff_gr_or_eq by (fastforce simp: infinite_Icc) lemma infinite_Ico_iff [simp]: "infinite {a.. a < b" using not_less_iff_gr_or_eq by (fastforce simp: infinite_Ico) lemma infinite_Ioc_iff [simp]: "infinite {a<..b} \ a < b" using not_less_iff_gr_or_eq by (fastforce simp: infinite_Ioc) end lemma infinite_Iio: "\ finite {..< a :: 'a :: {no_bot, linorder}}" proof assume "finite {..< a}" then have *: "\x. x < a \ Min {..< a} \ x" by auto obtain x where "x < a" using lt_ex by auto obtain y where "y < Min {..< a}" using lt_ex by auto also have "Min {..< a} \ x" using \x < a\ by fact also note \x < a\ finally have "Min {..< a} \ y" by fact with \y < Min {..< a}\ show False by auto qed lemma infinite_Iic: "\ finite {.. a :: 'a :: {no_bot, linorder}}" using infinite_Iio[of a] finite_subset[of "{..< a}" "{.. a}"] by (auto simp: subset_eq less_imp_le) lemma infinite_Ioi: "\ finite {a :: 'a :: {no_top, linorder} <..}" proof assume "finite {a <..}" then have *: "\x. 
a < x \ x \ Max {a <..}" by auto obtain y where "Max {a <..} < y" using gt_ex by auto obtain x where x: "a < x" using gt_ex by auto also from x have "x \ Max {a <..}" by fact also note \Max {a <..} < y\ finally have "y \ Max { a <..}" by fact with \Max {a <..} < y\ show False by auto qed lemma infinite_Ici: "\ finite {a :: 'a :: {no_top, linorder} ..}" using infinite_Ioi[of a] finite_subset[of "{a <..}" "{a ..}"] by (auto simp: subset_eq less_imp_le) subsubsection \Intersection\ context linorder begin lemma Int_atLeastAtMost[simp]: "{a..b} Int {c..d} = {max a c .. min b d}" by auto lemma Int_atLeastAtMostR1[simp]: "{..b} Int {c..d} = {c .. min b d}" by auto lemma Int_atLeastAtMostR2[simp]: "{a..} Int {c..d} = {max a c .. d}" by auto lemma Int_atLeastAtMostL1[simp]: "{a..b} Int {..d} = {a .. min b d}" by auto lemma Int_atLeastAtMostL2[simp]: "{a..b} Int {c..} = {max a c .. b}" by auto lemma Int_atLeastLessThan[simp]: "{a.. {..b} = {.. min a b}" by (auto simp: min_def) lemma Ioc_disjoint: "{a<..b} \ {c<..d} = {} \ b \ a \ d \ c \ b \ c \ d \ a" by auto end context complete_lattice begin lemma shows Sup_atLeast[simp]: "Sup {x ..} = top" and Sup_greaterThanAtLeast[simp]: "x < top \ Sup {x <..} = top" and Sup_atMost[simp]: "Sup {.. y} = y" and Sup_atLeastAtMost[simp]: "x \ y \ Sup { x .. y} = y" and Sup_greaterThanAtMost[simp]: "x < y \ Sup { x <.. y} = y" by (auto intro!: Sup_eqI) lemma shows Inf_atMost[simp]: "Inf {.. x} = bot" and Inf_atMostLessThan[simp]: "top < x \ Inf {..< x} = bot" and Inf_atLeast[simp]: "Inf {x ..} = x" and Inf_atLeastAtMost[simp]: "x \ y \ Inf { x .. 
y} = x" and Inf_atLeastLessThan[simp]: "x < y \ Inf { x ..< y} = x" by (auto intro!: Inf_eqI) end lemma fixes x y :: "'a :: {complete_lattice, dense_linorder}" shows Sup_lessThan[simp]: "Sup {..< y} = y" and Sup_atLeastLessThan[simp]: "x < y \ Sup { x ..< y} = y" and Sup_greaterThanLessThan[simp]: "x < y \ Sup { x <..< y} = y" and Inf_greaterThan[simp]: "Inf {x <..} = x" and Inf_greaterThanAtMost[simp]: "x < y \ Inf { x <.. y} = x" and Inf_greaterThanLessThan[simp]: "x < y \ Inf { x <..< y} = x" by (auto intro!: Inf_eqI Sup_eqI intro: dense_le dense_le_bounded dense_ge dense_ge_bounded) subsection \Intervals of natural numbers\ subsubsection \The Constant \<^term>\lessThan\\ lemma lessThan_0 [simp]: "lessThan (0::nat) = {}" by (simp add: lessThan_def) lemma lessThan_Suc: "lessThan (Suc k) = insert k (lessThan k)" by (simp add: lessThan_def less_Suc_eq, blast) text \The following proof is convenient in induction proofs where new elements get indices at the beginning. So it is used to transform \<^term>\{.. to \<^term>\0::nat\ and \<^term>\{..< n}\.\ lemma zero_notin_Suc_image [simp]: "0 \ Suc ` A" by auto lemma lessThan_Suc_eq_insert_0: "{..m::nat. lessThan m) = UNIV" by blast subsubsection \The Constant \<^term>\greaterThan\\ lemma greaterThan_0: "greaterThan 0 = range Suc" unfolding greaterThan_def by (blast dest: gr0_conv_Suc [THEN iffD1]) lemma greaterThan_Suc: "greaterThan (Suc k) = greaterThan k - {Suc k}" unfolding greaterThan_def by (auto elim: linorder_neqE) lemma INT_greaterThan_UNIV: "(\m::nat. greaterThan m) = {}" by blast subsubsection \The Constant \<^term>\atLeast\\ lemma atLeast_0 [simp]: "atLeast (0::nat) = UNIV" by (unfold atLeast_def UNIV_def, simp) lemma atLeast_Suc: "atLeast (Suc k) = atLeast k - {k}" unfolding atLeast_def by (auto simp: order_le_less Suc_le_eq) lemma atLeast_Suc_greaterThan: "atLeast (Suc k) = greaterThan k" by (auto simp add: greaterThan_def atLeast_def less_Suc_eq_le) lemma UN_atLeast_UNIV: "(\m::nat. 
atLeast m) = UNIV" by blast subsubsection \The Constant \<^term>\atMost\\ lemma atMost_0 [simp]: "atMost (0::nat) = {0}" by (simp add: atMost_def) lemma atMost_Suc: "atMost (Suc k) = insert (Suc k) (atMost k)" unfolding atMost_def by (auto simp add: less_Suc_eq order_le_less) lemma UN_atMost_UNIV: "(\m::nat. atMost m) = UNIV" by blast subsubsection \The Constant \<^term>\atLeastLessThan\\ text\The orientation of the following two rules is tricky. The lhs is defined in terms of the rhs. Hence the chosen orientation makes sense in this theory --- the reverse orientation complicates proofs (e.g. nontermination). But outside, where the definition of the lhs is rarely used, the opposite orientation seems preferable because it reduces a specific concept to a more general one.\ lemma atLeast0LessThan [code_abbrev]: "{0::nat..The Constant \<^term>\atLeastAtMost\\ lemma Icc_eq_insert_lb_nat: "m \ n \ {m..n} = insert m {Suc m..n}" by auto lemma atLeast0_atMost_Suc: "{0..Suc n} = insert (Suc n) {0..n}" by (simp add: atLeast0AtMost atMost_Suc) lemma atLeast0_atMost_Suc_eq_insert_0: "{0..Suc n} = insert 0 (Suc ` {0..n})" by (simp add: atLeast0AtMost atMost_Suc_eq_insert_0) subsubsection \Intervals of nats with \<^term>\Suc\\ text\Not a simprule because the RHS is too messy.\ lemma atLeastLessThanSuc: "{m.. n then insert n {m.. Suc n \ {m..Suc n} = insert (Suc n) {m..n}" by auto lemma atLeastAtMost_insertL: "m \ n \ insert m {Suc m..n} = {m ..n}" by auto text \The analogous result is useful on \<^typ>\int\:\ (* here, because we don't have a dedicated int section *) lemma atLeastAtMostPlus1_int_conv: "m \ 1+n \ {m..1+n} = insert (1+n) {m..n::int}" by (auto intro: set_eqI) lemma atLeastLessThan_add_Un: "i \ j \ {i..
{j..Intervals and numerals\ lemma lessThan_nat_numeral: \ \Evaluation for specific numerals\ "lessThan (numeral k :: nat) = insert (pred_numeral k) (lessThan (pred_numeral k))" by (simp add: numeral_eq_Suc lessThan_Suc) lemma atMost_nat_numeral: \ \Evaluation for specific numerals\ "atMost (numeral k :: nat) = insert (numeral k) (atMost (pred_numeral k))" by (simp add: numeral_eq_Suc atMost_Suc) lemma atLeastLessThan_nat_numeral: \ \Evaluation for specific numerals\ "atLeastLessThan m (numeral k :: nat) = (if m \ (pred_numeral k) then insert (pred_numeral k) (atLeastLessThan m (pred_numeral k)) else {})" by (simp add: numeral_eq_Suc atLeastLessThanSuc) subsubsection \Image\ context linordered_semidom begin lemma image_add_atLeast[simp]: "plus k ` {i..} = {k + i..}" proof - have "n = k + (n - k)" if "i + k \ n" for n proof - have "n = (n - (k + i)) + (k + i)" using that by (metis add_commute le_add_diff_inverse) then show "n = k + (n - k)" by (metis local.add_diff_cancel_left' add_assoc add_commute) qed then show ?thesis by (fastforce simp: add_le_imp_le_diff add.commute) qed lemma image_add_atLeastAtMost [simp]: "plus k ` {i..j} = {i + k..j + k}" (is "?A = ?B") proof show "?A \ ?B" by (auto simp add: ac_simps) next show "?B \ ?A" proof fix n assume "n \ ?B" then have "i \ n - k" by (simp add: add_le_imp_le_diff) have "n = n - k + k" proof - from \n \ ?B\ have "n = n - (i + k) + (i + k)" by simp also have "\ = n - k - i + i + k" by (simp add: algebra_simps) also have "\ = n - k + k" using \i \ n - k\ by simp finally show ?thesis . qed moreover have "n - k \ {i..j}" using \n \ ?B\ by (auto simp: add_le_imp_le_diff add_le_add_imp_diff_le) ultimately show "n \ ?A" by (simp add: ac_simps) qed qed lemma image_add_atLeastAtMost' [simp]: "(\n. n + k) ` {i..j} = {i + k..j + k}" by (simp add: add.commute [of _ k]) lemma image_add_atLeastLessThan [simp]: "plus k ` {i..n. n + k) ` {i.. 
uminus ` {x<..}" by (rule imageI) (simp add: *) thus "y \ uminus ` {x<..}" by simp next fix y assume "y \ -x" have "- (-y) \ uminus ` {x..}" by (rule imageI) (insert \y \ -x\[THEN le_imp_neg_le], simp) thus "y \ uminus ` {x..}" by simp qed simp_all lemma fixes x :: 'a shows image_uminus_lessThan[simp]: "uminus ` {.. = {c - b<..c - a}" by simp finally show ?thesis by simp qed lemma image_minus_const_greaterThanAtMost[simp]: fixes a b c::"'a::linordered_idom" shows "(-) c ` {a<..b} = {c - b.. = {c - b.. = {..c - a}" by simp finally show ?thesis by simp qed lemma image_minus_const_AtMost[simp]: fixes b c::"'a::linordered_idom" shows "(-) c ` {..b} = {c - b..}" proof - have "(-) c ` {..b} = (+) c ` uminus ` {..b}" unfolding image_image by simp also have "\ = {c - b..}" by simp finally show ?thesis by simp qed lemma image_minus_const_atLeastAtMost' [simp]: "(\t. t-d)`{a..b} = {a-d..b-d}" for d::"'a::linordered_idom" by (metis (no_types, lifting) diff_conv_add_uminus image_add_atLeastAtMost' image_cong) context linordered_field begin lemma image_mult_atLeastAtMost [simp]: "((*) d ` {a..b}) = {d*a..d*b}" if "d>0" using that by (auto simp: field_simps mult_le_cancel_right intro: rev_image_eqI [where x="x/d" for x]) lemma image_divide_atLeastAtMost [simp]: "((\c. c / d) ` {a..b}) = {a/d..b/d}" if "d>0" proof - from that have "inverse d > 0" by simp with image_mult_atLeastAtMost [of "inverse d" a b] have "(*) (inverse d) ` {a..b} = {inverse d * a..inverse d * b}" by blast moreover have "(*) (inverse d) = (\c. c / d)" by (simp add: fun_eq_iff field_simps) ultimately show ?thesis by simp qed lemma image_mult_atLeastAtMost_if: "(*) c ` {x .. y} = (if c > 0 then {c * x .. c * y} else if x \ y then {c * y .. 
c * x} else {})" proof (cases "c = 0 \ x > y") case True then show ?thesis by auto next case False then have "x \ y" by auto from False consider "c < 0"| "c > 0" by (auto simp add: neq_iff) then show ?thesis proof cases case 1 have "(*) c ` {x..y} = {c * y..c * x}" proof (rule set_eqI) fix d from 1 have "inj (\z. z / c)" by (auto intro: injI) then have "d \ (*) c ` {x..y} \ d / c \ (\z. z div c) ` (*) c ` {x..y}" by (subst inj_image_mem_iff) simp_all also have "\ \ d / c \ {x..y}" using 1 by (simp add: image_image) also have "\ \ d \ {c * y..c * x}" by (auto simp add: field_simps 1) finally show "d \ (*) c ` {x..y} \ d \ {c * y..c * x}" . qed with \x \ y\ show ?thesis by auto qed (simp add: mult_left_mono_neg) qed lemma image_mult_atLeastAtMost_if': "(\x. x * c) ` {x..y} = (if x \ y then if c > 0 then {x * c .. y * c} else {y * c .. x * c} else {})" using image_mult_atLeastAtMost_if [of c x y] by (auto simp add: ac_simps) lemma image_affinity_atLeastAtMost: "((\x. m * x + c) ` {a..b}) = (if {a..b} = {} then {} else if 0 \ m then {m * a + c .. m * b + c} else {m * b + c .. m * a + c})" proof - have *: "(\x. m * x + c) = ((\x. x + c) \ (*) m)" by (simp add: fun_eq_iff) show ?thesis by (simp only: * image_comp [symmetric] image_mult_atLeastAtMost_if) (auto simp add: mult_le_cancel_left) qed lemma image_affinity_atLeastAtMost_diff: "((\x. m*x - c) ` {a..b}) = (if {a..b}={} then {} else if 0 \ m then {m*a - c .. m*b - c} else {m*b - c .. m*a - c})" using image_affinity_atLeastAtMost [of m "-c" a b] by simp lemma image_affinity_atLeastAtMost_div: "((\x. x/m + c) ` {a..b}) = (if {a..b}={} then {} else if 0 \ m then {a/m + c .. b/m + c} else {b/m + c .. a/m + c})" using image_affinity_atLeastAtMost [of "inverse m" c a b] by (simp add: field_class.field_divide_inverse algebra_simps inverse_eq_divide) lemma image_affinity_atLeastAtMost_div_diff: "((\x. x/m - c) ` {a..b}) = (if {a..b}={} then {} else if 0 \ m then {a/m - c .. b/m - c} else {b/m - c .. 
a/m - c})" using image_affinity_atLeastAtMost_diff [of "inverse m" c a b] by (simp add: field_class.field_divide_inverse algebra_simps inverse_eq_divide) end lemma atLeast1_lessThan_eq_remove0: "{Suc 0..x. x + (l::int)) ` {0..i. i - c) ` {x ..< y} = (if c < y then {x - c ..< y - c} else if x < y then {0} else {})" (is "_ = ?right") proof safe fix a assume a: "a \ ?right" show "a \ (\i. i - c) ` {x ..< y}" proof cases assume "c < y" with a show ?thesis by (auto intro!: image_eqI[of _ _ "a + c"]) next assume "\ c < y" with a show ?thesis by (auto intro!: image_eqI[of _ _ x] split: if_split_asm) qed qed auto lemma image_int_atLeastLessThan: "int ` {a..Finiteness\ lemma finite_lessThan [iff]: fixes k :: nat shows "finite {..A bounded set of natural numbers is finite.\ lemma bounded_nat_set_is_finite: "(\i\N. i < (n::nat)) \ finite N" by (rule finite_subset [OF _ finite_lessThan]) auto text \A set of natural numbers is finite iff it is bounded.\ lemma finite_nat_set_iff_bounded: "finite(N::nat set) = (\m. \n\N. n?F\, simplified less_Suc_eq_le[symmetric]] by blast next assume ?B show ?F using \?B\ by(blast intro:bounded_nat_set_is_finite) qed lemma finite_nat_set_iff_bounded_le: "finite(N::nat set) = (\m. \n\N. n\m)" unfolding finite_nat_set_iff_bounded by (blast dest:less_imp_le_nat le_imp_less_Suc) lemma finite_less_ub: "!!f::nat=>nat. (!!n. n \ f n) ==> finite {n. f n \ u}" by (rule_tac B="{..u}" in finite_subset, auto intro: order_trans) lemma bounded_Max_nat: fixes P :: "nat \ bool" assumes x: "P x" and M: "\x. P x \ x \ M" obtains m where "P m" "\x. P x \ x \ m" proof - have "finite {x. P x}" using M finite_nat_set_iff_bounded_le by auto then have "Max {x. P x} \ {x. P x}" using Max_in x by auto then show ?thesis by (simp add: \finite {x. P x}\ that) qed text\Any subset of an interval of natural numbers that has the same size as the interval is the interval itself.\ lemma subset_card_intvl_is_intvl: assumes "A \ {k..
A" by auto with insert have "A \ {k..Proving Inclusions and Equalities between Unions\ lemma UN_le_eq_Un0: "(\i\n::nat. M i) = (\i\{1..n}. M i) \ M 0" (is "?A = ?B") proof show "?A \ ?B" proof fix x assume "x \ ?A" then obtain i where i: "i\n" "x \ M i" by auto show "x \ ?B" proof(cases i) case 0 with i show ?thesis by simp next case (Suc j) with i show ?thesis by auto qed qed next show "?B \ ?A" by fastforce qed lemma UN_le_add_shift: "(\i\n::nat. M(i+k)) = (\i\{k..n+k}. M i)" (is "?A = ?B") proof show "?A \ ?B" by fastforce next show "?B \ ?A" proof fix x assume "x \ ?B" then obtain i where i: "i \ {k..n+k}" "x \ M(i)" by auto hence "i-k\n \ x \ M((i-k)+k)" by auto thus "x \ ?A" by blast qed qed lemma UN_le_add_shift_strict: "(\ii\{k.. ?A" proof fix x assume "x \ ?B" then obtain i where i: "i \ {k.. M(i)" by auto then have "i - k < n \ x \ M((i-k) + k)" by auto then show "x \ ?A" using UN_le_add_shift by blast qed qed (fastforce) lemma UN_UN_finite_eq: "(\n::nat. \i\{0..n. A n)" by (auto simp add: atLeast0LessThan) lemma UN_finite_subset: "(\n::nat. (\i\{0.. C) \ (\n. A n) \ C" by (subst UN_UN_finite_eq [symmetric]) blast lemma UN_finite2_subset: assumes "\n::nat. (\i\{0.. (\i\{0..n. A n) \ (\n. B n)" proof (rule UN_finite_subset, rule) fix n and a from assms have "(\i\{0.. (\i\{0.. (\i\{0.. (\i\{0.. (\i. B i)" by (auto simp add: UN_UN_finite_eq) qed lemma UN_finite2_eq: "(\n::nat. (\i\{0..i\{0.. (\n. A n) = (\n. B n)" apply (rule subset_antisym [OF UN_finite_subset UN_finite2_subset]) apply auto apply (force simp add: atLeastLessThan_add_Un [of 0])+ done subsubsection \Cardinality\ lemma card_lessThan [simp]: "card {..x. x + l) ` {.. {0.. {0..n}" shows "finite N" using assms finite_atLeastAtMost by (rule finite_subset) lemma ex_bij_betw_nat_finite: "finite M \ \h. bij_betw h {0.. \h. bij_betw h M {0.. finite B \ card A = card B \ \h. 
bij_betw h A B" apply(drule ex_bij_betw_finite_nat) apply(drule ex_bij_betw_nat_finite) apply(auto intro!:bij_betw_trans) done lemma ex_bij_betw_nat_finite_1: "finite M \ \h. bij_betw h {1 .. card M} M" by (rule finite_same_card_bij) auto lemma bij_betw_iff_card: assumes "finite A" "finite B" shows "(\f. bij_betw f A B) \ (card A = card B)" proof assume "card A = card B" moreover obtain f where "bij_betw f A {0 ..< card A}" using assms ex_bij_betw_finite_nat by blast moreover obtain g where "bij_betw g {0 ..< card B} B" using assms ex_bij_betw_nat_finite by blast ultimately have "bij_betw (g \ f) A B" by (auto simp: bij_betw_trans) thus "(\f. bij_betw f A B)" by blast qed (auto simp: bij_betw_same_card) lemma subset_eq_atLeast0_lessThan_card: fixes n :: nat assumes "N \ {0.. n" proof - from assms finite_lessThan have "card N \ card {0..Relational version of @{thm [source] card_inj_on_le}:\ lemma card_le_if_inj_on_rel: assumes "finite B" "\a. a \ A \ \b. b\B \ r a b" "\a1 a2 b. \ a1 \ A; a2 \ A; b \ B; r a1 b; r a2 b \ \ a1 = a2" shows "card A \ card B" proof - let ?P = "\a b. b \ B \ r a b" let ?f = "\a. SOME b. ?P a b" have 1: "?f ` A \ B" by (auto intro: someI2_ex[OF assms(2)]) have "inj_on ?f A" proof (auto simp: inj_on_def) fix a1 a2 assume asms: "a1 \ A" "a2 \ A" "?f a1 = ?f a2" have 0: "?f a1 \ B" using "1" \a1 \ A\ by blast have 1: "r a1 (?f a1)" using someI_ex[OF assms(2)[OF \a1 \ A\]] by blast have 2: "r a2 (?f a1)" using someI_ex[OF assms(2)[OF \a2 \ A\]] asms(3) by auto show "a1 = a2" using assms(3)[OF asms(1,2) 0 1 2] . qed with 1 show ?thesis using card_inj_on_le[of ?f A B] assms(1) by simp qed lemma inj_on_funpow_least: \<^marker>\contributor \Lars Noschinski\\ \inj_on (\k. (f ^^ k) s) {0.. if \(f ^^ n) s = s\ \\m. 
0 < m \ m < n \ (f ^^ m) s \ s\ proof - { fix k l assume A: "k < n" "l < n" "k \ l" "(f ^^ k) s = (f ^^ l) s" define k' l' where "k' = min k l" and "l' = max k l" with A have A': "k' < l'" "(f ^^ k') s = (f ^^ l') s" "l' < n" by (auto simp: min_def max_def) have "s = (f ^^ ((n - l') + l')) s" using that \l' < n\ by simp also have "\ = (f ^^ (n - l')) ((f ^^ l') s)" by (simp add: funpow_add) also have "(f ^^ l') s = (f ^^ k') s" by (simp add: A') also have "(f ^^ (n - l')) \ = (f ^^ (n - l' + k')) s" by (simp add: funpow_add) finally have "(f ^^ (n - l' + k')) s = s" by simp moreover have "n - l' + k' < n" "0 < n - l' + k'"using A' by linarith+ ultimately have False using that(2) by auto } then show ?thesis by (intro inj_onI) auto qed subsection \Intervals of integers\ lemma atLeastLessThanPlusOne_atLeastAtMost_int: "{l..Finiteness\ lemma image_atLeastZeroLessThan_int: "0 \ u ==> {(0::int).. u") case True then show ?thesis by (auto simp: image_atLeastZeroLessThan_int) qed auto lemma finite_atLeastLessThan_int [iff]: "finite {l..Cardinality\ lemma card_atLeastZeroLessThan_int: "card {(0::int).. u") case True then show ?thesis by (auto simp: image_atLeastZeroLessThan_int card_image inj_on_def) qed auto lemma card_atLeastLessThan_int [simp]: "card {l.. k < (i::nat)}" proof - have "{k. P k \ k < i} \ {.. M" shows "card {k \ M. k < Suc i} \ 0" proof - from zero_in_M have "{k \ M. k < Suc i} \ {}" by auto with finite_M_bounded_by_nat show ?thesis by (auto simp add: card_eq_0_iff) qed lemma card_less_Suc2: assumes "0 \ M" shows "card {k. Suc k \ M \ k < i} = card {k \ M. k < Suc i}" proof - have *: "\j \ M; j < Suc i\ \ j - Suc 0 < i \ Suc (j - Suc 0) \ M \ Suc 0 \ j" for j by (cases j) (use assms in auto) show ?thesis proof (rule card_bij_eq) show "inj_on Suc {k. Suc k \ M \ k < i}" by force show "inj_on (\x. x - Suc 0) {k \ M. k < Suc i}" by (rule inj_on_diff_nat) (use * in blast) qed (use * in auto) qed lemma card_less_Suc: assumes "0 \ M" shows "Suc (card {k. 
Suc k \ M \ k < i}) = card {k \ M. k < Suc i}" proof - have "Suc (card {k. Suc k \ M \ k < i}) = Suc (card {k. Suc k \ M - {0} \ k < i})" by simp also have "\ = Suc (card {k \ M - {0}. k < Suc i})" apply (subst card_less_Suc2) using assms by auto also have "\ = Suc (card ({k \ M. k < Suc i} - {0}))" by (force intro: arg_cong [where f=card]) also have "\ = card (insert 0 ({k \ M. k < Suc i} - {0}))" by (simp add: card.insert_remove) also have "... = card {k \ M. k < Suc i}" using assms by (force simp add: intro: arg_cong [where f=card]) finally show ?thesis. qed lemma card_le_Suc_Max: "finite S \ card S \ Suc (Max S)" proof (rule classical) assume "finite S" and "\ Suc (Max S) \ card S" then have "Suc (Max S) < card S" by simp with `finite S` have "S \ {0..Max S}" by auto hence "card S \ card {0..Max S}" by (intro card_mono; auto) thus "card S \ Suc (Max S)" by simp qed subsection \Lemmas useful with the summation operator sum\ text \For examples, see Algebra/poly/UnivPoly2.thy\ subsubsection \Disjoint Unions\ text \Singletons and open intervals\ lemma ivl_disj_un_singleton: "{l::'a::linorder} Un {l<..} = {l..}" "{.. {l} Un {l<.. {l<.. u ==> {l} Un {l<..u} = {l..u}" "(l::'a::linorder) \ u ==> {l..One- and two-sided intervals\ lemma ivl_disj_un_one: "(l::'a::linorder) < u ==> {..l} Un {l<.. u ==> {.. u ==> {..l} Un {l<..u} = {..u}" "(l::'a::linorder) \ u ==> {.. u ==> {l<..u} Un {u<..} = {l<..}" "(l::'a::linorder) < u ==> {l<.. u ==> {l..u} Un {u<..} = {l..}" "(l::'a::linorder) \ u ==> {l..Two- and two-sided intervals\ lemma ivl_disj_un_two: "[| (l::'a::linorder) < m; m \ u |] ==> {l<.. m; m < u |] ==> {l<..m} Un {m<.. m; m \ u |] ==> {l.. m; m < u |] ==> {l..m} Un {m<.. u |] ==> {l<.. m; m \ u |] ==> {l<..m} Un {m<..u} = {l<..u}" "[| (l::'a::linorder) \ m; m \ u |] ==> {l.. m; m \ u |] ==> {l..m} Un {m<..u} = {l..u}" by auto lemma ivl_disj_un_two_touch: "[| (l::'a::linorder) < m; m < u |] ==> {l<..m} Un {m.. m; m < u |] ==> {l..m} Un {m.. 
u |] ==> {l<..m} Un {m..u} = {l<..u}" "[| (l::'a::linorder) \ m; m \ u |] ==> {l..m} Un {m..u} = {l..u}" by auto lemmas ivl_disj_un = ivl_disj_un_singleton ivl_disj_un_one ivl_disj_un_two ivl_disj_un_two_touch subsubsection \Disjoint Intersections\ text \One- and two-sided intervals\ lemma ivl_disj_int_one: "{..l::'a::order} Int {l<..Two- and two-sided intervals\ lemma ivl_disj_int_two: "{l::'a::order<..Some Differences\ lemma ivl_diff[simp]: "i \ n \ {i..Some Subset Conditions\ lemma ivl_subset [simp]: "({i.. {m.. i \ m \ i \ j \ (n::'a::linorder))" using linorder_class.le_less_linear[of i n] apply (auto simp: linorder_not_le) apply (force intro: leI)+ done lemma obtain_subset_with_card_n: assumes "n \ card S" obtains T where "T \ S" "card T = n" "finite T" proof - obtain n' where "card S = n + n'" by (metis assms le_add_diff_inverse) with that show thesis proof (induct n' arbitrary: S) case 0 then show ?case by (cases "finite S") auto next case Suc then show ?case by (simp add: card_Suc_eq) (metis subset_insertI2) qed qed subsection \Generic big monoid operation over intervals\ context semiring_char_0 begin lemma inj_on_of_nat [simp]: "inj_on of_nat N" by rule simp lemma bij_betw_of_nat [simp]: "bij_betw of_nat N A \ of_nat ` N = A" by (simp add: bij_betw_def) end context comm_monoid_set begin lemma atLeastLessThan_reindex: "F g {h m.. h) {m.. h) {m..n}" if "bij_betw h {m..n} {h m..h n}" for m n ::nat proof - from that have "inj_on h {m..n}" and "h ` {m..n} = {h m..h n}" by (simp_all add: bij_betw_def) then show ?thesis using reindex [of h "{m..n}" g] by simp qed lemma atLeastLessThan_shift_bounds: "F g {m + k.. plus k) {m.. plus k) {m..n}" for m n k :: nat using atLeastAtMost_reindex [of "plus k" m n g] by (simp add: ac_simps) lemma atLeast_Suc_lessThan_Suc_shift: "F g {Suc m.. Suc) {m.. Suc) {m..n}" using atLeastAtMost_shift_bounds [of _ _ 1] by (simp add: plus_1_eq_Suc) lemma atLeast_int_lessThan_int_shift: "F g {int m.. int) {m.. 
int) {m..n}" by (rule atLeastAtMost_reindex) (simp add: image_int_atLeastAtMost) lemma atLeast0_lessThan_Suc: "F g {0..* g n" by (simp add: atLeast0_lessThan_Suc ac_simps) lemma atLeast0_atMost_Suc: "F g {0..Suc n} = F g {0..n} \<^bold>* g (Suc n)" by (simp add: atLeast0_atMost_Suc ac_simps) lemma atLeast0_lessThan_Suc_shift: "F g {0..* F (g \ Suc) {0..* F (g \ Suc) {0..n}" by (simp add: atLeast0_atMost_Suc_eq_insert_0 atLeast_Suc_atMost_Suc_shift) lemma atLeast_Suc_lessThan: "F g {m..* F g {Suc m..* F g {Suc m..n}" if "m \ n" proof - from that have "{m..n} = insert m {Suc m..n}" by auto then show ?thesis by simp qed lemma ivl_cong: "a = c \ b = d \ (\x. c \ x \ x < d \ g x = h x) \ F g {a.. plus m) {0.. n") simp_all lemma atLeastAtMost_shift_0: fixes m n p :: nat assumes "m \ n" shows "F g {m..n} = F (g \ plus m) {0..n - m}" using assms atLeastAtMost_shift_bounds [of g 0 m "n - m"] by simp lemma atLeastLessThan_concat: fixes m n p :: nat shows "m \ n \ n \ p \ F g {m..* F g {n..i. g (m + n - Suc i)) {n..i. g (m + n - i)) {n..m}" by (rule reindex_bij_witness [where i="\i. m + n - i" and j="\i. m + n - i"]) auto lemma atLeastLessThan_rev_at_least_Suc_atMost: "F g {n..i. 
g (m + n - i)) {Suc n..m}" unfolding atLeastLessThan_rev [of g n m] by (cases m) (simp_all add: atLeast_Suc_atMost_Suc_shift atLeastLessThanSuc_atLeastAtMost) end subsection \Summation indexed over intervals\ syntax (ASCII) "_from_to_sum" :: "idt \ 'a \ 'a \ 'b \ 'b" ("(SUM _ = _.._./ _)" [0,0,0,10] 10) "_from_upto_sum" :: "idt \ 'a \ 'a \ 'b \ 'b" ("(SUM _ = _..<_./ _)" [0,0,0,10] 10) "_upt_sum" :: "idt \ 'a \ 'b \ 'b" ("(SUM _<_./ _)" [0,0,10] 10) "_upto_sum" :: "idt \ 'a \ 'b \ 'b" ("(SUM _<=_./ _)" [0,0,10] 10) syntax (latex_sum output) "_from_to_sum" :: "idt \ 'a \ 'a \ 'b \ 'b" ("(3\<^latex>\$\\sum_{\_ = _\<^latex>\}^{\_\<^latex>\}$\ _)" [0,0,0,10] 10) "_from_upto_sum" :: "idt \ 'a \ 'a \ 'b \ 'b" ("(3\<^latex>\$\\sum_{\_ = _\<^latex>\}^{<\_\<^latex>\}$\ _)" [0,0,0,10] 10) "_upt_sum" :: "idt \ 'a \ 'b \ 'b" ("(3\<^latex>\$\\sum_{\_ < _\<^latex>\}$\ _)" [0,0,10] 10) "_upto_sum" :: "idt \ 'a \ 'b \ 'b" ("(3\<^latex>\$\\sum_{\_ \ _\<^latex>\}$\ _)" [0,0,10] 10) syntax "_from_to_sum" :: "idt \ 'a \ 'a \ 'b \ 'b" ("(3\_ = _.._./ _)" [0,0,0,10] 10) "_from_upto_sum" :: "idt \ 'a \ 'a \ 'b \ 'b" ("(3\_ = _..<_./ _)" [0,0,0,10] 10) "_upt_sum" :: "idt \ 'a \ 'b \ 'b" ("(3\_<_./ _)" [0,0,10] 10) "_upto_sum" :: "idt \ 'a \ 'b \ 'b" ("(3\_\_./ _)" [0,0,10] 10) translations "\x=a..b. t" == "CONST sum (\x. t) {a..b}" "\x=a..x. t) {a..i\n. t" == "CONST sum (\i. t) {..n}" "\ii. t) {..The above introduces some pretty alternative syntaxes for summation over intervals: \begin{center} \begin{tabular}{lll} Old & New & \LaTeX\\ @{term[source]"\x\{a..b}. e"} & \<^term>\\x=a..b. e\ & @{term[mode=latex_sum]"\x=a..b. e"}\\ @{term[source]"\x\{a..\\x=a.. & @{term[mode=latex_sum]"\x=a..x\{..b}. e"} & \<^term>\\x\b. e\ & @{term[mode=latex_sum]"\x\b. e"}\\ @{term[source]"\x\{..\\x & @{term[mode=latex_sum]"\xlatex_sum\ (e.g.\ via \mode = latex_sum\ in antiquotations). It is not the default \LaTeX\ output because it only works well with italic-style formulae, not tt-style. 
Note that for uniformity on \<^typ>\nat\ it is better to use \<^term>\\x::nat=0.. rather than \\x: \sum\ may not provide all lemmas available for \<^term>\{m.. also in the special form for \<^term>\{...\ text\This congruence rule should be used for sums over intervals, as the standard theorem @{text[source]sum.cong} does not work well with the simplifier, which adds the unsimplified premise \<^term>\x\B\ to the context.\ context comm_monoid_set begin lemma zero_middle: assumes "1 \ p" "k \ p" shows "F (\j. if j < k then g j else if j = k then \<^bold>1 else h (j - Suc 0)) {..p} = F (\j. if j < k then g j else h j) {..p - Suc 0}" (is "?lhs = ?rhs") proof - have [simp]: "{..p - Suc 0} \ {j. j < k} = {.. - {j. j < k} = {k..p - Suc 0}" using assms by auto have "?lhs = F g {..* F (\j. if j = k then \<^bold>1 else h (j - Suc 0)) {k..p}" using union_disjoint [of "{.. = F g {..* F (\j. h (j - Suc 0)) {Suc k..p}" by (simp add: atLeast_Suc_atMost [of k p] assms) also have "\ = F g {..* F h {k .. p - Suc 0}" using reindex [of Suc "{k..p - Suc 0}"] assms by simp also have "\ = ?rhs" by (simp add: If_cases) finally show ?thesis . qed lemma atMost_Suc [simp]: "F g {..Suc n} = F g {..n} \<^bold>* g (Suc n)" by (simp add: atMost_Suc ac_simps) lemma lessThan_Suc [simp]: "F g {..* g n" by (simp add: lessThan_Suc ac_simps) lemma cl_ivl_Suc [simp]: "F g {m..Suc n} = (if Suc n < m then \<^bold>1 else F g {m..n} \<^bold>* g(Suc n))" by (auto simp: ac_simps atLeastAtMostSuc_conv) lemma op_ivl_Suc [simp]: "F g {m..1 else F g {m..* g(n))" by (auto simp: ac_simps atLeastLessThanSuc) lemma head: fixes n :: nat assumes mn: "m \ n" shows "F g {m..n} = g m \<^bold>* F g {m<..n}" (is "?lhs = ?rhs") proof - from mn have "{m..n} = {m} \ {m<..n}" by (auto intro: ivl_disj_un_singleton) hence "?lhs = F g ({m} \ {m<..n})" by (simp add: atLeast0LessThan) also have "\ = ?rhs" by simp finally show ?thesis . 
qed lemma last_plus: fixes n::nat shows "m \ n \ F g {m..n} = g n \<^bold>* F g {m..1 else F g {m..* g(n))" by (simp add: commute last_plus) lemma ub_add_nat: assumes "(m::nat) \ n + 1" shows "F g {m..n + p} = F g {m..n} \<^bold>* F g {n + 1..n + p}" proof- have "{m .. n+p} = {m..n} \ {n+1..n+p}" using \m \ n+1\ by auto thus ?thesis by (auto simp: ivl_disj_int union_disjoint atLeastSucAtMost_greaterThanAtMost) qed lemma nat_group: fixes k::nat shows "F (\m. F g {m * k ..< m*k + k}) {.. 0" by auto then show ?thesis by (induct n) (simp_all add: atLeastLessThan_concat add.commute atLeast0LessThan[symmetric]) qed auto lemma triangle_reindex: fixes n :: nat shows "F (\(i,j). g i j) {(i,j). i+j < n} = F (\k. F (\i. g i (k - i)) {..k}) {..(i,j). g i j) {(i,j). i+j \ n} = F (\k. F (\i. g i (k - i)) {..k}) {..n}" using triangle_reindex [of g "Suc n"] by (simp only: Nat.less_Suc_eq_le lessThan_Suc_atMost) lemma nat_diff_reindex: "F (\i. g (n - Suc i)) {..i. g(i + k)){m..i. g(i + k)){m..n::nat}" by (rule reindex_bij_witness[where i="\i. i + k" and j="\i. i - k"]) auto corollary shift_bounds_cl_Suc_ivl: "F g {Suc m..Suc n} = F (\i. g(Suc i)){m..n}" by (simp add: shift_bounds_cl_nat_ivl[where k="Suc 0", simplified]) corollary Suc_reindex_ivl: "m \ n \ F g {m..n} \<^bold>* g (Suc n) = g m \<^bold>* F (\i. g (Suc i)) {m..n}" by (simp add: assoc atLeast_Suc_atMost flip: shift_bounds_cl_Suc_ivl) corollary shift_bounds_Suc_ivl: "F g {Suc m..i. g(Suc i)){m..* F (\i. g (Suc i)) {..n}" proof (induct n) case 0 show ?case by simp next case (Suc n) note IH = this have "F g {..Suc (Suc n)} = F g {..Suc n} \<^bold>* g (Suc (Suc n))" by (rule atMost_Suc) also have "F g {..Suc n} = g 0 \<^bold>* F (\i. g (Suc i)) {..n}" by (rule IH) also have "g 0 \<^bold>* F (\i. g (Suc i)) {..n} \<^bold>* g (Suc (Suc n)) = g 0 \<^bold>* (F (\i. g (Suc i)) {..n} \<^bold>* g (Suc (Suc n)))" by (rule assoc) also have "F (\i. g (Suc i)) {..n} \<^bold>* g (Suc (Suc n)) = F (\i. 
g (Suc i)) {..Suc n}" by (rule atMost_Suc [symmetric]) finally show ?case . qed lemma lessThan_Suc_shift: "F g {..* F (\i. g (Suc i)) {..* F (\i. g (Suc i)) {..i. F (\j. a i j) {0..j. F (\i. a i j) {Suc j..n}) {0..i. F (\j. a i j) {..j. F (\i. a i j) {Suc j..n}) {..k. g (Suc k)) {.. = F (\k. g (Suc k)) {.. b \ F g {a..* g b" by (simp add: atLeastLessThanSuc commute) lemma nat_ivl_Suc': assumes "m \ Suc n" shows "F g {m..Suc n} = g (Suc n) \<^bold>* F g {m..n}" proof - from assms have "{m..Suc n} = insert (Suc n) {m..n}" by auto also have "F g \ = g (Suc n) \<^bold>* F g {m..n}" by simp finally show ?thesis . qed lemma in_pairs: "F g {2*m..Suc(2*n)} = F (\i. g(2*i) \<^bold>* g(Suc(2*i))) {m..n}" proof (induction n) case 0 show ?case by (cases "m=0") auto next case (Suc n) then show ?case by (auto simp: assoc split: if_split_asm) qed lemma in_pairs_0: "F g {..Suc(2*n)} = F (\i. g(2*i) \<^bold>* g(Suc(2*i))) {..n}" using in_pairs [of _ 0 n] by (simp add: atLeast0AtMost) end lemma card_sum_le_nat_sum: "\ {0.. \ S" proof (cases "finite S") case True then show ?thesis proof (induction "card S" arbitrary: S) case (Suc x) then have "Max S \ x" using card_le_Suc_Max by fastforce let ?S' = "S - {Max S}" from Suc have "Max S \ S" by (auto intro: Max_in) hence cards: "card S = Suc (card ?S')" using `finite S` by (intro card.remove; auto) hence "\ {0.. \ ?S'" using Suc by (intro Suc; auto) hence "\ {0.. \ ?S' + Max S" using `Max S \ x` by simp also have "... = \ S" using sum.remove[OF `finite S` `Max S \ S`, where g="\x. x"] by simp finally show ?case using cards Suc by auto qed simp qed simp lemma sum_natinterval_diff: fixes f:: "nat \ ('a::ab_group_add)" shows "sum (\k. f k - f(k + 1)) {(m::nat) .. n} = (if m \ n then f m - f(n + 1) else 0)" by (induct n, auto simp add: algebra_simps not_le le_Suc_eq) lemma sum_diff_nat_ivl: fixes f :: "nat \ 'a::ab_group_add" shows "\ m \ n; n \ p \ \ sum f {m..x. 
Q x \ P x \ (\xxxk = 0..k = 0..k = Suc 0..k = Suc 0..k = 0..Shifting bounds\ context comm_monoid_add begin context fixes f :: "nat \ 'a" assumes "f 0 = 0" begin lemma sum_shift_lb_Suc0_0_upt: "sum f {Suc 0..f 0 = 0\ by simp qed lemma sum_shift_lb_Suc0_0: "sum f {Suc 0..k} = sum f {0..k}" proof (cases k) case 0 with \f 0 = 0\ show ?thesis by simp next case (Suc k) moreover have "{0..Suc k} = insert 0 {Suc 0..Suc k}" by auto ultimately show ?thesis using \f 0 = 0\ by simp qed end end lemma sum_Suc_diff: fixes f :: "nat \ 'a::ab_group_add" assumes "m \ Suc n" shows "(\i = m..n. f(Suc i) - f i) = f (Suc n) - f m" using assms by (induct n) (auto simp: le_Suc_eq) lemma sum_Suc_diff': fixes f :: "nat \ 'a::ab_group_add" assumes "m \ n" shows "(\i = m..Telescoping\ lemma sum_telescope: fixes f::"nat \ 'a::ab_group_add" shows "sum (\i. f i - f (Suc i)) {.. i} = f 0 - f (Suc i)" by (induct i) simp_all lemma sum_telescope'': assumes "m \ n" shows "(\k\{Suc m..n}. f k - f (k - 1)) = f n - (f m :: 'a :: ab_group_add)" by (rule dec_induct[OF assms]) (simp_all add: algebra_simps) lemma sum_lessThan_telescope: "(\nnThe formula for geometric sums\ lemma sum_power2: "(\i=0.. 1" shows "(\i 0" by simp_all moreover have "(\iy \ 0\) ultimately show ?thesis by simp qed lemma diff_power_eq_sum: fixes y :: "'a::{comm_ring,monoid_mult}" shows "x ^ (Suc n) - y ^ (Suc n) = (x - y) * (\pppp \\COMPLEX_POLYFUN\ in HOL Light\ fixes x :: "'a::{comm_ring,monoid_mult}" shows "x^n - y^n = (x - y) * (\iiiii\n. x^i) = 1 - x^Suc n" by (simp only: one_diff_power_eq lessThan_Suc_atMost) lemma sum_power_shift: fixes x :: "'a::{comm_ring,monoid_mult}" assumes "m \ n" shows "(\i=m..n. x^i) = x^m * (\i\n-m. x^i)" proof - have "(\i=m..n. x^i) = x^m * (\i=m..n. x^(i-m))" by (simp add: sum_distrib_left power_add [symmetric]) also have "(\i=m..n. x^(i-m)) = (\i\n-m. x^i)" using \m \ n\ by (intro sum.reindex_bij_witness[where j="\i. i - m" and i="\i. i + m"]) auto finally show ?thesis . 
qed lemma sum_gp_multiplied: fixes x :: "'a::{comm_ring,monoid_mult}" assumes "m \ n" shows "(1 - x) * (\i=m..n. x^i) = x^m - x^Suc n" proof - have "(1 - x) * (\i=m..n. x^i) = x^m * (1 - x) * (\i\n-m. x^i)" by (metis mult.assoc mult.commute assms sum_power_shift) also have "... =x^m * (1 - x^Suc(n-m))" by (metis mult.assoc sum_gp_basic) also have "... = x^m - x^Suc n" using assms by (simp add: algebra_simps) (metis le_add_diff_inverse power_add) finally show ?thesis . qed lemma sum_gp: fixes x :: "'a::{comm_ring,division_ring}" shows "(\i=m..n. x^i) = (if n < m then 0 else if x = 1 then of_nat((n + 1) - m) else (x^m - x^Suc n) / (1 - x))" using sum_gp_multiplied [of m n x] apply auto by (metis eq_iff_diff_eq_0 mult.commute nonzero_divide_eq_eq) subsubsection\Geometric progressions\ lemma sum_gp0: fixes x :: "'a::{comm_ring,division_ring}" shows "(\i\n. x^i) = (if x = 1 then of_nat(n + 1) else (1 - x^Suc n) / (1 - x))" using sum_gp_basic[of x n] by (simp add: mult.commute field_split_simps) lemma sum_power_add: fixes x :: "'a::{comm_ring,monoid_mult}" shows "(\i\I. x^(m+i)) = x^m * (\i\I. x^i)" by (simp add: sum_distrib_left power_add) lemma sum_gp_offset: fixes x :: "'a::{comm_ring,division_ring}" shows "(\i=m..m+n. x^i) = (if x = 1 then of_nat n + 1 else x^m * (1 - x^Suc n) / (1 - x))" using sum_gp [of x m "m+n"] by (auto simp: power_add algebra_simps) lemma sum_gp_strict: fixes x :: "'a::{comm_ring,division_ring}" shows "(\iThe formulae for arithmetic sums\ context comm_semiring_1 begin lemma double_gauss_sum: "2 * (\i = 0..n. of_nat i) = of_nat n * (of_nat n + 1)" by (induct n) (simp_all add: sum.atLeast0_atMost_Suc algebra_simps left_add_twice) lemma double_gauss_sum_from_Suc_0: "2 * (\i = Suc 0..n. 
of_nat i) = of_nat n * (of_nat n + 1)" proof - have "sum of_nat {Suc 0..n} = sum of_nat (insert 0 {Suc 0..n})" by simp also have "\ = sum of_nat {0..n}" by (cases n) (simp_all add: atLeast0_atMost_Suc_eq_insert_0) finally show ?thesis by (simp add: double_gauss_sum) qed lemma double_arith_series: "2 * (\i = 0..n. a + of_nat i * d) = (of_nat n + 1) * (2 * a + of_nat n * d)" proof - have "(\i = 0..n. a + of_nat i * d) = ((\i = 0..n. a) + (\i = 0..n. of_nat i * d))" by (rule sum.distrib) also have "\ = (of_nat (Suc n) * a + d * (\i = 0..n. of_nat i))" by (simp add: sum_distrib_left algebra_simps) finally show ?thesis by (simp add: algebra_simps double_gauss_sum left_add_twice) qed end context unique_euclidean_semiring_with_nat begin lemma gauss_sum: "(\i = 0..n. of_nat i) = of_nat n * (of_nat n + 1) div 2" using double_gauss_sum [of n, symmetric] by simp lemma gauss_sum_from_Suc_0: "(\i = Suc 0..n. of_nat i) = of_nat n * (of_nat n + 1) div 2" using double_gauss_sum_from_Suc_0 [of n, symmetric] by simp lemma arith_series: "(\i = 0..n. a + of_nat i * d) = (of_nat n + 1) * (2 * a + of_nat n * d) div 2" using double_arith_series [of a d n, symmetric] by simp end lemma gauss_sum_nat: "\{0..n} = (n * Suc n) div 2" using gauss_sum [of n, where ?'a = nat] by simp lemma arith_series_nat: "(\i = 0..n. 
a + i * d) = Suc n * (2 * a + n * d) div 2" using arith_series [of a d n] by simp lemma Sum_Icc_int: "\{m..n} = (n * (n + 1) - m * (m - 1)) div 2" if "m \ n" for m n :: int using that proof (induct i \ "nat (n - m)" arbitrary: m n) case 0 then have "m = n" by arith then show ?case by (simp add: algebra_simps mult_2 [symmetric]) next case (Suc i) have 0: "i = nat((n-1) - m)" "m \ n-1" using Suc(2,3) by arith+ have "\ {m..n} = \ {m..1+(n-1)}" by simp also have "\ = \ {m..n-1} + n" using \m \ n\ by(subst atLeastAtMostPlus1_int_conv) simp_all also have "\ = ((n-1)*(n-1+1) - m*(m-1)) div 2 + n" by(simp add: Suc(1)[OF 0]) also have "\ = ((n-1)*(n-1+1) - m*(m-1) + 2*n) div 2" by simp also have "\ = (n*(n+1) - m*(m-1)) div 2" by (simp add: algebra_simps mult_2_right) finally show ?case . qed lemma Sum_Icc_nat: "\{m..n} = (n * (n + 1) - m * (m - 1)) div 2" for m n :: nat proof (cases "m \ n") case True then have *: "m * (m - 1) \ n * (n + 1)" by (meson diff_le_self order_trans le_add1 mult_le_mono) have "int (\{m..n}) = (\{int m..int n})" by (simp add: sum.atLeast_int_atMost_int_shift) also have "\ = (int n * (int n + 1) - int m * (int m - 1)) div 2" using \m \ n\ by (simp add: Sum_Icc_int) also have "\ = int ((n * (n + 1) - m * (m - 1)) div 2)" using le_square * by (simp add: algebra_simps of_nat_div of_nat_diff) finally show ?thesis by (simp only: of_nat_eq_iff) next case False then show ?thesis by (auto dest: less_imp_Suc_add simp add: not_le algebra_simps) qed lemma Sum_Ico_nat: "\{m..Division remainder\ lemma range_mod: fixes n :: nat assumes "n > 0" shows "range (\m. m mod n) = {0.. 
?A \ m \ ?B" proof assume "m \ ?A" with assms show "m \ ?B" by auto next assume "m \ ?B" moreover have "m mod n \ ?A" by (rule rangeI) ultimately show "m \ ?A" by simp qed qed subsection \Products indexed over intervals\ syntax (ASCII) "_from_to_prod" :: "idt \ 'a \ 'a \ 'b \ 'b" ("(PROD _ = _.._./ _)" [0,0,0,10] 10) "_from_upto_prod" :: "idt \ 'a \ 'a \ 'b \ 'b" ("(PROD _ = _..<_./ _)" [0,0,0,10] 10) "_upt_prod" :: "idt \ 'a \ 'b \ 'b" ("(PROD _<_./ _)" [0,0,10] 10) "_upto_prod" :: "idt \ 'a \ 'b \ 'b" ("(PROD _<=_./ _)" [0,0,10] 10) syntax (latex_prod output) "_from_to_prod" :: "idt \ 'a \ 'a \ 'b \ 'b" ("(3\<^latex>\$\\prod_{\_ = _\<^latex>\}^{\_\<^latex>\}$\ _)" [0,0,0,10] 10) "_from_upto_prod" :: "idt \ 'a \ 'a \ 'b \ 'b" ("(3\<^latex>\$\\prod_{\_ = _\<^latex>\}^{<\_\<^latex>\}$\ _)" [0,0,0,10] 10) "_upt_prod" :: "idt \ 'a \ 'b \ 'b" ("(3\<^latex>\$\\prod_{\_ < _\<^latex>\}$\ _)" [0,0,10] 10) "_upto_prod" :: "idt \ 'a \ 'b \ 'b" ("(3\<^latex>\$\\prod_{\_ \ _\<^latex>\}$\ _)" [0,0,10] 10) syntax "_from_to_prod" :: "idt \ 'a \ 'a \ 'b \ 'b" ("(3\_ = _.._./ _)" [0,0,0,10] 10) "_from_upto_prod" :: "idt \ 'a \ 'a \ 'b \ 'b" ("(3\_ = _..<_./ _)" [0,0,0,10] 10) "_upt_prod" :: "idt \ 'a \ 'b \ 'b" ("(3\_<_./ _)" [0,0,10] 10) "_upto_prod" :: "idt \ 'a \ 'b \ 'b" ("(3\_\_./ _)" [0,0,10] 10) translations "\x=a..b. t" \ "CONST prod (\x. t) {a..b}" "\x=a.. "CONST prod (\x. t) {a..i\n. t" \ "CONST prod (\i. t) {..n}" "\i "CONST prod (\i. 
t) {..{int i..int (i+j)}" by (induct j) (auto simp add: atLeastAtMostSuc_conv atLeastAtMostPlus1_int_conv) lemma prod_int_eq: "prod int {i..j} = \{int i..int j}" proof (cases "i \ j") case True then show ?thesis by (metis le_iff_add prod_int_plus_eq) next case False then show ?thesis by auto qed subsection \Efficient folding over intervals\ function fold_atLeastAtMost_nat where [simp del]: "fold_atLeastAtMost_nat f a (b::nat) acc = (if a > b then acc else fold_atLeastAtMost_nat f (a+1) b (f a acc))" by pat_completeness auto termination by (relation "measure (\(_,a,b,_). Suc b - a)") auto lemma fold_atLeastAtMost_nat: assumes "comp_fun_commute f" shows "fold_atLeastAtMost_nat f a b acc = Finite_Set.fold f acc {a..b}" using assms proof (induction f a b acc rule: fold_atLeastAtMost_nat.induct, goal_cases) case (1 f a b acc) interpret comp_fun_commute f by fact show ?case proof (cases "a > b") case True thus ?thesis by (subst fold_atLeastAtMost_nat.simps) auto next case False with 1 show ?thesis by (subst fold_atLeastAtMost_nat.simps) (auto simp: atLeastAtMost_insertL[symmetric] fold_fun_left_comm) qed qed lemma sum_atLeastAtMost_code: "sum f {a..b} = fold_atLeastAtMost_nat (\a acc. f a + acc) a b 0" proof - have "comp_fun_commute (\a. (+) (f a))" by unfold_locales (auto simp: o_def add_ac) thus ?thesis by (simp add: sum.eq_fold fold_atLeastAtMost_nat o_def) qed lemma prod_atLeastAtMost_code: "prod f {a..b} = fold_atLeastAtMost_nat (\a acc. f a * acc) a b 1" proof - have "comp_fun_commute (\a. 
(*) (f a))" by unfold_locales (auto simp: o_def mult_ac) thus ?thesis by (simp add: prod.eq_fold fold_atLeastAtMost_nat o_def) qed (* TODO: Add support for folding over more kinds of intervals here *) end diff --git a/src/HOL/String.thy b/src/HOL/String.thy --- a/src/HOL/String.thy +++ b/src/HOL/String.thy @@ -1,753 +1,753 @@ (* Author: Tobias Nipkow, Florian Haftmann, TU Muenchen *) section \Character and string types\ theory String -imports Enum +imports Enum Bit_Operations Code_Numeral begin subsection \Strings as list of bytes\ text \ When modelling strings, we follow the approach given in \<^url>\https://utf8everywhere.org/\: \<^item> Strings are a list of bytes (8 bit). \<^item> Byte values from 0 to 127 are US-ASCII. \<^item> Byte values from 128 to 255 are uninterpreted blobs. \ subsubsection \Bytes as datatype\ datatype char = Char (digit0: bool) (digit1: bool) (digit2: bool) (digit3: bool) (digit4: bool) (digit5: bool) (digit6: bool) (digit7: bool) context comm_semiring_1 begin definition of_char :: \char \ 'a\ where \of_char c = horner_sum of_bool 2 [digit0 c, digit1 c, digit2 c, digit3 c, digit4 c, digit5 c, digit6 c, digit7 c]\ lemma of_char_Char [simp]: \of_char (Char b0 b1 b2 b3 b4 b5 b6 b7) = horner_sum of_bool 2 [b0, b1, b2, b3, b4, b5, b6, b7]\ by (simp add: of_char_def) end context unique_euclidean_semiring_with_bit_shifts begin definition char_of :: \'a \ char\ where \char_of n = Char (odd n) (bit n 1) (bit n 2) (bit n 3) (bit n 4) (bit n 5) (bit n 6) (bit n 7)\ lemma char_of_take_bit_eq: \char_of (take_bit n m) = char_of m\ if \n \ 8\ using that by (simp add: char_of_def bit_take_bit_iff) lemma char_of_char [simp]: \char_of (of_char c) = c\ by (simp only: of_char_def char_of_def bit_horner_sum_bit_iff) simp lemma char_of_comp_of_char [simp]: "char_of \ of_char = id" by (simp add: fun_eq_iff) lemma inj_of_char: \inj of_char\ proof (rule injI) fix c d assume "of_char c = of_char d" then have "char_of (of_char c) = char_of (of_char d)" by simp 
then show "c = d" by simp qed lemma of_char_eqI: \c = d\ if \of_char c = of_char d\ using that inj_of_char by (simp add: inj_eq) lemma of_char_eq_iff [simp]: \of_char c = of_char d \ c = d\ by (auto intro: of_char_eqI) lemma of_char_of [simp]: \of_char (char_of a) = a mod 256\ proof - have \[0..<8] = [0, Suc 0, 2, 3, 4, 5, 6, 7 :: nat]\ by (simp add: upt_eq_Cons_conv) then have \[odd a, bit a 1, bit a 2, bit a 3, bit a 4, bit a 5, bit a 6, bit a 7] = map (bit a) [0..<8]\ by simp then have \of_char (char_of a) = take_bit 8 a\ by (simp only: char_of_def of_char_def char.sel horner_sum_bit_eq_take_bit) then show ?thesis by (simp add: take_bit_eq_mod) qed lemma char_of_mod_256 [simp]: \char_of (n mod 256) = char_of n\ by (rule of_char_eqI) simp lemma of_char_mod_256 [simp]: \of_char c mod 256 = of_char c\ proof - have \of_char (char_of (of_char c)) mod 256 = of_char (char_of (of_char c))\ by (simp only: of_char_of) simp then show ?thesis by simp qed lemma char_of_quasi_inj [simp]: \char_of m = char_of n \ m mod 256 = n mod 256\ (is \?P \ ?Q\) proof assume ?Q then show ?P by (auto intro: of_char_eqI) next assume ?P then have \of_char (char_of m) = of_char (char_of n)\ by simp then show ?Q by simp qed lemma char_of_eq_iff: \char_of n = c \ take_bit 8 n = of_char c\ by (auto intro: of_char_eqI simp add: take_bit_eq_mod) lemma char_of_nat [simp]: \char_of (of_nat n) = char_of n\ by (simp add: char_of_def String.char_of_def drop_bit_of_nat bit_simps) end lemma inj_on_char_of_nat [simp]: "inj_on char_of {0::nat..<256}" by (rule inj_onI) simp lemma nat_of_char_less_256 [simp]: "of_char c < (256 :: nat)" proof - have "of_char c mod (256 :: nat) < 256" by arith then show ?thesis by simp qed lemma range_nat_of_char: "range of_char = {0::nat..<256}" proof (rule; rule) fix n :: nat assume "n \ range of_char" then show "n \ {0..<256}" by auto next fix n :: nat assume "n \ {0..<256}" then have "n = of_char (char_of n)" by simp then show "n \ range of_char" by (rule range_eqI) qed 
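The interplay of char_of and of_char above can be cross-checked outside Isabelle. The following Python sketch is purely illustrative (all names are ad hoc); tuples of eight booleans stand in for the eight-digit char datatype, char_of keeps the low 8 bits, and of_char is the Horner sum of the digits, mirroring lemmas of_char_of and char_of_mod_256:

```python
# Ad-hoc model of the char datatype as 8 boolean digits (LSB first).
def char_of(n):
    # char_of :: nat => char keeps exactly the 8 low bits of n
    return tuple(bool((n >> i) & 1) for i in range(8))

def of_char(c):
    # of_char is the Horner sum of_bool digits with base 2
    return sum(int(b) << i for i, b in enumerate(c))

assert of_char(char_of(300)) == 300 % 256   # lemma of_char_of
assert char_of(300) == char_of(300 % 256)   # lemma char_of_mod_256
assert of_char(char_of(65)) == 65           # exact roundtrip below 256
```

This is only a sanity check of the mod-256 behaviour, not a model of the type class structure (comm_semiring_1 etc.) in which the Isabelle definitions live.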
lemma UNIV_char_of_nat: "UNIV = char_of ` {0::nat..<256}" proof - have "range (of_char :: char \ nat) = of_char ` char_of ` {0::nat..<256}" by (auto simp add: range_nat_of_char intro!: image_eqI) with inj_of_char [where ?'a = nat] show ?thesis by (simp add: inj_image_eq_iff) qed lemma card_UNIV_char: "card (UNIV :: char set) = 256" by (auto simp add: UNIV_char_of_nat card_image) context includes lifting_syntax integer.lifting natural.lifting begin lemma [transfer_rule]: \(pcr_integer ===> (=)) char_of char_of\ by (unfold char_of_def) transfer_prover lemma [transfer_rule]: \((=) ===> pcr_integer) of_char of_char\ by (unfold of_char_def) transfer_prover lemma [transfer_rule]: \(pcr_natural ===> (=)) char_of char_of\ by (unfold char_of_def) transfer_prover lemma [transfer_rule]: \((=) ===> pcr_natural) of_char of_char\ by (unfold of_char_def) transfer_prover end lifting_update integer.lifting lifting_forget integer.lifting lifting_update natural.lifting lifting_forget natural.lifting syntax "_Char" :: "str_position \ char" ("CHR _") "_Char_ord" :: "num_const \ char" ("CHR _") type_synonym string = "char list" syntax "_String" :: "str_position \ string" ("_") ML_file \Tools/string_syntax.ML\ instantiation char :: enum begin definition "Enum.enum = [ CHR 0x00, CHR 0x01, CHR 0x02, CHR 0x03, CHR 0x04, CHR 0x05, CHR 0x06, CHR 0x07, CHR 0x08, CHR 0x09, CHR ''\'', CHR 0x0B, CHR 0x0C, CHR 0x0D, CHR 0x0E, CHR 0x0F, CHR 0x10, CHR 0x11, CHR 0x12, CHR 0x13, CHR 0x14, CHR 0x15, CHR 0x16, CHR 0x17, CHR 0x18, CHR 0x19, CHR 0x1A, CHR 0x1B, CHR 0x1C, CHR 0x1D, CHR 0x1E, CHR 0x1F, CHR '' '', CHR ''!'', CHR 0x22, CHR ''#'', CHR ''$'', CHR ''%'', CHR ''&'', CHR 0x27, CHR ''('', CHR '')'', CHR ''*'', CHR ''+'', CHR '','', CHR ''-'', CHR ''.'', CHR ''/'', CHR ''0'', CHR ''1'', CHR ''2'', CHR ''3'', CHR ''4'', CHR ''5'', CHR ''6'', CHR ''7'', CHR ''8'', CHR ''9'', CHR '':'', CHR '';'', CHR ''<'', CHR ''='', CHR ''>'', CHR ''?'', CHR ''@'', CHR ''A'', CHR ''B'', CHR ''C'', CHR ''D'', CHR 
''E'', CHR ''F'', CHR ''G'', CHR ''H'', CHR ''I'', CHR ''J'', CHR ''K'', CHR ''L'', CHR ''M'', CHR ''N'', CHR ''O'', CHR ''P'', CHR ''Q'', CHR ''R'', CHR ''S'', CHR ''T'', CHR ''U'', CHR ''V'', CHR ''W'', CHR ''X'', CHR ''Y'', CHR ''Z'', CHR ''['', CHR 0x5C, CHR '']'', CHR ''^'', CHR ''_'', CHR 0x60, CHR ''a'', CHR ''b'', CHR ''c'', CHR ''d'', CHR ''e'', CHR ''f'', CHR ''g'', CHR ''h'', CHR ''i'', CHR ''j'', CHR ''k'', CHR ''l'', CHR ''m'', CHR ''n'', CHR ''o'', CHR ''p'', CHR ''q'', CHR ''r'', CHR ''s'', CHR ''t'', CHR ''u'', CHR ''v'', CHR ''w'', CHR ''x'', CHR ''y'', CHR ''z'', CHR ''{'', CHR ''|'', CHR ''}'', CHR ''~'', CHR 0x7F, CHR 0x80, CHR 0x81, CHR 0x82, CHR 0x83, CHR 0x84, CHR 0x85, CHR 0x86, CHR 0x87, CHR 0x88, CHR 0x89, CHR 0x8A, CHR 0x8B, CHR 0x8C, CHR 0x8D, CHR 0x8E, CHR 0x8F, CHR 0x90, CHR 0x91, CHR 0x92, CHR 0x93, CHR 0x94, CHR 0x95, CHR 0x96, CHR 0x97, CHR 0x98, CHR 0x99, CHR 0x9A, CHR 0x9B, CHR 0x9C, CHR 0x9D, CHR 0x9E, CHR 0x9F, CHR 0xA0, CHR 0xA1, CHR 0xA2, CHR 0xA3, CHR 0xA4, CHR 0xA5, CHR 0xA6, CHR 0xA7, CHR 0xA8, CHR 0xA9, CHR 0xAA, CHR 0xAB, CHR 0xAC, CHR 0xAD, CHR 0xAE, CHR 0xAF, CHR 0xB0, CHR 0xB1, CHR 0xB2, CHR 0xB3, CHR 0xB4, CHR 0xB5, CHR 0xB6, CHR 0xB7, CHR 0xB8, CHR 0xB9, CHR 0xBA, CHR 0xBB, CHR 0xBC, CHR 0xBD, CHR 0xBE, CHR 0xBF, CHR 0xC0, CHR 0xC1, CHR 0xC2, CHR 0xC3, CHR 0xC4, CHR 0xC5, CHR 0xC6, CHR 0xC7, CHR 0xC8, CHR 0xC9, CHR 0xCA, CHR 0xCB, CHR 0xCC, CHR 0xCD, CHR 0xCE, CHR 0xCF, CHR 0xD0, CHR 0xD1, CHR 0xD2, CHR 0xD3, CHR 0xD4, CHR 0xD5, CHR 0xD6, CHR 0xD7, CHR 0xD8, CHR 0xD9, CHR 0xDA, CHR 0xDB, CHR 0xDC, CHR 0xDD, CHR 0xDE, CHR 0xDF, CHR 0xE0, CHR 0xE1, CHR 0xE2, CHR 0xE3, CHR 0xE4, CHR 0xE5, CHR 0xE6, CHR 0xE7, CHR 0xE8, CHR 0xE9, CHR 0xEA, CHR 0xEB, CHR 0xEC, CHR 0xED, CHR 0xEE, CHR 0xEF, CHR 0xF0, CHR 0xF1, CHR 0xF2, CHR 0xF3, CHR 0xF4, CHR 0xF5, CHR 0xF6, CHR 0xF7, CHR 0xF8, CHR 0xF9, CHR 0xFA, CHR 0xFB, CHR 0xFC, CHR 0xFD, CHR 0xFE, CHR 0xFF]" definition "Enum.enum_all P \ list_all P (Enum.enum :: char list)" 
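As a sanity check of the 256-element enumeration listed above, here is a small Python sketch (illustrative only; enum_char and the helper names are ad hoc) modelling Enum.enum as the byte values 0..255 together with list_all/list_ex style executable quantifiers:

```python
# Stand-in for Enum.enum :: char list, i.e. map char_of [0..<256]
# (cf. lemma enum_char_unfold); chars are modelled as byte values.
enum_char = list(range(256))

def enum_all(P):
    # Enum.enum_all P == list_all P Enum.enum
    return all(P(c) for c in enum_char)

def enum_ex(P):
    # Enum.enum_ex P == list_ex P Enum.enum
    return any(P(c) for c in enum_char)

assert len(set(enum_char)) == 256        # the enumeration is distinct
assert enum_all(lambda c: 0 <= c < 256)  # every char is covered
assert enum_ex(lambda c: c == 0x41)      # CHR ''A'' occurs
```

The two quantifiers are exactly what makes bounded quantification over char executable once the enum instance below is established.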
definition "Enum.enum_ex P \ list_ex P (Enum.enum :: char list)" lemma enum_char_unfold: "Enum.enum = map char_of [0..<256]" proof - have "map (of_char :: char \ nat) Enum.enum = [0..<256]" by (simp add: enum_char_def of_char_def upt_conv_Cons_Cons numeral_2_eq_2 [symmetric]) then have "map char_of (map (of_char :: char \ nat) Enum.enum) = map char_of [0..<256]" by simp then show ?thesis by simp qed instance proof show UNIV: "UNIV = set (Enum.enum :: char list)" by (simp add: enum_char_unfold UNIV_char_of_nat atLeast0LessThan) show "distinct (Enum.enum :: char list)" by (auto simp add: enum_char_unfold distinct_map intro: inj_onI) show "\P. Enum.enum_all P \ Ball (UNIV :: char set) P" by (simp add: UNIV enum_all_char_def list_all_iff) show "\P. Enum.enum_ex P \ Bex (UNIV :: char set) P" by (simp add: UNIV enum_ex_char_def list_ex_iff) qed end lemma linorder_char: "class.linorder (\c d. of_char c \ (of_char d :: nat)) (\c d. of_char c < (of_char d :: nat))" by standard auto text \Optimized version for execution\ definition char_of_integer :: "integer \ char" where [code_abbrev]: "char_of_integer = char_of" definition integer_of_char :: "char \ integer" where [code_abbrev]: "integer_of_char = of_char" lemma char_of_integer_code [code]: "char_of_integer k = (let (q0, b0) = bit_cut_integer k; (q1, b1) = bit_cut_integer q0; (q2, b2) = bit_cut_integer q1; (q3, b3) = bit_cut_integer q2; (q4, b4) = bit_cut_integer q3; (q5, b5) = bit_cut_integer q4; (q6, b6) = bit_cut_integer q5; (_, b7) = bit_cut_integer q6 in Char b0 b1 b2 b3 b4 b5 b6 b7)" by (simp add: bit_cut_integer_def char_of_integer_def char_of_def div_mult2_numeral_eq bit_iff_odd_drop_bit drop_bit_eq_div) lemma integer_of_char_code [code]: "integer_of_char (Char b0 b1 b2 b3 b4 b5 b6 b7) = ((((((of_bool b7 * 2 + of_bool b6) * 2 + of_bool b5) * 2 + of_bool b4) * 2 + of_bool b3) * 2 + of_bool b2) * 2 + of_bool b1) * 2 + of_bool b0" by (simp add: integer_of_char_def of_char_def) subsection \Strings as dedicated type 
for target language code generation\ subsubsection \Logical specification\ context begin qualified definition ascii_of :: "char \ char" where "ascii_of c = Char (digit0 c) (digit1 c) (digit2 c) (digit3 c) (digit4 c) (digit5 c) (digit6 c) False" qualified lemma ascii_of_Char [simp]: "ascii_of (Char b0 b1 b2 b3 b4 b5 b6 b7) = Char b0 b1 b2 b3 b4 b5 b6 False" by (simp add: ascii_of_def) qualified lemma not_digit7_ascii_of [simp]: "\ digit7 (ascii_of c)" by (simp add: ascii_of_def) qualified lemma ascii_of_idem: "ascii_of c = c" if "\ digit7 c" using that by (cases c) simp qualified lemma char_of_ascii_of [simp]: "of_char (ascii_of c) = take_bit 7 (of_char c :: nat)" by (cases c) (simp only: ascii_of_Char of_char_Char take_bit_horner_sum_bit_eq, simp) qualified typedef literal = "{cs. \c\set cs. \ digit7 c}" morphisms explode Abs_literal proof show "[] \ {cs. \c\set cs. \ digit7 c}" by simp qed qualified setup_lifting type_definition_literal qualified lift_definition implode :: "string \ literal" is "map ascii_of" by auto qualified lemma implode_explode_eq [simp]: "String.implode (String.explode s) = s" proof transfer fix cs show "map ascii_of cs = cs" if "\c\set cs. \ digit7 c" using that by (induction cs) (simp_all add: ascii_of_idem) qed qualified lemma explode_implode_eq [simp]: "String.explode (String.implode cs) = map ascii_of cs" by transfer rule end subsubsection \Syntactic representation\ text \ Logical ground representations for literals are: \<^enum> \0\ for the empty literal; \<^enum> \Literal b0 \ b6 s\ for a literal starting with one character and continued by another literal. Syntactic representations for literals are: \<^enum> Printable text as string prefixed with \STR\; \<^enum> A single ascii value as numerical hexadecimal value prefixed with \STR\. \ instantiation String.literal :: zero begin context begin qualified lift_definition zero_literal :: String.literal is Nil by simp instance .. 
end end context begin qualified abbreviation (output) empty_literal :: String.literal where "empty_literal \ 0" qualified lift_definition Literal :: "bool \ bool \ bool \ bool \ bool \ bool \ bool \ String.literal \ String.literal" is "\b0 b1 b2 b3 b4 b5 b6 cs. Char b0 b1 b2 b3 b4 b5 b6 False # cs" by auto qualified lemma Literal_eq_iff [simp]: "Literal b0 b1 b2 b3 b4 b5 b6 s = Literal c0 c1 c2 c3 c4 c5 c6 t \ (b0 \ c0) \ (b1 \ c1) \ (b2 \ c2) \ (b3 \ c3) \ (b4 \ c4) \ (b5 \ c5) \ (b6 \ c6) \ s = t" by transfer simp qualified lemma empty_neq_Literal [simp]: "empty_literal \ Literal b0 b1 b2 b3 b4 b5 b6 s" by transfer simp qualified lemma Literal_neq_empty [simp]: "Literal b0 b1 b2 b3 b4 b5 b6 s \ empty_literal" by transfer simp end code_datatype "0 :: String.literal" String.Literal syntax "_Literal" :: "str_position \ String.literal" ("STR _") "_Ascii" :: "num_const \ String.literal" ("STR _") ML_file \Tools/literal.ML\ subsubsection \Operations\ instantiation String.literal :: plus begin context begin qualified lift_definition plus_literal :: "String.literal \ String.literal \ String.literal" is "(@)" by auto instance .. end end instance String.literal :: monoid_add by (standard; transfer) simp_all instantiation String.literal :: size begin context includes literal.lifting begin lift_definition size_literal :: "String.literal \ nat" is length . end instance .. end instantiation String.literal :: equal begin context begin qualified lift_definition equal_literal :: "String.literal \ String.literal \ bool" is HOL.equal . instance by (standard; transfer) (simp add: equal) end end instantiation String.literal :: linorder begin context begin qualified lift_definition less_eq_literal :: "String.literal \ String.literal \ bool" is "ord.lexordp_eq (\c d. of_char c < (of_char d :: nat))" . qualified lift_definition less_literal :: "String.literal \ String.literal \ bool" is "ord.lexordp (\c d. of_char c < (of_char d :: nat))" . 
instance proof - from linorder_char interpret linorder "ord.lexordp_eq (\c d. of_char c < (of_char d :: nat))" "ord.lexordp (\c d. of_char c < (of_char d :: nat)) :: string \ string \ bool" by (rule linorder.lexordp_linorder) show "PROP ?thesis" by (standard; transfer) (simp_all add: less_le_not_le linear) qed end end lemma infinite_literal: "infinite (UNIV :: String.literal set)" proof - define S where "S = range (\n. replicate n CHR ''A'')" have "inj_on String.implode S" proof (rule inj_onI) fix cs ds assume "String.implode cs = String.implode ds" then have "String.explode (String.implode cs) = String.explode (String.implode ds)" by simp moreover assume "cs \ S" and "ds \ S" ultimately show "cs = ds" by (auto simp add: S_def) qed moreover have "infinite S" by (auto simp add: S_def dest: finite_range_imageI [of _ length]) ultimately have "infinite (String.implode ` S)" by (simp add: finite_image_iff) then show ?thesis by (auto intro: finite_subset) qed subsubsection \Executable conversions\ context begin qualified lift_definition asciis_of_literal :: "String.literal \ integer list" is "map of_char" . 
qualified lemma asciis_of_zero [simp, code]: "asciis_of_literal 0 = []" by transfer simp qualified lemma asciis_of_Literal [simp, code]: "asciis_of_literal (String.Literal b0 b1 b2 b3 b4 b5 b6 s) = of_char (Char b0 b1 b2 b3 b4 b5 b6 False) # asciis_of_literal s " by transfer simp qualified lift_definition literal_of_asciis :: "integer list \ String.literal" is "map (String.ascii_of \ char_of)" by auto qualified lemma literal_of_asciis_Nil [simp, code]: "literal_of_asciis [] = 0" by transfer simp qualified lemma literal_of_asciis_Cons [simp, code]: "literal_of_asciis (k # ks) = (case char_of k of Char b0 b1 b2 b3 b4 b5 b6 b7 \ String.Literal b0 b1 b2 b3 b4 b5 b6 (literal_of_asciis ks))" by (simp add: char_of_def) (transfer, simp add: char_of_def) qualified lemma literal_of_asciis_of_literal [simp]: "literal_of_asciis (asciis_of_literal s) = s" proof transfer fix cs assume "\c\set cs. \ digit7 c" then show "map (String.ascii_of \ char_of) (map of_char cs) = cs" by (induction cs) (simp_all add: String.ascii_of_idem) qed qualified lemma explode_code [code]: "String.explode s = map char_of (asciis_of_literal s)" by transfer simp qualified lemma implode_code [code]: "String.implode cs = literal_of_asciis (map of_char cs)" by transfer simp qualified lemma equal_literal [code]: "HOL.equal (String.Literal b0 b1 b2 b3 b4 b5 b6 s) (String.Literal a0 a1 a2 a3 a4 a5 a6 r) \ (b0 \ a0) \ (b1 \ a1) \ (b2 \ a2) \ (b3 \ a3) \ (b4 \ a4) \ (b5 \ a5) \ (b6 \ a6) \ (s = r)" by (simp add: equal) end subsubsection \Technical code generation setup\ text \Alternative constructor for generated computations\ context begin qualified definition Literal' :: "bool \ bool \ bool \ bool \ bool \ bool \ bool \ String.literal \ String.literal" where [simp]: "Literal' = String.Literal" lemma [code]: \Literal' b0 b1 b2 b3 b4 b5 b6 s = String.literal_of_asciis [foldr (\b k. of_bool b + k * 2) [b0, b1, b2, b3, b4, b5, b6] 0] + s\ proof - have \foldr (\b k. 
of_bool b + k * 2) [b0, b1, b2, b3, b4, b5, b6] 0 = of_char (Char b0 b1 b2 b3 b4 b5 b6 False)\ by simp moreover have \Literal' b0 b1 b2 b3 b4 b5 b6 s = String.literal_of_asciis [of_char (Char b0 b1 b2 b3 b4 b5 b6 False)] + s\ by (unfold Literal'_def) (transfer, simp only: list.simps comp_apply char_of_char, simp) ultimately show ?thesis by simp qed lemma [code_computation_unfold]: "String.Literal = Literal'" by simp end code_reserved SML string String Char List code_reserved OCaml string String Char List code_reserved Haskell Prelude code_reserved Scala string code_printing type_constructor String.literal \ (SML) "string" and (OCaml) "string" and (Haskell) "String" and (Scala) "String" | constant "STR ''''" \ (SML) "\"\"" and (OCaml) "\"\"" and (Haskell) "\"\"" and (Scala) "\"\"" setup \ fold Literal.add_code ["SML", "OCaml", "Haskell", "Scala"] \ code_printing constant "(+) :: String.literal \ String.literal \ String.literal" \ (SML) infixl 18 "^" and (OCaml) infixr 6 "^" and (Haskell) infixr 5 "++" and (Scala) infixl 7 "+" | constant String.literal_of_asciis \ (SML) "!(String.implode/ o List.map (fn k => if 0 <= k andalso k < 128 then (Char.chr o IntInf.toInt) k else raise Fail \"Non-ASCII character in literal\"))" and (OCaml) "!(let xs = _ and chr k = let l = Z.to'_int k in if 0 <= l && l < 128 then Char.chr l else failwith \"Non-ASCII character in literal\" in String.init (List.length xs) (List.nth (List.map chr xs)))" and (Haskell) "map/ (let chr k | (0 <= k && k < 128) = Prelude.toEnum k :: Prelude.Char in chr . 
Prelude.fromInteger)" and (Scala) "\"\"/ ++/ _.map((k: BigInt) => if (BigInt(0) <= k && k < BigInt(128)) k.charValue else sys.error(\"Non-ASCII character in literal\"))" | constant String.asciis_of_literal \ (SML) "!(List.map (fn c => let val k = Char.ord c in if k < 128 then IntInf.fromInt k else raise Fail \"Non-ASCII character in literal\" end) /o String.explode)" and (OCaml) "!(let s = _ in let rec exp i l = if i < 0 then l else exp (i - 1) (let k = Char.code (String.get s i) in if k < 128 then Z.of'_int k :: l else failwith \"Non-ASCII character in literal\") in exp (String.length s - 1) [])" and (Haskell) "map/ (let ord k | (k < 128) = Prelude.toInteger k in ord . (Prelude.fromEnum :: Prelude.Char -> Prelude.Int))" and (Scala) "!(_.toList.map(c => { val k: Int = c.toInt; if (k < 128) BigInt(k) else sys.error(\"Non-ASCII character in literal\") }))" | class_instance String.literal :: equal \ (Haskell) - | constant "HOL.equal :: String.literal \ String.literal \ bool" \ (SML) "!((_ : string) = _)" and (OCaml) "!((_ : string) = _)" and (Haskell) infix 4 "==" and (Scala) infixl 5 "==" | constant "(\) :: String.literal \ String.literal \ bool" \ (SML) "!((_ : string) <= _)" and (OCaml) "!((_ : string) <= _)" and (Haskell) infix 4 "<=" \ \Order operations for \<^typ>\String.literal\ work in Haskell only if no type class instance needs to be generated, because String = [Char] in Haskell and \<^typ>\char list\ need not have the same order as \<^typ>\String.literal\.\ and (Scala) infixl 4 "<=" and (Eval) infixl 6 "<=" | constant "(<) :: String.literal \ String.literal \ bool" \ (SML) "!((_ : string) < _)" and (OCaml) "!((_ : string) < _)" and (Haskell) infix 4 "<" and (Scala) infixl 4 "<" and (Eval) infixl 6 "<" subsubsection \Code generation utility\ setup \Sign.map_naming (Name_Space.mandatory_path "Code")\ definition abort :: "String.literal \ (unit \ 'a) \ 'a" where [simp]: "abort _ f = f ()" declare [[code drop: Code.abort]] lemma abort_cong: "msg = msg' \ 
Code.abort msg f = Code.abort msg' f" by simp setup \Sign.map_naming Name_Space.parent_path\ setup \Code_Simp.map_ss (Simplifier.add_cong @{thm Code.abort_cong})\ code_printing constant Code.abort \ (SML) "!(raise/ Fail/ _)" and (OCaml) "failwith" and (Haskell) "!(error/ ::/ forall a./ String -> (() -> a) -> a)" and (Scala) "!{/ sys.error((_));/ ((_)).apply(())/ }" subsubsection \Finally\ lifting_update literal.lifting lifting_forget literal.lifting end diff --git a/src/HOL/ex/Meson_Test.thy b/src/HOL/ex/Meson_Test.thy --- a/src/HOL/ex/Meson_Test.thy +++ b/src/HOL/ex/Meson_Test.thy @@ -1,2900 +1,2900 @@ section \Meson test cases\ theory Meson_Test imports Main begin text \ WARNING: there are many potential conflicts between variables used below and constants declared in HOL! \ -hide_const (open) implies union inter subset quotient sum +hide_const (open) implies union inter subset quotient sum or text \ Test data for the MESON proof procedure (Excludes the equality problems 51, 52, 56, 58) \ subsection \Interactive examples\ lemma problem_25: "(\x. P x) & (\x. L x --> ~ (M x & R x)) & (\x. P x --> (M x & L x)) & ((\x. P x --> Q x) | (\x. P x & R x)) --> (\x. 
Q x & P x)" apply (rule ccontr) ML_prf \ val ctxt = \<^context>; val prem25 = Thm.assume \<^cprop>\\ ?thesis\; val nnf25 = Meson.make_nnf Meson.simp_options_all_true ctxt prem25; val xsko25 = Meson.skolemize Meson.simp_options_all_true ctxt nnf25; \ apply (tactic \cut_tac xsko25 1 THEN REPEAT (eresolve_tac \<^context> [exE] 1)\) ML_val \ val ctxt = \<^context>; val [_, sko25] = #prems (#1 (Subgoal.focus ctxt 1 NONE (#goal @{Isar.goal}))); val clauses25 = Meson.make_clauses ctxt [sko25]; (*7 clauses*) val horns25 = Meson.make_horns clauses25; (*16 Horn clauses*) val go25 :: _ = Meson.gocls clauses25; val ctxt' = fold Thm.declare_hyps (maps Thm.chyps_of (go25 :: horns25)) ctxt; Goal.prove ctxt' [] [] \<^prop>\False\ (fn _ => resolve_tac ctxt' [go25] 1 THEN Meson.depth_prolog_tac ctxt' horns25); \ oops lemma problem_26: "((\x. p x) = (\x. q x)) & (\x. \y. p x & q y --> (r x = s y)) --> ((\x. p x --> r x) = (\x. q x --> s x))" apply (rule ccontr) ML_prf \ val ctxt = \<^context>; val prem26 = Thm.assume \<^cprop>\\ ?thesis\ val nnf26 = Meson.make_nnf Meson.simp_options_all_true ctxt prem26; val xsko26 = Meson.skolemize Meson.simp_options_all_true ctxt nnf26; \ apply (tactic \cut_tac xsko26 1 THEN REPEAT (eresolve_tac \<^context> [exE] 1)\) ML_val \ val ctxt = \<^context>; val [_, sko26] = #prems (#1 (Subgoal.focus ctxt 1 NONE (#goal @{Isar.goal}))); val clauses26 = Meson.make_clauses ctxt [sko26]; val _ = \<^assert> (length clauses26 = 9); val horns26 = Meson.make_horns clauses26; val _ = \<^assert> (length horns26 = 24); val go26 :: _ = Meson.gocls clauses26; val ctxt' = fold Thm.declare_hyps (maps Thm.chyps_of (go26 :: horns26)) ctxt; Goal.prove ctxt' [] [] \<^prop>\False\ (fn _ => resolve_tac ctxt' [go26] 1 THEN Meson.depth_prolog_tac ctxt' horns26); (*7 ms*) (*Proof is of length 107!!*) \ oops lemma problem_43: \ \NOW PROVED AUTOMATICALLY!!\ (*16 Horn clauses*) "(\x. \y. q x y = (\z. p z x = (p z y::bool))) --> (\x. (\y. 
q x y = (q y x::bool)))" apply (rule ccontr) ML_prf \ val ctxt = \<^context>; val prem43 = Thm.assume \<^cprop>\\ ?thesis\; val nnf43 = Meson.make_nnf Meson.simp_options_all_true ctxt prem43; val xsko43 = Meson.skolemize Meson.simp_options_all_true ctxt nnf43; \ apply (tactic \cut_tac xsko43 1 THEN REPEAT (eresolve_tac \<^context> [exE] 1)\) ML_val \ val ctxt = \<^context>; val [_, sko43] = #prems (#1 (Subgoal.focus ctxt 1 NONE (#goal @{Isar.goal}))); val clauses43 = Meson.make_clauses ctxt [sko43]; val _ = \<^assert> (length clauses43 = 6); val horns43 = Meson.make_horns clauses43; val _ = \<^assert> (length horns43 = 16); val go43 :: _ = Meson.gocls clauses43; val ctxt' = fold Thm.declare_hyps (maps Thm.chyps_of (go43 :: horns43)) ctxt; Goal.prove ctxt' [] [] \<^prop>\False\ (fn _ => resolve_tac ctxt' [go43] 1 THEN Meson.best_prolog_tac ctxt' Meson.size_of_subgoals horns43); (*7ms*) \ oops (* #1 (q x xa ==> ~ q x xa) ==> q xa x #2 (q xa x ==> ~ q xa x) ==> q x xa #3 (~ q x xa ==> q x xa) ==> ~ q xa x #4 (~ q xa x ==> q xa x) ==> ~ q x xa #5 [| ~ q ?U ?V ==> q ?U ?V; ~ p ?W ?U ==> p ?W ?U |] ==> p ?W ?V #6 [| ~ p ?W ?U ==> p ?W ?U; p ?W ?V ==> ~ p ?W ?V |] ==> ~ q ?U ?V #7 [| p ?W ?V ==> ~ p ?W ?V; ~ q ?U ?V ==> q ?U ?V |] ==> ~ p ?W ?U #8 [| ~ q ?U ?V ==> q ?U ?V; ~ p ?W ?V ==> p ?W ?V |] ==> p ?W ?U #9 [| ~ p ?W ?V ==> p ?W ?V; p ?W ?U ==> ~ p ?W ?U |] ==> ~ q ?U ?V #10 [| p ?W ?U ==> ~ p ?W ?U; ~ q ?U ?V ==> q ?U ?V |] ==> ~ p ?W ?V #11 [| p (xb ?U ?V) ?U ==> ~ p (xb ?U ?V) ?U; p (xb ?U ?V) ?V ==> ~ p (xb ?U ?V) ?V |] ==> q ?U ?V #12 [| p (xb ?U ?V) ?V ==> ~ p (xb ?U ?V) ?V; q ?U ?V ==> ~ q ?U ?V |] ==> p (xb ?U ?V) ?U #13 [| q ?U ?V ==> ~ q ?U ?V; p (xb ?U ?V) ?U ==> ~ p (xb ?U ?V) ?U |] ==> p (xb ?U ?V) ?V #14 [| ~ p (xb ?U ?V) ?U ==> p (xb ?U ?V) ?U; ~ p (xb ?U ?V) ?V ==> p (xb ?U ?V) ?V |] ==> q ?U ?V #15 [| ~ p (xb ?U ?V) ?V ==> p (xb ?U ?V) ?V; q ?U ?V ==> ~ q ?U ?V |] ==> ~ p (xb ?U ?V) ?U #16 [| q ?U ?V ==> ~ q ?U ?V; ~ p (xb ?U ?V) ?U ==> p (xb ?U ?V) 
?U |] ==> ~ p (xb ?U ?V) ?V And here is the proof! (Unkn is the start state after use of goal clause) [Unkn, Res ([Thm "#14"], false, 1), Res ([Thm "#5"], false, 1), Res ([Thm "#1"], false, 1), Asm 1, Res ([Thm "#13"], false, 1), Asm 2, Asm 1, Res ([Thm "#13"], false, 1), Asm 1, Res ([Thm "#10"], false, 1), Res ([Thm "#16"], false, 1), Asm 2, Asm 1, Res ([Thm "#1"], false, 1), Asm 1, Res ([Thm "#14"], false, 1), Res ([Thm "#5"], false, 1), Res ([Thm "#2"], false, 1), Asm 1, Res ([Thm "#13"], false, 1), Asm 2, Asm 1, Res ([Thm "#8"], false, 1), Res ([Thm "#2"], false, 1), Asm 1, Res ([Thm "#12"], false, 1), Asm 2, Asm 1] : lderiv list *) text \ MORE and MUCH HARDER test data for the MESON proof procedure (courtesy John Harrison). \ (* ========================================================================= *) (* 100 problems selected from the TPTP library *) (* ========================================================================= *) (* * Original timings for John Harrison's MESON_TAC. * Timings below on a 600MHz Pentium III (perch) * Some timings below refer to griffon, which is a dual 2.5GHz Power Mac G5. * * A few variable names have been changed to avoid clashing with constants. * * Changed numeric constants e.g. 0, 1, 2... to num0, num1, num2... * * Here's a list giving typical CPU times, as well as common names and * literature references. 
* * BOO003-1 34.6 B2 part 1 [McCharen, et al., 1976]; Lemma proved [Overbeek, et al., 1976]; prob2_part1.ver1.in [ANL] * BOO004-1 36.7 B2 part 2 [McCharen, et al., 1976]; Lemma proved [Overbeek, et al., 1976]; prob2_part2.ver1 [ANL] * BOO005-1 47.4 B3 part 1 [McCharen, et al., 1976]; B5 [McCharen, et al., 1976]; Lemma proved [Overbeek, et al., 1976]; prob3_part1.ver1.in [ANL] * BOO006-1 48.4 B3 part 2 [McCharen, et al., 1976]; B6 [McCharen, et al., 1976]; Lemma proved [Overbeek, et al., 1976]; prob3_part2.ver1 [ANL] * BOO011-1 19.0 B7 [McCharen, et al., 1976]; prob7.ver1 [ANL] * CAT001-3 45.2 C1 [McCharen, et al., 1976]; p1.ver3.in [ANL] * CAT003-3 10.5 C3 [McCharen, et al., 1976]; p3.ver3.in [ANL] * CAT005-1 480.1 C5 [McCharen, et al., 1976]; p5.ver1.in [ANL] * CAT007-1 11.9 C7 [McCharen, et al., 1976]; p7.ver1.in [ANL] * CAT018-1 81.3 p18.ver1.in [ANL] * COL001-2 16.0 C1 [Wos & McCune, 1988] * COL023-1 5.1 [McCune & Wos, 1988] * COL032-1 15.8 [McCune & Wos, 1988] * COL052-2 13.2 bird4.ver2.in [ANL] * COL075-2 116.9 [Jech, 1994] * COM001-1 1.7 shortburst [Wilson & Minker, 1976] * COM002-1 4.4 burstall [Wilson & Minker, 1976] * COM002-2 7.4 * COM003-2 22.1 [Brushi, 1991] * COM004-1 45.1 * GEO003-1 71.7 T3 [McCharen, et al., 1976]; t3.ver1.in [ANL] * GEO017-2 78.8 D4.1 [Quaife, 1989] * GEO027-3 181.5 D10.1 [Quaife, 1989] * GEO058-2 104.0 R4 [Quaife, 1989] * GEO079-1 2.4 GEOMETRY THEOREM [Slagle, 1967] * GRP001-1 47.8 CADE-11 Competition 1 [Overbeek, 1990]; G1 [McCharen, et al., 1976]; THEOREM 1 [Lusk & McCune, 1993]; wos10 [Wilson & Minker, 1976]; xsquared.ver1.in [ANL]; [Robinson, 1963] * GRP008-1 50.4 Problem 4 [Wos, 1965]; wos4 [Wilson & Minker, 1976] * GRP013-1 40.2 Problem 11 [Wos, 1965]; wos11 [Wilson & Minker, 1976] * GRP037-3 43.8 Problem 17 [Wos, 1965]; wos17 [Wilson & Minker, 1976] * GRP031-2 3.2 ls23 [Lawrence & Starkey, 1974]; ls23 [Wilson & Minker, 1976] * GRP034-4 2.5 ls26 [Lawrence & Starkey, 1974]; ls26 [Wilson & Minker, 1976] * GRP047-2 11.7 
[Veroff, 1992] * GRP130-1 170.6 Bennett QG8 [TPTP]; QG8 [Slaney, 1993] * GRP156-1 48.7 ax_mono1c [Schulz, 1995] * GRP168-1 159.1 p01a [Schulz, 1995] * HEN003-3 39.9 HP3 [McCharen, et al., 1976] * HEN007-2 125.7 H7 [McCharen, et al., 1976] * HEN008-4 62.0 H8 [McCharen, et al., 1976] * HEN009-5 136.3 H9 [McCharen, et al., 1976]; hp9.ver3.in [ANL] * HEN012-3 48.5 new.ver2.in [ANL] * LCL010-1 370.9 EC-73 [McCune & Wos, 1992]; ec_yq.in [OTTER] * LCL077-2 51.6 morgan.two.ver1.in [ANL] * LCL082-1 14.6 IC-1.1 [Wos, et al., 1990]; IC-65 [McCune & Wos, 1992]; ls2 [SETHEO]; S1 [Pfenning, 1988] * LCL111-1 585.6 CADE-11 Competition 6 [Overbeek, 1990]; mv25.in [OTTER]; MV-57 [McCune & Wos, 1992]; mv.in part 2 [OTTER]; ovb6 [SETHEO]; THEOREM 6 [Lusk & McCune, 1993] * LCL143-1 10.9 Lattice structure theorem 2 [Bonacina, 1991] * LCL182-1 271.6 Problem 2.16 [Whitehead & Russell, 1927] * LCL200-1 12.0 Problem 2.46 [Whitehead & Russell, 1927] * LCL215-1 214.4 Problem 2.62 [Whitehead & Russell, 1927]; Problem 2.63 [Whitehead & Russell, 1927] * LCL230-2 0.2 Pelletier 5 [Pelletier, 1986] * LDA003-1 68.5 Problem 3 [Jech, 1993] * MSC002-1 9.2 DBABHP [Michie, et al., 1972]; DBABHP [Wilson & Minker, 1976] * MSC003-1 3.2 HASPARTS-T1 [Wilson & Minker, 1976] * MSC004-1 9.3 HASPARTS-T2 [Wilson & Minker, 1976] * MSC005-1 1.8 Problem 5.1 [Plaisted, 1982] * MSC006-1 39.0 nonob.lop [SETHEO] * NUM001-1 14.0 Chang-Lee-10a [Chang, 1970]; ls28 [Lawrence & Starkey, 1974]; ls28 [Wilson & Minker, 1976] * NUM021-1 52.3 ls65 [Lawrence & Starkey, 1974]; ls65 [Wilson & Minker, 1976] * NUM024-1 64.6 ls75 [Lawrence & Starkey, 1974]; ls75 [Wilson & Minker, 1976] * NUM180-1 621.2 LIM2.1 [Quaife] * NUM228-1 575.9 TRECDEF4 cor. 
[Quaife] * PLA002-1 37.4 Problem 5.7 [Plaisted, 1982] * PLA006-1 7.2 [Segre & Elkan, 1994] * PLA017-1 484.8 [Segre & Elkan, 1994] * PLA022-1 19.1 [Segre & Elkan, 1994] * PLA022-2 19.7 [Segre & Elkan, 1994] * PRV001-1 10.3 PV1 [McCharen, et al., 1976] * PRV003-1 3.9 E2 [McCharen, et al., 1976]; v2.lop [SETHEO] * PRV005-1 4.3 E4 [McCharen, et al., 1976]; v4.lop [SETHEO] * PRV006-1 6.0 E5 [McCharen, et al., 1976]; v5.lop [SETHEO] * PRV009-1 2.2 Hoares FIND [Bledsoe, 1977]; Problem 5.5 [Plaisted, 1982] * PUZ012-1 3.5 Boxes-of-fruit [Wos, 1988]; Boxes-of-fruit [Wos, et al., 1992]; boxes.ver1.in [ANL] * PUZ020-1 56.6 knightknave.in [ANL] * PUZ025-1 58.4 Problem 35 [Smullyan, 1978]; tandl35.ver1.in [ANL] * PUZ029-1 5.1 pigs.ver1.in [ANL] * RNG001-3 82.4 EX6-T? [Wilson & Minker, 1976]; ex6.lop [SETHEO]; Example 6a [Fleisig, et al., 1974]; FEX6T1 [SPRFN]; FEX6T2 [SPRFN] * RNG001-5 399.8 Problem 21 [Wos, 1965]; wos21 [Wilson & Minker, 1976] * RNG011-5 8.4 CADE-11 Competition Eq-10 [Overbeek, 1990]; PROBLEM 10 [Zhang, 1993]; THEOREM EQ-10 [Lusk & McCune, 1993] * RNG023-6 9.1 [Stevens, 1987] * RNG028-2 9.3 PROOF III [Anantharaman & Hsiang, 1990] * RNG038-2 16.2 Problem 27 [Wos, 1965]; wos27 [Wilson & Minker, 1976] * RNG040-2 180.5 Problem 29 [Wos, 1965]; wos29 [Wilson & Minker, 1976] * RNG041-1 35.8 Problem 30 [Wos, 1965]; wos30 [Wilson & Minker, 1976] * ROB010-1 205.0 Lemma 3.3 [Winker, 1990]; RA2 [Lusk & Wos, 1992] * ROB013-1 23.6 Lemma 3.5 [Winker, 1990] * ROB016-1 15.2 Corollary 3.7 [Winker, 1990] * ROB021-1 230.4 [McCune, 1992] * SET005-1 192.2 ls108 [Lawrence & Starkey, 1974]; ls108 [Wilson & Minker, 1976] * SET009-1 10.5 ls116 [Lawrence & Starkey, 1974]; ls116 [Wilson & Minker, 1976] * SET025-4 694.7 Lemma 10 [Boyer, et al, 1986] * SET046-5 2.3 p42.in [ANL]; Pelletier 42 [Pelletier, 1986] * SET047-5 3.7 p43.in [ANL]; Pelletier 43 [Pelletier, 1986] * SYN034-1 2.8 QW [Michie, et al., 1972]; QW [Wilson & Minker, 1976] * SYN071-1 1.9 Pelletier 48 [Pelletier, 1986] * 
SYN349-1 61.7 Ch17N5 [Tammet, 1994] * SYN352-1 5.5 Ch18N4 [Tammet, 1994] * TOP001-2 61.1 Lemma 1a [Wick & McCune, 1989] * TOP002-2 0.4 Lemma 1b [Wick & McCune, 1989] * TOP004-1 181.6 Lemma 1d [Wick & McCune, 1989] * TOP004-2 9.0 Lemma 1d [Wick & McCune, 1989] * TOP005-2 139.8 Lemma 1e [Wick & McCune, 1989] *) abbreviation "EQU001_0_ax equal \ (\X. equal(X::'a,X)) & (\Y X. equal(X::'a,Y) --> equal(Y::'a,X)) & (\Y X Z. equal(X::'a,Y) & equal(Y::'a,Z) --> equal(X::'a,Z))" abbreviation "BOO002_0_ax equal INVERSE multiplicative_identity additive_identity multiply product add sum \ (\X Y. sum(X::'a,Y,add(X::'a,Y))) & (\X Y. product(X::'a,Y,multiply(X::'a,Y))) & (\Y X Z. sum(X::'a,Y,Z) --> sum(Y::'a,X,Z)) & (\Y X Z. product(X::'a,Y,Z) --> product(Y::'a,X,Z)) & (\X. sum(additive_identity::'a,X,X)) & (\X. sum(X::'a,additive_identity,X)) & (\X. product(multiplicative_identity::'a,X,X)) & (\X. product(X::'a,multiplicative_identity,X)) & (\Y Z X V3 V1 V2 V4. product(X::'a,Y,V1) & product(X::'a,Z,V2) & sum(Y::'a,Z,V3) & product(X::'a,V3,V4) --> sum(V1::'a,V2,V4)) & (\Y Z V1 V2 X V3 V4. product(X::'a,Y,V1) & product(X::'a,Z,V2) & sum(Y::'a,Z,V3) & sum(V1::'a,V2,V4) --> product(X::'a,V3,V4)) & (\Y Z V3 X V1 V2 V4. product(Y::'a,X,V1) & product(Z::'a,X,V2) & sum(Y::'a,Z,V3) & product(V3::'a,X,V4) --> sum(V1::'a,V2,V4)) & (\Y Z V1 V2 V3 X V4. product(Y::'a,X,V1) & product(Z::'a,X,V2) & sum(Y::'a,Z,V3) & sum(V1::'a,V2,V4) --> product(V3::'a,X,V4)) & (\Y Z X V3 V1 V2 V4. sum(X::'a,Y,V1) & sum(X::'a,Z,V2) & product(Y::'a,Z,V3) & sum(X::'a,V3,V4) --> product(V1::'a,V2,V4)) & (\Y Z V1 V2 X V3 V4. sum(X::'a,Y,V1) & sum(X::'a,Z,V2) & product(Y::'a,Z,V3) & product(V1::'a,V2,V4) --> sum(X::'a,V3,V4)) & (\Y Z V3 X V1 V2 V4. sum(Y::'a,X,V1) & sum(Z::'a,X,V2) & product(Y::'a,Z,V3) & sum(V3::'a,X,V4) --> product(V1::'a,V2,V4)) & (\Y Z V1 V2 V3 X V4. sum(Y::'a,X,V1) & sum(Z::'a,X,V2) & product(Y::'a,Z,V3) & product(V1::'a,V2,V4) --> sum(V3::'a,X,V4)) & (\X. 
sum(INVERSE(X),X,multiplicative_identity)) & (\X. sum(X::'a,INVERSE(X),multiplicative_identity)) & (\X. product(INVERSE(X),X,additive_identity)) & (\X. product(X::'a,INVERSE(X),additive_identity)) & (\X Y U V. sum(X::'a,Y,U) & sum(X::'a,Y,V) --> equal(U::'a,V)) & (\X Y U V. product(X::'a,Y,U) & product(X::'a,Y,V) --> equal(U::'a,V))" abbreviation "BOO002_0_eq INVERSE multiply add product sum equal \ (\X Y W Z. equal(X::'a,Y) & sum(X::'a,W,Z) --> sum(Y::'a,W,Z)) & (\X W Y Z. equal(X::'a,Y) & sum(W::'a,X,Z) --> sum(W::'a,Y,Z)) & (\X W Z Y. equal(X::'a,Y) & sum(W::'a,Z,X) --> sum(W::'a,Z,Y)) & (\X Y W Z. equal(X::'a,Y) & product(X::'a,W,Z) --> product(Y::'a,W,Z)) & (\X W Y Z. equal(X::'a,Y) & product(W::'a,X,Z) --> product(W::'a,Y,Z)) & (\X W Z Y. equal(X::'a,Y) & product(W::'a,Z,X) --> product(W::'a,Z,Y)) & (\X Y W. equal(X::'a,Y) --> equal(add(X::'a,W),add(Y::'a,W))) & (\X W Y. equal(X::'a,Y) --> equal(add(W::'a,X),add(W::'a,Y))) & (\X Y W. equal(X::'a,Y) --> equal(multiply(X::'a,W),multiply(Y::'a,W))) & (\X W Y. equal(X::'a,Y) --> equal(multiply(W::'a,X),multiply(W::'a,Y))) & (\X Y. equal(X::'a,Y) --> equal(INVERSE(X),INVERSE(Y)))" (*51194 inferences so far. Searching to depth 13. 232.9 secs*) lemma BOO003_1: "EQU001_0_ax equal & BOO002_0_ax equal INVERSE multiplicative_identity additive_identity multiply product add sum & BOO002_0_eq INVERSE multiply add product sum equal & (~product(x::'a,x,x)) --> False" by meson (*51194 inferences so far. Searching to depth 13. 204.6 secs Strange! The previous problem also has 51194 inferences at depth 13. They must be very similar!*) lemma BOO004_1: "EQU001_0_ax equal & BOO002_0_ax equal INVERSE multiplicative_identity additive_identity multiply product add sum & BOO002_0_eq INVERSE multiply add product sum equal & (~sum(x::'a,x,x)) --> False" by meson (*74799 inferences so far. Searching to depth 13. 
290.0 secs*) lemma BOO005_1: "EQU001_0_ax equal & BOO002_0_ax equal INVERSE multiplicative_identity additive_identity multiply product add sum & BOO002_0_eq INVERSE multiply add product sum equal & (~sum(x::'a,multiplicative_identity,multiplicative_identity)) --> False" by meson (*74799 inferences so far. Searching to depth 13. 314.6 secs*) lemma BOO006_1: "EQU001_0_ax equal & BOO002_0_ax equal INVERSE multiplicative_identity additive_identity multiply product add sum & BOO002_0_eq INVERSE multiply add product sum equal & (~product(x::'a,additive_identity,additive_identity)) --> False" by meson (*5 inferences so far. Searching to depth 5. 1.3 secs*) lemma BOO011_1: "EQU001_0_ax equal & BOO002_0_ax equal INVERSE multiplicative_identity additive_identity multiply product add sum & BOO002_0_eq INVERSE multiply add product sum equal & (~equal(INVERSE(additive_identity),multiplicative_identity)) --> False" by meson abbreviation "CAT003_0_ax f1 compos codomain domain equal there_exists equivalent \ (\Y X. equivalent(X::'a,Y) --> there_exists(X)) & (\X Y. equivalent(X::'a,Y) --> equal(X::'a,Y)) & (\X Y. there_exists(X) & equal(X::'a,Y) --> equivalent(X::'a,Y)) & (\X. there_exists(domain(X)) --> there_exists(X)) & (\X. there_exists(codomain(X)) --> there_exists(X)) & (\Y X. there_exists(compos(X::'a,Y)) --> there_exists(domain(X))) & (\X Y. there_exists(compos(X::'a,Y)) --> equal(domain(X),codomain(Y))) & (\X Y. there_exists(domain(X)) & equal(domain(X),codomain(Y)) --> there_exists(compos(X::'a,Y))) & (\X Y Z. equal(compos(X::'a,compos(Y::'a,Z)),compos(compos(X::'a,Y),Z))) & (\X. equal(compos(X::'a,domain(X)),X)) & (\X. equal(compos(codomain(X),X),X)) & (\X Y. equivalent(X::'a,Y) --> there_exists(Y)) & (\X Y. there_exists(X) & there_exists(Y) & equal(X::'a,Y) --> equivalent(X::'a,Y)) & (\Y X. there_exists(compos(X::'a,Y)) --> there_exists(codomain(X))) & (\X Y. there_exists(f1(X::'a,Y)) | equal(X::'a,Y)) & (\X Y. 
equal(X::'a,f1(X::'a,Y)) | equal(Y::'a,f1(X::'a,Y)) | equal(X::'a,Y)) & (\X Y. equal(X::'a,f1(X::'a,Y)) & equal(Y::'a,f1(X::'a,Y)) --> equal(X::'a,Y))" abbreviation "CAT003_0_eq f1 compos codomain domain equivalent there_exists equal \ (\X Y. equal(X::'a,Y) & there_exists(X) --> there_exists(Y)) & (\X Y Z. equal(X::'a,Y) & equivalent(X::'a,Z) --> equivalent(Y::'a,Z)) & (\X Z Y. equal(X::'a,Y) & equivalent(Z::'a,X) --> equivalent(Z::'a,Y)) & (\X Y. equal(X::'a,Y) --> equal(domain(X),domain(Y))) & (\X Y. equal(X::'a,Y) --> equal(codomain(X),codomain(Y))) & (\X Y Z. equal(X::'a,Y) --> equal(compos(X::'a,Z),compos(Y::'a,Z))) & (\X Z Y. equal(X::'a,Y) --> equal(compos(Z::'a,X),compos(Z::'a,Y))) & (\A B C. equal(A::'a,B) --> equal(f1(A::'a,C),f1(B::'a,C))) & (\D F' E. equal(D::'a,E) --> equal(f1(F'::'a,D),f1(F'::'a,E)))" (*4007 inferences so far. Searching to depth 9. 13 secs*) lemma CAT001_3: "EQU001_0_ax equal & CAT003_0_ax f1 compos codomain domain equal there_exists equivalent & CAT003_0_eq f1 compos codomain domain equivalent there_exists equal & (there_exists(compos(a::'a,b))) & (\Y X Z. equal(compos(compos(a::'a,b),X),Y) & equal(compos(compos(a::'a,b),Z),Y) --> equal(X::'a,Z)) & (there_exists(compos(b::'a,h))) & (equal(compos(b::'a,h),compos(b::'a,g))) & (~equal(h::'a,g)) --> False" by meson (*245 inferences so far. Searching to depth 7. 1.0 secs*) lemma CAT003_3: "EQU001_0_ax equal & CAT003_0_ax f1 compos codomain domain equal there_exists equivalent & CAT003_0_eq f1 compos codomain domain equivalent there_exists equal & (there_exists(compos(a::'a,b))) & (\Y X Z. equal(compos(X::'a,compos(a::'a,b)),Y) & equal(compos(Z::'a,compos(a::'a,b)),Y) --> equal(X::'a,Z)) & (there_exists(h)) & (equal(compos(h::'a,a),compos(g::'a,a))) & (~equal(g::'a,h)) --> False" by meson abbreviation "CAT001_0_ax equal codomain domain identity_map compos product defined \ (\X Y. defined(X::'a,Y) --> product(X::'a,Y,compos(X::'a,Y))) & (\Z X Y. 
product(X::'a,Y,Z) --> defined(X::'a,Y)) & (\X Xy Y Z. product(X::'a,Y,Xy) & defined(Xy::'a,Z) --> defined(Y::'a,Z)) & (\Y Xy Z X Yz. product(X::'a,Y,Xy) & product(Y::'a,Z,Yz) & defined(Xy::'a,Z) --> defined(X::'a,Yz)) & (\Xy Y Z X Yz Xyz. product(X::'a,Y,Xy) & product(Xy::'a,Z,Xyz) & product(Y::'a,Z,Yz) --> product(X::'a,Yz,Xyz)) & (\Z Yz X Y. product(Y::'a,Z,Yz) & defined(X::'a,Yz) --> defined(X::'a,Y)) & (\Y X Yz Xy Z. product(Y::'a,Z,Yz) & product(X::'a,Y,Xy) & defined(X::'a,Yz) --> defined(Xy::'a,Z)) & (\Yz X Y Xy Z Xyz. product(Y::'a,Z,Yz) & product(X::'a,Yz,Xyz) & product(X::'a,Y,Xy) --> product(Xy::'a,Z,Xyz)) & (\Y X Z. defined(X::'a,Y) & defined(Y::'a,Z) & identity_map(Y) --> defined(X::'a,Z)) & (\X. identity_map(domain(X))) & (\X. identity_map(codomain(X))) & (\X. defined(X::'a,domain(X))) & (\X. defined(codomain(X),X)) & (\X. product(X::'a,domain(X),X)) & (\X. product(codomain(X),X,X)) & (\X Y. defined(X::'a,Y) & identity_map(X) --> product(X::'a,Y,Y)) & (\Y X. defined(X::'a,Y) & identity_map(Y) --> product(X::'a,Y,X)) & (\X Y Z W. product(X::'a,Y,Z) & product(X::'a,Y,W) --> equal(Z::'a,W))" abbreviation "CAT001_0_eq compos defined identity_map codomain domain product equal \ (\X Y Z W. equal(X::'a,Y) & product(X::'a,Z,W) --> product(Y::'a,Z,W)) & (\X Z Y W. equal(X::'a,Y) & product(Z::'a,X,W) --> product(Z::'a,Y,W)) & (\X Z W Y. equal(X::'a,Y) & product(Z::'a,W,X) --> product(Z::'a,W,Y)) & (\X Y. equal(X::'a,Y) --> equal(domain(X),domain(Y))) & (\X Y. equal(X::'a,Y) --> equal(codomain(X),codomain(Y))) & (\X Y. equal(X::'a,Y) & identity_map(X) --> identity_map(Y)) & (\X Y Z. equal(X::'a,Y) & defined(X::'a,Z) --> defined(Y::'a,Z)) & (\X Z Y. equal(X::'a,Y) & defined(Z::'a,X) --> defined(Z::'a,Y)) & (\X Z Y. equal(X::'a,Y) --> equal(compos(Z::'a,X),compos(Z::'a,Y))) & (\X Y Z. equal(X::'a,Y) --> equal(compos(X::'a,Z),compos(Y::'a,Z)))" (*54288 inferences so far. Searching to depth 14. 
118.0 secs*) lemma CAT005_1: "EQU001_0_ax equal & CAT001_0_ax equal codomain domain identity_map compos product defined & CAT001_0_eq compos defined identity_map codomain domain product equal & (defined(a::'a,d)) & (identity_map(d)) & (~equal(domain(a),d)) --> False" by meson (*1728 inferences so far. Searching to depth 10. 5.8 secs*) lemma CAT007_1: "EQU001_0_ax equal & CAT001_0_ax equal codomain domain identity_map compos product defined & CAT001_0_eq compos defined identity_map codomain domain product equal & (equal(domain(a),codomain(b))) & (~defined(a::'a,b)) --> False" by meson (*82895 inferences so far. Searching to depth 13. 355 secs*) lemma CAT018_1: "EQU001_0_ax equal & CAT001_0_ax equal codomain domain identity_map compos product defined & CAT001_0_eq compos defined identity_map codomain domain product equal & (defined(a::'a,b)) & (defined(b::'a,c)) & (~defined(a::'a,compos(b::'a,c))) --> False" by meson (*1118 inferences so far. Searching to depth 8. 2.3 secs*) lemma COL001_2: "EQU001_0_ax equal & (\X Y Z. equal(apply(apply(apply(s::'a,X),Y),Z),apply(apply(X::'a,Z),apply(Y::'a,Z)))) & (\Y X. equal(apply(apply(k::'a,X),Y),X)) & (\X Y Z. equal(apply(apply(apply(b::'a,X),Y),Z),apply(X::'a,apply(Y::'a,Z)))) & (\X. equal(apply(i::'a,X),X)) & (\A B C. equal(A::'a,B) --> equal(apply(A::'a,C),apply(B::'a,C))) & (\D F' E. equal(D::'a,E) --> equal(apply(F'::'a,D),apply(F'::'a,E))) & (\X. equal(apply(apply(apply(s::'a,apply(b::'a,X)),i),apply(apply(s::'a,apply(b::'a,X)),i)),apply(x::'a,apply(apply(apply(s::'a,apply(b::'a,X)),i),apply(apply(s::'a,apply(b::'a,X)),i))))) & (\Y. ~equal(Y::'a,apply(combinator::'a,Y))) --> False" by meson (*500 inferences so far. Searching to depth 8. 0.9 secs*) lemma COL023_1: "EQU001_0_ax equal & (\X Y Z. equal(apply(apply(apply(b::'a,X),Y),Z),apply(X::'a,apply(Y::'a,Z)))) & (\X Y Z. equal(apply(apply(apply(n::'a,X),Y),Z),apply(apply(apply(X::'a,Z),Y),Z))) & (\A B C. equal(A::'a,B) --> equal(apply(A::'a,C),apply(B::'a,C))) & (\D F' E. 
equal(D::'a,E) --> equal(apply(F'::'a,D),apply(F'::'a,E))) & (\Y. ~equal(Y::'a,apply(combinator::'a,Y))) --> False" by meson (*3018 inferences so far. Searching to depth 10. 4.3 secs*) lemma COL032_1: "EQU001_0_ax equal & (\X. equal(apply(m::'a,X),apply(X::'a,X))) & (\Y X Z. equal(apply(apply(apply(q::'a,X),Y),Z),apply(Y::'a,apply(X::'a,Z)))) & (\A B C. equal(A::'a,B) --> equal(apply(A::'a,C),apply(B::'a,C))) & (\D F' E. equal(D::'a,E) --> equal(apply(F'::'a,D),apply(F'::'a,E))) & (\G H. equal(G::'a,H) --> equal(f(G),f(H))) & (\Y. ~equal(apply(Y::'a,f(Y)),apply(f(Y),apply(Y::'a,f(Y))))) --> False" by meson (*381878 inferences so far. Searching to depth 13. 670.4 secs*) lemma COL052_2: "EQU001_0_ax equal & (\X Y W. equal(response(compos(X::'a,Y),W),response(X::'a,response(Y::'a,W)))) & (\X Y. agreeable(X) --> equal(response(X::'a,common_bird(Y)),response(Y::'a,common_bird(Y)))) & (\Z X. equal(response(X::'a,Z),response(compatible(X),Z)) --> agreeable(X)) & (\A B. equal(A::'a,B) --> equal(common_bird(A),common_bird(B))) & (\C D. equal(C::'a,D) --> equal(compatible(C),compatible(D))) & (\Q R. equal(Q::'a,R) & agreeable(Q) --> agreeable(R)) & (\A B C. equal(A::'a,B) --> equal(compos(A::'a,C),compos(B::'a,C))) & (\D F' E. equal(D::'a,E) --> equal(compos(F'::'a,D),compos(F'::'a,E))) & (\G H I'. equal(G::'a,H) --> equal(response(G::'a,I'),response(H::'a,I'))) & (\J L K'. equal(J::'a,K') --> equal(response(L::'a,J),response(L::'a,K'))) & (agreeable(c)) & (~agreeable(a)) & (equal(c::'a,compos(a::'a,b))) --> False" by meson (*13201 inferences so far. Searching to depth 11. 31.9 secs*) lemma COL075_2: "EQU001_0_ax equal & (\Y X. equal(apply(apply(k::'a,X),Y),X)) & (\X Y Z. equal(apply(apply(apply(abstraction::'a,X),Y),Z),apply(apply(X::'a,apply(k::'a,Z)),apply(Y::'a,Z)))) & (\D E F'. equal(D::'a,E) --> equal(apply(D::'a,F'),apply(E::'a,F'))) & (\G I' H. equal(G::'a,H) --> equal(apply(I'::'a,G),apply(I'::'a,H))) & (\A B. equal(A::'a,B) --> equal(b(A),b(B))) & (\C D. 
equal(C::'a,D) --> equal(c(C),c(D))) & (\Y. ~equal(apply(apply(Y::'a,b(Y)),c(Y)),apply(b(Y),b(Y)))) --> False" by meson (*33 inferences so far. Searching to depth 7. 0.1 secs*) lemma COM001_1: "(\Goal_state Start_state. follows(Goal_state::'a,Start_state) --> succeeds(Goal_state::'a,Start_state)) & (\Goal_state Intermediate_state Start_state. succeeds(Goal_state::'a,Intermediate_state) & succeeds(Intermediate_state::'a,Start_state) --> succeeds(Goal_state::'a,Start_state)) & (\Start_state Label Goal_state. has(Start_state::'a,goto(Label)) & labels(Label::'a,Goal_state) --> succeeds(Goal_state::'a,Start_state)) & (\Start_state Condition Goal_state. has(Start_state::'a,ifthen(Condition::'a,Goal_state)) --> succeeds(Goal_state::'a,Start_state)) & (labels(loop::'a,p3)) & (has(p3::'a,ifthen(equal(register_j::'a,n),p4))) & (has(p4::'a,goto(out))) & (follows(p5::'a,p4)) & (follows(p8::'a,p3)) & (has(p8::'a,goto(loop))) & (~succeeds(p3::'a,p3)) --> False" by meson (*533 inferences so far. Searching to depth 13. 0.3 secs*) lemma COM002_1: "(\Goal_state Start_state. follows(Goal_state::'a,Start_state) --> succeeds(Goal_state::'a,Start_state)) & (\Goal_state Intermediate_state Start_state. succeeds(Goal_state::'a,Intermediate_state) & succeeds(Intermediate_state::'a,Start_state) --> succeeds(Goal_state::'a,Start_state)) & (\Start_state Label Goal_state. has(Start_state::'a,goto(Label)) & labels(Label::'a,Goal_state) --> succeeds(Goal_state::'a,Start_state)) & (\Start_state Condition Goal_state. 
has(Start_state::'a,ifthen(Condition::'a,Goal_state)) --> succeeds(Goal_state::'a,Start_state)) & (has(p1::'a,assign(register_j::'a,num0))) & (follows(p2::'a,p1)) & (has(p2::'a,assign(register_k::'a,num1))) & (labels(loop::'a,p3)) & (follows(p3::'a,p2)) & (has(p3::'a,ifthen(equal(register_j::'a,n),p4))) & (has(p4::'a,goto(out))) & (follows(p5::'a,p4)) & (follows(p6::'a,p3)) & (has(p6::'a,assign(register_k::'a,mtimes(num2::'a,register_k)))) & (follows(p7::'a,p6)) & (has(p7::'a,assign(register_j::'a,mplus(register_j::'a,num1)))) & (follows(p8::'a,p7)) & (has(p8::'a,goto(loop))) & (~succeeds(p3::'a,p3)) --> False" by meson (*4821 inferences so far. Searching to depth 14. 1.3 secs*) lemma COM002_2: "(\Goal_state Start_state. ~(fails(Goal_state::'a,Start_state) & follows(Goal_state::'a,Start_state))) & (\Goal_state Intermediate_state Start_state. fails(Goal_state::'a,Start_state) --> fails(Goal_state::'a,Intermediate_state) | fails(Intermediate_state::'a,Start_state)) & (\Start_state Label Goal_state. ~(fails(Goal_state::'a,Start_state) & has(Start_state::'a,goto(Label)) & labels(Label::'a,Goal_state))) & (\Start_state Condition Goal_state. ~(fails(Goal_state::'a,Start_state) & has(Start_state::'a,ifthen(Condition::'a,Goal_state)))) & (has(p1::'a,assign(register_j::'a,num0))) & (follows(p2::'a,p1)) & (has(p2::'a,assign(register_k::'a,num1))) & (labels(loop::'a,p3)) & (follows(p3::'a,p2)) & (has(p3::'a,ifthen(equal(register_j::'a,n),p4))) & (has(p4::'a,goto(out))) & (follows(p5::'a,p4)) & (follows(p6::'a,p3)) & (has(p6::'a,assign(register_k::'a,mtimes(num2::'a,register_k)))) & (follows(p7::'a,p6)) & (has(p7::'a,assign(register_j::'a,mplus(register_j::'a,num1)))) & (follows(p8::'a,p7)) & (has(p8::'a,goto(loop))) & (fails(p3::'a,p3)) --> False" by meson (*98 inferences so far. Searching to depth 10. 1.1 secs*) lemma COM003_2: "(\X Y Z. program_decides(X) & program(Y) --> decides(X::'a,Y,Z)) & (\X. program_decides(X) | program(f2(X))) & (\X. 
decides(X::'a,f2(X),f1(X)) --> program_decides(X)) & (\X. program_program_decides(X) --> program(X)) & (\X. program_program_decides(X) --> program_decides(X)) & (\X. program(X) & program_decides(X) --> program_program_decides(X)) & (\X. algorithm_program_decides(X) --> algorithm(X)) & (\X. algorithm_program_decides(X) --> program_decides(X)) & (\X. algorithm(X) & program_decides(X) --> algorithm_program_decides(X)) & (\Y X. program_halts2(X::'a,Y) --> program(X)) & (\X Y. program_halts2(X::'a,Y) --> halts2(X::'a,Y)) & (\X Y. program(X) & halts2(X::'a,Y) --> program_halts2(X::'a,Y)) & (\W X Y Z. halts3_outputs(X::'a,Y,Z,W) --> halts3(X::'a,Y,Z)) & (\Y Z X W. halts3_outputs(X::'a,Y,Z,W) --> outputs(X::'a,W)) & (\Y Z X W. halts3(X::'a,Y,Z) & outputs(X::'a,W) --> halts3_outputs(X::'a,Y,Z,W)) & (\Y X. program_not_halts2(X::'a,Y) --> program(X)) & (\X Y. ~(program_not_halts2(X::'a,Y) & halts2(X::'a,Y))) & (\X Y. program(X) --> program_not_halts2(X::'a,Y) | halts2(X::'a,Y)) & (\W X Y. halts2_outputs(X::'a,Y,W) --> halts2(X::'a,Y)) & (\Y X W. halts2_outputs(X::'a,Y,W) --> outputs(X::'a,W)) & (\Y X W. halts2(X::'a,Y) & outputs(X::'a,W) --> halts2_outputs(X::'a,Y,W)) & (\X W Y Z. program_halts2_halts3_outputs(X::'a,Y,Z,W) --> program_halts2(Y::'a,Z)) & (\X Y Z W. program_halts2_halts3_outputs(X::'a,Y,Z,W) --> halts3_outputs(X::'a,Y,Z,W)) & (\X Y Z W. program_halts2(Y::'a,Z) & halts3_outputs(X::'a,Y,Z,W) --> program_halts2_halts3_outputs(X::'a,Y,Z,W)) & (\X W Y Z. program_not_halts2_halts3_outputs(X::'a,Y,Z,W) --> program_not_halts2(Y::'a,Z)) & (\X Y Z W. program_not_halts2_halts3_outputs(X::'a,Y,Z,W) --> halts3_outputs(X::'a,Y,Z,W)) & (\X Y Z W. program_not_halts2(Y::'a,Z) & halts3_outputs(X::'a,Y,Z,W) --> program_not_halts2_halts3_outputs(X::'a,Y,Z,W)) & (\X W Y. program_halts2_halts2_outputs(X::'a,Y,W) --> program_halts2(Y::'a,Y)) & (\X Y W. program_halts2_halts2_outputs(X::'a,Y,W) --> halts2_outputs(X::'a,Y,W)) & (\X Y W. 
program_halts2(Y::'a,Y) & halts2_outputs(X::'a,Y,W) --> program_halts2_halts2_outputs(X::'a,Y,W)) & (\X W Y. program_not_halts2_halts2_outputs(X::'a,Y,W) --> program_not_halts2(Y::'a,Y)) & (\X Y W. program_not_halts2_halts2_outputs(X::'a,Y,W) --> halts2_outputs(X::'a,Y,W)) & (\X Y W. program_not_halts2(Y::'a,Y) & halts2_outputs(X::'a,Y,W) --> program_not_halts2_halts2_outputs(X::'a,Y,W)) & (\X. algorithm_program_decides(X) --> program_program_decides(c1)) & (\W Y Z. program_program_decides(W) --> program_halts2_halts3_outputs(W::'a,Y,Z,good)) & (\W Y Z. program_program_decides(W) --> program_not_halts2_halts3_outputs(W::'a,Y,Z,bad)) & (\W. program(W) & program_halts2_halts3_outputs(W::'a,f3(W),f3(W),good) & program_not_halts2_halts3_outputs(W::'a,f3(W),f3(W),bad) --> program(c2)) & (\W Y. program(W) & program_halts2_halts3_outputs(W::'a,f3(W),f3(W),good) & program_not_halts2_halts3_outputs(W::'a,f3(W),f3(W),bad) --> program_halts2_halts2_outputs(c2::'a,Y,good)) & (\W Y. program(W) & program_halts2_halts3_outputs(W::'a,f3(W),f3(W),good) & program_not_halts2_halts3_outputs(W::'a,f3(W),f3(W),bad) --> program_not_halts2_halts2_outputs(c2::'a,Y,bad)) & (\V. program(V) & program_halts2_halts2_outputs(V::'a,f4(V),good) & program_not_halts2_halts2_outputs(V::'a,f4(V),bad) --> program(c3)) & (\V Y. program(V) & program_halts2_halts2_outputs(V::'a,f4(V),good) & program_not_halts2_halts2_outputs(V::'a,f4(V),bad) & program_halts2(Y::'a,Y) --> halts2(c3::'a,Y)) & (\V Y. program(V) & program_halts2_halts2_outputs(V::'a,f4(V),good) & program_not_halts2_halts2_outputs(V::'a,f4(V),bad) --> program_not_halts2_halts2_outputs(c3::'a,Y,bad)) & (algorithm_program_decides(c4)) --> False" by meson (*2100398 inferences so far. Searching to depth 12. 1256s (21 mins) on griffon*) lemma COM004_1: "EQU001_0_ax equal & (\C D P Q X Y. 
failure_node(X::'a,or(C::'a,P)) & failure_node(Y::'a,or(D::'a,Q)) & contradictory(P::'a,Q) & siblings(X::'a,Y) --> failure_node(parent_of(X::'a,Y),or(C::'a,D))) & (\X. contradictory(negate(X),X)) & (\X. contradictory(X::'a,negate(X))) & (\X. siblings(left_child_of(X),right_child_of(X))) & (\D E. equal(D::'a,E) --> equal(left_child_of(D),left_child_of(E))) & (\F' G. equal(F'::'a,G) --> equal(negate(F'),negate(G))) & (\H I' J. equal(H::'a,I') --> equal(or(H::'a,J),or(I'::'a,J))) & (\K' M L. equal(K'::'a,L) --> equal(or(M::'a,K'),or(M::'a,L))) & (\N O' P. equal(N::'a,O') --> equal(parent_of(N::'a,P),parent_of(O'::'a,P))) & (\Q S' R. equal(Q::'a,R) --> equal(parent_of(S'::'a,Q),parent_of(S'::'a,R))) & (\T' U. equal(T'::'a,U) --> equal(right_child_of(T'),right_child_of(U))) & (\V W X. equal(V::'a,W) & contradictory(V::'a,X) --> contradictory(W::'a,X)) & (\Y A1 Z. equal(Y::'a,Z) & contradictory(A1::'a,Y) --> contradictory(A1::'a,Z)) & (\B1 C1 D1. equal(B1::'a,C1) & failure_node(B1::'a,D1) --> failure_node(C1::'a,D1)) & (\E1 G1 F1. equal(E1::'a,F1) & failure_node(G1::'a,E1) --> failure_node(G1::'a,F1)) & (\H1 I1 J1. equal(H1::'a,I1) & siblings(H1::'a,J1) --> siblings(I1::'a,J1)) & (\K1 M1 L1. equal(K1::'a,L1) & siblings(M1::'a,K1) --> siblings(M1::'a,L1)) & (failure_node(n_left::'a,or(EMPTY::'a,atom))) & (failure_node(n_right::'a,or(EMPTY::'a,negate(atom)))) & (equal(n_left::'a,left_child_of(n))) & (equal(n_right::'a,right_child_of(n))) & (\Z. ~failure_node(Z::'a,or(EMPTY::'a,EMPTY))) --> False" oops abbreviation "GEO001_0_ax continuous lower_dimension_point_3 lower_dimension_point_2 lower_dimension_point_1 extension euclid2 euclid1 outer_pasch equidistant equal between \ (\X Y. between(X::'a,Y,X) --> equal(X::'a,Y)) & (\V X Y Z. between(X::'a,Y,V) & between(Y::'a,Z,V) --> between(X::'a,Y,Z)) & (\Y X V Z. between(X::'a,Y,Z) & between(X::'a,Y,V) --> equal(X::'a,Y) | between(X::'a,Z,V) | between(X::'a,V,Z)) & (\Y X. equidistant(X::'a,Y,Y,X)) & (\Z X Y. 
equidistant(X::'a,Y,Z,Z) --> equal(X::'a,Y)) & (\X Y Z V V2 W. equidistant(X::'a,Y,Z,V) & equidistant(X::'a,Y,V2,W) --> equidistant(Z::'a,V,V2,W)) & (\W X Z V Y. between(X::'a,W,V) & between(Y::'a,V,Z) --> between(X::'a,outer_pasch(W::'a,X,Y,Z,V),Y)) & (\W X Y Z V. between(X::'a,W,V) & between(Y::'a,V,Z) --> between(Z::'a,W,outer_pasch(W::'a,X,Y,Z,V))) & (\W X Y Z V. between(X::'a,V,W) & between(Y::'a,V,Z) --> equal(X::'a,V) | between(X::'a,Z,euclid1(W::'a,X,Y,Z,V))) & (\W X Y Z V. between(X::'a,V,W) & between(Y::'a,V,Z) --> equal(X::'a,V) | between(X::'a,Y,euclid2(W::'a,X,Y,Z,V))) & (\W X Y Z V. between(X::'a,V,W) & between(Y::'a,V,Z) --> equal(X::'a,V) | between(euclid1(W::'a,X,Y,Z,V),W,euclid2(W::'a,X,Y,Z,V))) & (\X1 Y1 X Y Z V Z1 V1. equidistant(X::'a,Y,X1,Y1) & equidistant(Y::'a,Z,Y1,Z1) & equidistant(X::'a,V,X1,V1) & equidistant(Y::'a,V,Y1,V1) & between(X::'a,Y,Z) & between(X1::'a,Y1,Z1) --> equal(X::'a,Y) | equidistant(Z::'a,V,Z1,V1)) & (\X Y W V. between(X::'a,Y,extension(X::'a,Y,W,V))) & (\X Y W V. equidistant(Y::'a,extension(X::'a,Y,W,V),W,V)) & (~between(lower_dimension_point_1::'a,lower_dimension_point_2,lower_dimension_point_3)) & (~between(lower_dimension_point_2::'a,lower_dimension_point_3,lower_dimension_point_1)) & (~between(lower_dimension_point_3::'a,lower_dimension_point_1,lower_dimension_point_2)) & (\Z X Y W V. equidistant(X::'a,W,X,V) & equidistant(Y::'a,W,Y,V) & equidistant(Z::'a,W,Z,V) --> between(X::'a,Y,Z) | between(Y::'a,Z,X) | between(Z::'a,X,Y) | equal(W::'a,V)) & (\X Y Z X1 Z1 V. equidistant(V::'a,X,V,X1) & equidistant(V::'a,Z,V,Z1) & between(V::'a,X,Z) & between(X::'a,Y,Z) --> equidistant(V::'a,Y,Z,continuous(X::'a,Y,Z,X1,Z1,V))) & (\X Y Z X1 V Z1. equidistant(V::'a,X,V,X1) & equidistant(V::'a,Z,V,Z1) & between(V::'a,X,Z) & between(X::'a,Y,Z) --> between(X1::'a,continuous(X::'a,Y,Z,X1,Z1,V),Z1))" abbreviation "GEO001_0_eq continuous extension euclid2 euclid1 outer_pasch equidistant between equal \ (\X Y W Z. 
equal(X::'a,Y) & between(X::'a,W,Z) --> between(Y::'a,W,Z)) & (\X W Y Z. equal(X::'a,Y) & between(W::'a,X,Z) --> between(W::'a,Y,Z)) & (\X W Z Y. equal(X::'a,Y) & between(W::'a,Z,X) --> between(W::'a,Z,Y)) & (\X Y V W Z. equal(X::'a,Y) & equidistant(X::'a,V,W,Z) --> equidistant(Y::'a,V,W,Z)) & (\X V Y W Z. equal(X::'a,Y) & equidistant(V::'a,X,W,Z) --> equidistant(V::'a,Y,W,Z)) & (\X V W Y Z. equal(X::'a,Y) & equidistant(V::'a,W,X,Z) --> equidistant(V::'a,W,Y,Z)) & (\X V W Z Y. equal(X::'a,Y) & equidistant(V::'a,W,Z,X) --> equidistant(V::'a,W,Z,Y)) & (\X Y V1 V2 V3 V4. equal(X::'a,Y) --> equal(outer_pasch(X::'a,V1,V2,V3,V4),outer_pasch(Y::'a,V1,V2,V3,V4))) & (\X V1 Y V2 V3 V4. equal(X::'a,Y) --> equal(outer_pasch(V1::'a,X,V2,V3,V4),outer_pasch(V1::'a,Y,V2,V3,V4))) & (\X V1 V2 Y V3 V4. equal(X::'a,Y) --> equal(outer_pasch(V1::'a,V2,X,V3,V4),outer_pasch(V1::'a,V2,Y,V3,V4))) & (\X V1 V2 V3 Y V4. equal(X::'a,Y) --> equal(outer_pasch(V1::'a,V2,V3,X,V4),outer_pasch(V1::'a,V2,V3,Y,V4))) & (\X V1 V2 V3 V4 Y. equal(X::'a,Y) --> equal(outer_pasch(V1::'a,V2,V3,V4,X),outer_pasch(V1::'a,V2,V3,V4,Y))) & (\A B C D E F'. equal(A::'a,B) --> equal(euclid1(A::'a,C,D,E,F'),euclid1(B::'a,C,D,E,F'))) & (\G I' H J K' L. equal(G::'a,H) --> equal(euclid1(I'::'a,G,J,K',L),euclid1(I'::'a,H,J,K',L))) & (\M O' P N Q R. equal(M::'a,N) --> equal(euclid1(O'::'a,P,M,Q,R),euclid1(O'::'a,P,N,Q,R))) & (\S' U V W T' X. equal(S'::'a,T') --> equal(euclid1(U::'a,V,W,S',X),euclid1(U::'a,V,W,T',X))) & (\Y A1 B1 C1 D1 Z. equal(Y::'a,Z) --> equal(euclid1(A1::'a,B1,C1,D1,Y),euclid1(A1::'a,B1,C1,D1,Z))) & (\E1 F1 G1 H1 I1 J1. equal(E1::'a,F1) --> equal(euclid2(E1::'a,G1,H1,I1,J1),euclid2(F1::'a,G1,H1,I1,J1))) & (\K1 M1 L1 N1 O1 P1. equal(K1::'a,L1) --> equal(euclid2(M1::'a,K1,N1,O1,P1),euclid2(M1::'a,L1,N1,O1,P1))) & (\Q1 S1 T1 R1 U1 V1. equal(Q1::'a,R1) --> equal(euclid2(S1::'a,T1,Q1,U1,V1),euclid2(S1::'a,T1,R1,U1,V1))) & (\W1 Y1 Z1 A2 X1 B2. 
equal(W1::'a,X1) --> equal(euclid2(Y1::'a,Z1,A2,W1,B2),euclid2(Y1::'a,Z1,A2,X1,B2))) & (\C2 E2 F2 G2 H2 D2. equal(C2::'a,D2) --> equal(euclid2(E2::'a,F2,G2,H2,C2),euclid2(E2::'a,F2,G2,H2,D2))) & (\X Y V1 V2 V3. equal(X::'a,Y) --> equal(extension(X::'a,V1,V2,V3),extension(Y::'a,V1,V2,V3))) & (\X V1 Y V2 V3. equal(X::'a,Y) --> equal(extension(V1::'a,X,V2,V3),extension(V1::'a,Y,V2,V3))) & (\X V1 V2 Y V3. equal(X::'a,Y) --> equal(extension(V1::'a,V2,X,V3),extension(V1::'a,V2,Y,V3))) & (\X V1 V2 V3 Y. equal(X::'a,Y) --> equal(extension(V1::'a,V2,V3,X),extension(V1::'a,V2,V3,Y))) & (\X Y V1 V2 V3 V4 V5. equal(X::'a,Y) --> equal(continuous(X::'a,V1,V2,V3,V4,V5),continuous(Y::'a,V1,V2,V3,V4,V5))) & (\X V1 Y V2 V3 V4 V5. equal(X::'a,Y) --> equal(continuous(V1::'a,X,V2,V3,V4,V5),continuous(V1::'a,Y,V2,V3,V4,V5))) & (\X V1 V2 Y V3 V4 V5. equal(X::'a,Y) --> equal(continuous(V1::'a,V2,X,V3,V4,V5),continuous(V1::'a,V2,Y,V3,V4,V5))) & (\X V1 V2 V3 Y V4 V5. equal(X::'a,Y) --> equal(continuous(V1::'a,V2,V3,X,V4,V5),continuous(V1::'a,V2,V3,Y,V4,V5))) & (\X V1 V2 V3 V4 Y V5. equal(X::'a,Y) --> equal(continuous(V1::'a,V2,V3,V4,X,V5),continuous(V1::'a,V2,V3,V4,Y,V5))) & (\X V1 V2 V3 V4 V5 Y. equal(X::'a,Y) --> equal(continuous(V1::'a,V2,V3,V4,V5,X),continuous(V1::'a,V2,V3,V4,V5,Y)))" (*179 inferences so far. Searching to depth 7. 3.9 secs*) lemma GEO003_1: "EQU001_0_ax equal & GEO001_0_ax continuous lower_dimension_point_3 lower_dimension_point_2 lower_dimension_point_1 extension euclid2 euclid1 outer_pasch equidistant equal between & GEO001_0_eq continuous extension euclid2 euclid1 outer_pasch equidistant between equal & (~between(a::'a,b,b)) --> False" by meson abbreviation "GEO002_ax_eq continuous euclid2 euclid1 lower_dimension_point_3 lower_dimension_point_2 lower_dimension_point_1 inner_pasch extension between equal equidistant \ (\Y X. equidistant(X::'a,Y,Y,X)) & (\X Y Z V V2 W. equidistant(X::'a,Y,Z,V) & equidistant(X::'a,Y,V2,W) --> equidistant(Z::'a,V,V2,W)) & (\Z X Y. 
equidistant(X::'a,Y,Z,Z) --> equal(X::'a,Y)) & (\X Y W V. between(X::'a,Y,extension(X::'a,Y,W,V))) & (\X Y W V. equidistant(Y::'a,extension(X::'a,Y,W,V),W,V)) & (\X1 Y1 X Y Z V Z1 V1. equidistant(X::'a,Y,X1,Y1) & equidistant(Y::'a,Z,Y1,Z1) & equidistant(X::'a,V,X1,V1) & equidistant(Y::'a,V,Y1,V1) & between(X::'a,Y,Z) & between(X1::'a,Y1,Z1) --> equal(X::'a,Y) | equidistant(Z::'a,V,Z1,V1)) & (\X Y. between(X::'a,Y,X) --> equal(X::'a,Y)) & (\U V W X Y. between(U::'a,V,W) & between(Y::'a,X,W) --> between(V::'a,inner_pasch(U::'a,V,W,X,Y),Y)) & (\V W X Y U. between(U::'a,V,W) & between(Y::'a,X,W) --> between(X::'a,inner_pasch(U::'a,V,W,X,Y),U)) & (~between(lower_dimension_point_1::'a,lower_dimension_point_2,lower_dimension_point_3)) & (~between(lower_dimension_point_2::'a,lower_dimension_point_3,lower_dimension_point_1)) & (~between(lower_dimension_point_3::'a,lower_dimension_point_1,lower_dimension_point_2)) & (\Z X Y W V. equidistant(X::'a,W,X,V) & equidistant(Y::'a,W,Y,V) & equidistant(Z::'a,W,Z,V) --> between(X::'a,Y,Z) | between(Y::'a,Z,X) | between(Z::'a,X,Y) | equal(W::'a,V)) & (\U V W X Y. between(U::'a,W,Y) & between(V::'a,W,X) --> equal(U::'a,W) | between(U::'a,V,euclid1(U::'a,V,W,X,Y))) & (\U V W X Y. between(U::'a,W,Y) & between(V::'a,W,X) --> equal(U::'a,W) | between(U::'a,X,euclid2(U::'a,V,W,X,Y))) & (\U V W X Y. between(U::'a,W,Y) & between(V::'a,W,X) --> equal(U::'a,W) | between(euclid1(U::'a,V,W,X,Y),Y,euclid2(U::'a,V,W,X,Y))) & (\U V V1 W X X1. equidistant(U::'a,V,U,V1) & equidistant(U::'a,X,U,X1) & between(U::'a,V,X) & between(V::'a,W,X) --> between(V1::'a,continuous(U::'a,V,V1,W,X,X1),X1)) & (\U V V1 W X X1. equidistant(U::'a,V,U,V1) & equidistant(U::'a,X,U,X1) & between(U::'a,V,X) & between(V::'a,W,X) --> equidistant(U::'a,W,U,continuous(U::'a,V,V1,W,X,X1))) & (\X Y W Z. equal(X::'a,Y) & between(X::'a,W,Z) --> between(Y::'a,W,Z)) & (\X W Y Z. equal(X::'a,Y) & between(W::'a,X,Z) --> between(W::'a,Y,Z)) & (\X W Z Y. 
equal(X::'a,Y) & between(W::'a,Z,X) --> between(W::'a,Z,Y)) & (\X Y V W Z. equal(X::'a,Y) & equidistant(X::'a,V,W,Z) --> equidistant(Y::'a,V,W,Z)) & (\X V Y W Z. equal(X::'a,Y) & equidistant(V::'a,X,W,Z) --> equidistant(V::'a,Y,W,Z)) & (\X V W Y Z. equal(X::'a,Y) & equidistant(V::'a,W,X,Z) --> equidistant(V::'a,W,Y,Z)) & (\X V W Z Y. equal(X::'a,Y) & equidistant(V::'a,W,Z,X) --> equidistant(V::'a,W,Z,Y)) & (\X Y V1 V2 V3 V4. equal(X::'a,Y) --> equal(inner_pasch(X::'a,V1,V2,V3,V4),inner_pasch(Y::'a,V1,V2,V3,V4))) & (\X V1 Y V2 V3 V4. equal(X::'a,Y) --> equal(inner_pasch(V1::'a,X,V2,V3,V4),inner_pasch(V1::'a,Y,V2,V3,V4))) & (\X V1 V2 Y V3 V4. equal(X::'a,Y) --> equal(inner_pasch(V1::'a,V2,X,V3,V4),inner_pasch(V1::'a,V2,Y,V3,V4))) & (\X V1 V2 V3 Y V4. equal(X::'a,Y) --> equal(inner_pasch(V1::'a,V2,V3,X,V4),inner_pasch(V1::'a,V2,V3,Y,V4))) & (\X V1 V2 V3 V4 Y. equal(X::'a,Y) --> equal(inner_pasch(V1::'a,V2,V3,V4,X),inner_pasch(V1::'a,V2,V3,V4,Y))) & (\A B C D E F'. equal(A::'a,B) --> equal(euclid1(A::'a,C,D,E,F'),euclid1(B::'a,C,D,E,F'))) & (\G I' H J K' L. equal(G::'a,H) --> equal(euclid1(I'::'a,G,J,K',L),euclid1(I'::'a,H,J,K',L))) & (\M O' P N Q R. equal(M::'a,N) --> equal(euclid1(O'::'a,P,M,Q,R),euclid1(O'::'a,P,N,Q,R))) & (\S' U V W T' X. equal(S'::'a,T') --> equal(euclid1(U::'a,V,W,S',X),euclid1(U::'a,V,W,T',X))) & (\Y A1 B1 C1 D1 Z. equal(Y::'a,Z) --> equal(euclid1(A1::'a,B1,C1,D1,Y),euclid1(A1::'a,B1,C1,D1,Z))) & (\E1 F1 G1 H1 I1 J1. equal(E1::'a,F1) --> equal(euclid2(E1::'a,G1,H1,I1,J1),euclid2(F1::'a,G1,H1,I1,J1))) & (\K1 M1 L1 N1 O1 P1. equal(K1::'a,L1) --> equal(euclid2(M1::'a,K1,N1,O1,P1),euclid2(M1::'a,L1,N1,O1,P1))) & (\Q1 S1 T1 R1 U1 V1. equal(Q1::'a,R1) --> equal(euclid2(S1::'a,T1,Q1,U1,V1),euclid2(S1::'a,T1,R1,U1,V1))) & (\W1 Y1 Z1 A2 X1 B2. equal(W1::'a,X1) --> equal(euclid2(Y1::'a,Z1,A2,W1,B2),euclid2(Y1::'a,Z1,A2,X1,B2))) & (\C2 E2 F2 G2 H2 D2. equal(C2::'a,D2) --> equal(euclid2(E2::'a,F2,G2,H2,C2),euclid2(E2::'a,F2,G2,H2,D2))) & (\X Y V1 V2 V3. 
equal(X::'a,Y) --> equal(extension(X::'a,V1,V2,V3),extension(Y::'a,V1,V2,V3))) & (\X V1 Y V2 V3. equal(X::'a,Y) --> equal(extension(V1::'a,X,V2,V3),extension(V1::'a,Y,V2,V3))) & (\X V1 V2 Y V3. equal(X::'a,Y) --> equal(extension(V1::'a,V2,X,V3),extension(V1::'a,V2,Y,V3))) & (\X V1 V2 V3 Y. equal(X::'a,Y) --> equal(extension(V1::'a,V2,V3,X),extension(V1::'a,V2,V3,Y))) & (\X Y V1 V2 V3 V4 V5. equal(X::'a,Y) --> equal(continuous(X::'a,V1,V2,V3,V4,V5),continuous(Y::'a,V1,V2,V3,V4,V5))) & (\X V1 Y V2 V3 V4 V5. equal(X::'a,Y) --> equal(continuous(V1::'a,X,V2,V3,V4,V5),continuous(V1::'a,Y,V2,V3,V4,V5))) & (\X V1 V2 Y V3 V4 V5. equal(X::'a,Y) --> equal(continuous(V1::'a,V2,X,V3,V4,V5),continuous(V1::'a,V2,Y,V3,V4,V5))) & (\X V1 V2 V3 Y V4 V5. equal(X::'a,Y) --> equal(continuous(V1::'a,V2,V3,X,V4,V5),continuous(V1::'a,V2,V3,Y,V4,V5))) & (\X V1 V2 V3 V4 Y V5. equal(X::'a,Y) --> equal(continuous(V1::'a,V2,V3,V4,X,V5),continuous(V1::'a,V2,V3,V4,Y,V5))) & (\X V1 V2 V3 V4 V5 Y. equal(X::'a,Y) --> equal(continuous(V1::'a,V2,V3,V4,V5,X),continuous(V1::'a,V2,V3,V4,V5,Y)))" (*4272 inferences so far. Searching to depth 10. 29.4 secs*) lemma GEO017_2: "EQU001_0_ax equal & GEO002_ax_eq continuous euclid2 euclid1 lower_dimension_point_3 lower_dimension_point_2 lower_dimension_point_1 inner_pasch extension between equal equidistant & (equidistant(u::'a,v,w,x)) & (~equidistant(u::'a,v,x,w)) --> False" by meson (*382903 inferences so far. Searching to depth 9. 1022s (17 mins) on griffon*) lemma GEO027_3: "EQU001_0_ax equal & GEO002_ax_eq continuous euclid2 euclid1 lower_dimension_point_3 lower_dimension_point_2 lower_dimension_point_1 inner_pasch extension between equal equidistant & (\U V. equal(reflection(U::'a,V),extension(U::'a,V,U,V))) & (\X Y Z. equal(X::'a,Y) --> equal(reflection(X::'a,Z),reflection(Y::'a,Z))) & (\A1 C1 B1. equal(A1::'a,B1) --> equal(reflection(C1::'a,A1),reflection(C1::'a,B1))) & (\U V. equidistant(U::'a,V,U,V)) & (\W X U V. 
equidistant(U::'a,V,W,X) --> equidistant(W::'a,X,U,V)) & (\V U W X. equidistant(U::'a,V,W,X) --> equidistant(V::'a,U,W,X)) & (\U V X W. equidistant(U::'a,V,W,X) --> equidistant(U::'a,V,X,W)) & (\V U X W. equidistant(U::'a,V,W,X) --> equidistant(V::'a,U,X,W)) & (\W X V U. equidistant(U::'a,V,W,X) --> equidistant(W::'a,X,V,U)) & (\X W U V. equidistant(U::'a,V,W,X) --> equidistant(X::'a,W,U,V)) & (\X W V U. equidistant(U::'a,V,W,X) --> equidistant(X::'a,W,V,U)) & (\W X U V Y Z. equidistant(U::'a,V,W,X) & equidistant(W::'a,X,Y,Z) --> equidistant(U::'a,V,Y,Z)) & (\U V W. equal(V::'a,extension(U::'a,V,W,W))) & (\W X U V Y. equal(Y::'a,extension(U::'a,V,W,X)) --> between(U::'a,V,Y)) & (\U V. between(U::'a,V,reflection(U::'a,V))) & (\U V. equidistant(V::'a,reflection(U::'a,V),U,V)) & (\U V. equal(U::'a,V) --> equal(V::'a,reflection(U::'a,V))) & (\U. equal(U::'a,reflection(U::'a,U))) & (\U V. equal(V::'a,reflection(U::'a,V)) --> equal(U::'a,V)) & (\U V. equidistant(U::'a,U,V,V)) & (\V V1 U W U1 W1. equidistant(U::'a,V,U1,V1) & equidistant(V::'a,W,V1,W1) & between(U::'a,V,W) & between(U1::'a,V1,W1) --> equidistant(U::'a,W,U1,W1)) & (\U V W X. between(U::'a,V,W) & between(U::'a,V,X) & equidistant(V::'a,W,V,X) --> equal(U::'a,V) | equal(W::'a,X)) & (between(u::'a,v,w)) & (~equal(u::'a,v)) & (~equal(w::'a,extension(u::'a,v,v,w))) --> False" oops (*313884 inferences so far. Searching to depth 10. 887 secs: 15 mins.*) lemma GEO058_2: "EQU001_0_ax equal & GEO002_ax_eq continuous euclid2 euclid1 lower_dimension_point_3 lower_dimension_point_2 lower_dimension_point_1 inner_pasch extension between equal equidistant & (\U V. equal(reflection(U::'a,V),extension(U::'a,V,U,V))) & (\X Y Z. equal(X::'a,Y) --> equal(reflection(X::'a,Z),reflection(Y::'a,Z))) & (\A1 C1 B1. equal(A1::'a,B1) --> equal(reflection(C1::'a,A1),reflection(C1::'a,B1))) & (equal(v::'a,reflection(u::'a,v))) & (~equal(u::'a,v)) --> False" oops (*0 inferences so far. Searching to depth 0. 
0.2 secs*) lemma GEO079_1: "(\U V W X Y Z. right_angle(U::'a,V,W) & right_angle(X::'a,Y,Z) --> eq(U::'a,V,W,X,Y,Z)) & (\U V W X Y Z. CONGRUENT(U::'a,V,W,X,Y,Z) --> eq(U::'a,V,W,X,Y,Z)) & (\V W U X. trapezoid(U::'a,V,W,X) --> parallel(V::'a,W,U,X)) & (\U V X Y. parallel(U::'a,V,X,Y) --> eq(X::'a,V,U,V,X,Y)) & (trapezoid(a::'a,b,c,d)) & (~eq(a::'a,c,b,c,a,d)) --> False" by meson abbreviation "GRP003_0_ax equal multiply INVERSE identity product \ (\X. product(identity::'a,X,X)) & (\X. product(X::'a,identity,X)) & (\X. product(INVERSE(X),X,identity)) & (\X. product(X::'a,INVERSE(X),identity)) & (\X Y. product(X::'a,Y,multiply(X::'a,Y))) & (\X Y Z W. product(X::'a,Y,Z) & product(X::'a,Y,W) --> equal(Z::'a,W)) & (\Y U Z X V W. product(X::'a,Y,U) & product(Y::'a,Z,V) & product(U::'a,Z,W) --> product(X::'a,V,W)) & (\Y X V U Z W. product(X::'a,Y,U) & product(Y::'a,Z,V) & product(X::'a,V,W) --> product(U::'a,Z,W))" abbreviation "GRP003_0_eq product multiply INVERSE equal \ (\X Y. equal(X::'a,Y) --> equal(INVERSE(X),INVERSE(Y))) & (\X Y W. equal(X::'a,Y) --> equal(multiply(X::'a,W),multiply(Y::'a,W))) & (\X W Y. equal(X::'a,Y) --> equal(multiply(W::'a,X),multiply(W::'a,Y))) & (\X Y W Z. equal(X::'a,Y) & product(X::'a,W,Z) --> product(Y::'a,W,Z)) & (\X W Y Z. equal(X::'a,Y) & product(W::'a,X,Z) --> product(W::'a,Y,Z)) & (\X W Z Y. equal(X::'a,Y) & product(W::'a,Z,X) --> product(W::'a,Z,Y))" (*2032008 inferences so far. Searching to depth 16. 658s (11 mins) on griffon*) lemma GRP001_1: "EQU001_0_ax equal & GRP003_0_ax equal multiply INVERSE identity product & GRP003_0_eq product multiply INVERSE equal & (\X. product(X::'a,X,identity)) & (product(a::'a,b,c)) & (~product(b::'a,a,c)) --> False" oops (*2386 inferences so far. Searching to depth 11. 8.7 secs*) lemma GRP008_1: "EQU001_0_ax equal & GRP003_0_ax equal multiply INVERSE identity product & GRP003_0_eq product multiply INVERSE equal & (\A B. equal(A::'a,B) --> equal(h(A),h(B))) & (\C D. 
equal(C::'a,D) --> equal(j(C),j(D))) & (\A B. equal(A::'a,B) & q(A) --> q(B)) & (\B A C. q(A) & product(A::'a,B,C) --> product(B::'a,A,C)) & (\A. product(j(A),A,h(A)) | product(A::'a,j(A),h(A)) | q(A)) & (\A. product(j(A),A,h(A)) & product(A::'a,j(A),h(A)) --> q(A)) & (~q(identity)) --> False" by meson (*8625 inferences so far. Searching to depth 11. 20 secs*) lemma GRP013_1: "EQU001_0_ax equal & GRP003_0_ax equal multiply INVERSE identity product & GRP003_0_eq product multiply INVERSE equal & (\A. product(A::'a,A,identity)) & (product(a::'a,b,c)) & (product(INVERSE(a),INVERSE(b),d)) & (\A C B. product(INVERSE(A),INVERSE(B),C) --> product(A::'a,C,B)) & (~product(c::'a,d,identity)) --> False" by meson (*2448 inferences so far. Searching to depth 10. 7.2 secs*) lemma GRP037_3: "EQU001_0_ax equal & GRP003_0_ax equal multiply INVERSE identity product & GRP003_0_eq product multiply INVERSE equal & (\A B C. subgroup_member(A) & subgroup_member(B) & product(A::'a,INVERSE(B),C) --> subgroup_member(C)) & (\A B. equal(A::'a,B) & subgroup_member(A) --> subgroup_member(B)) & (\A. subgroup_member(A) --> product(Gidentity::'a,A,A)) & (\A. subgroup_member(A) --> product(A::'a,Gidentity,A)) & (\A. subgroup_member(A) --> product(A::'a,Ginverse(A),Gidentity)) & (\A. subgroup_member(A) --> product(Ginverse(A),A,Gidentity)) & (\A. subgroup_member(A) --> subgroup_member(Ginverse(A))) & (\A B. equal(A::'a,B) --> equal(Ginverse(A),Ginverse(B))) & (\A C D B. product(A::'a,B,C) & product(A::'a,D,C) --> equal(D::'a,B)) & (\B C D A. product(A::'a,B,C) & product(D::'a,B,C) --> equal(D::'a,A)) & (subgroup_member(a)) & (subgroup_member(Gidentity)) & (~equal(INVERSE(a),Ginverse(a))) --> False" by meson (*163 inferences so far. Searching to depth 11. 0.3 secs*) lemma GRP031_2: "(\X Y. product(X::'a,Y,multiply(X::'a,Y))) & (\X Y Z W. product(X::'a,Y,Z) & product(X::'a,Y,W) --> equal(Z::'a,W)) & (\Y U Z X V W. 
product(X::'a,Y,U) & product(Y::'a,Z,V) & product(U::'a,Z,W) --> product(X::'a,V,W)) & (\Y X V U Z W. product(X::'a,Y,U) & product(Y::'a,Z,V) & product(X::'a,V,W) --> product(U::'a,Z,W)) & (\A. product(A::'a,INVERSE(A),identity)) & (\A. product(A::'a,identity,A)) & (\A. ~product(A::'a,a,identity)) --> False" by meson (*47 inferences so far. Searching to depth 11. 0.2 secs*) lemma GRP034_4: "(\X Y. product(X::'a,Y,multiply(X::'a,Y))) & (\X. product(identity::'a,X,X)) & (\X. product(X::'a,identity,X)) & (\X. product(X::'a,INVERSE(X),identity)) & (\Y U Z X V W. product(X::'a,Y,U) & product(Y::'a,Z,V) & product(U::'a,Z,W) --> product(X::'a,V,W)) & (\Y X V U Z W. product(X::'a,Y,U) & product(Y::'a,Z,V) & product(X::'a,V,W) --> product(U::'a,Z,W)) & (\B A C. subgroup_member(A) & subgroup_member(B) & product(B::'a,INVERSE(A),C) --> subgroup_member(C)) & (subgroup_member(a)) & (~subgroup_member(INVERSE(a))) --> False" by meson (*3287 inferences so far. Searching to depth 14. 3.5 secs*) lemma GRP047_2: "(\X. product(identity::'a,X,X)) & (\X. product(INVERSE(X),X,identity)) & (\X Y. product(X::'a,Y,multiply(X::'a,Y))) & (\X Y Z W. product(X::'a,Y,Z) & product(X::'a,Y,W) --> equal(Z::'a,W)) & (\Y U Z X V W. product(X::'a,Y,U) & product(Y::'a,Z,V) & product(U::'a,Z,W) --> product(X::'a,V,W)) & (\Y X V U Z W. product(X::'a,Y,U) & product(Y::'a,Z,V) & product(X::'a,V,W) --> product(U::'a,Z,W)) & (\X W Z Y. equal(X::'a,Y) & product(W::'a,Z,X) --> product(W::'a,Z,Y)) & (equal(a::'a,b)) & (~equal(multiply(c::'a,a),multiply(c::'a,b))) --> False" by meson (*25559 inferences so far. Searching to depth 19. 16.9 secs*) lemma GRP130_1_002: "(group_element(e_1)) & (group_element(e_2)) & (~equal(e_1::'a,e_2)) & (~equal(e_2::'a,e_1)) & (\X Y. group_element(X) & group_element(Y) --> product(X::'a,Y,e_1) | product(X::'a,Y,e_2)) & (\X Y W Z. product(X::'a,Y,W) & product(X::'a,Y,Z) --> equal(W::'a,Z)) & (\X Y W Z. product(X::'a,W,Y) & product(X::'a,Z,Y) --> equal(W::'a,Z)) & (\Y X W Z. 
product(W::'a,Y,X) & product(Z::'a,Y,X) --> equal(W::'a,Z)) & (\Z1 Z2 Y X. product(X::'a,Y,Z1) & product(X::'a,Z1,Z2) --> product(Z2::'a,Y,X)) --> False" by meson abbreviation "GRP004_0_ax INVERSE identity multiply equal \ (\X. equal(multiply(identity::'a,X),X)) & (\X. equal(multiply(INVERSE(X),X),identity)) & (\X Y Z. equal(multiply(multiply(X::'a,Y),Z),multiply(X::'a,multiply(Y::'a,Z)))) & (\A B. equal(A::'a,B) --> equal(INVERSE(A),INVERSE(B))) & (\C D E. equal(C::'a,D) --> equal(multiply(C::'a,E),multiply(D::'a,E))) & (\F' H G. equal(F'::'a,G) --> equal(multiply(H::'a,F'),multiply(H::'a,G)))" abbreviation "GRP004_2_ax multiply least_upper_bound greatest_lower_bound equal \ (\Y X. equal(greatest_lower_bound(X::'a,Y),greatest_lower_bound(Y::'a,X))) & (\Y X. equal(least_upper_bound(X::'a,Y),least_upper_bound(Y::'a,X))) & (\X Y Z. equal(greatest_lower_bound(X::'a,greatest_lower_bound(Y::'a,Z)),greatest_lower_bound(greatest_lower_bound(X::'a,Y),Z))) & (\X Y Z. equal(least_upper_bound(X::'a,least_upper_bound(Y::'a,Z)),least_upper_bound(least_upper_bound(X::'a,Y),Z))) & (\X. equal(least_upper_bound(X::'a,X),X)) & (\X. equal(greatest_lower_bound(X::'a,X),X)) & (\Y X. equal(least_upper_bound(X::'a,greatest_lower_bound(X::'a,Y)),X)) & (\Y X. equal(greatest_lower_bound(X::'a,least_upper_bound(X::'a,Y)),X)) & (\Y X Z. equal(multiply(X::'a,least_upper_bound(Y::'a,Z)),least_upper_bound(multiply(X::'a,Y),multiply(X::'a,Z)))) & (\Y X Z. equal(multiply(X::'a,greatest_lower_bound(Y::'a,Z)),greatest_lower_bound(multiply(X::'a,Y),multiply(X::'a,Z)))) & (\Y Z X. equal(multiply(least_upper_bound(Y::'a,Z),X),least_upper_bound(multiply(Y::'a,X),multiply(Z::'a,X)))) & (\Y Z X. equal(multiply(greatest_lower_bound(Y::'a,Z),X),greatest_lower_bound(multiply(Y::'a,X),multiply(Z::'a,X)))) & (\A B C. equal(A::'a,B) --> equal(greatest_lower_bound(A::'a,C),greatest_lower_bound(B::'a,C))) & (\A C B. equal(A::'a,B) --> equal(greatest_lower_bound(C::'a,A),greatest_lower_bound(C::'a,B))) & (\A B C. 
equal(A::'a,B) --> equal(least_upper_bound(A::'a,C),least_upper_bound(B::'a,C))) & (\A C B. equal(A::'a,B) --> equal(least_upper_bound(C::'a,A),least_upper_bound(C::'a,B))) & (\A B C. equal(A::'a,B) --> equal(multiply(A::'a,C),multiply(B::'a,C))) & (\A C B. equal(A::'a,B) --> equal(multiply(C::'a,A),multiply(C::'a,B)))" (*3468 inferences so far. Searching to depth 10. 9.1 secs*) lemma GRP156_1: "EQU001_0_ax equal & GRP004_0_ax INVERSE identity multiply equal & GRP004_2_ax multiply least_upper_bound greatest_lower_bound equal & (equal(least_upper_bound(a::'a,b),b)) & (~equal(greatest_lower_bound(multiply(a::'a,c),multiply(b::'a,c)),multiply(a::'a,c))) --> False" by meson (*4394 inferences so far. Searching to depth 10. 8.2 secs*) lemma GRP168_1: "EQU001_0_ax equal & GRP004_0_ax INVERSE identity multiply equal & GRP004_2_ax multiply least_upper_bound greatest_lower_bound equal & (equal(least_upper_bound(a::'a,b),b)) & (~equal(least_upper_bound(multiply(INVERSE(c),multiply(a::'a,c)),multiply(INVERSE(c),multiply(b::'a,c))),multiply(INVERSE(c),multiply(b::'a,c)))) --> False" by meson abbreviation "HEN002_0_ax identity Zero Divide equal mless_equal \ (\X Y. mless_equal(X::'a,Y) --> equal(Divide(X::'a,Y),Zero)) & (\X Y. equal(Divide(X::'a,Y),Zero) --> mless_equal(X::'a,Y)) & (\Y X. mless_equal(Divide(X::'a,Y),X)) & (\X Y Z. mless_equal(Divide(Divide(X::'a,Z),Divide(Y::'a,Z)),Divide(Divide(X::'a,Y),Z))) & (\X. mless_equal(Zero::'a,X)) & (\X Y. mless_equal(X::'a,Y) & mless_equal(Y::'a,X) --> equal(X::'a,Y)) & (\X. mless_equal(X::'a,identity))" abbreviation "HEN002_0_eq mless_equal Divide equal \ (\A B C. equal(A::'a,B) --> equal(Divide(A::'a,C),Divide(B::'a,C))) & (\D F' E. equal(D::'a,E) --> equal(Divide(F'::'a,D),Divide(F'::'a,E))) & (\G H I'. equal(G::'a,H) & mless_equal(G::'a,I') --> mless_equal(H::'a,I')) & (\J L K'. equal(J::'a,K') & mless_equal(L::'a,J) --> mless_equal(L::'a,K'))" (*250258 inferences so far. Searching to depth 16. 
406.2 secs*) lemma HEN003_3: "EQU001_0_ax equal & HEN002_0_ax identity Zero Divide equal mless_equal & HEN002_0_eq mless_equal Divide equal & (~equal(Divide(a::'a,a),Zero)) --> False" by meson (*202177 inferences so far. Searching to depth 14. 451 secs*) lemma HEN007_2: "EQU001_0_ax equal & (\X Y. mless_equal(X::'a,Y) --> quotient(X::'a,Y,Zero)) & (\X Y. quotient(X::'a,Y,Zero) --> mless_equal(X::'a,Y)) & (\Y Z X. quotient(X::'a,Y,Z) --> mless_equal(Z::'a,X)) & (\Y X V3 V2 V1 Z V4 V5. quotient(X::'a,Y,V1) & quotient(Y::'a,Z,V2) & quotient(X::'a,Z,V3) & quotient(V3::'a,V2,V4) & quotient(V1::'a,Z,V5) --> mless_equal(V4::'a,V5)) & (\X. mless_equal(Zero::'a,X)) & (\X Y. mless_equal(X::'a,Y) & mless_equal(Y::'a,X) --> equal(X::'a,Y)) & (\X. mless_equal(X::'a,identity)) & (\X Y. quotient(X::'a,Y,Divide(X::'a,Y))) & (\X Y Z W. quotient(X::'a,Y,Z) & quotient(X::'a,Y,W) --> equal(Z::'a,W)) & (\X Y W Z. equal(X::'a,Y) & quotient(X::'a,W,Z) --> quotient(Y::'a,W,Z)) & (\X W Y Z. equal(X::'a,Y) & quotient(W::'a,X,Z) --> quotient(W::'a,Y,Z)) & (\X W Z Y. equal(X::'a,Y) & quotient(W::'a,Z,X) --> quotient(W::'a,Z,Y)) & (\X Z Y. equal(X::'a,Y) & mless_equal(Z::'a,X) --> mless_equal(Z::'a,Y)) & (\X Y Z. equal(X::'a,Y) & mless_equal(X::'a,Z) --> mless_equal(Y::'a,Z)) & (\X Y W. equal(X::'a,Y) --> equal(Divide(X::'a,W),Divide(Y::'a,W))) & (\X W Y. equal(X::'a,Y) --> equal(Divide(W::'a,X),Divide(W::'a,Y))) & (\X. quotient(X::'a,identity,Zero)) & (\X. quotient(Zero::'a,X,Zero)) & (\X. quotient(X::'a,X,Zero)) & (\X. quotient(X::'a,Zero,X)) & (\Y X Z. mless_equal(X::'a,Y) & mless_equal(Y::'a,Z) --> mless_equal(X::'a,Z)) & (\W1 X Z W2 Y. quotient(X::'a,Y,W1) & mless_equal(W1::'a,Z) & quotient(X::'a,Z,W2) --> mless_equal(W2::'a,Y)) & (mless_equal(x::'a,y)) & (quotient(z::'a,y,zQy)) & (quotient(z::'a,x,zQx)) & (~mless_equal(zQy::'a,zQx)) --> False" by meson (*60026 inferences so far. Searching to depth 12. 
42.2 secs*) lemma HEN008_4: "EQU001_0_ax equal & HEN002_0_ax identity Zero Divide equal mless_equal & HEN002_0_eq mless_equal Divide equal & (\X. equal(Divide(X::'a,identity),Zero)) & (\X. equal(Divide(Zero::'a,X),Zero)) & (\X. equal(Divide(X::'a,X),Zero)) & (equal(Divide(a::'a,Zero),a)) & (\Y X Z. mless_equal(X::'a,Y) & mless_equal(Y::'a,Z) --> mless_equal(X::'a,Z)) & (\X Z Y. mless_equal(Divide(X::'a,Y),Z) --> mless_equal(Divide(X::'a,Z),Y)) & (\Y Z X. mless_equal(X::'a,Y) --> mless_equal(Divide(Z::'a,Y),Divide(Z::'a,X))) & (mless_equal(a::'a,b)) & (~mless_equal(Divide(a::'a,c),Divide(b::'a,c))) --> False" by meson (*3160 inferences so far. Searching to depth 11. 3.5 secs*) lemma HEN009_5: "EQU001_0_ax equal & (\Y X. equal(Divide(Divide(X::'a,Y),X),Zero)) & (\X Y Z. equal(Divide(Divide(Divide(X::'a,Z),Divide(Y::'a,Z)),Divide(Divide(X::'a,Y),Z)),Zero)) & (\X. equal(Divide(Zero::'a,X),Zero)) & (\X Y. equal(Divide(X::'a,Y),Zero) & equal(Divide(Y::'a,X),Zero) --> equal(X::'a,Y)) & (\X. equal(Divide(X::'a,identity),Zero)) & (\A B C. equal(A::'a,B) --> equal(Divide(A::'a,C),Divide(B::'a,C))) & (\D F' E. equal(D::'a,E) --> equal(Divide(F'::'a,D),Divide(F'::'a,E))) & (\Y X Z. equal(Divide(X::'a,Y),Zero) & equal(Divide(Y::'a,Z),Zero) --> equal(Divide(X::'a,Z),Zero)) & (\X Z Y. equal(Divide(Divide(X::'a,Y),Z),Zero) --> equal(Divide(Divide(X::'a,Z),Y),Zero)) & (\Y Z X. equal(Divide(X::'a,Y),Zero) --> equal(Divide(Divide(Z::'a,Y),Divide(Z::'a,X)),Zero)) & (~equal(Divide(identity::'a,a),Divide(identity::'a,Divide(identity::'a,Divide(identity::'a,a))))) & (equal(Divide(identity::'a,a),b)) & (equal(Divide(identity::'a,b),c)) & (equal(Divide(identity::'a,c),d)) & (~equal(b::'a,d)) --> False" by meson (*970373 inferences so far. Searching to depth 17. 890.0 secs*) lemma HEN012_3: "EQU001_0_ax equal & HEN002_0_ax identity Zero Divide equal mless_equal & HEN002_0_eq mless_equal Divide equal & (~mless_equal(a::'a,a)) --> False" by meson (*1063 inferences so far. 
Searching to depth 20. 1.0 secs*) lemma LCL010_1: "(\X Y. is_a_theorem(equivalent(X::'a,Y)) & is_a_theorem(X) --> is_a_theorem(Y)) & (\X Z Y. is_a_theorem(equivalent(equivalent(X::'a,Y),equivalent(equivalent(X::'a,Z),equivalent(Z::'a,Y))))) & (~is_a_theorem(equivalent(equivalent(a::'a,b),equivalent(equivalent(c::'a,b),equivalent(a::'a,c))))) --> False" by meson (*2549 inferences so far. Searching to depth 12. 1.4 secs*) lemma LCL077_2: "(\X Y. is_a_theorem(implies(X,Y)) & is_a_theorem(X) --> is_a_theorem(Y)) & (\Y X. is_a_theorem(implies(X,implies(Y,X)))) & (\Y X Z. is_a_theorem(implies(implies(X,implies(Y,Z)),implies(implies(X,Y),implies(X,Z))))) & (\Y X. is_a_theorem(implies(implies(not(X),not(Y)),implies(Y,X)))) & (\X2 X1 X3. is_a_theorem(implies(X1,X2)) & is_a_theorem(implies(X2,X3)) --> is_a_theorem(implies(X1,X3))) & (~is_a_theorem(implies(not(not(a)),a))) --> False" by meson (*2036 inferences so far. Searching to depth 20. 1.5 secs*) lemma LCL082_1: "(\X Y. is_a_theorem(implies(X::'a,Y)) & is_a_theorem(X) --> is_a_theorem(Y)) & (\Y Z U X. is_a_theorem(implies(implies(implies(X::'a,Y),Z),implies(implies(Z::'a,X),implies(U::'a,X))))) & (~is_a_theorem(implies(a::'a,implies(b::'a,a)))) --> False" by meson (*1100 inferences so far. Searching to depth 13. 1.0 secs*) lemma LCL111_1: "(\X Y. is_a_theorem(implies(X,Y)) & is_a_theorem(X) --> is_a_theorem(Y)) & (\Y X. is_a_theorem(implies(X,implies(Y,X)))) & (\Y X Z. is_a_theorem(implies(implies(X,Y),implies(implies(Y,Z),implies(X,Z))))) & (\Y X. is_a_theorem(implies(implies(implies(X,Y),Y),implies(implies(Y,X),X)))) & (\Y X. is_a_theorem(implies(implies(not(X),not(Y)),implies(Y,X)))) & (~is_a_theorem(implies(implies(a,b),implies(implies(c,a),implies(c,b))))) --> False" by meson (*667 inferences so far. Searching to depth 9. 1.4 secs*) lemma LCL143_1: "(\X. equal(X,X)) & (\Y X. equal(X,Y) --> equal(Y,X)) & (\Y X Z. equal(X,Y) & equal(Y,Z) --> equal(X,Z)) & (\X. equal(implies(true,X),X)) & (\Y X Z. 
equal(implies(implies(X,Y),implies(implies(Y,Z),implies(X,Z))),true)) & (\Y X. equal(implies(implies(X,Y),Y),implies(implies(Y,X),X))) & (\Y X. equal(implies(implies(not(X),not(Y)),implies(Y,X)),true)) & (\A B C. equal(A,B) --> equal(implies(A,C),implies(B,C))) & (\D F' E. equal(D,E) --> equal(implies(F',D),implies(F',E))) & (\G H. equal(G,H) --> equal(not(G),not(H))) & (\X Y. equal(big_V(X,Y),implies(implies(X,Y),Y))) & (\X Y. equal(big_hat(X,Y),not(big_V(not(X),not(Y))))) & (\X Y. ordered(X,Y) --> equal(implies(X,Y),true)) & (\X Y. equal(implies(X,Y),true) --> ordered(X,Y)) & (\A B C. equal(A,B) --> equal(big_V(A,C),big_V(B,C))) & (\D F' E. equal(D,E) --> equal(big_V(F',D),big_V(F',E))) & (\G H I'. equal(G,H) --> equal(big_hat(G,I'),big_hat(H,I'))) & (\J L K'. equal(J,K') --> equal(big_hat(L,J),big_hat(L,K'))) & (\M N O'. equal(M,N) & ordered(M,O') --> ordered(N,O')) & (\P R Q. equal(P,Q) & ordered(R,P) --> ordered(R,Q)) & (ordered(x,y)) & (~ordered(implies(z,x),implies(z,y))) --> False" by meson (*5245 inferences so far. Searching to depth 12. 4.6 secs*) lemma LCL182_1: "(\A. axiom(or(not(or(A,A)),A))) & (\B A. axiom(or(not(A),or(B,A)))) & (\B A. axiom(or(not(or(A,B)),or(B,A)))) & (\B A C. axiom(or(not(or(A,or(B,C))),or(B,or(A,C))))) & (\A C B. axiom(or(not(or(not(A),B)),or(not(or(C,A)),or(C,B))))) & (\X. axiom(X) --> theorem(X)) & (\X Y. axiom(or(not(Y),X)) & theorem(Y) --> theorem(X)) & (\X Y Z. axiom(or(not(X),Y)) & theorem(or(not(Y),Z)) --> theorem(or(not(X),Z))) & (~theorem(or(not(or(not(p),q)),or(not(not(q)),not(p))))) --> False" by meson (*410 inferences so far. Searching to depth 10. 0.3 secs*) lemma LCL200_1: "(\A. axiom(or(not(or(A,A)),A))) & (\B A. axiom(or(not(A),or(B,A)))) & (\B A. axiom(or(not(or(A,B)),or(B,A)))) & (\B A C. axiom(or(not(or(A,or(B,C))),or(B,or(A,C))))) & (\A C B. axiom(or(not(or(not(A),B)),or(not(or(C,A)),or(C,B))))) & (\X. axiom(X) --> theorem(X)) & (\X Y. axiom(or(not(Y),X)) & theorem(Y) --> theorem(X)) & (\X Y Z. 
axiom(or(not(X),Y)) & theorem(or(not(Y),Z)) --> theorem(or(not(X),Z))) & (~theorem(or(not(not(or(p,q))),not(q)))) --> False" by meson (*5849 inferences so far. Searching to depth 12. 5.6 secs*) lemma LCL215_1: "(\A. axiom(or(not(or(A,A)),A))) & (\B A. axiom(or(not(A),or(B,A)))) & (\B A. axiom(or(not(or(A,B)),or(B,A)))) & (\B A C. axiom(or(not(or(A,or(B,C))),or(B,or(A,C))))) & (\A C B. axiom(or(not(or(not(A),B)),or(not(or(C,A)),or(C,B))))) & (\X. axiom(X) --> theorem(X)) & (\X Y. axiom(or(not(Y),X)) & theorem(Y) --> theorem(X)) & (\X Y Z. axiom(or(not(X),Y)) & theorem(or(not(Y),Z)) --> theorem(or(not(X),Z))) & (~theorem(or(not(or(not(p),q)),or(not(or(p,q)),q)))) --> False" by meson (*0 secs. Not sure that a search even starts!*) lemma LCL230_2: "(q --> p | r) & (~p) & (q) & (~r) --> False" by meson (*119585 inferences so far. Searching to depth 14. 262.4 secs*) lemma LDA003_1: "EQU001_0_ax equal & (\Y X Z. equal(f(X::'a,f(Y::'a,Z)),f(f(X::'a,Y),f(X::'a,Z)))) & (\X Y. left(X::'a,f(X::'a,Y))) & (\Y X Z. left(X::'a,Y) & left(Y::'a,Z) --> left(X::'a,Z)) & (equal(num2::'a,f(num1::'a,num1))) & (equal(num3::'a,f(num2::'a,num1))) & (equal(u::'a,f(num2::'a,num2))) & (\A B C. equal(A::'a,B) --> equal(f(A::'a,C),f(B::'a,C))) & (\D F' E. equal(D::'a,E) --> equal(f(F'::'a,D),f(F'::'a,E))) & (\G H I'. equal(G::'a,H) & left(G::'a,I') --> left(H::'a,I')) & (\J L K'. equal(J::'a,K') & left(L::'a,J) --> left(L::'a,K')) & (~left(num3::'a,u)) --> False" by meson (*2392 inferences so far. Searching to depth 12. 2.2 secs*) lemma MSC002_1: "(at(something::'a,here,now)) & (\Place Situation. hand_at(Place::'a,Situation) --> hand_at(Place::'a,let_go(Situation))) & (\Place Another_place Situation. hand_at(Place::'a,Situation) --> hand_at(Another_place::'a,go(Another_place::'a,Situation))) & (\Thing Situation. ~held(Thing::'a,let_go(Situation))) & (\Situation Thing. at(Thing::'a,here,Situation) --> red(Thing)) & (\Thing Place Situation. 
at(Thing::'a,Place,Situation) --> at(Thing::'a,Place,let_go(Situation))) & (\Thing Place Situation. at(Thing::'a,Place,Situation) --> at(Thing::'a,Place,pick_up(Situation))) & (\Thing Place Situation. at(Thing::'a,Place,Situation) --> grabbed(Thing::'a,pick_up(go(Place::'a,let_go(Situation))))) & (\Thing Situation. red(Thing) & put(Thing::'a,there,Situation) --> answer(Situation)) & (\Place Thing Another_place Situation. at(Thing::'a,Place,Situation) & grabbed(Thing::'a,Situation) --> put(Thing::'a,Another_place,go(Another_place::'a,Situation))) & (\Thing Place Another_place Situation. at(Thing::'a,Place,Situation) --> held(Thing::'a,Situation) | at(Thing::'a,Place,go(Another_place::'a,Situation))) & (\One_place Thing Place Situation. hand_at(One_place::'a,Situation) & held(Thing::'a,Situation) --> at(Thing::'a,Place,go(Place::'a,Situation))) & (\Place Thing Situation. hand_at(Place::'a,Situation) & at(Thing::'a,Place,Situation) --> held(Thing::'a,pick_up(Situation))) & (\Situation. ~answer(Situation)) --> False" by meson (*73 inferences so far. Searching to depth 10. 0.2 secs*) lemma MSC003_1: "(\Number_of_small_parts Small_part Big_part Number_of_mid_parts Mid_part. has_parts(Big_part::'a,Number_of_mid_parts,Mid_part) --> in'(object_in(Big_part::'a,Mid_part,Small_part,Number_of_mid_parts,Number_of_small_parts),Mid_part) | has_parts(Big_part::'a,mtimes(Number_of_mid_parts::'a,Number_of_small_parts),Small_part)) & (\Big_part Mid_part Number_of_mid_parts Number_of_small_parts Small_part. has_parts(Big_part::'a,Number_of_mid_parts,Mid_part) & has_parts(object_in(Big_part::'a,Mid_part,Small_part,Number_of_mid_parts,Number_of_small_parts),Number_of_small_parts,Small_part) --> has_parts(Big_part::'a,mtimes(Number_of_mid_parts::'a,Number_of_small_parts),Small_part)) & (in'(john::'a,boy)) & (\X. in'(X::'a,boy) --> in'(X::'a,human)) & (\X. in'(X::'a,hand) --> has_parts(X::'a,num5,fingers)) & (\X. in'(X::'a,human) --> has_parts(X::'a,num2,arm)) & (\X. 
in'(X::'a,arm) --> has_parts(X::'a,num1,hand)) & (~has_parts(john::'a,mtimes(num2::'a,num1),hand)) --> False" by meson (*1486 inferences so far. Searching to depth 20. 1.2 secs*) lemma MSC004_1: "(\Number_of_small_parts Small_part Big_part Number_of_mid_parts Mid_part. has_parts(Big_part::'a,Number_of_mid_parts,Mid_part) --> in'(object_in(Big_part::'a,Mid_part,Small_part,Number_of_mid_parts,Number_of_small_parts),Mid_part) | has_parts(Big_part::'a,mtimes(Number_of_mid_parts::'a,Number_of_small_parts),Small_part)) & (\Big_part Mid_part Number_of_mid_parts Number_of_small_parts Small_part. has_parts(Big_part::'a,Number_of_mid_parts,Mid_part) & has_parts(object_in(Big_part::'a,Mid_part,Small_part,Number_of_mid_parts,Number_of_small_parts),Number_of_small_parts,Small_part) --> has_parts(Big_part::'a,mtimes(Number_of_mid_parts::'a,Number_of_small_parts),Small_part)) & (in'(john::'a,boy)) & (\X. in'(X::'a,boy) --> in'(X::'a,human)) & (\X. in'(X::'a,hand) --> has_parts(X::'a,num5,fingers)) & (\X. in'(X::'a,human) --> has_parts(X::'a,num2,arm)) & (\X. in'(X::'a,arm) --> has_parts(X::'a,num1,hand)) & (~has_parts(john::'a,mtimes(mtimes(num2::'a,num1),num5),fingers)) --> False" by meson (*100 inferences so far. Searching to depth 12. 0.1 secs*) lemma MSC005_1: "(value(truth::'a,truth)) & (value(falsity::'a,falsity)) & (\X Y. value(X::'a,truth) & value(Y::'a,truth) --> value(xor(X::'a,Y),falsity)) & (\X Y. value(X::'a,truth) & value(Y::'a,falsity) --> value(xor(X::'a,Y),truth)) & (\X Y. value(X::'a,falsity) & value(Y::'a,truth) --> value(xor(X::'a,Y),truth)) & (\X Y. value(X::'a,falsity) & value(Y::'a,falsity) --> value(xor(X::'a,Y),falsity)) & (\Value. ~value(xor(xor(xor(xor(truth::'a,falsity),falsity),truth),falsity),Value)) --> False" for xor by meson (*19116 inferences so far. Searching to depth 16. 15.9 secs*) lemma MSC006_1: "(\Y X Z.
p(X::'a,Y) & p(Y::'a,Z) --> p(X::'a,Z)) & (\Y X Z. q(X::'a,Y) & q(Y::'a,Z) --> q(X::'a,Z)) & (\Y X. q(X::'a,Y) --> q(Y::'a,X)) & (\X Y. p(X::'a,Y) | q(X::'a,Y)) & (~p(a::'a,b)) & (~q(c::'a,d)) --> False" by meson (*1713 inferences so far. Searching to depth 10. 2.8 secs*) lemma NUM001_1: "(\A. equal(A::'a,A)) & (\B A C. equal(A::'a,B) & equal(B::'a,C) --> equal(A::'a,C)) & (\B A. equal(add(A::'a,B),add(B::'a,A))) & (\A B C. equal(add(A::'a,add(B::'a,C)),add(add(A::'a,B),C))) & (\B A. equal(subtract(add(A::'a,B),B),A)) & (\A B. equal(A::'a,subtract(add(A::'a,B),B))) & (\A C B. equal(add(subtract(A::'a,B),C),subtract(add(A::'a,C),B))) & (\A C B. equal(subtract(add(A::'a,B),C),add(subtract(A::'a,C),B))) & (\A C B D. equal(A::'a,B) & equal(C::'a,add(A::'a,D)) --> equal(C::'a,add(B::'a,D))) & (\A C D B. equal(A::'a,B) & equal(C::'a,add(D::'a,A)) --> equal(C::'a,add(D::'a,B))) & (\A C B D. equal(A::'a,B) & equal(C::'a,subtract(A::'a,D)) --> equal(C::'a,subtract(B::'a,D))) & (\A C D B. equal(A::'a,B) & equal(C::'a,subtract(D::'a,A)) --> equal(C::'a,subtract(D::'a,B))) & (~equal(add(add(a::'a,b),c),add(a::'a,add(b::'a,c)))) --> False" by meson abbreviation "NUM001_0_ax multiply successor num0 add equal \ (\A. equal(add(A::'a,num0),A)) & (\A B. equal(add(A::'a,successor(B)),successor(add(A::'a,B)))) & (\A. equal(multiply(A::'a,num0),num0)) & (\B A. equal(multiply(A::'a,successor(B)),add(multiply(A::'a,B),A))) & (\A B. equal(successor(A),successor(B)) --> equal(A::'a,B)) & (\A B. equal(A::'a,B) --> equal(successor(A),successor(B)))" abbreviation "NUM001_1_ax predecessor_of_1st_minus_2nd successor add equal mless \ (\A C B. mless(A::'a,B) & mless(C::'a,A) --> mless(C::'a,B)) & (\A B C. equal(add(successor(A),B),C) --> mless(B::'a,C)) & (\A B. mless(A::'a,B) --> equal(add(successor(predecessor_of_1st_minus_2nd(B::'a,A)),A),B))" abbreviation "NUM001_2_ax equal mless divides \ (\A B. divides(A::'a,B) --> mless(A::'a,B) | equal(A::'a,B)) & (\A B. 
mless(A::'a,B) --> divides(A::'a,B)) & (\A B. equal(A::'a,B) --> divides(A::'a,B))" (*20717 inferences so far. Searching to depth 11. 13.7 secs*) lemma NUM021_1: "EQU001_0_ax equal & NUM001_0_ax multiply successor num0 add equal & NUM001_1_ax predecessor_of_1st_minus_2nd successor add equal mless & NUM001_2_ax equal mless divides & (mless(b::'a,c)) & (~mless(b::'a,a)) & (divides(c::'a,a)) & (\A. ~equal(successor(A),num0)) --> False" by meson (*26320 inferences so far. Searching to depth 10. 26.4 secs*) lemma NUM024_1: "EQU001_0_ax equal & NUM001_0_ax multiply successor num0 add equal & NUM001_1_ax predecessor_of_1st_minus_2nd successor add equal mless & (\B A. equal(add(A::'a,B),add(B::'a,A))) & (\B A C. equal(add(A::'a,B),add(C::'a,B)) --> equal(A::'a,C)) & (mless(a::'a,a)) & (\A. ~equal(successor(A),num0)) --> False" by meson abbreviation "SET004_0_ax not_homomorphism2 not_homomorphism1 homomorphism compatible operation cantor diagonalise subset_relation one_to_one choice apply regular function identity_relation single_valued_class compos powerClass sum_class omega inductive successor_relation successor image' rng domain range_of INVERSE flip rot domain_of null_class restrct difference union complement intersection element_relation second first cross_product ordered_pair singleton unordered_pair equal universal_class not_subclass_element member subclass \ (\X U Y. subclass(X::'a,Y) & member(U::'a,X) --> member(U::'a,Y)) & (\X Y. member(not_subclass_element(X::'a,Y),X) | subclass(X::'a,Y)) & (\X Y. member(not_subclass_element(X::'a,Y),Y) --> subclass(X::'a,Y)) & (\X. subclass(X::'a,universal_class)) & (\X Y. equal(X::'a,Y) --> subclass(X::'a,Y)) & (\Y X. equal(X::'a,Y) --> subclass(Y::'a,X)) & (\X Y. subclass(X::'a,Y) & subclass(Y::'a,X) --> equal(X::'a,Y)) & (\X U Y. member(U::'a,unordered_pair(X::'a,Y)) --> equal(U::'a,X) | equal(U::'a,Y)) & (\X Y. member(X::'a,universal_class) --> member(X::'a,unordered_pair(X::'a,Y))) & (\X Y. 
member(Y::'a,universal_class) --> member(Y::'a,unordered_pair(X::'a,Y))) & (\X Y. member(unordered_pair(X::'a,Y),universal_class)) & (\X. equal(unordered_pair(X::'a,X),singleton(X))) & (\X Y. equal(unordered_pair(singleton(X),unordered_pair(X::'a,singleton(Y))),ordered_pair(X::'a,Y))) & (\V Y U X. member(ordered_pair(U::'a,V),cross_product(X::'a,Y)) --> member(U::'a,X)) & (\U X V Y. member(ordered_pair(U::'a,V),cross_product(X::'a,Y)) --> member(V::'a,Y)) & (\U V X Y. member(U::'a,X) & member(V::'a,Y) --> member(ordered_pair(U::'a,V),cross_product(X::'a,Y))) & (\X Y Z. member(Z::'a,cross_product(X::'a,Y)) --> equal(ordered_pair(first(Z),second(Z)),Z)) & (subclass(element_relation::'a,cross_product(universal_class::'a,universal_class))) & (\X Y. member(ordered_pair(X::'a,Y),element_relation) --> member(X::'a,Y)) & (\X Y. member(ordered_pair(X::'a,Y),cross_product(universal_class::'a,universal_class)) & member(X::'a,Y) --> member(ordered_pair(X::'a,Y),element_relation)) & (\Y Z X. member(Z::'a,intersection(X::'a,Y)) --> member(Z::'a,X)) & (\X Z Y. member(Z::'a,intersection(X::'a,Y)) --> member(Z::'a,Y)) & (\Z X Y. member(Z::'a,X) & member(Z::'a,Y) --> member(Z::'a,intersection(X::'a,Y))) & (\Z X. ~(member(Z::'a,complement(X)) & member(Z::'a,X))) & (\Z X. member(Z::'a,universal_class) --> member(Z::'a,complement(X)) | member(Z::'a,X)) & (\X Y. equal(complement(intersection(complement(X),complement(Y))),union(X::'a,Y))) & (\X Y. equal(intersection(complement(intersection(X::'a,Y)),complement(intersection(complement(X),complement(Y)))),difference(X::'a,Y))) & (\Xr X Y. equal(intersection(Xr::'a,cross_product(X::'a,Y)),restrct(Xr::'a,X,Y))) & (\Xr X Y. equal(intersection(cross_product(X::'a,Y),Xr),restrct(Xr::'a,X,Y))) & (\Z X. ~(equal(restrct(X::'a,singleton(Z),universal_class),null_class) & member(Z::'a,domain_of(X)))) & (\Z X. member(Z::'a,universal_class) --> equal(restrct(X::'a,singleton(Z),universal_class),null_class) | member(Z::'a,domain_of(X))) & (\X. 
subclass(rot(X),cross_product(cross_product(universal_class::'a,universal_class),universal_class))) & (\V W U X. member(ordered_pair(ordered_pair(U::'a,V),W),rot(X)) --> member(ordered_pair(ordered_pair(V::'a,W),U),X)) & (\U V W X. member(ordered_pair(ordered_pair(V::'a,W),U),X) & member(ordered_pair(ordered_pair(U::'a,V),W),cross_product(cross_product(universal_class::'a,universal_class),universal_class)) --> member(ordered_pair(ordered_pair(U::'a,V),W),rot(X))) & (\X. subclass(flip(X),cross_product(cross_product(universal_class::'a,universal_class),universal_class))) & (\V U W X. member(ordered_pair(ordered_pair(U::'a,V),W),flip(X)) --> member(ordered_pair(ordered_pair(V::'a,U),W),X)) & (\U V W X. member(ordered_pair(ordered_pair(V::'a,U),W),X) & member(ordered_pair(ordered_pair(U::'a,V),W),cross_product(cross_product(universal_class::'a,universal_class),universal_class)) --> member(ordered_pair(ordered_pair(U::'a,V),W),flip(X))) & (\Y. equal(domain_of(flip(cross_product(Y::'a,universal_class))),INVERSE(Y))) & (\Z. equal(domain_of(INVERSE(Z)),range_of(Z))) & (\Z X Y. equal(first(not_subclass_element(restrct(Z::'a,X,singleton(Y)),null_class)),domain(Z::'a,X,Y))) & (\Z X Y. equal(second(not_subclass_element(restrct(Z::'a,singleton(X),Y),null_class)),rng(Z::'a,X,Y))) & (\Xr X. equal(range_of(restrct(Xr::'a,X,universal_class)),image'(Xr::'a,X))) & (\X. equal(union(X::'a,singleton(X)),successor(X))) & (subclass(successor_relation::'a,cross_product(universal_class::'a,universal_class))) & (\X Y. member(ordered_pair(X::'a,Y),successor_relation) --> equal(successor(X),Y)) & (\X Y. equal(successor(X),Y) & member(ordered_pair(X::'a,Y),cross_product(universal_class::'a,universal_class)) --> member(ordered_pair(X::'a,Y),successor_relation)) & (\X. inductive(X) --> member(null_class::'a,X)) & (\X. inductive(X) --> subclass(image'(successor_relation::'a,X),X)) & (\X. 
member(null_class::'a,X) & subclass(image'(successor_relation::'a,X),X) --> inductive(X)) & (inductive(omega)) & (\Y. inductive(Y) --> subclass(omega::'a,Y)) & (member(omega::'a,universal_class)) & (\X. equal(domain_of(restrct(element_relation::'a,universal_class,X)),sum_class(X))) & (\X. member(X::'a,universal_class) --> member(sum_class(X),universal_class)) & (\X. equal(complement(image'(element_relation::'a,complement(X))),powerClass(X))) & (\U. member(U::'a,universal_class) --> member(powerClass(U),universal_class)) & (\Yr Xr. subclass(compos(Yr::'a,Xr),cross_product(universal_class::'a,universal_class))) & (\Z Yr Xr Y. member(ordered_pair(Y::'a,Z),compos(Yr::'a,Xr)) --> member(Z::'a,image'(Yr::'a,image'(Xr::'a,singleton(Y))))) & (\Y Z Yr Xr. member(Z::'a,image'(Yr::'a,image'(Xr::'a,singleton(Y)))) & member(ordered_pair(Y::'a,Z),cross_product(universal_class::'a,universal_class)) --> member(ordered_pair(Y::'a,Z),compos(Yr::'a,Xr))) & (\X. single_valued_class(X) --> subclass(compos(X::'a,INVERSE(X)),identity_relation)) & (\X. subclass(compos(X::'a,INVERSE(X)),identity_relation) --> single_valued_class(X)) & (\Xf. function(Xf) --> subclass(Xf::'a,cross_product(universal_class::'a,universal_class))) & (\Xf. function(Xf) --> subclass(compos(Xf::'a,INVERSE(Xf)),identity_relation)) & (\Xf. subclass(Xf::'a,cross_product(universal_class::'a,universal_class)) & subclass(compos(Xf::'a,INVERSE(Xf)),identity_relation) --> function(Xf)) & (\Xf X. function(Xf) & member(X::'a,universal_class) --> member(image'(Xf::'a,X),universal_class)) & (\X. equal(X::'a,null_class) | member(regular(X),X)) & (\X. equal(X::'a,null_class) | equal(intersection(X::'a,regular(X)),null_class)) & (\Xf Y. equal(sum_class(image'(Xf::'a,singleton(Y))),apply(Xf::'a,Y))) & (function(choice)) & (\Y. member(Y::'a,universal_class) --> equal(Y::'a,null_class) | member(apply(choice::'a,Y),Y)) & (\Xf. one_to_one(Xf) --> function(Xf)) & (\Xf. one_to_one(Xf) --> function(INVERSE(Xf))) & (\Xf. 
function(INVERSE(Xf)) & function(Xf) --> one_to_one(Xf)) & (equal(intersection(cross_product(universal_class::'a,universal_class),intersection(cross_product(universal_class::'a,universal_class),complement(compos(complement(element_relation),INVERSE(element_relation))))),subset_relation)) & (equal(intersection(INVERSE(subset_relation),subset_relation),identity_relation)) & (\Xr. equal(complement(domain_of(intersection(Xr::'a,identity_relation))),diagonalise(Xr))) & (\X. equal(intersection(domain_of(X),diagonalise(compos(INVERSE(element_relation),X))),cantor(X))) & (\Xf. operation(Xf) --> function(Xf)) & (\Xf. operation(Xf) --> equal(cross_product(domain_of(domain_of(Xf)),domain_of(domain_of(Xf))),domain_of(Xf))) & (\Xf. operation(Xf) --> subclass(range_of(Xf),domain_of(domain_of(Xf)))) & (\Xf. function(Xf) & equal(cross_product(domain_of(domain_of(Xf)),domain_of(domain_of(Xf))),domain_of(Xf)) & subclass(range_of(Xf),domain_of(domain_of(Xf))) --> operation(Xf)) & (\Xf1 Xf2 Xh. compatible(Xh::'a,Xf1,Xf2) --> function(Xh)) & (\Xf2 Xf1 Xh. compatible(Xh::'a,Xf1,Xf2) --> equal(domain_of(domain_of(Xf1)),domain_of(Xh))) & (\Xf1 Xh Xf2. compatible(Xh::'a,Xf1,Xf2) --> subclass(range_of(Xh),domain_of(domain_of(Xf2)))) & (\Xh Xh1 Xf1 Xf2. function(Xh) & equal(domain_of(domain_of(Xf1)),domain_of(Xh)) & subclass(range_of(Xh),domain_of(domain_of(Xf2))) --> compatible(Xh1::'a,Xf1,Xf2)) & (\Xh Xf2 Xf1. homomorphism(Xh::'a,Xf1,Xf2) --> operation(Xf1)) & (\Xh Xf1 Xf2. homomorphism(Xh::'a,Xf1,Xf2) --> operation(Xf2)) & (\Xh Xf1 Xf2. homomorphism(Xh::'a,Xf1,Xf2) --> compatible(Xh::'a,Xf1,Xf2)) & (\Xf2 Xh Xf1 X Y. homomorphism(Xh::'a,Xf1,Xf2) & member(ordered_pair(X::'a,Y),domain_of(Xf1)) --> equal(apply(Xf2::'a,ordered_pair(apply(Xh::'a,X),apply(Xh::'a,Y))),apply(Xh::'a,apply(Xf1::'a,ordered_pair(X::'a,Y))))) & (\Xh Xf1 Xf2. 
operation(Xf1) & operation(Xf2) & compatible(Xh::'a,Xf1,Xf2) --> member(ordered_pair(not_homomorphism1(Xh::'a,Xf1,Xf2),not_homomorphism2(Xh::'a,Xf1,Xf2)),domain_of(Xf1)) | homomorphism(Xh::'a,Xf1,Xf2)) & (\Xh Xf1 Xf2. operation(Xf1) & operation(Xf2) & compatible(Xh::'a,Xf1,Xf2) & equal(apply(Xf2::'a,ordered_pair(apply(Xh::'a,not_homomorphism1(Xh::'a,Xf1,Xf2)),apply(Xh::'a,not_homomorphism2(Xh::'a,Xf1,Xf2)))),apply(Xh::'a,apply(Xf1::'a,ordered_pair(not_homomorphism1(Xh::'a,Xf1,Xf2),not_homomorphism2(Xh::'a,Xf1,Xf2))))) --> homomorphism(Xh::'a,Xf1,Xf2))" abbreviation "SET004_0_eq subclass single_valued_class operation one_to_one member inductive homomorphism function compatible unordered_pair union sum_class successor singleton second rot restrct regular range_of rng powerClass ordered_pair not_subclass_element not_homomorphism2 not_homomorphism1 INVERSE intersection image' flip first domain_of domain difference diagonalise cross_product compos complement cantor apply equal \ (\D E F'. equal(D::'a,E) --> equal(apply(D::'a,F'),apply(E::'a,F'))) & (\G I' H. equal(G::'a,H) --> equal(apply(I'::'a,G),apply(I'::'a,H))) & (\J K'. equal(J::'a,K') --> equal(cantor(J),cantor(K'))) & (\L M. equal(L::'a,M) --> equal(complement(L),complement(M))) & (\N O' P. equal(N::'a,O') --> equal(compos(N::'a,P),compos(O'::'a,P))) & (\Q S' R. equal(Q::'a,R) --> equal(compos(S'::'a,Q),compos(S'::'a,R))) & (\T' U V. equal(T'::'a,U) --> equal(cross_product(T'::'a,V),cross_product(U::'a,V))) & (\W Y X. equal(W::'a,X) --> equal(cross_product(Y::'a,W),cross_product(Y::'a,X))) & (\Z A1. equal(Z::'a,A1) --> equal(diagonalise(Z),diagonalise(A1))) & (\B1 C1 D1. equal(B1::'a,C1) --> equal(difference(B1::'a,D1),difference(C1::'a,D1))) & (\E1 G1 F1. equal(E1::'a,F1) --> equal(difference(G1::'a,E1),difference(G1::'a,F1))) & (\H1 I1 J1 K1. equal(H1::'a,I1) --> equal(domain(H1::'a,J1,K1),domain(I1::'a,J1,K1))) & (\L1 N1 M1 O1. 
equal(L1::'a,M1) --> equal(domain(N1::'a,L1,O1),domain(N1::'a,M1,O1))) & (\P1 R1 S1 Q1. equal(P1::'a,Q1) --> equal(domain(R1::'a,S1,P1),domain(R1::'a,S1,Q1))) & (\T1 U1. equal(T1::'a,U1) --> equal(domain_of(T1),domain_of(U1))) & (\V1 W1. equal(V1::'a,W1) --> equal(first(V1),first(W1))) & (\X1 Y1. equal(X1::'a,Y1) --> equal(flip(X1),flip(Y1))) & (\Z1 A2 B2. equal(Z1::'a,A2) --> equal(image'(Z1::'a,B2),image'(A2::'a,B2))) & (\C2 E2 D2. equal(C2::'a,D2) --> equal(image'(E2::'a,C2),image'(E2::'a,D2))) & (\F2 G2 H2. equal(F2::'a,G2) --> equal(intersection(F2::'a,H2),intersection(G2::'a,H2))) & (\I2 K2 J2. equal(I2::'a,J2) --> equal(intersection(K2::'a,I2),intersection(K2::'a,J2))) & (\L2 M2. equal(L2::'a,M2) --> equal(INVERSE(L2),INVERSE(M2))) & (\N2 O2 P2 Q2. equal(N2::'a,O2) --> equal(not_homomorphism1(N2::'a,P2,Q2),not_homomorphism1(O2::'a,P2,Q2))) & (\R2 T2 S2 U2. equal(R2::'a,S2) --> equal(not_homomorphism1(T2::'a,R2,U2),not_homomorphism1(T2::'a,S2,U2))) & (\V2 X2 Y2 W2. equal(V2::'a,W2) --> equal(not_homomorphism1(X2::'a,Y2,V2),not_homomorphism1(X2::'a,Y2,W2))) & (\Z2 A3 B3 C3. equal(Z2::'a,A3) --> equal(not_homomorphism2(Z2::'a,B3,C3),not_homomorphism2(A3::'a,B3,C3))) & (\D3 F3 E3 G3. equal(D3::'a,E3) --> equal(not_homomorphism2(F3::'a,D3,G3),not_homomorphism2(F3::'a,E3,G3))) & (\H3 J3 K3 I3. equal(H3::'a,I3) --> equal(not_homomorphism2(J3::'a,K3,H3),not_homomorphism2(J3::'a,K3,I3))) & (\L3 M3 N3. equal(L3::'a,M3) --> equal(not_subclass_element(L3::'a,N3),not_subclass_element(M3::'a,N3))) & (\O3 Q3 P3. equal(O3::'a,P3) --> equal(not_subclass_element(Q3::'a,O3),not_subclass_element(Q3::'a,P3))) & (\R3 S3 T3. equal(R3::'a,S3) --> equal(ordered_pair(R3::'a,T3),ordered_pair(S3::'a,T3))) & (\U3 W3 V3. equal(U3::'a,V3) --> equal(ordered_pair(W3::'a,U3),ordered_pair(W3::'a,V3))) & (\X3 Y3. equal(X3::'a,Y3) --> equal(powerClass(X3),powerClass(Y3))) & (\Z3 A4 B4 C4. equal(Z3::'a,A4) --> equal(rng(Z3::'a,B4,C4),rng(A4::'a,B4,C4))) & (\D4 F4 E4 G4. 
equal(D4::'a,E4) --> equal(rng(F4::'a,D4,G4),rng(F4::'a,E4,G4))) & (\H4 J4 K4 I4. equal(H4::'a,I4) --> equal(rng(J4::'a,K4,H4),rng(J4::'a,K4,I4))) & (\L4 M4. equal(L4::'a,M4) --> equal(range_of(L4),range_of(M4))) & (\N4 O4. equal(N4::'a,O4) --> equal(regular(N4),regular(O4))) & (\P4 Q4 R4 S4. equal(P4::'a,Q4) --> equal(restrct(P4::'a,R4,S4),restrct(Q4::'a,R4,S4))) & (\T4 V4 U4 W4. equal(T4::'a,U4) --> equal(restrct(V4::'a,T4,W4),restrct(V4::'a,U4,W4))) & (\X4 Z4 A5 Y4. equal(X4::'a,Y4) --> equal(restrct(Z4::'a,A5,X4),restrct(Z4::'a,A5,Y4))) & (\B5 C5. equal(B5::'a,C5) --> equal(rot(B5),rot(C5))) & (\D5 E5. equal(D5::'a,E5) --> equal(second(D5),second(E5))) & (\F5 G5. equal(F5::'a,G5) --> equal(singleton(F5),singleton(G5))) & (\H5 I5. equal(H5::'a,I5) --> equal(successor(H5),successor(I5))) & (\J5 K5. equal(J5::'a,K5) --> equal(sum_class(J5),sum_class(K5))) & (\L5 M5 N5. equal(L5::'a,M5) --> equal(union(L5::'a,N5),union(M5::'a,N5))) & (\O5 Q5 P5. equal(O5::'a,P5) --> equal(union(Q5::'a,O5),union(Q5::'a,P5))) & (\R5 S5 T5. equal(R5::'a,S5) --> equal(unordered_pair(R5::'a,T5),unordered_pair(S5::'a,T5))) & (\U5 W5 V5. equal(U5::'a,V5) --> equal(unordered_pair(W5::'a,U5),unordered_pair(W5::'a,V5))) & (\X5 Y5 Z5 A6. equal(X5::'a,Y5) & compatible(X5::'a,Z5,A6) --> compatible(Y5::'a,Z5,A6)) & (\B6 D6 C6 E6. equal(B6::'a,C6) & compatible(D6::'a,B6,E6) --> compatible(D6::'a,C6,E6)) & (\F6 H6 I6 G6. equal(F6::'a,G6) & compatible(H6::'a,I6,F6) --> compatible(H6::'a,I6,G6)) & (\J6 K6. equal(J6::'a,K6) & function(J6) --> function(K6)) & (\L6 M6 N6 O6. equal(L6::'a,M6) & homomorphism(L6::'a,N6,O6) --> homomorphism(M6::'a,N6,O6)) & (\P6 R6 Q6 S6. equal(P6::'a,Q6) & homomorphism(R6::'a,P6,S6) --> homomorphism(R6::'a,Q6,S6)) & (\T6 V6 W6 U6. equal(T6::'a,U6) & homomorphism(V6::'a,W6,T6) --> homomorphism(V6::'a,W6,U6)) & (\X6 Y6. equal(X6::'a,Y6) & inductive(X6) --> inductive(Y6)) & (\Z6 A7 B7. equal(Z6::'a,A7) & member(Z6::'a,B7) --> member(A7::'a,B7)) & (\C7 E7 D7. 
equal(C7::'a,D7) & member(E7::'a,C7) --> member(E7::'a,D7)) & (\F7 G7. equal(F7::'a,G7) & one_to_one(F7) --> one_to_one(G7)) & (\H7 I7. equal(H7::'a,I7) & operation(H7) --> operation(I7)) & (\J7 K7. equal(J7::'a,K7) & single_valued_class(J7) --> single_valued_class(K7)) & (\L7 M7 N7. equal(L7::'a,M7) & subclass(L7::'a,N7) --> subclass(M7::'a,N7)) & (\O7 Q7 P7. equal(O7::'a,P7) & subclass(Q7::'a,O7) --> subclass(Q7::'a,P7))" abbreviation "SET004_1_ax range_of function maps apply application_function singleton_relation element_relation complement intersection single_valued3 singleton image' domain single_valued2 second single_valued1 identity_relation INVERSE not_subclass_element first domain_of domain_relation composition_function compos equal ordered_pair member universal_class cross_product compose_class subclass \ (\X. subclass(compose_class(X),cross_product(universal_class::'a,universal_class))) & (\X Y Z. member(ordered_pair(Y::'a,Z),compose_class(X)) --> equal(compos(X::'a,Y),Z)) & (\Y Z X. member(ordered_pair(Y::'a,Z),cross_product(universal_class::'a,universal_class)) & equal(compos(X::'a,Y),Z) --> member(ordered_pair(Y::'a,Z),compose_class(X))) & (subclass(composition_function::'a,cross_product(universal_class::'a,cross_product(universal_class::'a,universal_class)))) & (\X Y Z. member(ordered_pair(X::'a,ordered_pair(Y::'a,Z)),composition_function) --> equal(compos(X::'a,Y),Z)) & (\X Y. member(ordered_pair(X::'a,Y),cross_product(universal_class::'a,universal_class)) --> member(ordered_pair(X::'a,ordered_pair(Y::'a,compos(X::'a,Y))),composition_function)) & (subclass(domain_relation::'a,cross_product(universal_class::'a,universal_class))) & (\X Y. member(ordered_pair(X::'a,Y),domain_relation) --> equal(domain_of(X),Y)) & (\X. member(X::'a,universal_class) --> member(ordered_pair(X::'a,domain_of(X)),domain_relation)) & (\X. equal(first(not_subclass_element(compos(X::'a,INVERSE(X)),identity_relation)),single_valued1(X))) & (\X. 
equal(second(not_subclass_element(compos(X::'a,INVERSE(X)),identity_relation)),single_valued2(X))) & (\X. equal(domain(X::'a,image'(INVERSE(X),singleton(single_valued1(X))),single_valued2(X)),single_valued3(X))) & (equal(intersection(complement(compos(element_relation::'a,complement(identity_relation))),element_relation),singleton_relation)) & (subclass(application_function::'a,cross_product(universal_class::'a,cross_product(universal_class::'a,universal_class)))) & (\Z Y X. member(ordered_pair(X::'a,ordered_pair(Y::'a,Z)),application_function) --> member(Y::'a,domain_of(X))) & (\X Y Z. member(ordered_pair(X::'a,ordered_pair(Y::'a,Z)),application_function) --> equal(apply(X::'a,Y),Z)) & (\Z X Y. member(ordered_pair(X::'a,ordered_pair(Y::'a,Z)),cross_product(universal_class::'a,cross_product(universal_class::'a,universal_class))) & member(Y::'a,domain_of(X)) --> member(ordered_pair(X::'a,ordered_pair(Y::'a,apply(X::'a,Y))),application_function)) & (\X Y Xf. maps(Xf::'a,X,Y) --> function(Xf)) & (\Y Xf X. maps(Xf::'a,X,Y) --> equal(domain_of(Xf),X)) & (\X Xf Y. maps(Xf::'a,X,Y) --> subclass(range_of(Xf),Y)) & (\Xf Y. function(Xf) & subclass(range_of(Xf),Y) --> maps(Xf::'a,domain_of(Xf),Y))" abbreviation "SET004_1_eq maps single_valued3 single_valued2 single_valued1 compose_class equal \ (\L M. equal(L::'a,M) --> equal(compose_class(L),compose_class(M))) & (\N2 O2. equal(N2::'a,O2) --> equal(single_valued1(N2),single_valued1(O2))) & (\P2 Q2. equal(P2::'a,Q2) --> equal(single_valued2(P2),single_valued2(Q2))) & (\R2 S2. equal(R2::'a,S2) --> equal(single_valued3(R2),single_valued3(S2))) & (\X2 Y2 Z2 A3. equal(X2::'a,Y2) & maps(X2::'a,Z2,A3) --> maps(Y2::'a,Z2,A3)) & (\B3 D3 C3 E3. equal(B3::'a,C3) & maps(D3::'a,B3,E3) --> maps(D3::'a,C3,E3)) & (\F3 H3 I3 G3. 
equal(F3::'a,G3) & maps(H3::'a,I3,F3) --> maps(H3::'a,I3,G3))" abbreviation "NUM004_0_ax integer_of omega ordinal_multiply add_relation ordinal_add recursion apply range_of union_of_range_map function recursion_equation_functions rest_relation rest_of limit_ordinals kind_1_ordinals successor_relation image' universal_class sum_class element_relation ordinal_numbers section not_well_ordering ordered_pair least member well_ordering singleton domain_of segment null_class intersection asymmetric compos transitive cross_product connected identity_relation complement restrct subclass irreflexive symmetrization_of INVERSE union equal \ (\X. equal(union(X::'a,INVERSE(X)),symmetrization_of(X))) & (\X Y. irreflexive(X::'a,Y) --> subclass(restrct(X::'a,Y,Y),complement(identity_relation))) & (\X Y. subclass(restrct(X::'a,Y,Y),complement(identity_relation)) --> irreflexive(X::'a,Y)) & (\Y X. connected(X::'a,Y) --> subclass(cross_product(Y::'a,Y),union(identity_relation::'a,symmetrization_of(X)))) & (\X Y. subclass(cross_product(Y::'a,Y),union(identity_relation::'a,symmetrization_of(X))) --> connected(X::'a,Y)) & (\Xr Y. transitive(Xr::'a,Y) --> subclass(compos(restrct(Xr::'a,Y,Y),restrct(Xr::'a,Y,Y)),restrct(Xr::'a,Y,Y))) & (\Xr Y. subclass(compos(restrct(Xr::'a,Y,Y),restrct(Xr::'a,Y,Y)),restrct(Xr::'a,Y,Y)) --> transitive(Xr::'a,Y)) & (\Xr Y. asymmetric(Xr::'a,Y) --> equal(restrct(intersection(Xr::'a,INVERSE(Xr)),Y,Y),null_class)) & (\Xr Y. equal(restrct(intersection(Xr::'a,INVERSE(Xr)),Y,Y),null_class) --> asymmetric(Xr::'a,Y)) & (\Xr Y Z. equal(segment(Xr::'a,Y,Z),domain_of(restrct(Xr::'a,Y,singleton(Z))))) & (\X Y. well_ordering(X::'a,Y) --> connected(X::'a,Y)) & (\Y Xr U. well_ordering(Xr::'a,Y) & subclass(U::'a,Y) --> equal(U::'a,null_class) | member(least(Xr::'a,U),U)) & (\Y V Xr U. well_ordering(Xr::'a,Y) & subclass(U::'a,Y) & member(V::'a,U) --> member(least(Xr::'a,U),U)) & (\Y Xr U. 
well_ordering(Xr::'a,Y) & subclass(U::'a,Y) --> equal(segment(Xr::'a,U,least(Xr::'a,U)),null_class)) & (\Y V U Xr. ~(well_ordering(Xr::'a,Y) & subclass(U::'a,Y) & member(V::'a,U) & member(ordered_pair(V::'a,least(Xr::'a,U)),Xr))) & (\Xr Y. connected(Xr::'a,Y) & equal(not_well_ordering(Xr::'a,Y),null_class) --> well_ordering(Xr::'a,Y)) & (\Xr Y. connected(Xr::'a,Y) --> subclass(not_well_ordering(Xr::'a,Y),Y) | well_ordering(Xr::'a,Y)) & (\V Xr Y. member(V::'a,not_well_ordering(Xr::'a,Y)) & equal(segment(Xr::'a,not_well_ordering(Xr::'a,Y),V),null_class) & connected(Xr::'a,Y) --> well_ordering(Xr::'a,Y)) & (\Xr Y Z. section(Xr::'a,Y,Z) --> subclass(Y::'a,Z)) & (\Xr Z Y. section(Xr::'a,Y,Z) --> subclass(domain_of(restrct(Xr::'a,Z,Y)),Y)) & (\Xr Y Z. subclass(Y::'a,Z) & subclass(domain_of(restrct(Xr::'a,Z,Y)),Y) --> section(Xr::'a,Y,Z)) & (\X. member(X::'a,ordinal_numbers) --> well_ordering(element_relation::'a,X)) & (\X. member(X::'a,ordinal_numbers) --> subclass(sum_class(X),X)) & (\X. well_ordering(element_relation::'a,X) & subclass(sum_class(X),X) & member(X::'a,universal_class) --> member(X::'a,ordinal_numbers)) & (\X. well_ordering(element_relation::'a,X) & subclass(sum_class(X),X) --> member(X::'a,ordinal_numbers) | equal(X::'a,ordinal_numbers)) & (equal(union(singleton(null_class),image'(successor_relation::'a,ordinal_numbers)),kind_1_ordinals)) & (equal(intersection(complement(kind_1_ordinals),ordinal_numbers),limit_ordinals)) & (\X. subclass(rest_of(X),cross_product(universal_class::'a,universal_class))) & (\V U X. member(ordered_pair(U::'a,V),rest_of(X)) --> member(U::'a,domain_of(X))) & (\X U V. member(ordered_pair(U::'a,V),rest_of(X)) --> equal(restrct(X::'a,U,universal_class),V)) & (\U V X. member(U::'a,domain_of(X)) & equal(restrct(X::'a,U,universal_class),V) --> member(ordered_pair(U::'a,V),rest_of(X))) & (subclass(rest_relation::'a,cross_product(universal_class::'a,universal_class))) & (\X Y. 
member(ordered_pair(X::'a,Y),rest_relation) --> equal(rest_of(X),Y)) & (\X. member(X::'a,universal_class) --> member(ordered_pair(X::'a,rest_of(X)),rest_relation)) & (\X Z. member(X::'a,recursion_equation_functions(Z)) --> function(Z)) & (\Z X. member(X::'a,recursion_equation_functions(Z)) --> function(X)) & (\Z X. member(X::'a,recursion_equation_functions(Z)) --> member(domain_of(X),ordinal_numbers)) & (\Z X. member(X::'a,recursion_equation_functions(Z)) --> equal(compos(Z::'a,rest_of(X)),X)) & (\X Z. function(Z) & function(X) & member(domain_of(X),ordinal_numbers) & equal(compos(Z::'a,rest_of(X)),X) --> member(X::'a,recursion_equation_functions(Z))) & (subclass(union_of_range_map::'a,cross_product(universal_class::'a,universal_class))) & (\X Y. member(ordered_pair(X::'a,Y),union_of_range_map) --> equal(sum_class(range_of(X)),Y)) & (\X Y. member(ordered_pair(X::'a,Y),cross_product(universal_class::'a,universal_class)) & equal(sum_class(range_of(X)),Y) --> member(ordered_pair(X::'a,Y),union_of_range_map)) & (\X Y. equal(apply(recursion(X::'a,successor_relation,union_of_range_map),Y),ordinal_add(X::'a,Y))) & (\X Y. equal(recursion(null_class::'a,apply(add_relation::'a,X),union_of_range_map),ordinal_multiply(X::'a,Y))) & (\X. member(X::'a,omega) --> equal(integer_of(X),X)) & (\X. member(X::'a,omega) | equal(integer_of(X),null_class))" abbreviation "NUM004_0_eq well_ordering transitive section irreflexive connected asymmetric symmetrization_of segment rest_of recursion_equation_functions recursion ordinal_multiply ordinal_add not_well_ordering least integer_of equal \ (\D E. equal(D::'a,E) --> equal(integer_of(D),integer_of(E))) & (\F' G H. equal(F'::'a,G) --> equal(least(F'::'a,H),least(G::'a,H))) & (\I' K' J. equal(I'::'a,J) --> equal(least(K'::'a,I'),least(K'::'a,J))) & (\L M N. equal(L::'a,M) --> equal(not_well_ordering(L::'a,N),not_well_ordering(M::'a,N))) & (\O' Q P. equal(O'::'a,P) --> equal(not_well_ordering(Q::'a,O'),not_well_ordering(Q::'a,P))) & (\R S' T'. 
equal(R::'a,S') --> equal(ordinal_add(R::'a,T'),ordinal_add(S'::'a,T'))) & (\U W V. equal(U::'a,V) --> equal(ordinal_add(W::'a,U),ordinal_add(W::'a,V))) & (\X Y Z. equal(X::'a,Y) --> equal(ordinal_multiply(X::'a,Z),ordinal_multiply(Y::'a,Z))) & (\A1 C1 B1. equal(A1::'a,B1) --> equal(ordinal_multiply(C1::'a,A1),ordinal_multiply(C1::'a,B1))) & (\F1 G1 H1 I1. equal(F1::'a,G1) --> equal(recursion(F1::'a,H1,I1),recursion(G1::'a,H1,I1))) & (\J1 L1 K1 M1. equal(J1::'a,K1) --> equal(recursion(L1::'a,J1,M1),recursion(L1::'a,K1,M1))) & (\N1 P1 Q1 O1. equal(N1::'a,O1) --> equal(recursion(P1::'a,Q1,N1),recursion(P1::'a,Q1,O1))) & (\R1 S1. equal(R1::'a,S1) --> equal(recursion_equation_functions(R1),recursion_equation_functions(S1))) & (\T1 U1. equal(T1::'a,U1) --> equal(rest_of(T1),rest_of(U1))) & (\V1 W1 X1 Y1. equal(V1::'a,W1) --> equal(segment(V1::'a,X1,Y1),segment(W1::'a,X1,Y1))) & (\Z1 B2 A2 C2. equal(Z1::'a,A2) --> equal(segment(B2::'a,Z1,C2),segment(B2::'a,A2,C2))) & (\D2 F2 G2 E2. equal(D2::'a,E2) --> equal(segment(F2::'a,G2,D2),segment(F2::'a,G2,E2))) & (\H2 I2. equal(H2::'a,I2) --> equal(symmetrization_of(H2),symmetrization_of(I2))) & (\J2 K2 L2. equal(J2::'a,K2) & asymmetric(J2::'a,L2) --> asymmetric(K2::'a,L2)) & (\M2 O2 N2. equal(M2::'a,N2) & asymmetric(O2::'a,M2) --> asymmetric(O2::'a,N2)) & (\P2 Q2 R2. equal(P2::'a,Q2) & connected(P2::'a,R2) --> connected(Q2::'a,R2)) & (\S2 U2 T2. equal(S2::'a,T2) & connected(U2::'a,S2) --> connected(U2::'a,T2)) & (\V2 W2 X2. equal(V2::'a,W2) & irreflexive(V2::'a,X2) --> irreflexive(W2::'a,X2)) & (\Y2 A3 Z2. equal(Y2::'a,Z2) & irreflexive(A3::'a,Y2) --> irreflexive(A3::'a,Z2)) & (\B3 C3 D3 E3. equal(B3::'a,C3) & section(B3::'a,D3,E3) --> section(C3::'a,D3,E3)) & (\F3 H3 G3 I3. equal(F3::'a,G3) & section(H3::'a,F3,I3) --> section(H3::'a,G3,I3)) & (\J3 L3 M3 K3. equal(J3::'a,K3) & section(L3::'a,M3,J3) --> section(L3::'a,M3,K3)) & (\N3 O3 P3. equal(N3::'a,O3) & transitive(N3::'a,P3) --> transitive(O3::'a,P3)) & (\Q3 S3 R3. 
equal(Q3::'a,R3) & transitive(S3::'a,Q3) --> transitive(S3::'a,R3)) & (\T3 U3 V3. equal(T3::'a,U3) & well_ordering(T3::'a,V3) --> well_ordering(U3::'a,V3)) & (\W3 Y3 X3. equal(W3::'a,X3) & well_ordering(Y3::'a,W3) --> well_ordering(Y3::'a,X3))" (*1345 inferences so far. Searching to depth 7. 23.3 secs. BIG*) lemma NUM180_1: "EQU001_0_ax equal & SET004_0_ax not_homomorphism2 not_homomorphism1 homomorphism compatible operation cantor diagonalise subset_relation one_to_one choice apply regular function identity_relation single_valued_class compos powerClass sum_class omega inductive successor_relation successor image' rng domain range_of INVERSE flip rot domain_of null_class restrct difference union complement intersection element_relation second first cross_product ordered_pair singleton unordered_pair equal universal_class not_subclass_element member subclass & SET004_0_eq subclass single_valued_class operation one_to_one member inductive homomorphism function compatible unordered_pair union sum_class successor singleton second rot restrct regular range_of rng powerClass ordered_pair not_subclass_element not_homomorphism2 not_homomorphism1 INVERSE intersection image' flip first domain_of domain difference diagonalise cross_product compos complement cantor apply equal & SET004_1_ax range_of function maps apply application_function singleton_relation element_relation complement intersection single_valued3 singleton image' domain single_valued2 second single_valued1 identity_relation INVERSE not_subclass_element first domain_of domain_relation composition_function compos equal ordered_pair member universal_class cross_product compose_class subclass & SET004_1_eq maps single_valued3 single_valued2 single_valued1 compose_class equal & NUM004_0_ax integer_of omega ordinal_multiply add_relation ordinal_add recursion apply range_of union_of_range_map function recursion_equation_functions rest_relation rest_of limit_ordinals kind_1_ordinals successor_relation image' 
universal_class sum_class element_relation ordinal_numbers section not_well_ordering ordered_pair least member well_ordering singleton domain_of segment null_class intersection asymmetric compos transitive cross_product connected identity_relation complement restrct subclass irreflexive symmetrization_of INVERSE union equal & NUM004_0_eq well_ordering transitive section irreflexive connected asymmetric symmetrization_of segment rest_of recursion_equation_functions recursion ordinal_multiply ordinal_add not_well_ordering least integer_of equal & (~subclass(limit_ordinals::'a,ordinal_numbers)) --> False" by meson (*0 inferences so far. Searching to depth 0. 16.8 secs. BIG*) lemma NUM228_1: "EQU001_0_ax equal & SET004_0_ax not_homomorphism2 not_homomorphism1 homomorphism compatible operation cantor diagonalise subset_relation one_to_one choice apply regular function identity_relation single_valued_class compos powerClass sum_class omega inductive successor_relation successor image' rng domain range_of INVERSE flip rot domain_of null_class restrct difference union complement intersection element_relation second first cross_product ordered_pair singleton unordered_pair equal universal_class not_subclass_element member subclass & SET004_0_eq subclass single_valued_class operation one_to_one member inductive homomorphism function compatible unordered_pair union sum_class successor singleton second rot restrct regular range_of rng powerClass ordered_pair not_subclass_element not_homomorphism2 not_homomorphism1 INVERSE intersection image' flip first domain_of domain difference diagonalise cross_product compos complement cantor apply equal & SET004_1_ax range_of function maps apply application_function singleton_relation element_relation complement intersection single_valued3 singleton image' domain single_valued2 second single_valued1 identity_relation INVERSE not_subclass_element first domain_of domain_relation composition_function compos equal ordered_pair member 
universal_class cross_product compose_class subclass & SET004_1_eq maps single_valued3 single_valued2 single_valued1 compose_class equal & NUM004_0_ax integer_of omega ordinal_multiply add_relation ordinal_add recursion apply range_of union_of_range_map function recursion_equation_functions rest_relation rest_of limit_ordinals kind_1_ordinals successor_relation image' universal_class sum_class element_relation ordinal_numbers section not_well_ordering ordered_pair least member well_ordering singleton domain_of segment null_class intersection asymmetric compos transitive cross_product connected identity_relation complement restrct subclass irreflexive symmetrization_of INVERSE union equal & NUM004_0_eq well_ordering transitive section irreflexive connected asymmetric symmetrization_of segment rest_of recursion_equation_functions recursion ordinal_multiply ordinal_add not_well_ordering least integer_of equal & (~function(z)) & (~equal(recursion_equation_functions(z),null_class)) --> False" by meson (*4868 inferences so far. Searching to depth 12. 4.3 secs*) lemma PLA002_1: "(\Situation1 Situation2. warm(Situation1) | cold(Situation2)) & (\Situation. at(a::'a,Situation) --> at(b::'a,walk(b::'a,Situation))) & (\Situation. at(a::'a,Situation) --> at(b::'a,drive(b::'a,Situation))) & (\Situation. at(b::'a,Situation) --> at(a::'a,walk(a::'a,Situation))) & (\Situation. at(b::'a,Situation) --> at(a::'a,drive(a::'a,Situation))) & (\Situation. cold(Situation) & at(b::'a,Situation) --> at(c::'a,skate(c::'a,Situation))) & (\Situation. cold(Situation) & at(c::'a,Situation) --> at(b::'a,skate(b::'a,Situation))) & (\Situation. warm(Situation) & at(b::'a,Situation) --> at(d::'a,climb(d::'a,Situation))) & (\Situation. warm(Situation) & at(d::'a,Situation) --> at(b::'a,climb(b::'a,Situation))) & (\Situation. at(c::'a,Situation) --> at(d::'a,go(d::'a,Situation))) & (\Situation. at(d::'a,Situation) --> at(c::'a,go(c::'a,Situation))) & (\Situation. 
at(c::'a,Situation) --> at(e::'a,go(e::'a,Situation))) & (\Situation. at(e::'a,Situation) --> at(c::'a,go(c::'a,Situation))) & (\Situation. at(d::'a,Situation) --> at(f::'a,go(f::'a,Situation))) & (\Situation. at(f::'a,Situation) --> at(d::'a,go(d::'a,Situation))) & (at(f::'a,s0)) & (\S'. ~at(a::'a,S')) --> False" by meson abbreviation "PLA001_0_ax putdown on pickup do holding table differ clear EMPTY and' holds \ (\X Y State. holds(X::'a,State) & holds(Y::'a,State) --> holds(and'(X::'a,Y),State)) & (\State X. holds(EMPTY::'a,State) & holds(clear(X),State) & differ(X::'a,table) --> holds(holding(X),do(pickup(X),State))) & (\Y X State. holds(on(X::'a,Y),State) & holds(clear(X),State) & holds(EMPTY::'a,State) --> holds(clear(Y),do(pickup(X),State))) & (\Y State X Z. holds(on(X::'a,Y),State) & differ(X::'a,Z) --> holds(on(X::'a,Y),do(pickup(Z),State))) & (\State X Z. holds(clear(X),State) & differ(X::'a,Z) --> holds(clear(X),do(pickup(Z),State))) & (\X Y State. holds(holding(X),State) & holds(clear(Y),State) --> holds(EMPTY::'a,do(putdown(X::'a,Y),State))) & (\X Y State. holds(holding(X),State) & holds(clear(Y),State) --> holds(on(X::'a,Y),do(putdown(X::'a,Y),State))) & (\X Y State. holds(holding(X),State) & holds(clear(Y),State) --> holds(clear(X),do(putdown(X::'a,Y),State))) & (\Z W X Y State. holds(on(X::'a,Y),State) --> holds(on(X::'a,Y),do(putdown(Z::'a,W),State))) & (\X State Z Y. holds(clear(Z),State) & differ(Z::'a,Y) --> holds(clear(Z),do(putdown(X::'a,Y),State)))" abbreviation "PLA001_1_ax EMPTY clear s0 on holds table d c b a differ \ (\Y X. 
differ(Y::'a,X) --> differ(X::'a,Y)) & (differ(a::'a,b)) & (differ(a::'a,c)) & (differ(a::'a,d)) & (differ(a::'a,table)) & (differ(b::'a,c)) & (differ(b::'a,d)) & (differ(b::'a,table)) & (differ(c::'a,d)) & (differ(c::'a,table)) & (differ(d::'a,table)) & (holds(on(a::'a,table),s0)) & (holds(on(b::'a,table),s0)) & (holds(on(c::'a,d),s0)) & (holds(on(d::'a,table),s0)) & (holds(clear(a),s0)) & (holds(clear(b),s0)) & (holds(clear(c),s0)) & (holds(EMPTY::'a,s0)) & (\State. holds(clear(table),State))" (*190 inferences so far. Searching to depth 10. 0.6 secs*) lemma PLA006_1: "PLA001_0_ax putdown on pickup do holding table differ clear EMPTY and' holds & PLA001_1_ax EMPTY clear s0 on holds table d c b a differ & (\State. ~holds(on(c::'a,table),State)) --> False" by meson (*190 inferences so far. Searching to depth 10. 0.5 secs*) lemma PLA017_1: "PLA001_0_ax putdown on pickup do holding table differ clear EMPTY and' holds & PLA001_1_ax EMPTY clear s0 on holds table d c b a differ & (\State. ~holds(on(a::'a,c),State)) --> False" by meson (*13732 inferences so far. Searching to depth 16. 9.8 secs*) lemma PLA022_1: "PLA001_0_ax putdown on pickup do holding table differ clear EMPTY and' holds & PLA001_1_ax EMPTY clear s0 on holds table d c b a differ & (\State. ~holds(and'(on(c::'a,d),on(a::'a,c)),State)) --> False" by meson (*217 inferences so far. Searching to depth 13. 0.7 secs*) lemma PLA022_2: "PLA001_0_ax putdown on pickup do holding table differ clear EMPTY and' holds & PLA001_1_ax EMPTY clear s0 on holds table d c b a differ & (\State. ~holds(and'(on(a::'a,c),on(c::'a,d)),State)) --> False" by meson (*948 inferences so far. Searching to depth 18. 1.1 secs*) lemma PRV001_1: "(\X Y Z. q1(X::'a,Y,Z) & mless_or_equal(X::'a,Y) --> q2(X::'a,Y,Z)) & (\X Y Z. q1(X::'a,Y,Z) --> mless_or_equal(X::'a,Y) | q3(X::'a,Y,Z)) & (\Z X Y. q2(X::'a,Y,Z) --> q4(X::'a,Y,Y)) & (\Z Y X. q3(X::'a,Y,Z) --> q4(X::'a,Y,X)) & (\X. mless_or_equal(X::'a,X)) & (\X Y. 
mless_or_equal(X::'a,Y) & mless_or_equal(Y::'a,X) --> equal(X::'a,Y)) & (\Y X Z. mless_or_equal(X::'a,Y) & mless_or_equal(Y::'a,Z) --> mless_or_equal(X::'a,Z)) & (\Y X. mless_or_equal(X::'a,Y) | mless_or_equal(Y::'a,X)) & (\X Y. equal(X::'a,Y) --> mless_or_equal(X::'a,Y)) & (\X Y Z. equal(X::'a,Y) & mless_or_equal(X::'a,Z) --> mless_or_equal(Y::'a,Z)) & (\X Z Y. equal(X::'a,Y) & mless_or_equal(Z::'a,X) --> mless_or_equal(Z::'a,Y)) & (q1(a::'a,b,c)) & (\W. ~(q4(a::'a,b,W) & mless_or_equal(a::'a,W) & mless_or_equal(b::'a,W) & mless_or_equal(W::'a,a))) & (\W. ~(q4(a::'a,b,W) & mless_or_equal(a::'a,W) & mless_or_equal(b::'a,W) & mless_or_equal(W::'a,b))) --> False" by meson (*PRV is now called SWV (software verification) *) abbreviation "SWV001_1_ax mless_THAN successor predecessor equal \ (\X. equal(predecessor(successor(X)),X)) & (\X. equal(successor(predecessor(X)),X)) & (\X Y. equal(predecessor(X),predecessor(Y)) --> equal(X::'a,Y)) & (\X Y. equal(successor(X),successor(Y)) --> equal(X::'a,Y)) & (\X. mless_THAN(predecessor(X),X)) & (\X. mless_THAN(X::'a,successor(X))) & (\X Y Z. mless_THAN(X::'a,Y) & mless_THAN(Y::'a,Z) --> mless_THAN(X::'a,Z)) & (\X Y. mless_THAN(X::'a,Y) | mless_THAN(Y::'a,X) | equal(X::'a,Y)) & (\X. ~mless_THAN(X::'a,X)) & (\Y X. ~(mless_THAN(X::'a,Y) & mless_THAN(Y::'a,X))) & (\Y X Z. equal(X::'a,Y) & mless_THAN(X::'a,Z) --> mless_THAN(Y::'a,Z)) & (\Y Z X. equal(X::'a,Y) & mless_THAN(Z::'a,X) --> mless_THAN(Z::'a,Y))" abbreviation "SWV001_0_eq a successor predecessor equal \ (\X Y. equal(X::'a,Y) --> equal(predecessor(X),predecessor(Y))) & (\X Y. equal(X::'a,Y) --> equal(successor(X),successor(Y))) & (\X Y. equal(X::'a,Y) --> equal(a(X),a(Y)))" (*21 inferences so far. Searching to depth 5. 
0.4 secs*) lemma PRV003_1: "EQU001_0_ax equal & SWV001_1_ax mless_THAN successor predecessor equal & SWV001_0_eq a successor predecessor equal & (~mless_THAN(n::'a,j)) & (mless_THAN(k::'a,j)) & (~mless_THAN(k::'a,i)) & (mless_THAN(i::'a,n)) & (mless_THAN(a(j),a(k))) & (\X. mless_THAN(X::'a,j) & mless_THAN(a(X),a(k)) --> mless_THAN(X::'a,i)) & (\X. mless_THAN(One::'a,i) & mless_THAN(a(X),a(predecessor(i))) --> mless_THAN(X::'a,i) | mless_THAN(n::'a,X)) & (\X. ~(mless_THAN(One::'a,X) & mless_THAN(X::'a,i) & mless_THAN(a(X),a(predecessor(X))))) & (mless_THAN(j::'a,i)) --> False" by meson (*584 inferences so far. Searching to depth 7. 1.1 secs*) lemma PRV005_1: "EQU001_0_ax equal & SWV001_1_ax mless_THAN successor predecessor equal & SWV001_0_eq a successor predecessor equal & (~mless_THAN(n::'a,k)) & (~mless_THAN(k::'a,l)) & (~mless_THAN(k::'a,i)) & (mless_THAN(l::'a,n)) & (mless_THAN(One::'a,l)) & (mless_THAN(a(k),a(predecessor(l)))) & (\X. mless_THAN(X::'a,successor(n)) & mless_THAN(a(X),a(k)) --> mless_THAN(X::'a,l)) & (\X. mless_THAN(One::'a,l) & mless_THAN(a(X),a(predecessor(l))) --> mless_THAN(X::'a,l) | mless_THAN(n::'a,X)) & (\X. ~(mless_THAN(One::'a,X) & mless_THAN(X::'a,l) & mless_THAN(a(X),a(predecessor(X))))) --> False" by meson (*2343 inferences so far. Searching to depth 8. 3.5 secs*) lemma PRV006_1: "EQU001_0_ax equal & SWV001_1_ax mless_THAN successor predecessor equal & SWV001_0_eq a successor predecessor equal & (~mless_THAN(n::'a,m)) & (mless_THAN(i::'a,m)) & (mless_THAN(i::'a,n)) & (~mless_THAN(i::'a,One)) & (mless_THAN(a(i),a(m))) & (\X. mless_THAN(X::'a,successor(n)) & mless_THAN(a(X),a(m)) --> mless_THAN(X::'a,i)) & (\X. mless_THAN(One::'a,i) & mless_THAN(a(X),a(predecessor(i))) --> mless_THAN(X::'a,i) | mless_THAN(n::'a,X)) & (\X. ~(mless_THAN(One::'a,X) & mless_THAN(X::'a,i) & mless_THAN(a(X),a(predecessor(X))))) --> False" by meson (*86 inferences so far. Searching to depth 14. 0.1 secs*) lemma PRV009_1: "(\Y X. 
mless_or_equal(X::'a,Y) | mless(Y::'a,X)) & (mless(j::'a,i)) & (mless_or_equal(m::'a,p)) & (mless_or_equal(p::'a,q)) & (mless_or_equal(q::'a,n)) & (\X Y. mless_or_equal(m::'a,X) & mless(X::'a,i) & mless(j::'a,Y) & mless_or_equal(Y::'a,n) --> mless_or_equal(a(X),a(Y))) & (\X Y. mless_or_equal(m::'a,X) & mless_or_equal(X::'a,Y) & mless_or_equal(Y::'a,j) --> mless_or_equal(a(X),a(Y))) & (\X Y. mless_or_equal(i::'a,X) & mless_or_equal(X::'a,Y) & mless_or_equal(Y::'a,n) --> mless_or_equal(a(X),a(Y))) & (~mless_or_equal(a(p),a(q))) --> False" by meson (*222 inferences so far. Searching to depth 8. 0.4 secs*) lemma PUZ012_1: "(\X. equal_fruits(X::'a,X)) & (\X. equal_boxes(X::'a,X)) & (\X Y. ~(label(X::'a,Y) & contains(X::'a,Y))) & (\X. contains(boxa::'a,X) | contains(boxb::'a,X) | contains(boxc::'a,X)) & (\X. contains(X::'a,apples) | contains(X::'a,bananas) | contains(X::'a,oranges)) & (\X Y Z. contains(X::'a,Y) & contains(X::'a,Z) --> equal_fruits(Y::'a,Z)) & (\Y X Z. contains(X::'a,Y) & contains(Z::'a,Y) --> equal_boxes(X::'a,Z)) & (~equal_boxes(boxa::'a,boxb)) & (~equal_boxes(boxb::'a,boxc)) & (~equal_boxes(boxa::'a,boxc)) & (~equal_fruits(apples::'a,bananas)) & (~equal_fruits(bananas::'a,oranges)) & (~equal_fruits(apples::'a,oranges)) & (label(boxa::'a,apples)) & (label(boxb::'a,oranges)) & (label(boxc::'a,bananas)) & (contains(boxb::'a,apples)) & (~(contains(boxa::'a,bananas) & contains(boxc::'a,oranges))) --> False" by meson (*35 inferences so far. Searching to depth 5. 3.2 secs*) lemma PUZ020_1: "EQU001_0_ax equal & (\A B. equal(A::'a,B) --> equal(statement_by(A),statement_by(B))) & (\X. person(X) --> knight(X) | knave(X)) & (\X. ~(person(X) & knight(X) & knave(X))) & (\X Y. says(X::'a,Y) & a_truth(Y) --> a_truth(Y)) & (\X Y. ~(says(X::'a,Y) & equal(X::'a,Y))) & (\Y X. says(X::'a,Y) --> equal(Y::'a,statement_by(X))) & (\X Y. ~(person(X) & equal(X::'a,statement_by(Y)))) & (\X. person(X) & a_truth(statement_by(X)) --> knight(X)) & (\X. 
person(X) --> a_truth(statement_by(X)) | knave(X)) & (\X Y. equal(X::'a,Y) & knight(X) --> knight(Y)) & (\X Y. equal(X::'a,Y) & knave(X) --> knave(Y)) & (\X Y. equal(X::'a,Y) & person(X) --> person(Y)) & (\X Y Z. equal(X::'a,Y) & says(X::'a,Z) --> says(Y::'a,Z)) & (\X Z Y. equal(X::'a,Y) & says(Z::'a,X) --> says(Z::'a,Y)) & (\X Y. equal(X::'a,Y) & a_truth(X) --> a_truth(Y)) & (\X Y. knight(X) & says(X::'a,Y) --> a_truth(Y)) & (\X Y. ~(knave(X) & says(X::'a,Y) & a_truth(Y))) & (person(husband)) & (person(wife)) & (~equal(husband::'a,wife)) & (says(husband::'a,statement_by(husband))) & (a_truth(statement_by(husband)) & knight(husband) --> knight(wife)) & (knight(husband) --> a_truth(statement_by(husband))) & (a_truth(statement_by(husband)) | knight(wife)) & (knight(wife) --> a_truth(statement_by(husband))) & (~knight(husband)) --> False" by meson (*121806 inferences so far. Searching to depth 17. 63.0 secs*) lemma PUZ025_1: "(\X. a_truth(truthteller(X)) | a_truth(liar(X))) & (\X. ~(a_truth(truthteller(X)) & a_truth(liar(X)))) & (\Truthteller Statement. a_truth(truthteller(Truthteller)) & a_truth(says(Truthteller::'a,Statement)) --> a_truth(Statement)) & (\Liar Statement. ~(a_truth(liar(Liar)) & a_truth(says(Liar::'a,Statement)) & a_truth(Statement))) & (\Statement Truthteller. a_truth(Statement) & a_truth(says(Truthteller::'a,Statement)) --> a_truth(truthteller(Truthteller))) & (\Statement Liar. a_truth(says(Liar::'a,Statement)) --> a_truth(Statement) | a_truth(liar(Liar))) & (\Z X Y. people(X::'a,Y,Z) & a_truth(liar(X)) & a_truth(liar(Y)) --> a_truth(equal_type(X::'a,Y))) & (\Z X Y. people(X::'a,Y,Z) & a_truth(truthteller(X)) & a_truth(truthteller(Y)) --> a_truth(equal_type(X::'a,Y))) & (\X Y. a_truth(equal_type(X::'a,Y)) & a_truth(truthteller(X)) --> a_truth(truthteller(Y))) & (\X Y. a_truth(equal_type(X::'a,Y)) & a_truth(liar(X)) --> a_truth(liar(Y))) & (\X Y. a_truth(truthteller(X)) --> a_truth(equal_type(X::'a,Y)) | a_truth(liar(Y))) & (\X Y. 
a_truth(liar(X)) --> a_truth(equal_type(X::'a,Y)) | a_truth(truthteller(Y))) & (\Y X. a_truth(equal_type(X::'a,Y)) --> a_truth(equal_type(Y::'a,X))) & (\X Y. ask_1_if_2(X::'a,Y) & a_truth(truthteller(X)) & a_truth(Y) --> answer(yes)) & (\X Y. ask_1_if_2(X::'a,Y) & a_truth(truthteller(X)) --> a_truth(Y) | answer(no)) & (\X Y. ask_1_if_2(X::'a,Y) & a_truth(liar(X)) & a_truth(Y) --> answer(no)) & (\X Y. ask_1_if_2(X::'a,Y) & a_truth(liar(X)) --> a_truth(Y) | answer(yes)) & (people(b::'a,c,a)) & (people(a::'a,b,a)) & (people(a::'a,c,b)) & (people(c::'a,b,a)) & (a_truth(says(a::'a,equal_type(b::'a,c)))) & (ask_1_if_2(c::'a,equal_type(a::'a,b))) & (\Answer. ~answer(Answer)) --> False" by meson (*621 inferences so far. Searching to depth 18. 0.2 secs*) lemma PUZ029_1: "(\X. dances_on_tightropes(X) | eats_pennybuns(X) | old(X)) & (\X. pig(X) & liable_to_giddiness(X) --> treated_with_respect(X)) & (\X. wise(X) & balloonist(X) --> has_umbrella(X)) & (\X. ~(looks_ridiculous(X) & eats_pennybuns(X) & eats_lunch_in_public(X))) & (\X. balloonist(X) & young(X) --> liable_to_giddiness(X)) & (\X. fat(X) & looks_ridiculous(X) --> dances_on_tightropes(X) | eats_lunch_in_public(X)) & (\X. ~(liable_to_giddiness(X) & wise(X) & dances_on_tightropes(X))) & (\X. pig(X) & has_umbrella(X) --> looks_ridiculous(X)) & (\X. treated_with_respect(X) --> dances_on_tightropes(X) | fat(X)) & (\X. young(X) | old(X)) & (\X. ~(young(X) & old(X))) & (wise(piggy)) & (young(piggy)) & (pig(piggy)) & (balloonist(piggy)) --> False" by meson abbreviation "RNG001_0_ax equal additive_inverse add multiply product additive_identity sum \ (\X. sum(additive_identity::'a,X,X)) & (\X. sum(X::'a,additive_identity,X)) & (\X Y. product(X::'a,Y,multiply(X::'a,Y))) & (\X Y. sum(X::'a,Y,add(X::'a,Y))) & (\X. sum(additive_inverse(X),X,additive_identity)) & (\X. sum(X::'a,additive_inverse(X),additive_identity)) & (\Y U Z X V W. sum(X::'a,Y,U) & sum(Y::'a,Z,V) & sum(U::'a,Z,W) --> sum(X::'a,V,W)) & (\Y X V U Z W. 
sum(X::'a,Y,U) & sum(Y::'a,Z,V) & sum(X::'a,V,W) --> sum(U::'a,Z,W)) & (\Y X Z. sum(X::'a,Y,Z) --> sum(Y::'a,X,Z)) & (\Y U Z X V W. product(X::'a,Y,U) & product(Y::'a,Z,V) & product(U::'a,Z,W) --> product(X::'a,V,W)) & (\Y X V U Z W. product(X::'a,Y,U) & product(Y::'a,Z,V) & product(X::'a,V,W) --> product(U::'a,Z,W)) & (\Y Z X V3 V1 V2 V4. product(X::'a,Y,V1) & product(X::'a,Z,V2) & sum(Y::'a,Z,V3) & product(X::'a,V3,V4) --> sum(V1::'a,V2,V4)) & (\Y Z V1 V2 X V3 V4. product(X::'a,Y,V1) & product(X::'a,Z,V2) & sum(Y::'a,Z,V3) & sum(V1::'a,V2,V4) --> product(X::'a,V3,V4)) & (\Y Z V3 X V1 V2 V4. product(Y::'a,X,V1) & product(Z::'a,X,V2) & sum(Y::'a,Z,V3) & product(V3::'a,X,V4) --> sum(V1::'a,V2,V4)) & (\Y Z V1 V2 V3 X V4. product(Y::'a,X,V1) & product(Z::'a,X,V2) & sum(Y::'a,Z,V3) & sum(V1::'a,V2,V4) --> product(V3::'a,X,V4)) & (\X Y U V. sum(X::'a,Y,U) & sum(X::'a,Y,V) --> equal(U::'a,V)) & (\X Y U V. product(X::'a,Y,U) & product(X::'a,Y,V) --> equal(U::'a,V))" abbreviation "RNG001_0_eq product multiply sum add additive_inverse equal \ (\X Y. equal(X::'a,Y) --> equal(additive_inverse(X),additive_inverse(Y))) & (\X Y W. equal(X::'a,Y) --> equal(add(X::'a,W),add(Y::'a,W))) & (\X W Y. equal(X::'a,Y) --> equal(add(W::'a,X),add(W::'a,Y))) & (\X Y W Z. equal(X::'a,Y) & sum(X::'a,W,Z) --> sum(Y::'a,W,Z)) & (\X W Y Z. equal(X::'a,Y) & sum(W::'a,X,Z) --> sum(W::'a,Y,Z)) & (\X W Z Y. equal(X::'a,Y) & sum(W::'a,Z,X) --> sum(W::'a,Z,Y)) & (\X Y W. equal(X::'a,Y) --> equal(multiply(X::'a,W),multiply(Y::'a,W))) & (\X W Y. equal(X::'a,Y) --> equal(multiply(W::'a,X),multiply(W::'a,Y))) & (\X Y W Z. equal(X::'a,Y) & product(X::'a,W,Z) --> product(Y::'a,W,Z)) & (\X W Y Z. equal(X::'a,Y) & product(W::'a,X,Z) --> product(W::'a,Y,Z)) & (\X W Z Y. equal(X::'a,Y) & product(W::'a,Z,X) --> product(W::'a,Z,Y))" (*93620 inferences so far. Searching to depth 24. 65.9 secs*) lemma RNG001_3: "(\X. sum(additive_identity::'a,X,X)) & (\X. 
sum(additive_inverse(X),X,additive_identity)) & (\Y U Z X V W. sum(X::'a,Y,U) & sum(Y::'a,Z,V) & sum(U::'a,Z,W) --> sum(X::'a,V,W)) & (\Y X V U Z W. sum(X::'a,Y,U) & sum(Y::'a,Z,V) & sum(X::'a,V,W) --> sum(U::'a,Z,W)) & (\X Y. product(X::'a,Y,multiply(X::'a,Y))) & (\Y Z X V3 V1 V2 V4. product(X::'a,Y,V1) & product(X::'a,Z,V2) & sum(Y::'a,Z,V3) & product(X::'a,V3,V4) --> sum(V1::'a,V2,V4)) & (\Y Z V1 V2 X V3 V4. product(X::'a,Y,V1) & product(X::'a,Z,V2) & sum(Y::'a,Z,V3) & sum(V1::'a,V2,V4) --> product(X::'a,V3,V4)) & (~product(a::'a,additive_identity,additive_identity)) --> False" by meson abbreviation "RNG_other_ax multiply add equal product additive_identity additive_inverse sum \ (\X. sum(X::'a,additive_inverse(X),additive_identity)) & (\Y U Z X V W. sum(X::'a,Y,U) & sum(Y::'a,Z,V) & sum(U::'a,Z,W) --> sum(X::'a,V,W)) & (\Y X V U Z W. sum(X::'a,Y,U) & sum(Y::'a,Z,V) & sum(X::'a,V,W) --> sum(U::'a,Z,W)) & (\Y X Z. sum(X::'a,Y,Z) --> sum(Y::'a,X,Z)) & (\Y U Z X V W. product(X::'a,Y,U) & product(Y::'a,Z,V) & product(U::'a,Z,W) --> product(X::'a,V,W)) & (\Y X V U Z W. product(X::'a,Y,U) & product(Y::'a,Z,V) & product(X::'a,V,W) --> product(U::'a,Z,W)) & (\Y Z X V3 V1 V2 V4. product(X::'a,Y,V1) & product(X::'a,Z,V2) & sum(Y::'a,Z,V3) & product(X::'a,V3,V4) --> sum(V1::'a,V2,V4)) & (\Y Z V1 V2 X V3 V4. product(X::'a,Y,V1) & product(X::'a,Z,V2) & sum(Y::'a,Z,V3) & sum(V1::'a,V2,V4) --> product(X::'a,V3,V4)) & (\Y Z V3 X V1 V2 V4. product(Y::'a,X,V1) & product(Z::'a,X,V2) & sum(Y::'a,Z,V3) & product(V3::'a,X,V4) --> sum(V1::'a,V2,V4)) & (\Y Z V1 V2 V3 X V4. product(Y::'a,X,V1) & product(Z::'a,X,V2) & sum(Y::'a,Z,V3) & sum(V1::'a,V2,V4) --> product(V3::'a,X,V4)) & (\X Y U V. sum(X::'a,Y,U) & sum(X::'a,Y,V) --> equal(U::'a,V)) & (\X Y U V. product(X::'a,Y,U) & product(X::'a,Y,V) --> equal(U::'a,V)) & (\X Y. equal(X::'a,Y) --> equal(additive_inverse(X),additive_inverse(Y))) & (\X Y W. equal(X::'a,Y) --> equal(add(X::'a,W),add(Y::'a,W))) & (\X Y W Z. 
equal(X::'a,Y) & sum(X::'a,W,Z) --> sum(Y::'a,W,Z)) & (\X W Y Z. equal(X::'a,Y) & sum(W::'a,X,Z) --> sum(W::'a,Y,Z)) & (\X W Z Y. equal(X::'a,Y) & sum(W::'a,Z,X) --> sum(W::'a,Z,Y)) & (\X Y W. equal(X::'a,Y) --> equal(multiply(X::'a,W),multiply(Y::'a,W))) & (\X Y W Z. equal(X::'a,Y) & product(X::'a,W,Z) --> product(Y::'a,W,Z)) & (\X W Y Z. equal(X::'a,Y) & product(W::'a,X,Z) --> product(W::'a,Y,Z)) & (\X W Z Y. equal(X::'a,Y) & product(W::'a,Z,X) --> product(W::'a,Z,Y))" (****************SLOW 76385914 inferences so far. Searching to depth 18 No proof after 5 1/2 hours! (griffon) ****************) lemma RNG001_5: "EQU001_0_ax equal & (\X. sum(additive_identity::'a,X,X)) & (\X. sum(X::'a,additive_identity,X)) & (\X Y. product(X::'a,Y,multiply(X::'a,Y))) & (\X Y. sum(X::'a,Y,add(X::'a,Y))) & (\X. sum(additive_inverse(X),X,additive_identity)) & RNG_other_ax multiply add equal product additive_identity additive_inverse sum & (~product(a::'a,additive_identity,additive_identity)) --> False" oops (*0 inferences so far. Searching to depth 0. 0.5 secs*) lemma RNG011_5: "EQU001_0_ax equal & (\A B C. equal(A::'a,B) --> equal(add(A::'a,C),add(B::'a,C))) & (\D F' E. equal(D::'a,E) --> equal(add(F'::'a,D),add(F'::'a,E))) & (\G H. equal(G::'a,H) --> equal(additive_inverse(G),additive_inverse(H))) & (\I' J K'. equal(I'::'a,J) --> equal(multiply(I'::'a,K'),multiply(J::'a,K'))) & (\L N M. equal(L::'a,M) --> equal(multiply(N::'a,L),multiply(N::'a,M))) & (\A B C D. equal(A::'a,B) --> equal(associator(A::'a,C,D),associator(B::'a,C,D))) & (\E G F' H. equal(E::'a,F') --> equal(associator(G::'a,E,H),associator(G::'a,F',H))) & (\I' K' L J. equal(I'::'a,J) --> equal(associator(K'::'a,L,I'),associator(K'::'a,L,J))) & (\M N O'. equal(M::'a,N) --> equal(commutator(M::'a,O'),commutator(N::'a,O'))) & (\P R Q. equal(P::'a,Q) --> equal(commutator(R::'a,P),commutator(R::'a,Q))) & (\Y X. equal(add(X::'a,Y),add(Y::'a,X))) & (\X Y Z. equal(add(add(X::'a,Y),Z),add(X::'a,add(Y::'a,Z)))) & (\X. 
equal(add(X::'a,additive_identity),X)) & (\X. equal(add(additive_identity::'a,X),X)) & (\X. equal(add(X::'a,additive_inverse(X)),additive_identity)) & (\X. equal(add(additive_inverse(X),X),additive_identity)) & (equal(additive_inverse(additive_identity),additive_identity)) & (\X Y. equal(add(X::'a,add(additive_inverse(X),Y)),Y)) & (\X Y. equal(additive_inverse(add(X::'a,Y)),add(additive_inverse(X),additive_inverse(Y)))) & (\X. equal(additive_inverse(additive_inverse(X)),X)) & (\X. equal(multiply(X::'a,additive_identity),additive_identity)) & (\X. equal(multiply(additive_identity::'a,X),additive_identity)) & (\X Y. equal(multiply(additive_inverse(X),additive_inverse(Y)),multiply(X::'a,Y))) & (\X Y. equal(multiply(X::'a,additive_inverse(Y)),additive_inverse(multiply(X::'a,Y)))) & (\X Y. equal(multiply(additive_inverse(X),Y),additive_inverse(multiply(X::'a,Y)))) & (\Y X Z. equal(multiply(X::'a,add(Y::'a,Z)),add(multiply(X::'a,Y),multiply(X::'a,Z)))) & (\X Y Z. equal(multiply(add(X::'a,Y),Z),add(multiply(X::'a,Z),multiply(Y::'a,Z)))) & (\X Y. equal(multiply(multiply(X::'a,Y),Y),multiply(X::'a,multiply(Y::'a,Y)))) & (\X Y Z. equal(associator(X::'a,Y,Z),add(multiply(multiply(X::'a,Y),Z),additive_inverse(multiply(X::'a,multiply(Y::'a,Z)))))) & (\X Y. equal(commutator(X::'a,Y),add(multiply(Y::'a,X),additive_inverse(multiply(X::'a,Y))))) & (\X Y. equal(multiply(multiply(associator(X::'a,X,Y),X),associator(X::'a,X,Y)),additive_identity)) & (~equal(multiply(multiply(associator(a::'a,a,b),a),associator(a::'a,a,b)),additive_identity)) --> False" by meson (*202 inferences so far. Searching to depth 8. 0.6 secs*) lemma RNG023_6: "EQU001_0_ax equal & (\Y X. equal(add(X::'a,Y),add(Y::'a,X))) & (\X Y Z. equal(add(X::'a,add(Y::'a,Z)),add(add(X::'a,Y),Z))) & (\X. equal(add(additive_identity::'a,X),X)) & (\X. equal(add(X::'a,additive_identity),X)) & (\X. equal(multiply(additive_identity::'a,X),additive_identity)) & (\X. equal(multiply(X::'a,additive_identity),additive_identity)) & (\X. 
equal(add(additive_inverse(X),X),additive_identity)) & (\X. equal(add(X::'a,additive_inverse(X)),additive_identity)) & (\Y X Z. equal(multiply(X::'a,add(Y::'a,Z)),add(multiply(X::'a,Y),multiply(X::'a,Z)))) & (\X Y Z. equal(multiply(add(X::'a,Y),Z),add(multiply(X::'a,Z),multiply(Y::'a,Z)))) & (\X. equal(additive_inverse(additive_inverse(X)),X)) & (\X Y. equal(multiply(multiply(X::'a,Y),Y),multiply(X::'a,multiply(Y::'a,Y)))) & (\X Y. equal(multiply(multiply(X::'a,X),Y),multiply(X::'a,multiply(X::'a,Y)))) & (\X Y Z. equal(associator(X::'a,Y,Z),add(multiply(multiply(X::'a,Y),Z),additive_inverse(multiply(X::'a,multiply(Y::'a,Z)))))) & (\X Y. equal(commutator(X::'a,Y),add(multiply(Y::'a,X),additive_inverse(multiply(X::'a,Y))))) & (\D E F'. equal(D::'a,E) --> equal(add(D::'a,F'),add(E::'a,F'))) & (\G I' H. equal(G::'a,H) --> equal(add(I'::'a,G),add(I'::'a,H))) & (\J K'. equal(J::'a,K') --> equal(additive_inverse(J),additive_inverse(K'))) & (\L M N O'. equal(L::'a,M) --> equal(associator(L::'a,N,O'),associator(M::'a,N,O'))) & (\P R Q S'. equal(P::'a,Q) --> equal(associator(R::'a,P,S'),associator(R::'a,Q,S'))) & (\T' V W U. equal(T'::'a,U) --> equal(associator(V::'a,W,T'),associator(V::'a,W,U))) & (\X Y Z. equal(X::'a,Y) --> equal(commutator(X::'a,Z),commutator(Y::'a,Z))) & (\A1 C1 B1. equal(A1::'a,B1) --> equal(commutator(C1::'a,A1),commutator(C1::'a,B1))) & (\D1 E1 F1. equal(D1::'a,E1) --> equal(multiply(D1::'a,F1),multiply(E1::'a,F1))) & (\G1 I1 H1. equal(G1::'a,H1) --> equal(multiply(I1::'a,G1),multiply(I1::'a,H1))) & (~equal(associator(x::'a,x,y),additive_identity)) --> False" by meson (*0 inferences so far. Searching to depth 0. 0.6 secs*) lemma RNG028_2: "EQU001_0_ax equal & (\X. equal(add(additive_identity::'a,X),X)) & (\X. equal(multiply(additive_identity::'a,X),additive_identity)) & (\X. equal(multiply(X::'a,additive_identity),additive_identity)) & (\X. equal(add(additive_inverse(X),X),additive_identity)) & (\X Y. 
equal(additive_inverse(add(X::'a,Y)),add(additive_inverse(X),additive_inverse(Y)))) & (\X. equal(additive_inverse(additive_inverse(X)),X)) & (\Y X Z. equal(multiply(X::'a,add(Y::'a,Z)),add(multiply(X::'a,Y),multiply(X::'a,Z)))) & (\X Y Z. equal(multiply(add(X::'a,Y),Z),add(multiply(X::'a,Z),multiply(Y::'a,Z)))) & (\X Y. equal(multiply(multiply(X::'a,Y),Y),multiply(X::'a,multiply(Y::'a,Y)))) & (\X Y. equal(multiply(multiply(X::'a,X),Y),multiply(X::'a,multiply(X::'a,Y)))) & (\X Y. equal(multiply(additive_inverse(X),Y),additive_inverse(multiply(X::'a,Y)))) & (\X Y. equal(multiply(X::'a,additive_inverse(Y)),additive_inverse(multiply(X::'a,Y)))) & (equal(additive_inverse(additive_identity),additive_identity)) & (\Y X. equal(add(X::'a,Y),add(Y::'a,X))) & (\X Y Z. equal(add(X::'a,add(Y::'a,Z)),add(add(X::'a,Y),Z))) & (\Z X Y. equal(add(X::'a,Z),add(Y::'a,Z)) --> equal(X::'a,Y)) & (\Z X Y. equal(add(Z::'a,X),add(Z::'a,Y)) --> equal(X::'a,Y)) & (\D E F'. equal(D::'a,E) --> equal(add(D::'a,F'),add(E::'a,F'))) & (\G I' H. equal(G::'a,H) --> equal(add(I'::'a,G),add(I'::'a,H))) & (\J K'. equal(J::'a,K') --> equal(additive_inverse(J),additive_inverse(K'))) & (\D1 E1 F1. equal(D1::'a,E1) --> equal(multiply(D1::'a,F1),multiply(E1::'a,F1))) & (\G1 I1 H1. equal(G1::'a,H1) --> equal(multiply(I1::'a,G1),multiply(I1::'a,H1))) & (\X Y Z. equal(associator(X::'a,Y,Z),add(multiply(multiply(X::'a,Y),Z),additive_inverse(multiply(X::'a,multiply(Y::'a,Z)))))) & (\L M N O'. equal(L::'a,M) --> equal(associator(L::'a,N,O'),associator(M::'a,N,O'))) & (\P R Q S'. equal(P::'a,Q) --> equal(associator(R::'a,P,S'),associator(R::'a,Q,S'))) & (\T' V W U. equal(T'::'a,U) --> equal(associator(V::'a,W,T'),associator(V::'a,W,U))) & (\X Y. ~equal(multiply(multiply(Y::'a,X),Y),multiply(Y::'a,multiply(X::'a,Y)))) & (\X Y Z. ~equal(associator(Y::'a,X,Z),additive_inverse(associator(X::'a,Y,Z)))) & (\X Y Z. 
~equal(associator(Z::'a,Y,X),additive_inverse(associator(X::'a,Y,Z)))) & (~equal(multiply(multiply(cx::'a,multiply(cy::'a,cx)),cz),multiply(cx::'a,multiply(cy::'a,multiply(cx::'a,cz))))) --> False" by meson (*209 inferences so far. Searching to depth 9. 1.2 secs*) lemma RNG038_2: "(\X. sum(X::'a,additive_identity,X)) & (\X Y. product(X::'a,Y,multiply(X::'a,Y))) & (\X Y. sum(X::'a,Y,add(X::'a,Y))) & RNG_other_ax multiply add equal product additive_identity additive_inverse sum & (\X. product(additive_identity::'a,X,additive_identity)) & (\X. product(X::'a,additive_identity,additive_identity)) & (\X Y. equal(X::'a,additive_identity) --> product(X::'a,h(X::'a,Y),Y)) & (product(a::'a,b,additive_identity)) & (~equal(a::'a,additive_identity)) & (~equal(b::'a,additive_identity)) --> False" by meson (*2660 inferences so far. Searching to depth 10. 7.0 secs*) lemma RNG040_2: "EQU001_0_ax equal & RNG001_0_eq product multiply sum add additive_inverse equal & (\X. sum(additive_identity::'a,X,X)) & (\X. sum(X::'a,additive_identity,X)) & (\X Y. product(X::'a,Y,multiply(X::'a,Y))) & (\X Y. sum(X::'a,Y,add(X::'a,Y))) & (\X. sum(additive_inverse(X),X,additive_identity)) & (\X. sum(X::'a,additive_inverse(X),additive_identity)) & (\Y U Z X V W. sum(X::'a,Y,U) & sum(Y::'a,Z,V) & sum(U::'a,Z,W) --> sum(X::'a,V,W)) & (\Y X V U Z W. sum(X::'a,Y,U) & sum(Y::'a,Z,V) & sum(X::'a,V,W) --> sum(U::'a,Z,W)) & (\Y X Z. sum(X::'a,Y,Z) --> sum(Y::'a,X,Z)) & (\Y U Z X V W. product(X::'a,Y,U) & product(Y::'a,Z,V) & product(U::'a,Z,W) --> product(X::'a,V,W)) & (\Y X V U Z W. product(X::'a,Y,U) & product(Y::'a,Z,V) & product(X::'a,V,W) --> product(U::'a,Z,W)) & (\Y Z X V3 V1 V2 V4. product(X::'a,Y,V1) & product(X::'a,Z,V2) & sum(Y::'a,Z,V3) & product(X::'a,V3,V4) --> sum(V1::'a,V2,V4)) & (\Y Z V1 V2 X V3 V4. product(X::'a,Y,V1) & product(X::'a,Z,V2) & sum(Y::'a,Z,V3) & sum(V1::'a,V2,V4) --> product(X::'a,V3,V4)) & (\X Y U V. sum(X::'a,Y,U) & sum(X::'a,Y,V) --> equal(U::'a,V)) & (\X Y U V. 
product(X::'a,Y,U) & product(X::'a,Y,V) --> equal(U::'a,V)) & (\A. product(A::'a,multiplicative_identity,A)) & (\A. product(multiplicative_identity::'a,A,A)) & (\A. product(A::'a,h(A),multiplicative_identity) | equal(A::'a,additive_identity)) & (\A. product(h(A),A,multiplicative_identity) | equal(A::'a,additive_identity)) & (\B A C. product(A::'a,B,C) --> product(B::'a,A,C)) & (\A B. equal(A::'a,B) --> equal(h(A),h(B))) & (sum(b::'a,c,d)) & (product(d::'a,a,additive_identity)) & (product(b::'a,a,l)) & (product(c::'a,a,n)) & (~sum(l::'a,n,additive_identity)) --> False" by meson (*8991 inferences so far. Searching to depth 9. 22.2 secs*) lemma RNG041_1: "EQU001_0_ax equal & RNG001_0_ax equal additive_inverse add multiply product additive_identity sum & RNG001_0_eq product multiply sum add additive_inverse equal & (\A B. equal(A::'a,B) --> equal(h(A),h(B))) & (\A. product(additive_identity::'a,A,additive_identity)) & (\A. product(A::'a,additive_identity,additive_identity)) & (\A. product(A::'a,multiplicative_identity,A)) & (\A. product(multiplicative_identity::'a,A,A)) & (\A. product(A::'a,h(A),multiplicative_identity) | equal(A::'a,additive_identity)) & (\A. product(h(A),A,multiplicative_identity) | equal(A::'a,additive_identity)) & (product(a::'a,b,additive_identity)) & (~equal(a::'a,additive_identity)) & (~equal(b::'a,additive_identity)) --> False" by meson (*101319 inferences so far. Searching to depth 14. 76.0 secs*) lemma ROB010_1: "EQU001_0_ax equal & (\Y X. equal(add(X::'a,Y),add(Y::'a,X))) & (\X Y Z. equal(add(add(X::'a,Y),Z),add(X::'a,add(Y::'a,Z)))) & (\Y X. equal(negate(add(negate(add(X::'a,Y)),negate(add(X::'a,negate(Y))))),X)) & (\A B C. equal(A::'a,B) --> equal(add(A::'a,C),add(B::'a,C))) & (\D F' E. equal(D::'a,E) --> equal(add(F'::'a,D),add(F'::'a,E))) & (\G H. equal(G::'a,H) --> equal(negate(G),negate(H))) & (equal(negate(add(a::'a,negate(b))),c)) & (~equal(negate(add(c::'a,negate(add(b::'a,a)))),a)) --> False" by meson (*6933 inferences so far. 
Searching to depth 12. 5.1 secs*) lemma ROB013_1: "EQU001_0_ax equal & (\Y X. equal(add(X::'a,Y),add(Y::'a,X))) & (\X Y Z. equal(add(add(X::'a,Y),Z),add(X::'a,add(Y::'a,Z)))) & (\Y X. equal(negate(add(negate(add(X::'a,Y)),negate(add(X::'a,negate(Y))))),X)) & (\A B C. equal(A::'a,B) --> equal(add(A::'a,C),add(B::'a,C))) & (\D F' E. equal(D::'a,E) --> equal(add(F'::'a,D),add(F'::'a,E))) & (\G H. equal(G::'a,H) --> equal(negate(G),negate(H))) & (equal(negate(add(a::'a,b)),c)) & (~equal(negate(add(c::'a,negate(add(negate(b),a)))),a)) --> False" by meson (*6614 inferences so far. Searching to depth 11. 20.4 secs*) lemma ROB016_1: "EQU001_0_ax equal & (\Y X. equal(add(X::'a,Y),add(Y::'a,X))) & (\X Y Z. equal(add(add(X::'a,Y),Z),add(X::'a,add(Y::'a,Z)))) & (\Y X. equal(negate(add(negate(add(X::'a,Y)),negate(add(X::'a,negate(Y))))),X)) & (\A B C. equal(A::'a,B) --> equal(add(A::'a,C),add(B::'a,C))) & (\D F' E. equal(D::'a,E) --> equal(add(F'::'a,D),add(F'::'a,E))) & (\G H. equal(G::'a,H) --> equal(negate(G),negate(H))) & (\J K' L. equal(J::'a,K') --> equal(multiply(J::'a,L),multiply(K'::'a,L))) & (\M O' N. equal(M::'a,N) --> equal(multiply(O'::'a,M),multiply(O'::'a,N))) & (\P Q. equal(P::'a,Q) --> equal(successor(P),successor(Q))) & (\R S'. equal(R::'a,S') & positive_integer(R) --> positive_integer(S')) & (\X. equal(multiply(One::'a,X),X)) & (\V X. positive_integer(X) --> equal(multiply(successor(V),X),add(X::'a,multiply(V::'a,X)))) & (positive_integer(One)) & (\X. positive_integer(X) --> positive_integer(successor(X))) & (equal(negate(add(d::'a,e)),negate(e))) & (positive_integer(k)) & (\Vk X Y. equal(negate(add(negate(Y),negate(add(X::'a,negate(Y))))),X) & positive_integer(Vk) --> equal(negate(add(Y::'a,multiply(Vk::'a,add(X::'a,negate(add(X::'a,negate(Y))))))),negate(Y))) & (~equal(negate(add(e::'a,multiply(k::'a,add(d::'a,negate(add(d::'a,negate(e))))))),negate(e))) --> False" by meson (*14077 inferences so far. Searching to depth 11. 
32.8 secs*) lemma ROB021_1: "EQU001_0_ax equal & (\Y X. equal(add(X::'a,Y),add(Y::'a,X))) & (\X Y Z. equal(add(add(X::'a,Y),Z),add(X::'a,add(Y::'a,Z)))) & (\Y X. equal(negate(add(negate(add(X::'a,Y)),negate(add(X::'a,negate(Y))))),X)) & (\A B C. equal(A::'a,B) --> equal(add(A::'a,C),add(B::'a,C))) & (\D F' E. equal(D::'a,E) --> equal(add(F'::'a,D),add(F'::'a,E))) & (\G H. equal(G::'a,H) --> equal(negate(G),negate(H))) & (\X Y. equal(negate(X),negate(Y)) --> equal(X::'a,Y)) & (~equal(add(negate(add(a::'a,negate(b))),negate(add(negate(a),negate(b)))),b)) --> False" by meson (*35532 inferences so far. Searching to depth 19. 54.3 secs*) lemma SET005_1: "(\Subset Element Superset. member(Element::'a,Subset) & subset(Subset::'a,Superset) --> member(Element::'a,Superset)) & (\Superset Subset. subset(Subset::'a,Superset) | member(member_of_1_not_of_2(Subset::'a,Superset),Subset)) & (\Subset Superset. member(member_of_1_not_of_2(Subset::'a,Superset),Superset) --> subset(Subset::'a,Superset)) & (\Subset Superset. equal_sets(Subset::'a,Superset) --> subset(Subset::'a,Superset)) & (\Subset Superset. equal_sets(Superset::'a,Subset) --> subset(Subset::'a,Superset)) & (\Set2 Set1. subset(Set1::'a,Set2) & subset(Set2::'a,Set1) --> equal_sets(Set2::'a,Set1)) & (\Set2 Intersection Element Set1. intersection(Set1::'a,Set2,Intersection) & member(Element::'a,Intersection) --> member(Element::'a,Set1)) & (\Set1 Intersection Element Set2. intersection(Set1::'a,Set2,Intersection) & member(Element::'a,Intersection) --> member(Element::'a,Set2)) & (\Set2 Set1 Element Intersection. intersection(Set1::'a,Set2,Intersection) & member(Element::'a,Set2) & member(Element::'a,Set1) --> member(Element::'a,Intersection)) & (\Set2 Intersection Set1. member(h(Set1::'a,Set2,Intersection),Intersection) | intersection(Set1::'a,Set2,Intersection) | member(h(Set1::'a,Set2,Intersection),Set1)) & (\Set1 Intersection Set2. 
member(h(Set1::'a,Set2,Intersection),Intersection) | intersection(Set1::'a,Set2,Intersection) | member(h(Set1::'a,Set2,Intersection),Set2)) & (\Set1 Set2 Intersection. member(h(Set1::'a,Set2,Intersection),Intersection) & member(h(Set1::'a,Set2,Intersection),Set2) & member(h(Set1::'a,Set2,Intersection),Set1) --> intersection(Set1::'a,Set2,Intersection)) & (intersection(a::'a,b,aIb)) & (intersection(b::'a,c,bIc)) & (intersection(a::'a,bIc,aIbIc)) & (~intersection(aIb::'a,c,aIbIc)) --> False" by meson (*6450 inferences so far. Searching to depth 14. 4.2 secs*) lemma SET009_1: "(\Subset Element Superset. member(Element::'a,Subset) & ssubset(Subset::'a,Superset) --> member(Element::'a,Superset)) & (\Superset Subset. ssubset(Subset::'a,Superset) | member(member_of_1_not_of_2(Subset::'a,Superset),Subset)) & (\Subset Superset. member(member_of_1_not_of_2(Subset::'a,Superset),Superset) --> ssubset(Subset::'a,Superset)) & (\Subset Superset. equal_sets(Subset::'a,Superset) --> ssubset(Subset::'a,Superset)) & (\Subset Superset. equal_sets(Superset::'a,Subset) --> ssubset(Subset::'a,Superset)) & (\Set2 Set1. ssubset(Set1::'a,Set2) & ssubset(Set2::'a,Set1) --> equal_sets(Set2::'a,Set1)) & (\Set2 Difference Element Set1. difference(Set1::'a,Set2,Difference) & member(Element::'a,Difference) --> member(Element::'a,Set1)) & (\Element A_set Set1 Set2. ~(member(Element::'a,Set1) & member(Element::'a,Set2) & difference(A_set::'a,Set1,Set2))) & (\Set1 Difference Element Set2. member(Element::'a,Set1) & difference(Set1::'a,Set2,Difference) --> member(Element::'a,Difference) | member(Element::'a,Set2)) & (\Set1 Set2 Difference. difference(Set1::'a,Set2,Difference) | member(k(Set1::'a,Set2,Difference),Set1) | member(k(Set1::'a,Set2,Difference),Difference)) & (\Set1 Set2 Difference. member(k(Set1::'a,Set2,Difference),Set2) --> member(k(Set1::'a,Set2,Difference),Difference) | difference(Set1::'a,Set2,Difference)) & (\Set1 Set2 Difference. 
member(k(Set1::'a,Set2,Difference),Difference) & member(k(Set1::'a,Set2,Difference),Set1) --> member(k(Set1::'a,Set2,Difference),Set2) | difference(Set1::'a,Set2,Difference)) & (ssubset(d::'a,a)) & (difference(b::'a,a,bDa)) & (difference(b::'a,d,bDd)) & (~ssubset(bDa::'a,bDd)) --> False" by meson (*34726 inferences so far. Searching to depth 6. 2420 secs: 40 mins! BIG*) lemma SET025_4: "EQU001_0_ax equal & (\Y X. member(X::'a,Y) --> little_set(X)) & (\X Y. little_set(f1(X::'a,Y)) | equal(X::'a,Y)) & (\X Y. member(f1(X::'a,Y),X) | member(f1(X::'a,Y),Y) | equal(X::'a,Y)) & (\X Y. member(f1(X::'a,Y),X) & member(f1(X::'a,Y),Y) --> equal(X::'a,Y)) & (\X U Y. member(U::'a,non_ordered_pair(X::'a,Y)) --> equal(U::'a,X) | equal(U::'a,Y)) & (\Y U X. little_set(U) & equal(U::'a,X) --> member(U::'a,non_ordered_pair(X::'a,Y))) & (\X U Y. little_set(U) & equal(U::'a,Y) --> member(U::'a,non_ordered_pair(X::'a,Y))) & (\X Y. little_set(non_ordered_pair(X::'a,Y))) & (\X. equal(singleton_set(X),non_ordered_pair(X::'a,X))) & (\X Y. equal(ordered_pair(X::'a,Y),non_ordered_pair(singleton_set(X),non_ordered_pair(X::'a,Y)))) & (\X. ordered_pair_predicate(X) --> little_set(f2(X))) & (\X. ordered_pair_predicate(X) --> little_set(f3(X))) & (\X. ordered_pair_predicate(X) --> equal(X::'a,ordered_pair(f2(X),f3(X)))) & (\X Y Z. little_set(Y) & little_set(Z) & equal(X::'a,ordered_pair(Y::'a,Z)) --> ordered_pair_predicate(X)) & (\Z X. member(Z::'a,first(X)) --> little_set(f4(Z::'a,X))) & (\Z X. member(Z::'a,first(X)) --> little_set(f5(Z::'a,X))) & (\Z X. member(Z::'a,first(X)) --> equal(X::'a,ordered_pair(f4(Z::'a,X),f5(Z::'a,X)))) & (\Z X. member(Z::'a,first(X)) --> member(Z::'a,f4(Z::'a,X))) & (\X V Z U. little_set(U) & little_set(V) & equal(X::'a,ordered_pair(U::'a,V)) & member(Z::'a,U) --> member(Z::'a,first(X))) & (\Z X. member(Z::'a,second(X)) --> little_set(f6(Z::'a,X))) & (\Z X. member(Z::'a,second(X)) --> little_set(f7(Z::'a,X))) & (\Z X. 
member(Z::'a,second(X)) --> equal(X::'a,ordered_pair(f6(Z::'a,X),f7(Z::'a,X)))) & (\Z X. member(Z::'a,second(X)) --> member(Z::'a,f7(Z::'a,X))) & (\X U Z V. little_set(U) & little_set(V) & equal(X::'a,ordered_pair(U::'a,V)) & member(Z::'a,V) --> member(Z::'a,second(X))) & (\Z. member(Z::'a,estin) --> ordered_pair_predicate(Z)) & (\Z. member(Z::'a,estin) --> member(first(Z),second(Z))) & (\Z. little_set(Z) & ordered_pair_predicate(Z) & member(first(Z),second(Z)) --> member(Z::'a,estin)) & (\Y Z X. member(Z::'a,intersection(X::'a,Y)) --> member(Z::'a,X)) & (\X Z Y. member(Z::'a,intersection(X::'a,Y)) --> member(Z::'a,Y)) & (\X Z Y. member(Z::'a,X) & member(Z::'a,Y) --> member(Z::'a,intersection(X::'a,Y))) & (\Z X. ~(member(Z::'a,complement(X)) & member(Z::'a,X))) & (\Z X. little_set(Z) --> member(Z::'a,complement(X)) | member(Z::'a,X)) & (\X Y. equal(union(X::'a,Y),complement(intersection(complement(X),complement(Y))))) & (\Z X. member(Z::'a,domain_of(X)) --> ordered_pair_predicate(f8(Z::'a,X))) & (\Z X. member(Z::'a,domain_of(X)) --> member(f8(Z::'a,X),X)) & (\Z X. member(Z::'a,domain_of(X)) --> equal(Z::'a,first(f8(Z::'a,X)))) & (\X Z Xp. little_set(Z) & ordered_pair_predicate(Xp) & member(Xp::'a,X) & equal(Z::'a,first(Xp)) --> member(Z::'a,domain_of(X))) & (\X Y Z. member(Z::'a,cross_product(X::'a,Y)) --> ordered_pair_predicate(Z)) & (\Y Z X. member(Z::'a,cross_product(X::'a,Y)) --> member(first(Z),X)) & (\X Z Y. member(Z::'a,cross_product(X::'a,Y)) --> member(second(Z),Y)) & (\X Z Y. little_set(Z) & ordered_pair_predicate(Z) & member(first(Z),X) & member(second(Z),Y) --> member(Z::'a,cross_product(X::'a,Y))) & (\X Z. member(Z::'a,inv1 X) --> ordered_pair_predicate(Z)) & (\Z X. member(Z::'a,inv1 X) --> member(ordered_pair(second(Z),first(Z)),X)) & (\Z X. little_set(Z) & ordered_pair_predicate(Z) & member(ordered_pair(second(Z),first(Z)),X) --> member(Z::'a,inv1 X)) & (\Z X. member(Z::'a,rot_right(X)) --> little_set(f9(Z::'a,X))) & (\Z X. 
member(Z::'a,rot_right(X)) --> little_set(f10(Z::'a,X))) & (\Z X. member(Z::'a,rot_right(X)) --> little_set(f11(Z::'a,X))) & (\Z X. member(Z::'a,rot_right(X)) --> equal(Z::'a,ordered_pair(f9(Z::'a,X),ordered_pair(f10(Z::'a,X),f11(Z::'a,X))))) & (\Z X. member(Z::'a,rot_right(X)) --> member(ordered_pair(f10(Z::'a,X),ordered_pair(f11(Z::'a,X),f9(Z::'a,X))),X)) & (\Z V W U X. little_set(Z) & little_set(U) & little_set(V) & little_set(W) & equal(Z::'a,ordered_pair(U::'a,ordered_pair(V::'a,W))) & member(ordered_pair(V::'a,ordered_pair(W::'a,U)),X) --> member(Z::'a,rot_right(X))) & (\Z X. member(Z::'a,flip_range_of(X)) --> little_set(f12(Z::'a,X))) & (\Z X. member(Z::'a,flip_range_of(X)) --> little_set(f13(Z::'a,X))) & (\Z X. member(Z::'a,flip_range_of(X)) --> little_set(f14(Z::'a,X))) & (\Z X. member(Z::'a,flip_range_of(X)) --> equal(Z::'a,ordered_pair(f12(Z::'a,X),ordered_pair(f13(Z::'a,X),f14(Z::'a,X))))) & (\Z X. member(Z::'a,flip_range_of(X)) --> member(ordered_pair(f12(Z::'a,X),ordered_pair(f14(Z::'a,X),f13(Z::'a,X))),X)) & (\Z U W V X. little_set(Z) & little_set(U) & little_set(V) & little_set(W) & equal(Z::'a,ordered_pair(U::'a,ordered_pair(V::'a,W))) & member(ordered_pair(U::'a,ordered_pair(W::'a,V)),X) --> member(Z::'a,flip_range_of(X))) & (\X. equal(successor(X),union(X::'a,singleton_set(X)))) & (\Z. ~member(Z::'a,empty_set)) & (\Z. little_set(Z) --> member(Z::'a,universal_set)) & (little_set(infinity)) & (member(empty_set::'a,infinity)) & (\X. member(X::'a,infinity) --> member(successor(X),infinity)) & (\Z X. member(Z::'a,sigma(X)) --> member(f16(Z::'a,X),X)) & (\Z X. member(Z::'a,sigma(X)) --> member(Z::'a,f16(Z::'a,X))) & (\X Z Y. member(Y::'a,X) & member(Z::'a,Y) --> member(Z::'a,sigma(X))) & (\U. little_set(U) --> little_set(sigma(U))) & (\X U Y. ssubset(X::'a,Y) & member(U::'a,X) --> member(U::'a,Y)) & (\Y X. ssubset(X::'a,Y) | member(f17(X::'a,Y),X)) & (\X Y. member(f17(X::'a,Y),Y) --> ssubset(X::'a,Y)) & (\X Y. 
proper_subset(X::'a,Y) --> ssubset(X::'a,Y)) & (\X Y. ~(proper_subset(X::'a,Y) & equal(X::'a,Y))) & (\X Y. ssubset(X::'a,Y) --> proper_subset(X::'a,Y) | equal(X::'a,Y)) & (\Z X. member(Z::'a,powerset(X)) --> ssubset(Z::'a,X)) & (\Z X. little_set(Z) & ssubset(Z::'a,X) --> member(Z::'a,powerset(X))) & (\U. little_set(U) --> little_set(powerset(U))) & (\Z X. relation(Z) & member(X::'a,Z) --> ordered_pair_predicate(X)) & (\Z. relation(Z) | member(f18(Z),Z)) & (\Z. ordered_pair_predicate(f18(Z)) --> relation(Z)) & (\U X V W. single_valued_set(X) & little_set(U) & little_set(V) & little_set(W) & member(ordered_pair(U::'a,V),X) & member(ordered_pair(U::'a,W),X) --> equal(V::'a,W)) & (\X. single_valued_set(X) | little_set(f19(X))) & (\X. single_valued_set(X) | little_set(f20(X))) & (\X. single_valued_set(X) | little_set(f21(X))) & (\X. single_valued_set(X) | member(ordered_pair(f19(X),f20(X)),X)) & (\X. single_valued_set(X) | member(ordered_pair(f19(X),f21(X)),X)) & (\X. equal(f20(X),f21(X)) --> single_valued_set(X)) & (\Xf. function(Xf) --> relation(Xf)) & (\Xf. function(Xf) --> single_valued_set(Xf)) & (\Xf. relation(Xf) & single_valued_set(Xf) --> function(Xf)) & (\Z X Xf. member(Z::'a,image'(X::'a,Xf)) --> ordered_pair_predicate(f22(Z::'a,X,Xf))) & (\Z X Xf. member(Z::'a,image'(X::'a,Xf)) --> member(f22(Z::'a,X,Xf),Xf)) & (\Z Xf X. member(Z::'a,image'(X::'a,Xf)) --> member(first(f22(Z::'a,X,Xf)),X)) & (\X Xf Z. member(Z::'a,image'(X::'a,Xf)) --> equal(second(f22(Z::'a,X,Xf)),Z)) & (\Xf X Y Z. little_set(Z) & ordered_pair_predicate(Y) & member(Y::'a,Xf) & member(first(Y),X) & equal(second(Y),Z) --> member(Z::'a,image'(X::'a,Xf))) & (\X Xf. little_set(X) & function(Xf) --> little_set(image'(X::'a,Xf))) & (\X U Y. ~(disjoint(X::'a,Y) & member(U::'a,X) & member(U::'a,Y))) & (\Y X. disjoint(X::'a,Y) | member(f23(X::'a,Y),X)) & (\X Y. disjoint(X::'a,Y) | member(f23(X::'a,Y),Y)) & (\X. equal(X::'a,empty_set) | member(f24(X),X)) & (\X. 
equal(X::'a,empty_set) | disjoint(f24(X),X)) & (function(f25)) & (\X. little_set(X) --> equal(X::'a,empty_set) | member(f26(X),X)) & (\X. little_set(X) --> equal(X::'a,empty_set) | member(ordered_pair(X::'a,f26(X)),f25)) & (\Z X. member(Z::'a,range_of(X)) --> ordered_pair_predicate(f27(Z::'a,X))) & (\Z X. member(Z::'a,range_of(X)) --> member(f27(Z::'a,X),X)) & (\Z X. member(Z::'a,range_of(X)) --> equal(Z::'a,second(f27(Z::'a,X)))) & (\X Z Xp. little_set(Z) & ordered_pair_predicate(Xp) & member(Xp::'a,X) & equal(Z::'a,second(Xp)) --> member(Z::'a,range_of(X))) & (\Z. member(Z::'a,identity_relation) --> ordered_pair_predicate(Z)) & (\Z. member(Z::'a,identity_relation) --> equal(first(Z),second(Z))) & (\Z. little_set(Z) & ordered_pair_predicate(Z) & equal(first(Z),second(Z)) --> member(Z::'a,identity_relation)) & (\X Y. equal(restrct(X::'a,Y),intersection(X::'a,cross_product(Y::'a,universal_set)))) & (\Xf. one_to_one_function(Xf) --> function(Xf)) & (\Xf. one_to_one_function(Xf) --> function(inv1 Xf)) & (\Xf. function(Xf) & function(inv1 Xf) --> one_to_one_function(Xf)) & (\Z Xf Y. member(Z::'a,apply(Xf::'a,Y)) --> ordered_pair_predicate(f28(Z::'a,Xf,Y))) & (\Z Y Xf. member(Z::'a,apply(Xf::'a,Y)) --> member(f28(Z::'a,Xf,Y),Xf)) & (\Z Xf Y. member(Z::'a,apply(Xf::'a,Y)) --> equal(first(f28(Z::'a,Xf,Y)),Y)) & (\Z Xf Y. member(Z::'a,apply(Xf::'a,Y)) --> member(Z::'a,second(f28(Z::'a,Xf,Y)))) & (\Xf Y Z W. ordered_pair_predicate(W) & member(W::'a,Xf) & equal(first(W),Y) & member(Z::'a,second(W)) --> member(Z::'a,apply(Xf::'a,Y))) & (\Xf X Y. equal(apply_to_two_arguments(Xf::'a,X,Y),apply(Xf::'a,ordered_pair(X::'a,Y)))) & (\X Y Xf. maps(Xf::'a,X,Y) --> function(Xf)) & (\Y Xf X. maps(Xf::'a,X,Y) --> equal(domain_of(Xf),X)) & (\X Xf Y. maps(Xf::'a,X,Y) --> ssubset(range_of(Xf),Y)) & (\X Xf Y. function(Xf) & equal(domain_of(Xf),X) & ssubset(range_of(Xf),Y) --> maps(Xf::'a,X,Y)) & (\Xf Xs. closed(Xs::'a,Xf) --> little_set(Xs)) & (\Xs Xf. 
closed(Xs::'a,Xf) --> little_set(Xf)) & (\Xf Xs. closed(Xs::'a,Xf) --> maps(Xf::'a,cross_product(Xs::'a,Xs),Xs)) & (\Xf Xs. little_set(Xs) & little_set(Xf) & maps(Xf::'a,cross_product(Xs::'a,Xs),Xs) --> closed(Xs::'a,Xf)) & (\Z Xf Xg. member(Z::'a,composition(Xf::'a,Xg)) --> little_set(f29(Z::'a,Xf,Xg))) & (\Z Xf Xg. member(Z::'a,composition(Xf::'a,Xg)) --> little_set(f30(Z::'a,Xf,Xg))) & (\Z Xf Xg. member(Z::'a,composition(Xf::'a,Xg)) --> little_set(f31(Z::'a,Xf,Xg))) & (\Z Xf Xg. member(Z::'a,composition(Xf::'a,Xg)) --> equal(Z::'a,ordered_pair(f29(Z::'a,Xf,Xg),f30(Z::'a,Xf,Xg)))) & (\Z Xg Xf. member(Z::'a,composition(Xf::'a,Xg)) --> member(ordered_pair(f29(Z::'a,Xf,Xg),f31(Z::'a,Xf,Xg)),Xf)) & (\Z Xf Xg. member(Z::'a,composition(Xf::'a,Xg)) --> member(ordered_pair(f31(Z::'a,Xf,Xg),f30(Z::'a,Xf,Xg)),Xg)) & (\Z X Xf W Y Xg. little_set(Z) & little_set(X) & little_set(Y) & little_set(W) & equal(Z::'a,ordered_pair(X::'a,Y)) & member(ordered_pair(X::'a,W),Xf) & member(ordered_pair(W::'a,Y),Xg) --> member(Z::'a,composition(Xf::'a,Xg))) & (\Xh Xs2 Xf2 Xs1 Xf1. homomorphism(Xh::'a,Xs1,Xf1,Xs2,Xf2) --> closed(Xs1::'a,Xf1)) & (\Xh Xs1 Xf1 Xs2 Xf2. homomorphism(Xh::'a,Xs1,Xf1,Xs2,Xf2) --> closed(Xs2::'a,Xf2)) & (\Xf1 Xf2 Xh Xs1 Xs2. homomorphism(Xh::'a,Xs1,Xf1,Xs2,Xf2) --> maps(Xh::'a,Xs1,Xs2)) & (\Xs2 Xs1 Xf1 Xf2 X Xh Y. homomorphism(Xh::'a,Xs1,Xf1,Xs2,Xf2) & member(X::'a,Xs1) & member(Y::'a,Xs1) --> equal(apply(Xh::'a,apply_to_two_arguments(Xf1::'a,X,Y)),apply_to_two_arguments(Xf2::'a,apply(Xh::'a,X),apply(Xh::'a,Y)))) & (\Xh Xf1 Xs2 Xf2 Xs1. closed(Xs1::'a,Xf1) & closed(Xs2::'a,Xf2) & maps(Xh::'a,Xs1,Xs2) --> homomorphism(Xh::'a,Xs1,Xf1,Xs2,Xf2) | member(f32(Xh::'a,Xs1,Xf1,Xs2,Xf2),Xs1)) & (\Xh Xf1 Xs2 Xf2 Xs1. closed(Xs1::'a,Xf1) & closed(Xs2::'a,Xf2) & maps(Xh::'a,Xs1,Xs2) --> homomorphism(Xh::'a,Xs1,Xf1,Xs2,Xf2) | member(f33(Xh::'a,Xs1,Xf1,Xs2,Xf2),Xs1)) & (\Xh Xs1 Xf1 Xs2 Xf2. 
closed(Xs1::'a,Xf1) & closed(Xs2::'a,Xf2) & maps(Xh::'a,Xs1,Xs2) & equal(apply(Xh::'a,apply_to_two_arguments(Xf1::'a,f32(Xh::'a,Xs1,Xf1,Xs2,Xf2),f33(Xh::'a,Xs1,Xf1,Xs2,Xf2))),apply_to_two_arguments(Xf2::'a,apply(Xh::'a,f32(Xh::'a,Xs1,Xf1,Xs2,Xf2)),apply(Xh::'a,f33(Xh::'a,Xs1,Xf1,Xs2,Xf2)))) --> homomorphism(Xh::'a,Xs1,Xf1,Xs2,Xf2)) & (\A B C. equal(A::'a,B) --> equal(f1(A::'a,C),f1(B::'a,C))) & (\D F' E. equal(D::'a,E) --> equal(f1(F'::'a,D),f1(F'::'a,E))) & (\A2 B2. equal(A2::'a,B2) --> equal(f2(A2),f2(B2))) & (\G4 H4. equal(G4::'a,H4) --> equal(f3(G4),f3(H4))) & (\O7 P7 Q7. equal(O7::'a,P7) --> equal(f4(O7::'a,Q7),f4(P7::'a,Q7))) & (\R7 T7 S7. equal(R7::'a,S7) --> equal(f4(T7::'a,R7),f4(T7::'a,S7))) & (\U7 V7 W7. equal(U7::'a,V7) --> equal(f5(U7::'a,W7),f5(V7::'a,W7))) & (\X7 Z7 Y7. equal(X7::'a,Y7) --> equal(f5(Z7::'a,X7),f5(Z7::'a,Y7))) & (\A8 B8 C8. equal(A8::'a,B8) --> equal(f6(A8::'a,C8),f6(B8::'a,C8))) & (\D8 F8 E8. equal(D8::'a,E8) --> equal(f6(F8::'a,D8),f6(F8::'a,E8))) & (\G8 H8 I8. equal(G8::'a,H8) --> equal(f7(G8::'a,I8),f7(H8::'a,I8))) & (\J8 L8 K8. equal(J8::'a,K8) --> equal(f7(L8::'a,J8),f7(L8::'a,K8))) & (\M8 N8 O8. equal(M8::'a,N8) --> equal(f8(M8::'a,O8),f8(N8::'a,O8))) & (\P8 R8 Q8. equal(P8::'a,Q8) --> equal(f8(R8::'a,P8),f8(R8::'a,Q8))) & (\S8 T8 U8. equal(S8::'a,T8) --> equal(f9(S8::'a,U8),f9(T8::'a,U8))) & (\V8 X8 W8. equal(V8::'a,W8) --> equal(f9(X8::'a,V8),f9(X8::'a,W8))) & (\G H I'. equal(G::'a,H) --> equal(f10(G::'a,I'),f10(H::'a,I'))) & (\J L K'. equal(J::'a,K') --> equal(f10(L::'a,J),f10(L::'a,K'))) & (\M N O'. equal(M::'a,N) --> equal(f11(M::'a,O'),f11(N::'a,O'))) & (\P R Q. equal(P::'a,Q) --> equal(f11(R::'a,P),f11(R::'a,Q))) & (\S' T' U. equal(S'::'a,T') --> equal(f12(S'::'a,U),f12(T'::'a,U))) & (\V X W. equal(V::'a,W) --> equal(f12(X::'a,V),f12(X::'a,W))) & (\Y Z A1. equal(Y::'a,Z) --> equal(f13(Y::'a,A1),f13(Z::'a,A1))) & (\B1 D1 C1. equal(B1::'a,C1) --> equal(f13(D1::'a,B1),f13(D1::'a,C1))) & (\E1 F1 G1. 
equal(E1::'a,F1) --> equal(f14(E1::'a,G1),f14(F1::'a,G1))) & (\H1 J1 I1. equal(H1::'a,I1) --> equal(f14(J1::'a,H1),f14(J1::'a,I1))) & (\K1 L1 M1. equal(K1::'a,L1) --> equal(f16(K1::'a,M1),f16(L1::'a,M1))) & (\N1 P1 O1. equal(N1::'a,O1) --> equal(f16(P1::'a,N1),f16(P1::'a,O1))) & (\Q1 R1 S1. equal(Q1::'a,R1) --> equal(f17(Q1::'a,S1),f17(R1::'a,S1))) & (\T1 V1 U1. equal(T1::'a,U1) --> equal(f17(V1::'a,T1),f17(V1::'a,U1))) & (\W1 X1. equal(W1::'a,X1) --> equal(f18(W1),f18(X1))) & (\Y1 Z1. equal(Y1::'a,Z1) --> equal(f19(Y1),f19(Z1))) & (\C2 D2. equal(C2::'a,D2) --> equal(f20(C2),f20(D2))) & (\E2 F2. equal(E2::'a,F2) --> equal(f21(E2),f21(F2))) & (\G2 H2 I2 J2. equal(G2::'a,H2) --> equal(f22(G2::'a,I2,J2),f22(H2::'a,I2,J2))) & (\K2 M2 L2 N2. equal(K2::'a,L2) --> equal(f22(M2::'a,K2,N2),f22(M2::'a,L2,N2))) & (\O2 Q2 R2 P2. equal(O2::'a,P2) --> equal(f22(Q2::'a,R2,O2),f22(Q2::'a,R2,P2))) & (\S2 T2 U2. equal(S2::'a,T2) --> equal(f23(S2::'a,U2),f23(T2::'a,U2))) & (\V2 X2 W2. equal(V2::'a,W2) --> equal(f23(X2::'a,V2),f23(X2::'a,W2))) & (\Y2 Z2. equal(Y2::'a,Z2) --> equal(f24(Y2),f24(Z2))) & (\A3 B3. equal(A3::'a,B3) --> equal(f26(A3),f26(B3))) & (\C3 D3 E3. equal(C3::'a,D3) --> equal(f27(C3::'a,E3),f27(D3::'a,E3))) & (\F3 H3 G3. equal(F3::'a,G3) --> equal(f27(H3::'a,F3),f27(H3::'a,G3))) & (\I3 J3 K3 L3. equal(I3::'a,J3) --> equal(f28(I3::'a,K3,L3),f28(J3::'a,K3,L3))) & (\M3 O3 N3 P3. equal(M3::'a,N3) --> equal(f28(O3::'a,M3,P3),f28(O3::'a,N3,P3))) & (\Q3 S3 T3 R3. equal(Q3::'a,R3) --> equal(f28(S3::'a,T3,Q3),f28(S3::'a,T3,R3))) & (\U3 V3 W3 X3. equal(U3::'a,V3) --> equal(f29(U3::'a,W3,X3),f29(V3::'a,W3,X3))) & (\Y3 A4 Z3 B4. equal(Y3::'a,Z3) --> equal(f29(A4::'a,Y3,B4),f29(A4::'a,Z3,B4))) & (\C4 E4 F4 D4. equal(C4::'a,D4) --> equal(f29(E4::'a,F4,C4),f29(E4::'a,F4,D4))) & (\I4 J4 K4 L4. equal(I4::'a,J4) --> equal(f30(I4::'a,K4,L4),f30(J4::'a,K4,L4))) & (\M4 O4 N4 P4. equal(M4::'a,N4) --> equal(f30(O4::'a,M4,P4),f30(O4::'a,N4,P4))) & (\Q4 S4 T4 R4. 
equal(Q4::'a,R4) --> equal(f30(S4::'a,T4,Q4),f30(S4::'a,T4,R4))) & (\U4 V4 W4 X4. equal(U4::'a,V4) --> equal(f31(U4::'a,W4,X4),f31(V4::'a,W4,X4))) & (\Y4 A5 Z4 B5. equal(Y4::'a,Z4) --> equal(f31(A5::'a,Y4,B5),f31(A5::'a,Z4,B5))) & (\C5 E5 F5 D5. equal(C5::'a,D5) --> equal(f31(E5::'a,F5,C5),f31(E5::'a,F5,D5))) & (\G5 H5 I5 J5 K5 L5. equal(G5::'a,H5) --> equal(f32(G5::'a,I5,J5,K5,L5),f32(H5::'a,I5,J5,K5,L5))) & (\M5 O5 N5 P5 Q5 R5. equal(M5::'a,N5) --> equal(f32(O5::'a,M5,P5,Q5,R5),f32(O5::'a,N5,P5,Q5,R5))) & (\S5 U5 V5 T5 W5 X5. equal(S5::'a,T5) --> equal(f32(U5::'a,V5,S5,W5,X5),f32(U5::'a,V5,T5,W5,X5))) & (\Y5 A6 B6 C6 Z5 D6. equal(Y5::'a,Z5) --> equal(f32(A6::'a,B6,C6,Y5,D6),f32(A6::'a,B6,C6,Z5,D6))) & (\E6 G6 H6 I6 J6 F6. equal(E6::'a,F6) --> equal(f32(G6::'a,H6,I6,J6,E6),f32(G6::'a,H6,I6,J6,F6))) & (\K6 L6 M6 N6 O6 P6. equal(K6::'a,L6) --> equal(f33(K6::'a,M6,N6,O6,P6),f33(L6::'a,M6,N6,O6,P6))) & (\Q6 S6 R6 T6 U6 V6. equal(Q6::'a,R6) --> equal(f33(S6::'a,Q6,T6,U6,V6),f33(S6::'a,R6,T6,U6,V6))) & (\W6 Y6 Z6 X6 A7 B7. equal(W6::'a,X6) --> equal(f33(Y6::'a,Z6,W6,A7,B7),f33(Y6::'a,Z6,X6,A7,B7))) & (\C7 E7 F7 G7 D7 H7. equal(C7::'a,D7) --> equal(f33(E7::'a,F7,G7,C7,H7),f33(E7::'a,F7,G7,D7,H7))) & (\I7 K7 L7 M7 N7 J7. equal(I7::'a,J7) --> equal(f33(K7::'a,L7,M7,N7,I7),f33(K7::'a,L7,M7,N7,J7))) & (\A B C. equal(A::'a,B) --> equal(apply(A::'a,C),apply(B::'a,C))) & (\D F' E. equal(D::'a,E) --> equal(apply(F'::'a,D),apply(F'::'a,E))) & (\G H I' J. equal(G::'a,H) --> equal(apply_to_two_arguments(G::'a,I',J),apply_to_two_arguments(H::'a,I',J))) & (\K' M L N. equal(K'::'a,L) --> equal(apply_to_two_arguments(M::'a,K',N),apply_to_two_arguments(M::'a,L,N))) & (\O' Q R P. equal(O'::'a,P) --> equal(apply_to_two_arguments(Q::'a,R,O'),apply_to_two_arguments(Q::'a,R,P))) & (\S' T'. equal(S'::'a,T') --> equal(complement(S'),complement(T'))) & (\U V W. equal(U::'a,V) --> equal(composition(U::'a,W),composition(V::'a,W))) & (\X Z Y. 
equal(X::'a,Y) --> equal(composition(Z::'a,X),composition(Z::'a,Y))) & (\A1 B1. equal(A1::'a,B1) --> equal(inv1 A1,inv1 B1)) & (\C1 D1 E1. equal(C1::'a,D1) --> equal(cross_product(C1::'a,E1),cross_product(D1::'a,E1))) & (\F1 H1 G1. equal(F1::'a,G1) --> equal(cross_product(H1::'a,F1),cross_product(H1::'a,G1))) & (\I1 J1. equal(I1::'a,J1) --> equal(domain_of(I1),domain_of(J1))) & (\I10 J10. equal(I10::'a,J10) --> equal(first(I10),first(J10))) & (\Q10 R10. equal(Q10::'a,R10) --> equal(flip_range_of(Q10),flip_range_of(R10))) & (\S10 T10 U10. equal(S10::'a,T10) --> equal(image'(S10::'a,U10),image'(T10::'a,U10))) & (\V10 X10 W10. equal(V10::'a,W10) --> equal(image'(X10::'a,V10),image'(X10::'a,W10))) & (\Y10 Z10 A11. equal(Y10::'a,Z10) --> equal(intersection(Y10::'a,A11),intersection(Z10::'a,A11))) & (\B11 D11 C11. equal(B11::'a,C11) --> equal(intersection(D11::'a,B11),intersection(D11::'a,C11))) & (\E11 F11 G11. equal(E11::'a,F11) --> equal(non_ordered_pair(E11::'a,G11),non_ordered_pair(F11::'a,G11))) & (\H11 J11 I11. equal(H11::'a,I11) --> equal(non_ordered_pair(J11::'a,H11),non_ordered_pair(J11::'a,I11))) & (\K11 L11 M11. equal(K11::'a,L11) --> equal(ordered_pair(K11::'a,M11),ordered_pair(L11::'a,M11))) & (\N11 P11 O11. equal(N11::'a,O11) --> equal(ordered_pair(P11::'a,N11),ordered_pair(P11::'a,O11))) & (\Q11 R11. equal(Q11::'a,R11) --> equal(powerset(Q11),powerset(R11))) & (\S11 T11. equal(S11::'a,T11) --> equal(range_of(S11),range_of(T11))) & (\U11 V11 W11. equal(U11::'a,V11) --> equal(restrct(U11::'a,W11),restrct(V11::'a,W11))) & (\X11 Z11 Y11. equal(X11::'a,Y11) --> equal(restrct(Z11::'a,X11),restrct(Z11::'a,Y11))) & (\A12 B12. equal(A12::'a,B12) --> equal(rot_right(A12),rot_right(B12))) & (\C12 D12. equal(C12::'a,D12) --> equal(second(C12),second(D12))) & (\K12 L12. equal(K12::'a,L12) --> equal(sigma(K12),sigma(L12))) & (\M12 N12. equal(M12::'a,N12) --> equal(singleton_set(M12),singleton_set(N12))) & (\O12 P12. 
equal(O12::'a,P12) --> equal(successor(O12),successor(P12))) & (\Q12 R12 S12. equal(Q12::'a,R12) --> equal(union(Q12::'a,S12),union(R12::'a,S12))) & (\T12 V12 U12. equal(T12::'a,U12) --> equal(union(V12::'a,T12),union(V12::'a,U12))) & (\W12 X12 Y12. equal(W12::'a,X12) & closed(W12::'a,Y12) --> closed(X12::'a,Y12)) & (\Z12 B13 A13. equal(Z12::'a,A13) & closed(B13::'a,Z12) --> closed(B13::'a,A13)) & (\C13 D13 E13. equal(C13::'a,D13) & disjoint(C13::'a,E13) --> disjoint(D13::'a,E13)) & (\F13 H13 G13. equal(F13::'a,G13) & disjoint(H13::'a,F13) --> disjoint(H13::'a,G13)) & (\I13 J13. equal(I13::'a,J13) & function(I13) --> function(J13)) & (\K13 L13 M13 N13 O13 P13. equal(K13::'a,L13) & homomorphism(K13::'a,M13,N13,O13,P13) --> homomorphism(L13::'a,M13,N13,O13,P13)) & (\Q13 S13 R13 T13 U13 V13. equal(Q13::'a,R13) & homomorphism(S13::'a,Q13,T13,U13,V13) --> homomorphism(S13::'a,R13,T13,U13,V13)) & (\W13 Y13 Z13 X13 A14 B14. equal(W13::'a,X13) & homomorphism(Y13::'a,Z13,W13,A14,B14) --> homomorphism(Y13::'a,Z13,X13,A14,B14)) & (\C14 E14 F14 G14 D14 H14. equal(C14::'a,D14) & homomorphism(E14::'a,F14,G14,C14,H14) --> homomorphism(E14::'a,F14,G14,D14,H14)) & (\I14 K14 L14 M14 N14 J14. equal(I14::'a,J14) & homomorphism(K14::'a,L14,M14,N14,I14) --> homomorphism(K14::'a,L14,M14,N14,J14)) & (\O14 P14. equal(O14::'a,P14) & little_set(O14) --> little_set(P14)) & (\Q14 R14 S14 T14. equal(Q14::'a,R14) & maps(Q14::'a,S14,T14) --> maps(R14::'a,S14,T14)) & (\U14 W14 V14 X14. equal(U14::'a,V14) & maps(W14::'a,U14,X14) --> maps(W14::'a,V14,X14)) & (\Y14 A15 B15 Z14. equal(Y14::'a,Z14) & maps(A15::'a,B15,Y14) --> maps(A15::'a,B15,Z14)) & (\C15 D15 E15. equal(C15::'a,D15) & member(C15::'a,E15) --> member(D15::'a,E15)) & (\F15 H15 G15. equal(F15::'a,G15) & member(H15::'a,F15) --> member(H15::'a,G15)) & (\I15 J15. equal(I15::'a,J15) & one_to_one_function(I15) --> one_to_one_function(J15)) & (\K15 L15. 
equal(K15::'a,L15) & ordered_pair_predicate(K15) --> ordered_pair_predicate(L15)) & (\M15 N15 O15. equal(M15::'a,N15) & proper_subset(M15::'a,O15) --> proper_subset(N15::'a,O15)) & (\P15 R15 Q15. equal(P15::'a,Q15) & proper_subset(R15::'a,P15) --> proper_subset(R15::'a,Q15)) & (\S15 T15. equal(S15::'a,T15) & relation(S15) --> relation(T15)) & (\U15 V15. equal(U15::'a,V15) & single_valued_set(U15) --> single_valued_set(V15)) & (\W15 X15 Y15. equal(W15::'a,X15) & ssubset(W15::'a,Y15) --> ssubset(X15::'a,Y15)) & (\Z15 B16 A16. equal(Z15::'a,A16) & ssubset(B16::'a,Z15) --> ssubset(B16::'a,A16)) & (~little_set(ordered_pair(a::'a,b))) --> False" oops (*13 inferences so far. Searching to depth 8. 0 secs*) lemma SET046_5: "(\Y X. ~(element(X::'a,a) & element(X::'a,Y) & element(Y::'a,X))) & (\X. element(X::'a,f(X)) | element(X::'a,a)) & (\X. element(f(X),X) | element(X::'a,a)) --> False" by meson (*33 inferences so far. Searching to depth 9. 0.2 secs*) lemma SET047_5: "(\X Z Y. set_equal(X::'a,Y) & element(Z::'a,X) --> element(Z::'a,Y)) & (\Y Z X. set_equal(X::'a,Y) & element(Z::'a,Y) --> element(Z::'a,X)) & (\X Y. element(f(X::'a,Y),X) | element(f(X::'a,Y),Y) | set_equal(X::'a,Y)) & (\X Y. element(f(X::'a,Y),Y) & element(f(X::'a,Y),X) --> set_equal(X::'a,Y)) & (set_equal(a::'a,b) | set_equal(b::'a,a)) & (~(set_equal(b::'a,a) & set_equal(a::'a,b))) --> False" by meson (*311 inferences so far. Searching to depth 12. 0.1 secs*) lemma SYN034_1: "(\A. p(A::'a,a) | p(A::'a,f(A))) & (\A. p(A::'a,a) | p(f(A),A)) & (\A B. ~(p(A::'a,B) & p(B::'a,A) & p(B::'a,a))) --> False" by meson (*30 inferences so far. Searching to depth 6. 0.2 secs*) lemma SYN071_1: "EQU001_0_ax equal & (equal(a::'a,b) | equal(c::'a,d)) & (equal(a::'a,c) | equal(b::'a,d)) & (~equal(a::'a,d)) & (~equal(b::'a,c)) --> False" by meson (*1897410 inferences so far. Searching to depth 48 206s, nearly 4 mins on griffon.*) lemma SYN349_1: "(\X Y. f(w(X),g(X::'a,Y)) --> f(X::'a,g(X::'a,Y))) & (\X Y. 
f(X::'a,g(X::'a,Y)) --> f(w(X),g(X::'a,Y))) & (\Y X. f(X::'a,g(X::'a,Y)) & f(Y::'a,g(X::'a,Y)) --> f(g(X::'a,Y),Y) | f(g(X::'a,Y),w(X))) & (\Y X. f(g(X::'a,Y),Y) & f(Y::'a,g(X::'a,Y)) --> f(X::'a,g(X::'a,Y)) | f(g(X::'a,Y),w(X))) & (\Y X. f(X::'a,g(X::'a,Y)) | f(g(X::'a,Y),Y) | f(Y::'a,g(X::'a,Y)) | f(g(X::'a,Y),w(X))) & (\Y X. f(X::'a,g(X::'a,Y)) & f(g(X::'a,Y),Y) --> f(Y::'a,g(X::'a,Y)) | f(g(X::'a,Y),w(X))) & (\Y X. f(X::'a,g(X::'a,Y)) & f(g(X::'a,Y),w(X)) --> f(g(X::'a,Y),Y) | f(Y::'a,g(X::'a,Y))) & (\Y X. f(g(X::'a,Y),Y) & f(g(X::'a,Y),w(X)) --> f(X::'a,g(X::'a,Y)) | f(Y::'a,g(X::'a,Y))) & (\Y X. f(Y::'a,g(X::'a,Y)) & f(g(X::'a,Y),w(X)) --> f(X::'a,g(X::'a,Y)) | f(g(X::'a,Y),Y)) & (\Y X. ~(f(X::'a,g(X::'a,Y)) & f(g(X::'a,Y),Y) & f(Y::'a,g(X::'a,Y)) & f(g(X::'a,Y),w(X)))) --> False" oops (*398 inferences so far. Searching to depth 12. 0.4 secs*) lemma SYN352_1: "(f(a::'a,b)) & (\X Y. f(X::'a,Y) --> f(b::'a,z(X::'a,Y)) | f(Y::'a,z(X::'a,Y))) & (\X Y. f(X::'a,Y) | f(z(X::'a,Y),z(X::'a,Y))) & (\X Y. f(b::'a,z(X::'a,Y)) | f(X::'a,z(X::'a,Y)) | f(z(X::'a,Y),z(X::'a,Y))) & (\X Y. f(b::'a,z(X::'a,Y)) & f(X::'a,z(X::'a,Y)) --> f(z(X::'a,Y),z(X::'a,Y))) & (\X Y. ~(f(X::'a,Y) & f(X::'a,z(X::'a,Y)) & f(Y::'a,z(X::'a,Y)))) & (\X Y. f(X::'a,Y) --> f(X::'a,z(X::'a,Y)) | f(Y::'a,z(X::'a,Y))) --> False" by meson (*5336 inferences so far. Searching to depth 15. 5.3 secs*) lemma TOP001_2: "(\Vf U. element_of_set(U::'a,union_of_members(Vf)) --> element_of_set(U::'a,f1(Vf::'a,U))) & (\U Vf. element_of_set(U::'a,union_of_members(Vf)) --> element_of_collection(f1(Vf::'a,U),Vf)) & (\U Uu1 Vf. element_of_set(U::'a,Uu1) & element_of_collection(Uu1::'a,Vf) --> element_of_set(U::'a,union_of_members(Vf))) & (\Vf X. basis(X::'a,Vf) --> equal_sets(union_of_members(Vf),X)) & (\Vf U X. element_of_collection(U::'a,top_of_basis(Vf)) & element_of_set(X::'a,U) --> element_of_set(X::'a,f10(Vf::'a,U,X))) & (\U X Vf. 
element_of_collection(U::'a,top_of_basis(Vf)) & element_of_set(X::'a,U) --> element_of_collection(f10(Vf::'a,U,X),Vf)) & (\X. subset_sets(X::'a,X)) & (\X U Y. subset_sets(X::'a,Y) & element_of_set(U::'a,X) --> element_of_set(U::'a,Y)) & (\X Y. equal_sets(X::'a,Y) --> subset_sets(X::'a,Y)) & (\Y X. subset_sets(X::'a,Y) | element_of_set(in_1st_set(X::'a,Y),X)) & (\X Y. element_of_set(in_1st_set(X::'a,Y),Y) --> subset_sets(X::'a,Y)) & (basis(cx::'a,f)) & (~subset_sets(union_of_members(top_of_basis(f)),cx)) --> False" by meson (*0 inferences so far. Searching to depth 0. 0 secs*) lemma TOP002_2: "(\Vf U. element_of_collection(U::'a,top_of_basis(Vf)) | element_of_set(f11(Vf::'a,U),U)) & (\X. ~element_of_set(X::'a,empty_set)) & (~element_of_collection(empty_set::'a,top_of_basis(f))) --> False" by meson (*0 inferences so far. Searching to depth 0. 6.5 secs. BIG*) lemma TOP004_1: "(\Vf U. element_of_set(U::'a,union_of_members(Vf)) --> element_of_set(U::'a,f1(Vf::'a,U))) & (\U Vf. element_of_set(U::'a,union_of_members(Vf)) --> element_of_collection(f1(Vf::'a,U),Vf)) & (\U Uu1 Vf. element_of_set(U::'a,Uu1) & element_of_collection(Uu1::'a,Vf) --> element_of_set(U::'a,union_of_members(Vf))) & (\Vf U Va. element_of_set(U::'a,intersection_of_members(Vf)) & element_of_collection(Va::'a,Vf) --> element_of_set(U::'a,Va)) & (\U Vf. element_of_set(U::'a,intersection_of_members(Vf)) | element_of_collection(f2(Vf::'a,U),Vf)) & (\Vf U. element_of_set(U::'a,f2(Vf::'a,U)) --> element_of_set(U::'a,intersection_of_members(Vf))) & (\Vt X. topological_space(X::'a,Vt) --> equal_sets(union_of_members(Vt),X)) & (\X Vt. topological_space(X::'a,Vt) --> element_of_collection(empty_set::'a,Vt)) & (\X Vt. topological_space(X::'a,Vt) --> element_of_collection(X::'a,Vt)) & (\X Y Z Vt. topological_space(X::'a,Vt) & element_of_collection(Y::'a,Vt) & element_of_collection(Z::'a,Vt) --> element_of_collection(intersection_of_sets(Y::'a,Z),Vt)) & (\X Vf Vt. 
topological_space(X::'a,Vt) & subset_collections(Vf::'a,Vt) --> element_of_collection(union_of_members(Vf),Vt)) & (\X Vt. equal_sets(union_of_members(Vt),X) & element_of_collection(empty_set::'a,Vt) & element_of_collection(X::'a,Vt) --> topological_space(X::'a,Vt) | element_of_collection(f3(X::'a,Vt),Vt) | subset_collections(f5(X::'a,Vt),Vt)) & (\X Vt. equal_sets(union_of_members(Vt),X) & element_of_collection(empty_set::'a,Vt) & element_of_collection(X::'a,Vt) & element_of_collection(union_of_members(f5(X::'a,Vt)),Vt) --> topological_space(X::'a,Vt) | element_of_collection(f3(X::'a,Vt),Vt)) & (\X Vt. equal_sets(union_of_members(Vt),X) & element_of_collection(empty_set::'a,Vt) & element_of_collection(X::'a,Vt) --> topological_space(X::'a,Vt) | element_of_collection(f4(X::'a,Vt),Vt) | subset_collections(f5(X::'a,Vt),Vt)) & (\X Vt. equal_sets(union_of_members(Vt),X) & element_of_collection(empty_set::'a,Vt) & element_of_collection(X::'a,Vt) & element_of_collection(union_of_members(f5(X::'a,Vt)),Vt) --> topological_space(X::'a,Vt) | element_of_collection(f4(X::'a,Vt),Vt)) & (\X Vt. equal_sets(union_of_members(Vt),X) & element_of_collection(empty_set::'a,Vt) & element_of_collection(X::'a,Vt) & element_of_collection(intersection_of_sets(f3(X::'a,Vt),f4(X::'a,Vt)),Vt) --> topological_space(X::'a,Vt) | subset_collections(f5(X::'a,Vt),Vt)) & (\X Vt. equal_sets(union_of_members(Vt),X) & element_of_collection(empty_set::'a,Vt) & element_of_collection(X::'a,Vt) & element_of_collection(intersection_of_sets(f3(X::'a,Vt),f4(X::'a,Vt)),Vt) & element_of_collection(union_of_members(f5(X::'a,Vt)),Vt) --> topological_space(X::'a,Vt)) & (\U X Vt. open(U::'a,X,Vt) --> topological_space(X::'a,Vt)) & (\X U Vt. open(U::'a,X,Vt) --> element_of_collection(U::'a,Vt)) & (\X U Vt. topological_space(X::'a,Vt) & element_of_collection(U::'a,Vt) --> open(U::'a,X,Vt)) & (\U X Vt. closed(U::'a,X,Vt) --> topological_space(X::'a,Vt)) & (\U X Vt. 
closed(U::'a,X,Vt) --> open(relative_complement_sets(U::'a,X),X,Vt)) & (\U X Vt. topological_space(X::'a,Vt) & open(relative_complement_sets(U::'a,X),X,Vt) --> closed(U::'a,X,Vt)) & (\Vs X Vt. finer(Vt::'a,Vs,X) --> topological_space(X::'a,Vt)) & (\Vt X Vs. finer(Vt::'a,Vs,X) --> topological_space(X::'a,Vs)) & (\X Vs Vt. finer(Vt::'a,Vs,X) --> subset_collections(Vs::'a,Vt)) & (\X Vs Vt. topological_space(X::'a,Vt) & topological_space(X::'a,Vs) & subset_collections(Vs::'a,Vt) --> finer(Vt::'a,Vs,X)) & (\Vf X. basis(X::'a,Vf) --> equal_sets(union_of_members(Vf),X)) & (\X Vf Y Vb1 Vb2. basis(X::'a,Vf) & element_of_set(Y::'a,X) & element_of_collection(Vb1::'a,Vf) & element_of_collection(Vb2::'a,Vf) & element_of_set(Y::'a,intersection_of_sets(Vb1::'a,Vb2)) --> element_of_set(Y::'a,f6(X::'a,Vf,Y,Vb1,Vb2))) & (\X Y Vb1 Vb2 Vf. basis(X::'a,Vf) & element_of_set(Y::'a,X) & element_of_collection(Vb1::'a,Vf) & element_of_collection(Vb2::'a,Vf) & element_of_set(Y::'a,intersection_of_sets(Vb1::'a,Vb2)) --> element_of_collection(f6(X::'a,Vf,Y,Vb1,Vb2),Vf)) & (\X Vf Y Vb1 Vb2. basis(X::'a,Vf) & element_of_set(Y::'a,X) & element_of_collection(Vb1::'a,Vf) & element_of_collection(Vb2::'a,Vf) & element_of_set(Y::'a,intersection_of_sets(Vb1::'a,Vb2)) --> subset_sets(f6(X::'a,Vf,Y,Vb1,Vb2),intersection_of_sets(Vb1::'a,Vb2))) & (\Vf X. equal_sets(union_of_members(Vf),X) --> basis(X::'a,Vf) | element_of_set(f7(X::'a,Vf),X)) & (\X Vf. equal_sets(union_of_members(Vf),X) --> basis(X::'a,Vf) | element_of_collection(f8(X::'a,Vf),Vf)) & (\X Vf. equal_sets(union_of_members(Vf),X) --> basis(X::'a,Vf) | element_of_collection(f9(X::'a,Vf),Vf)) & (\X Vf. equal_sets(union_of_members(Vf),X) --> basis(X::'a,Vf) | element_of_set(f7(X::'a,Vf),intersection_of_sets(f8(X::'a,Vf),f9(X::'a,Vf)))) & (\Uu9 X Vf. 
equal_sets(union_of_members(Vf),X) & element_of_set(f7(X::'a,Vf),Uu9) & element_of_collection(Uu9::'a,Vf) & subset_sets(Uu9::'a,intersection_of_sets(f8(X::'a,Vf),f9(X::'a,Vf))) --> basis(X::'a,Vf)) & (\Vf U X. element_of_collection(U::'a,top_of_basis(Vf)) & element_of_set(X::'a,U) --> element_of_set(X::'a,f10(Vf::'a,U,X))) & (\U X Vf. element_of_collection(U::'a,top_of_basis(Vf)) & element_of_set(X::'a,U) --> element_of_collection(f10(Vf::'a,U,X),Vf)) & (\Vf X U. element_of_collection(U::'a,top_of_basis(Vf)) & element_of_set(X::'a,U) --> subset_sets(f10(Vf::'a,U,X),U)) & (\Vf U. element_of_collection(U::'a,top_of_basis(Vf)) | element_of_set(f11(Vf::'a,U),U)) & (\Vf Uu11 U. element_of_set(f11(Vf::'a,U),Uu11) & element_of_collection(Uu11::'a,Vf) & subset_sets(Uu11::'a,U) --> element_of_collection(U::'a,top_of_basis(Vf))) & (\U Y X Vt. element_of_collection(U::'a,subspace_topology(X::'a,Vt,Y)) --> topological_space(X::'a,Vt)) & (\U Vt Y X. element_of_collection(U::'a,subspace_topology(X::'a,Vt,Y)) --> subset_sets(Y::'a,X)) & (\X Y U Vt. element_of_collection(U::'a,subspace_topology(X::'a,Vt,Y)) --> element_of_collection(f12(X::'a,Vt,Y,U),Vt)) & (\X Vt Y U. element_of_collection(U::'a,subspace_topology(X::'a,Vt,Y)) --> equal_sets(U::'a,intersection_of_sets(Y::'a,f12(X::'a,Vt,Y,U)))) & (\X Vt U Y Uu12. topological_space(X::'a,Vt) & subset_sets(Y::'a,X) & element_of_collection(Uu12::'a,Vt) & equal_sets(U::'a,intersection_of_sets(Y::'a,Uu12)) --> element_of_collection(U::'a,subspace_topology(X::'a,Vt,Y))) & (\U Y X Vt. element_of_set(U::'a,interior(Y::'a,X,Vt)) --> topological_space(X::'a,Vt)) & (\U Vt Y X. element_of_set(U::'a,interior(Y::'a,X,Vt)) --> subset_sets(Y::'a,X)) & (\Y X Vt U. element_of_set(U::'a,interior(Y::'a,X,Vt)) --> element_of_set(U::'a,f13(Y::'a,X,Vt,U))) & (\X Vt U Y. element_of_set(U::'a,interior(Y::'a,X,Vt)) --> subset_sets(f13(Y::'a,X,Vt,U),Y)) & (\Y U X Vt. 
element_of_set(U::'a,interior(Y::'a,X,Vt)) --> open(f13(Y::'a,X,Vt,U),X,Vt)) & (\U Y Uu13 X Vt. topological_space(X::'a,Vt) & subset_sets(Y::'a,X) & element_of_set(U::'a,Uu13) & subset_sets(Uu13::'a,Y) & open(Uu13::'a,X,Vt) --> element_of_set(U::'a,interior(Y::'a,X,Vt))) & (\U Y X Vt. element_of_set(U::'a,closure(Y::'a,X,Vt)) --> topological_space(X::'a,Vt)) & (\U Vt Y X. element_of_set(U::'a,closure(Y::'a,X,Vt)) --> subset_sets(Y::'a,X)) & (\Y X Vt U V. element_of_set(U::'a,closure(Y::'a,X,Vt)) & subset_sets(Y::'a,V) & closed(V::'a,X,Vt) --> element_of_set(U::'a,V)) & (\Y X Vt U. topological_space(X::'a,Vt) & subset_sets(Y::'a,X) --> element_of_set(U::'a,closure(Y::'a,X,Vt)) | subset_sets(Y::'a,f14(Y::'a,X,Vt,U))) & (\Y U X Vt. topological_space(X::'a,Vt) & subset_sets(Y::'a,X) --> element_of_set(U::'a,closure(Y::'a,X,Vt)) | closed(f14(Y::'a,X,Vt,U),X,Vt)) & (\Y X Vt U. topological_space(X::'a,Vt) & subset_sets(Y::'a,X) & element_of_set(U::'a,f14(Y::'a,X,Vt,U)) --> element_of_set(U::'a,closure(Y::'a,X,Vt))) & (\U Y X Vt. neighborhood(U::'a,Y,X,Vt) --> topological_space(X::'a,Vt)) & (\Y U X Vt. neighborhood(U::'a,Y,X,Vt) --> open(U::'a,X,Vt)) & (\X Vt Y U. neighborhood(U::'a,Y,X,Vt) --> element_of_set(Y::'a,U)) & (\X Vt Y U. topological_space(X::'a,Vt) & open(U::'a,X,Vt) & element_of_set(Y::'a,U) --> neighborhood(U::'a,Y,X,Vt)) & (\Z Y X Vt. limit_point(Z::'a,Y,X,Vt) --> topological_space(X::'a,Vt)) & (\Z Vt Y X. limit_point(Z::'a,Y,X,Vt) --> subset_sets(Y::'a,X)) & (\Z X Vt U Y. limit_point(Z::'a,Y,X,Vt) & neighborhood(U::'a,Z,X,Vt) --> element_of_set(f15(Z::'a,Y,X,Vt,U),intersection_of_sets(U::'a,Y))) & (\Y X Vt U Z. ~(limit_point(Z::'a,Y,X,Vt) & neighborhood(U::'a,Z,X,Vt) & eq_p(f15(Z::'a,Y,X,Vt,U),Z))) & (\Y Z X Vt. topological_space(X::'a,Vt) & subset_sets(Y::'a,X) --> limit_point(Z::'a,Y,X,Vt) | neighborhood(f16(Z::'a,Y,X,Vt),Z,X,Vt)) & (\X Vt Y Uu16 Z. 
topological_space(X::'a,Vt) & subset_sets(Y::'a,X) & element_of_set(Uu16::'a,intersection_of_sets(f16(Z::'a,Y,X,Vt),Y)) --> limit_point(Z::'a,Y,X,Vt) | eq_p(Uu16::'a,Z)) & (\U Y X Vt. element_of_set(U::'a,boundary(Y::'a,X,Vt)) --> topological_space(X::'a,Vt)) & (\U Y X Vt. element_of_set(U::'a,boundary(Y::'a,X,Vt)) --> element_of_set(U::'a,closure(Y::'a,X,Vt))) & (\U Y X Vt. element_of_set(U::'a,boundary(Y::'a,X,Vt)) --> element_of_set(U::'a,closure(relative_complement_sets(Y::'a,X),X,Vt))) & (\U Y X Vt. topological_space(X::'a,Vt) & element_of_set(U::'a,closure(Y::'a,X,Vt)) & element_of_set(U::'a,closure(relative_complement_sets(Y::'a,X),X,Vt)) --> element_of_set(U::'a,boundary(Y::'a,X,Vt))) & (\X Vt. hausdorff(X::'a,Vt) --> topological_space(X::'a,Vt)) & (\X_2 X_1 X Vt. hausdorff(X::'a,Vt) & element_of_set(X_1::'a,X) & element_of_set(X_2::'a,X) --> eq_p(X_1::'a,X_2) | neighborhood(f17(X::'a,Vt,X_1,X_2),X_1,X,Vt)) & (\X_1 X_2 X Vt. hausdorff(X::'a,Vt) & element_of_set(X_1::'a,X) & element_of_set(X_2::'a,X) --> eq_p(X_1::'a,X_2) | neighborhood(f18(X::'a,Vt,X_1,X_2),X_2,X,Vt)) & (\X Vt X_1 X_2. hausdorff(X::'a,Vt) & element_of_set(X_1::'a,X) & element_of_set(X_2::'a,X) --> eq_p(X_1::'a,X_2) | disjoint_s(f17(X::'a,Vt,X_1,X_2),f18(X::'a,Vt,X_1,X_2))) & (\Vt X. topological_space(X::'a,Vt) --> hausdorff(X::'a,Vt) | element_of_set(f19(X::'a,Vt),X)) & (\Vt X. topological_space(X::'a,Vt) --> hausdorff(X::'a,Vt) | element_of_set(f20(X::'a,Vt),X)) & (\X Vt. topological_space(X::'a,Vt) & eq_p(f19(X::'a,Vt),f20(X::'a,Vt)) --> hausdorff(X::'a,Vt)) & (\X Vt Uu19 Uu20. topological_space(X::'a,Vt) & neighborhood(Uu19::'a,f19(X::'a,Vt),X,Vt) & neighborhood(Uu20::'a,f20(X::'a,Vt),X,Vt) & disjoint_s(Uu19::'a,Uu20) --> hausdorff(X::'a,Vt)) & (\Va1 Va2 X Vt. separation(Va1::'a,Va2,X,Vt) --> topological_space(X::'a,Vt)) & (\Va2 X Vt Va1. ~(separation(Va1::'a,Va2,X,Vt) & equal_sets(Va1::'a,empty_set))) & (\Va1 X Vt Va2. 
~(separation(Va1::'a,Va2,X,Vt) & equal_sets(Va2::'a,empty_set))) & (\Va2 X Va1 Vt. separation(Va1::'a,Va2,X,Vt) --> element_of_collection(Va1::'a,Vt)) & (\Va1 X Va2 Vt. separation(Va1::'a,Va2,X,Vt) --> element_of_collection(Va2::'a,Vt)) & (\Vt Va1 Va2 X. separation(Va1::'a,Va2,X,Vt) --> equal_sets(union_of_sets(Va1::'a,Va2),X)) & (\X Vt Va1 Va2. separation(Va1::'a,Va2,X,Vt) --> disjoint_s(Va1::'a,Va2)) & (\Vt X Va1 Va2. topological_space(X::'a,Vt) & element_of_collection(Va1::'a,Vt) & element_of_collection(Va2::'a,Vt) & equal_sets(union_of_sets(Va1::'a,Va2),X) & disjoint_s(Va1::'a,Va2) --> separation(Va1::'a,Va2,X,Vt) | equal_sets(Va1::'a,empty_set) | equal_sets(Va2::'a,empty_set)) & (\X Vt. connected_space(X::'a,Vt) --> topological_space(X::'a,Vt)) & (\Va1 Va2 X Vt. ~(connected_space(X::'a,Vt) & separation(Va1::'a,Va2,X,Vt))) & (\X Vt. topological_space(X::'a,Vt) --> connected_space(X::'a,Vt) | separation(f21(X::'a,Vt),f22(X::'a,Vt),X,Vt)) & (\Va X Vt. connected_set(Va::'a,X,Vt) --> topological_space(X::'a,Vt)) & (\Vt Va X. connected_set(Va::'a,X,Vt) --> subset_sets(Va::'a,X)) & (\X Vt Va. connected_set(Va::'a,X,Vt) --> connected_space(Va::'a,subspace_topology(X::'a,Vt,Va))) & (\X Vt Va. topological_space(X::'a,Vt) & subset_sets(Va::'a,X) & connected_space(Va::'a,subspace_topology(X::'a,Vt,Va)) --> connected_set(Va::'a,X,Vt)) & (\Vf X Vt. open_covering(Vf::'a,X,Vt) --> topological_space(X::'a,Vt)) & (\X Vf Vt. open_covering(Vf::'a,X,Vt) --> subset_collections(Vf::'a,Vt)) & (\Vt Vf X. open_covering(Vf::'a,X,Vt) --> equal_sets(union_of_members(Vf),X)) & (\Vt Vf X. topological_space(X::'a,Vt) & subset_collections(Vf::'a,Vt) & equal_sets(union_of_members(Vf),X) --> open_covering(Vf::'a,X,Vt)) & (\X Vt. compact_space(X::'a,Vt) --> topological_space(X::'a,Vt)) & (\X Vt Vf1. compact_space(X::'a,Vt) & open_covering(Vf1::'a,X,Vt) --> finite'(f23(X::'a,Vt,Vf1))) & (\X Vt Vf1. 
compact_space(X::'a,Vt) & open_covering(Vf1::'a,X,Vt) --> subset_collections(f23(X::'a,Vt,Vf1),Vf1)) & (\Vf1 X Vt. compact_space(X::'a,Vt) & open_covering(Vf1::'a,X,Vt) --> open_covering(f23(X::'a,Vt,Vf1),X,Vt)) & (\X Vt. topological_space(X::'a,Vt) --> compact_space(X::'a,Vt) | open_covering(f24(X::'a,Vt),X,Vt)) & (\Uu24 X Vt. topological_space(X::'a,Vt) & finite'(Uu24) & subset_collections(Uu24::'a,f24(X::'a,Vt)) & open_covering(Uu24::'a,X,Vt) --> compact_space(X::'a,Vt)) & (\Va X Vt. compact_set(Va::'a,X,Vt) --> topological_space(X::'a,Vt)) & (\Vt Va X. compact_set(Va::'a,X,Vt) --> subset_sets(Va::'a,X)) & (\X Vt Va. compact_set(Va::'a,X,Vt) --> compact_space(Va::'a,subspace_topology(X::'a,Vt,Va))) & (\X Vt Va. topological_space(X::'a,Vt) & subset_sets(Va::'a,X) & compact_space(Va::'a,subspace_topology(X::'a,Vt,Va)) --> compact_set(Va::'a,X,Vt)) & (basis(cx::'a,f)) & (\U. element_of_collection(U::'a,top_of_basis(f))) & (\V. element_of_collection(V::'a,top_of_basis(f))) & (\U V. ~element_of_collection(intersection_of_sets(U::'a,V),top_of_basis(f))) --> False" by meson (*0 inferences so far. Searching to depth 0. 0.8 secs*) lemma TOP004_2: "(\U Uu1 Vf. element_of_set(U::'a,Uu1) & element_of_collection(Uu1::'a,Vf) --> element_of_set(U::'a,union_of_members(Vf))) & (\Vf X. basis(X::'a,Vf) --> equal_sets(union_of_members(Vf),X)) & (\X Vf Y Vb1 Vb2. basis(X::'a,Vf) & element_of_set(Y::'a,X) & element_of_collection(Vb1::'a,Vf) & element_of_collection(Vb2::'a,Vf) & element_of_set(Y::'a,intersection_of_sets(Vb1::'a,Vb2)) --> element_of_set(Y::'a,f6(X::'a,Vf,Y,Vb1,Vb2))) & (\X Y Vb1 Vb2 Vf. basis(X::'a,Vf) & element_of_set(Y::'a,X) & element_of_collection(Vb1::'a,Vf) & element_of_collection(Vb2::'a,Vf) & element_of_set(Y::'a,intersection_of_sets(Vb1::'a,Vb2)) --> element_of_collection(f6(X::'a,Vf,Y,Vb1,Vb2),Vf)) & (\X Vf Y Vb1 Vb2. 
basis(X::'a,Vf) & element_of_set(Y::'a,X) & element_of_collection(Vb1::'a,Vf) & element_of_collection(Vb2::'a,Vf) & element_of_set(Y::'a,intersection_of_sets(Vb1::'a,Vb2)) --> subset_sets(f6(X::'a,Vf,Y,Vb1,Vb2),intersection_of_sets(Vb1::'a,Vb2))) & (\Vf U X. element_of_collection(U::'a,top_of_basis(Vf)) & element_of_set(X::'a,U) --> element_of_set(X::'a,f10(Vf::'a,U,X))) & (\U X Vf. element_of_collection(U::'a,top_of_basis(Vf)) & element_of_set(X::'a,U) --> element_of_collection(f10(Vf::'a,U,X),Vf)) & (\Vf X U. element_of_collection(U::'a,top_of_basis(Vf)) & element_of_set(X::'a,U) --> subset_sets(f10(Vf::'a,U,X),U)) & (\Vf U. element_of_collection(U::'a,top_of_basis(Vf)) | element_of_set(f11(Vf::'a,U),U)) & (\Vf Uu11 U. element_of_set(f11(Vf::'a,U),Uu11) & element_of_collection(Uu11::'a,Vf) & subset_sets(Uu11::'a,U) --> element_of_collection(U::'a,top_of_basis(Vf))) & (\Y X Z. subset_sets(X::'a,Y) & subset_sets(Y::'a,Z) --> subset_sets(X::'a,Z)) & (\Y Z X. element_of_set(Z::'a,intersection_of_sets(X::'a,Y)) --> element_of_set(Z::'a,X)) & (\X Z Y. element_of_set(Z::'a,intersection_of_sets(X::'a,Y)) --> element_of_set(Z::'a,Y)) & (\X Z Y. element_of_set(Z::'a,X) & element_of_set(Z::'a,Y) --> element_of_set(Z::'a,intersection_of_sets(X::'a,Y))) & (\X U Y V. subset_sets(X::'a,Y) & subset_sets(U::'a,V) --> subset_sets(intersection_of_sets(X::'a,U),intersection_of_sets(Y::'a,V))) & (\X Z Y. equal_sets(X::'a,Y) & element_of_set(Z::'a,X) --> element_of_set(Z::'a,Y)) & (\Y X. equal_sets(intersection_of_sets(X::'a,Y),intersection_of_sets(Y::'a,X))) & (basis(cx::'a,f)) & (\U. element_of_collection(U::'a,top_of_basis(f))) & (\V. element_of_collection(V::'a,top_of_basis(f))) & (\U V. ~element_of_collection(intersection_of_sets(U::'a,V),top_of_basis(f))) --> False" by meson (*53777 inferences so far. Searching to depth 20. 68.7 secs*) lemma TOP005_2: "(\Vf U. element_of_set(U::'a,union_of_members(Vf)) --> element_of_set(U::'a,f1(Vf::'a,U))) & (\U Vf. 
element_of_set(U::'a,union_of_members(Vf)) --> element_of_collection(f1(Vf::'a,U),Vf)) & (\Vf U X. element_of_collection(U::'a,top_of_basis(Vf)) & element_of_set(X::'a,U) --> element_of_set(X::'a,f10(Vf::'a,U,X))) & (\U X Vf. element_of_collection(U::'a,top_of_basis(Vf)) & element_of_set(X::'a,U) --> element_of_collection(f10(Vf::'a,U,X),Vf)) & (\Vf X U. element_of_collection(U::'a,top_of_basis(Vf)) & element_of_set(X::'a,U) --> subset_sets(f10(Vf::'a,U,X),U)) & (\Vf U. element_of_collection(U::'a,top_of_basis(Vf)) | element_of_set(f11(Vf::'a,U),U)) & (\Vf Uu11 U. element_of_set(f11(Vf::'a,U),Uu11) & element_of_collection(Uu11::'a,Vf) & subset_sets(Uu11::'a,U) --> element_of_collection(U::'a,top_of_basis(Vf))) & (\X U Y. element_of_set(U::'a,X) --> subset_sets(X::'a,Y) | element_of_set(U::'a,Y)) & (\Y X Z. subset_sets(X::'a,Y) & element_of_collection(Y::'a,Z) --> subset_sets(X::'a,union_of_members(Z))) & (\X U Y. subset_collections(X::'a,Y) & element_of_collection(U::'a,X) --> element_of_collection(U::'a,Y)) & (subset_collections(g::'a,top_of_basis(f))) & (~element_of_collection(union_of_members(g),top_of_basis(f))) --> False" by meson end diff --git a/src/HOL/ex/Reflection_Examples.thy b/src/HOL/ex/Reflection_Examples.thy --- a/src/HOL/ex/Reflection_Examples.thy +++ b/src/HOL/ex/Reflection_Examples.thy @@ -1,487 +1,487 @@ (* Title: HOL/ex/Reflection_Examples.thy Author: Amine Chaieb, TU Muenchen *) section \Examples for generic reflection and reification\ theory Reflection_Examples imports Complex_Main "HOL-Library.Reflection" begin text \This theory presents two methods: reify and reflection\ text \ Consider an HOL type \\\, the structure of which is not recognisable on the theory level. This is the case of \<^typ>\bool\, arithmetical terms such as \<^typ>\int\, \<^typ>\real\ etc \dots In order to implement a simplification on terms of type \\\ we often need its structure. Traditionally such simplifications are written in ML, proofs are synthesized.
Another strategy is to declare an HOL datatype \\\ and an HOL function (the interpretation) that maps elements of \\\ to elements of \\\. The functionality of \reify\ then is, given a term \t\ of type \\\, to compute a term \s\ of type \\\. For this it needs equations for the interpretation. N.B.: All the interpretations supported by \reify\ must have the type \'a list \ \ \ \\. The method \reify\ can also be told which subterm of the current subgoal should be reified. The general call for \reify\ is \reify eqs (t)\, where \eqs\ are the defining equations of the interpretation and \(t)\ is an optional parameter which specifies the subterm to which reification should be applied. If \(t)\ is absent, \reify\ tries to reify the whole subgoal. The method \reflection\ uses \reify\ and has a very similar signature: \reflection corr_thm eqs (t)\. Here again \eqs\ and \(t)\ are as described above and \corr_thm\ is a theorem proving \<^prop>\I vs (f t) = I vs t\. We assume that \I\ is the interpretation and \f\ is some useful and executable simplification of type \\ \ \\. The method \reflection\ applies reification and hence obtains the theorem \<^prop>\t = I xs s\; then, using \corr_thm\, it derives \<^prop>\t = I xs (f s)\. It then uses normalization by equational rewriting to prove \<^prop>\f s = s'\, which almost finishes the proof of \<^prop>\t = t'\ where \<^prop>\I xs s' = t'\. \ text \Example 1: Propositional formulae and NNF.\ text \The type \form\ represents simple propositional formulae:\ datatype form = TrueF | FalseF | Less nat nat | And form form | Or form form | Neg form | ExQ form primrec interp :: "form \ ('a::ord) list \ bool" where "interp TrueF vs \ True" | "interp FalseF vs \ False" | "interp (Less i j) vs \ vs ! i < vs ! j" | "interp (And f1 f2) vs \ interp f1 vs \ interp f2 vs" | "interp (Or f1 f2) vs \ interp f1 vs \ interp f2 vs" | "interp (Neg f) vs \ \ interp f vs" | "interp (ExQ f) vs \ (\v.
interp f (v # vs))" lemmas interp_reify_eqs = interp.simps declare interp_reify_eqs [reify] lemma "\x. x < y \ x < z" apply reify oops -datatype fm = And fm fm | Or fm fm | Imp fm fm | Iff fm fm | NOT fm | At nat +datatype fm = And fm fm | Or fm fm | Imp fm fm | Iff fm fm | Not fm | At nat primrec Ifm :: "fm \ bool list \ bool" where "Ifm (At n) vs \ vs ! n" | "Ifm (And p q) vs \ Ifm p vs \ Ifm q vs" | "Ifm (Or p q) vs \ Ifm p vs \ Ifm q vs" | "Ifm (Imp p q) vs \ Ifm p vs \ Ifm q vs" | "Ifm (Iff p q) vs \ Ifm p vs = Ifm q vs" -| "Ifm (NOT p) vs \ \ Ifm p vs" +| "Ifm (Not p) vs \ \ Ifm p vs" lemma "Q \ (D \ F \ ((\ D) \ (\ F)))" apply (reify Ifm.simps) oops text \Method \reify\ maps a \<^typ>\bool\ to an \<^typ>\fm\. For this it needs the semantics of \fm\, i.e.\ the rewrite rules in \Ifm.simps\.\ text \You can also just pick up a subterm to reify.\ lemma "Q \ (D \ F \ ((\ D) \ (\ F)))" apply (reify Ifm.simps ("((\ D) \ (\ F))")) oops text \Let's perform NNF. This is a version that tends to generate disjunctions\ primrec fmsize :: "fm \ nat" where "fmsize (At n) = 1" -| "fmsize (NOT p) = 1 + fmsize p" +| "fmsize (Not p) = 1 + fmsize p" | "fmsize (And p q) = 1 + fmsize p + fmsize q" | "fmsize (Or p q) = 1 + fmsize p + fmsize q" | "fmsize (Imp p q) = 2 + fmsize p + fmsize q" | "fmsize (Iff p q) = 2 + 2* fmsize p + 2* fmsize q" lemma [measure_function]: "is_measure fmsize" .. 
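As an editorial aside, the interpretation \Ifm\ above is easy to mirror outside of Isabelle. The following is a minimal Python sketch; the tuple encoding and the name `ifm` are our own invention, not part of the theory:

```python
# Minimal sketch of the reified syntax fm and its interpretation Ifm.
# Formulae are nested tuples: ("At", n), ("Not", p), ("And", p, q),
# ("Or", p, q), ("Imp", p, q), ("Iff", p, q).
def ifm(f, vs):
    """Evaluate a reified formula f over the list of booleans vs."""
    tag = f[0]
    if tag == "At":
        return vs[f[1]]          # Ifm (At n) vs = vs ! n
    if tag == "Not":
        return not ifm(f[1], vs)
    p, q = ifm(f[1], vs), ifm(f[2], vs)
    if tag == "And":
        return p and q
    if tag == "Or":
        return p or q
    if tag == "Imp":
        return (not p) or q
    if tag == "Iff":
        return p == q
    raise ValueError(f"unknown constructor: {tag}")

# Imp (At 0) (At 1) under the valuation [False, True]:
print(ifm(("Imp", ("At", 0), ("At", 1)), [False, True]))  # True
```

Reification in the theory goes the opposite way: it computes such a syntax tree from a goal, so that functions like \<^const>\nnf\ can manipulate it while the correctness lemma transfers the result back.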
fun nnf :: "fm \ fm" where "nnf (At n) = At n" | "nnf (And p q) = And (nnf p) (nnf q)" | "nnf (Or p q) = Or (nnf p) (nnf q)" -| "nnf (Imp p q) = Or (nnf (NOT p)) (nnf q)" -| "nnf (Iff p q) = Or (And (nnf p) (nnf q)) (And (nnf (NOT p)) (nnf (NOT q)))" -| "nnf (NOT (And p q)) = Or (nnf (NOT p)) (nnf (NOT q))" -| "nnf (NOT (Or p q)) = And (nnf (NOT p)) (nnf (NOT q))" -| "nnf (NOT (Imp p q)) = And (nnf p) (nnf (NOT q))" -| "nnf (NOT (Iff p q)) = Or (And (nnf p) (nnf (NOT q))) (And (nnf (NOT p)) (nnf q))" -| "nnf (NOT (NOT p)) = nnf p" -| "nnf (NOT p) = NOT p" +| "nnf (Imp p q) = Or (nnf (Not p)) (nnf q)" +| "nnf (Iff p q) = Or (And (nnf p) (nnf q)) (And (nnf (Not p)) (nnf (Not q)))" +| "nnf (Not (And p q)) = Or (nnf (Not p)) (nnf (Not q))" +| "nnf (Not (Or p q)) = And (nnf (Not p)) (nnf (Not q))" +| "nnf (Not (Imp p q)) = And (nnf p) (nnf (Not q))" +| "nnf (Not (Iff p q)) = Or (And (nnf p) (nnf (Not q))) (And (nnf (Not p)) (nnf q))" +| "nnf (Not (Not p)) = nnf p" +| "nnf (Not p) = Not p" text \The correctness theorem of \<^const>\nnf\: it preserves the semantics of \<^typ>\fm\\ lemma nnf [reflection]: "Ifm (nnf p) vs = Ifm p vs" by (induct p rule: nnf.induct) auto text \Now let's perform NNF using our \<^const>\nnf\ function defined above. 
First to the whole subgoal.\ lemma "A \ B \ (B \ A \ (B \ C \ (B \ A \ D))) \ A \ B \ D" apply (reflection Ifm.simps) oops text \Now we specify on which subterm it should be applied\ lemma "A \ B \ (B \ A \ (B \ C \ (B \ A \ D))) \ A \ B \ D" apply (reflection Ifm.simps only: "B \ C \ (B \ A \ D)") oops text \Example 2: Simple arithmetic formulae\ text \The type \num\ reflects linear expressions over natural numbers\ datatype num = C nat | Add num num | Mul nat num | Var nat | CN nat nat num text \This is just technical to make recursive definitions easier.\ primrec num_size :: "num \ nat" where "num_size (C c) = 1" | "num_size (Var n) = 1" | "num_size (Add a b) = 1 + num_size a + num_size b" | "num_size (Mul c a) = 1 + num_size a" | "num_size (CN n c a) = 4 + num_size a " lemma [measure_function]: "is_measure num_size" .. text \The semantics of num\ primrec Inum:: "num \ nat list \ nat" where Inum_C : "Inum (C i) vs = i" | Inum_Var: "Inum (Var n) vs = vs!n" | Inum_Add: "Inum (Add s t) vs = Inum s vs + Inum t vs " | Inum_Mul: "Inum (Mul c t) vs = c * Inum t vs " | Inum_CN : "Inum (CN n c t) vs = c*(vs!n) + Inum t vs " text \Let's reify some nat expressions \dots\ lemma "4 * (2 * x + (y::nat)) + f a \ 0" apply (reify Inum.simps ("4 * (2 * x + (y::nat)) + f a")) oops text \We're in a bad situation! \x\, \y\ and \f\ have been recognized as constants, which is correct but does not correspond to our intuition of the constructor C. It should encapsulate constants, i.e. numbers, i.e.
numerals.\ text \So let's leave the \Inum_C\ equation at the end and see what happens \dots\ lemma "4 * (2 * x + (y::nat)) \ 0" apply (reify Inum_Var Inum_Add Inum_Mul Inum_CN Inum_C ("4 * (2 * x + (y::nat))")) oops text \Hm, let's specialize \Inum_C\ with numerals.\ lemma Inum_number: "Inum (C (numeral t)) vs = numeral t" by simp lemmas Inum_eqs = Inum_Var Inum_Add Inum_Mul Inum_CN Inum_number text \Second attempt\ lemma "1 * (2 * x + (y::nat)) \ 0" apply (reify Inum_eqs ("1 * (2 * x + (y::nat))")) oops text\That was fine, so let's try another one \dots\ lemma "1 * (2 * x + (y::nat) + 0 + 1) \ 0" apply (reify Inum_eqs ("1 * (2 * x + (y::nat) + 0 + 1)")) oops text \Oh!! 0 is not a variable \dots\ Oh! 0 is not a \numeral\ \dots\ thing. The same for 1. So let's add those equations, too.\ lemma Inum_01: "Inum (C 0) vs = 0" "Inum (C 1) vs = 1" "Inum (C(Suc n)) vs = Suc n" by simp_all lemmas Inum_eqs'= Inum_eqs Inum_01 text\Third attempt:\ lemma "1 * (2 * x + (y::nat) + 0 + 1) \ 0" apply (reify Inum_eqs' ("1 * (2 * x + (y::nat) + 0 + 1)")) oops text \Okay, let's try reflection. Some simplifications on \<^typ>\num\ follow. 
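Before walking through them, here is a hedged Python sketch of the \Inum\ semantics defined above; the tuple constructors and the name `inum` are our own, not part of the theory:

```python
# Evaluate a reified num term over an environment vs (a list of nats).
# Constructors mirror the datatype: ("C", c), ("Var", n), ("Add", s, t),
# ("Mul", c, t) and ("CN", n, c, t), where CN n c t stands for c * vs[n] + t.
def inum(t, vs):
    tag = t[0]
    if tag == "C":
        return t[1]
    if tag == "Var":
        return vs[t[1]]
    if tag == "Add":
        return inum(t[1], vs) + inum(t[2], vs)
    if tag == "Mul":
        return t[1] * inum(t[2], vs)
    if tag == "CN":
        _, n, c, s = t
        return c * vs[n] + inum(s, vs)
    raise ValueError(f"unknown constructor: {tag}")

# 4 * (2 * x + y) with x = vs[0] = 3 and y = vs[1] = 5:
t = ("Mul", 4, ("Add", ("Mul", 2, ("Var", 0)), ("Var", 1)))
print(inum(t, [3, 5]))  # 4 * (2 * 3 + 5) = 44
```

The lemmas \lin_add\ and \lin_mul\ below state exactly this kind of semantic preservation for the normalization functions.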
You can skim until the main theorem \linum\.\ fun lin_add :: "num \ num \ num" where "lin_add (CN n1 c1 r1) (CN n2 c2 r2) = (if n1 = n2 then (let c = c1 + c2 in (if c = 0 then lin_add r1 r2 else CN n1 c (lin_add r1 r2))) else if n1 \ n2 then (CN n1 c1 (lin_add r1 (CN n2 c2 r2))) else (CN n2 c2 (lin_add (CN n1 c1 r1) r2)))" | "lin_add (CN n1 c1 r1) t = CN n1 c1 (lin_add r1 t)" | "lin_add t (CN n2 c2 r2) = CN n2 c2 (lin_add t r2)" | "lin_add (C b1) (C b2) = C (b1 + b2)" | "lin_add a b = Add a b" lemma lin_add: "Inum (lin_add t s) bs = Inum (Add t s) bs" apply (induct t s rule: lin_add.induct, simp_all add: Let_def) apply (case_tac "c1+c2 = 0",case_tac "n1 \ n2", simp_all) apply (case_tac "n1 = n2", simp_all add: algebra_simps) done fun lin_mul :: "num \ nat \ num" where "lin_mul (C j) i = C (i * j)" | "lin_mul (CN n c a) i = (if i=0 then (C 0) else CN n (i * c) (lin_mul a i))" | "lin_mul t i = (Mul i t)" lemma lin_mul: "Inum (lin_mul t i) bs = Inum (Mul i t) bs" by (induct t i rule: lin_mul.induct) (auto simp add: algebra_simps) fun linum:: "num \ num" where "linum (C b) = C b" | "linum (Var n) = CN n 1 (C 0)" | "linum (Add t s) = lin_add (linum t) (linum s)" | "linum (Mul c t) = lin_mul (linum t) c" | "linum (CN n c t) = lin_add (linum (Mul c (Var n))) (linum t)" lemma linum [reflection]: "Inum (linum t) bs = Inum t bs" by (induct t rule: linum.induct) (simp_all add: lin_mul lin_add) text \Now we can use linum to simplify nat terms using reflection\ lemma "Suc (Suc 1) * (x + Suc 1 * y) = 3 * x + 6 * y" apply (reflection Inum_eqs' only: "Suc (Suc 1) * (x + Suc 1 * y)") oops text \Let's lift this to formulae and see what happens\ datatype aform = Lt num num | Eq num num | Ge num num | NEq num num | Conj aform aform | Disj aform aform | NEG aform | T | F primrec linaformsize:: "aform \ nat" where "linaformsize T = 1" | "linaformsize F = 1" | "linaformsize (Lt a b) = 1" | "linaformsize (Ge a b) = 1" | "linaformsize (Eq a b) = 1" | "linaformsize (NEq a b) = 1" | 
"linaformsize (NEG p) = 2 + linaformsize p" | "linaformsize (Conj p q) = 1 + linaformsize p + linaformsize q" | "linaformsize (Disj p q) = 1 + linaformsize p + linaformsize q" lemma [measure_function]: "is_measure linaformsize" .. primrec is_aform :: "aform => nat list => bool" where "is_aform T vs = True" | "is_aform F vs = False" | "is_aform (Lt a b) vs = (Inum a vs < Inum b vs)" | "is_aform (Eq a b) vs = (Inum a vs = Inum b vs)" | "is_aform (Ge a b) vs = (Inum a vs \ Inum b vs)" | "is_aform (NEq a b) vs = (Inum a vs \ Inum b vs)" | "is_aform (NEG p) vs = (\ (is_aform p vs))" | "is_aform (Conj p q) vs = (is_aform p vs \ is_aform q vs)" | "is_aform (Disj p q) vs = (is_aform p vs \ is_aform q vs)" text\Let's reify and do reflection\ lemma "(3::nat) * x + t < 0 \ (2 * x + y \ 17)" apply (reify Inum_eqs' is_aform.simps) oops text \Note that reification handles several interpretations at the same time\ lemma "(3::nat) * x + t < 0 \ x * x + t * x + 3 + 1 = z * t * 4 * z \ x + x + 1 < 0" apply (reflection Inum_eqs' is_aform.simps only: "x + x + 1") oops text \For reflection we now define a simple transformation on aform: NNF + linum on atoms\ fun linaform:: "aform \ aform" where "linaform (Lt s t) = Lt (linum s) (linum t)" | "linaform (Eq s t) = Eq (linum s) (linum t)" | "linaform (Ge s t) = Ge (linum s) (linum t)" | "linaform (NEq s t) = NEq (linum s) (linum t)" | "linaform (Conj p q) = Conj (linaform p) (linaform q)" | "linaform (Disj p q) = Disj (linaform p) (linaform q)" | "linaform (NEG T) = F" | "linaform (NEG F) = T" | "linaform (NEG (Lt a b)) = Ge a b" | "linaform (NEG (Ge a b)) = Lt a b" | "linaform (NEG (Eq a b)) = NEq a b" | "linaform (NEG (NEq a b)) = Eq a b" | "linaform (NEG (NEG p)) = linaform p" | "linaform (NEG (Conj p q)) = Disj (linaform (NEG p)) (linaform (NEG q))" | "linaform (NEG (Disj p q)) = Conj (linaform (NEG p)) (linaform (NEG q))" | "linaform p = p" lemma linaform: "is_aform (linaform p) vs = is_aform p vs" by (induct p rule: linaform.induct) 
(auto simp add: linum) lemma "(Suc (Suc (Suc 0)) * ((x::nat) + Suc (Suc 0)) + Suc (Suc (Suc 0)) * (Suc (Suc (Suc 0))) * ((x::nat) + Suc (Suc 0))) < 0 \ Suc 0 + Suc 0 < 0" apply (reflection Inum_eqs' is_aform.simps rules: linaform) oops declare linaform [reflection] lemma "(Suc (Suc (Suc 0)) * ((x::nat) + Suc (Suc 0)) + Suc (Suc (Suc 0)) * (Suc (Suc (Suc 0))) * ((x::nat) + Suc (Suc 0))) < 0 \ Suc 0 + Suc 0 < 0" apply (reflection Inum_eqs' is_aform.simps) oops text \We now give an example where interpretation functions have zero or more than one environment of different types and show that automatic reification also deals with bindings\ datatype rb = BC bool | BAnd rb rb | BOr rb rb primrec Irb :: "rb \ bool" where "Irb (BC p) \ p" | "Irb (BAnd s t) \ Irb s \ Irb t" | "Irb (BOr s t) \ Irb s \ Irb t" lemma "A \ (B \ D \ B) \ A \ (B \ D \ B) \ A \ (B \ D \ B) \ A \ (B \ D \ B)" apply (reify Irb.simps) oops datatype rint = IC int | IVar nat | IAdd rint rint | IMult rint rint | INeg rint | ISub rint rint primrec Irint :: "rint \ int list \ int" where Irint_Var: "Irint (IVar n) vs = vs ! n" | Irint_Neg: "Irint (INeg t) vs = - Irint t vs" | Irint_Add: "Irint (IAdd s t) vs = Irint s vs + Irint t vs" | Irint_Sub: "Irint (ISub s t) vs = Irint s vs - Irint t vs" | Irint_Mult: "Irint (IMult s t) vs = Irint s vs * Irint t vs" | Irint_C: "Irint (IC i) vs = i" lemma Irint_C0: "Irint (IC 0) vs = 0" by simp lemma Irint_C1: "Irint (IC 1) vs = 1" by simp lemma Irint_Cnumeral: "Irint (IC (numeral x)) vs = numeral x" by simp lemmas Irint_simps = Irint_Var Irint_Neg Irint_Add Irint_Sub Irint_Mult Irint_C0 Irint_C1 Irint_Cnumeral lemma "(3::int) * x + y * y - 9 + (- z) = 0" apply (reify Irint_simps ("(3::int) * x + y * y - 9 + (- z)")) oops datatype rlist = LVar nat | LEmpty | LCons rint rlist | LAppend rlist rlist primrec Irlist :: "rlist \ int list \ int list list \ int list" where "Irlist (LEmpty) is vs = []" | "Irlist (LVar n) is vs = vs !
n" | "Irlist (LCons i t) is vs = Irint i is # Irlist t is vs" | "Irlist (LAppend s t) is vs = Irlist s is vs @ Irlist t is vs" lemma "[(1::int)] = []" apply (reify Irlist.simps Irint_simps ("[1] :: int list")) oops lemma "([(3::int) * x + y * y - 9 + (- z)] @ []) @ xs = [y * y - z - 9 + (3::int) * x]" apply (reify Irlist.simps Irint_simps ("([(3::int) * x + y * y - 9 + (- z)] @ []) @ xs")) oops datatype rnat = NC nat| NVar nat| NSuc rnat | NAdd rnat rnat | NMult rnat rnat | NNeg rnat | NSub rnat rnat | Nlgth rlist primrec Irnat :: "rnat \ int list \ int list list \ nat list \ nat" where Irnat_Suc: "Irnat (NSuc t) is ls vs = Suc (Irnat t is ls vs)" | Irnat_Var: "Irnat (NVar n) is ls vs = vs ! n" | Irnat_Neg: "Irnat (NNeg t) is ls vs = 0" | Irnat_Add: "Irnat (NAdd s t) is ls vs = Irnat s is ls vs + Irnat t is ls vs" | Irnat_Sub: "Irnat (NSub s t) is ls vs = Irnat s is ls vs - Irnat t is ls vs" | Irnat_Mult: "Irnat (NMult s t) is ls vs = Irnat s is ls vs * Irnat t is ls vs" | Irnat_lgth: "Irnat (Nlgth rxs) is ls vs = length (Irlist rxs is ls)" | Irnat_C: "Irnat (NC i) is ls vs = i" lemma Irnat_C0: "Irnat (NC 0) is ls vs = 0" by simp lemma Irnat_C1: "Irnat (NC 1) is ls vs = 1" by simp lemma Irnat_Cnumeral: "Irnat (NC (numeral x)) is ls vs = numeral x" by simp lemmas Irnat_simps = Irnat_Suc Irnat_Var Irnat_Neg Irnat_Add Irnat_Sub Irnat_Mult Irnat_lgth Irnat_C0 Irnat_C1 Irnat_Cnumeral lemma "Suc n * length (([(3::int) * x + y * y - 9 + (- z)] @ []) @ xs) = length xs" apply (reify Irnat_simps Irlist.simps Irint_simps ("Suc n * length (([(3::int) * x + y * y - 9 + (- z)] @ []) @ xs)")) oops datatype rifm = RT | RF | RVar nat | RNLT rnat rnat | RNILT rnat rint | RNEQ rnat rnat | RAnd rifm rifm | ROr rifm rifm | RImp rifm rifm| RIff rifm rifm | RNEX rifm | RIEX rifm | RLEX rifm | RNALL rifm | RIALL rifm | RLALL rifm | RBEX rifm | RBALL rifm primrec Irifm :: "rifm \ bool list \ int list \ (int list) list \ nat list \ bool" where "Irifm RT ps is ls ns \ True" | "Irifm RF ps is 
ls ns \ False" | "Irifm (RVar n) ps is ls ns \ ps ! n" | "Irifm (RNLT s t) ps is ls ns \ Irnat s is ls ns < Irnat t is ls ns" | "Irifm (RNILT s t) ps is ls ns \ int (Irnat s is ls ns) < Irint t is" | "Irifm (RNEQ s t) ps is ls ns \ Irnat s is ls ns = Irnat t is ls ns" | "Irifm (RAnd p q) ps is ls ns \ Irifm p ps is ls ns \ Irifm q ps is ls ns" | "Irifm (ROr p q) ps is ls ns \ Irifm p ps is ls ns \ Irifm q ps is ls ns" | "Irifm (RImp p q) ps is ls ns \ Irifm p ps is ls ns \ Irifm q ps is ls ns" | "Irifm (RIff p q) ps is ls ns \ Irifm p ps is ls ns = Irifm q ps is ls ns" | "Irifm (RNEX p) ps is ls ns \ (\x. Irifm p ps is ls (x # ns))" | "Irifm (RIEX p) ps is ls ns \ (\x. Irifm p ps (x # is) ls ns)" | "Irifm (RLEX p) ps is ls ns \ (\x. Irifm p ps is (x # ls) ns)" | "Irifm (RBEX p) ps is ls ns \ (\x. Irifm p (x # ps) is ls ns)" | "Irifm (RNALL p) ps is ls ns \ (\x. Irifm p ps is ls (x#ns))" | "Irifm (RIALL p) ps is ls ns \ (\x. Irifm p ps (x # is) ls ns)" | "Irifm (RLALL p) ps is ls ns \ (\x. Irifm p ps is (x#ls) ns)" | "Irifm (RBALL p) ps is ls ns \ (\x. Irifm p (x # ps) is ls ns)" lemma " \x. \n. ((Suc n) * length (([(3::int) * x + f t * y - 9 + (- z)] @ []) @ xs) = length xs) \ m < 5*n - length (xs @ [2,3,4,x*z + 8 - y]) \ (\p. \q. p \ q \ r)" apply (reify Irifm.simps Irnat_simps Irlist.simps Irint_simps) oops text \An example for equations containing type variables\ datatype prod = Zero | One | Var nat | Mul prod prod | Pw prod nat | PNM nat nat prod primrec Iprod :: " prod \ ('a::linordered_idom) list \'a" where "Iprod Zero vs = 0" | "Iprod One vs = 1" | "Iprod (Var n) vs = vs ! n" | "Iprod (Mul a b) vs = Iprod a vs * Iprod b vs" | "Iprod (Pw a n) vs = Iprod a vs ^ n" | "Iprod (PNM n k t) vs = (vs ! 
n) ^ k * Iprod t vs" datatype sgn = Pos prod | Neg prod | ZeroEq prod | NZeroEq prod | Tr | F | Or sgn sgn | And sgn sgn primrec Isgn :: "sgn \ ('a::linordered_idom) list \ bool" where "Isgn Tr vs \ True" | "Isgn F vs \ False" | "Isgn (ZeroEq t) vs \ Iprod t vs = 0" | "Isgn (NZeroEq t) vs \ Iprod t vs \ 0" | "Isgn (Pos t) vs \ Iprod t vs > 0" | "Isgn (Neg t) vs \ Iprod t vs < 0" | "Isgn (And p q) vs \ Isgn p vs \ Isgn q vs" | "Isgn (Or p q) vs \ Isgn p vs \ Isgn q vs" lemmas eqs = Isgn.simps Iprod.simps lemma "(x::'a::{linordered_idom}) ^ 4 * y * z * y ^ 2 * z ^ 23 > 0" apply (reify eqs) oops end diff --git a/src/HOL/ex/Tree23.thy b/src/HOL/ex/Tree23.thy --- a/src/HOL/ex/Tree23.thy +++ b/src/HOL/ex/Tree23.thy @@ -1,430 +1,432 @@ (* Title: HOL/ex/Tree23.thy Author: Tobias Nipkow, TU Muenchen *) section \2-3 Trees\ theory Tree23 imports Main begin +hide_const (open) or + text\This is a very direct translation of some of the functions in table.ML in the Isabelle source code. That source is due to Makarius Wenzel and Stefan Berghofer. 
Note that because of complicated patterns and mutual recursion, these function definitions take a few minutes and can also be seen as stress tests for the function definition facility.\ type_synonym key = int \ \for simplicity, should be a type class\ datatype ord = LESS | EQUAL | GREATER definition "ord i j = (if i < j then LESS else if i = j then EQUAL else GREATER)" datatype 'a tree23 = Empty | Branch2 "'a tree23" "key * 'a" "'a tree23" | Branch3 "'a tree23" "key * 'a" "'a tree23" "key * 'a" "'a tree23" datatype 'a growth = Stay "'a tree23" | Sprout "'a tree23" "key * 'a" "'a tree23" fun add :: "key \ 'a \ 'a tree23 \ 'a growth" where "add key y Empty = Sprout Empty (key,y) Empty" | "add key y (Branch2 left (k,x) right) = (case ord key k of LESS => (case add key y left of Stay left' => Stay (Branch2 left' (k,x) right) | Sprout left1 q left2 => Stay (Branch3 left1 q left2 (k,x) right)) | EQUAL => Stay (Branch2 left (k,y) right) | GREATER => (case add key y right of Stay right' => Stay (Branch2 left (k,x) right') | Sprout right1 q right2 => Stay (Branch3 left (k,x) right1 q right2)))" | "add key y (Branch3 left (k1,x1) mid (k2,x2) right) = (case ord key k1 of LESS => (case add key y left of Stay left' => Stay (Branch3 left' (k1,x1) mid (k2,x2) right) | Sprout left1 q left2 => Sprout (Branch2 left1 q left2) (k1,x1) (Branch2 mid (k2,x2) right)) | EQUAL => Stay (Branch3 left (k1,y) mid (k2,x2) right) | GREATER => (case ord key k2 of LESS => (case add key y mid of Stay mid' => Stay (Branch3 left (k1,x1) mid' (k2,x2) right) | Sprout mid1 q mid2 => Sprout (Branch2 left (k1,x1) mid1) q (Branch2 mid2 (k2,x2) right)) | EQUAL => Stay (Branch3 left (k1,x1) mid (k2,y) right) | GREATER => (case add key y right of Stay right' => Stay (Branch3 left (k1,x1) mid (k2,x2) right') | Sprout right1 q right2 => Sprout (Branch2 left (k1,x1) mid) (k2,x2) (Branch2 right1 q right2))))" definition add0 :: "key \ 'a \ 'a tree23 \ 'a tree23" where "add0 k y t = (case add k y t of Stay t' => t' | Sprout l p r => Branch2 l p r)" value "add0 5 e (add0 4 d (add0 3 c (add0 2 b (add0 1 a Empty))))" fun compare where "compare None (k2, _) = LESS" | "compare (Some k1) (k2, _) = ord k1 k2" fun if_eq where "if_eq EQUAL x y = x" | "if_eq _ x y = y" fun del :: "key option \ 'a tree23 \ ((key *
'a) * bool * 'a tree23)option" where "del (Some k) Empty = None" | "del None (Branch2 Empty p Empty) = Some(p, (True, Empty))" | "del None (Branch3 Empty p Empty q Empty) = Some(p, (False, Branch2 Empty q Empty))" | "del k (Branch2 Empty p Empty) = (case compare k p of EQUAL => Some(p, (True, Empty)) | _ => None)" | "del k (Branch3 Empty p Empty q Empty) = (case compare k p of EQUAL => Some(p, (False, Branch2 Empty q Empty)) | _ => (case compare k q of EQUAL => Some(q, (False, Branch2 Empty p Empty)) | _ => None))" | "del k (Branch2 l p r) = (case compare k p of LESS => (case del k l of None \ None | Some(p', (False, l')) => Some(p', (False, Branch2 l' p r)) | Some(p', (True, l')) => Some(p', case r of Branch2 rl rp rr => (True, Branch3 l' p rl rp rr) | Branch3 rl rp rm rq rr => (False, Branch2 (Branch2 l' p rl) rp (Branch2 rm rq rr)))) | or => (case del (if_eq or None k) r of None \ None | Some(p', (False, r')) => Some(p', (False, Branch2 l (if_eq or p' p) r')) | Some(p', (True, r')) => Some(p', case l of Branch2 ll lp lr => (True, Branch3 ll lp lr (if_eq or p' p) r') | Branch3 ll lp lm lq lr => (False, Branch2 (Branch2 ll lp lm) lq (Branch2 lr (if_eq or p' p) r')))))" | "del k (Branch3 l p m q r) = (case compare k q of LESS => (case compare k p of LESS => (case del k l of None \ None | Some(p', (False, l')) => Some(p', (False, Branch3 l' p m q r)) | Some(p', (True, l')) => Some(p', (False, case (m, r) of (Branch2 ml mp mr, Branch2 _ _ _) => Branch2 (Branch3 l' p ml mp mr) q r | (Branch3 ml mp mm mq mr, _) => Branch3 (Branch2 l' p ml) mp (Branch2 mm mq mr) q r | (Branch2 ml mp mr, Branch3 rl rp rm rq rr) => Branch3 (Branch2 l' p ml) mp (Branch2 mr q rl) rp (Branch2 rm rq rr)))) | or => (case del (if_eq or None k) m of None \ None | Some(p', (False, m')) => Some(p', (False, Branch3 l (if_eq or p' p) m' q r)) | Some(p', (True, m')) => Some(p', (False, case (l, r) of (Branch2 ll lp lr, Branch2 _ _ _) => Branch2 (Branch3 ll lp lr (if_eq or p' p) m') q r | (Branch3 ll 
lp lm lq lr, _) => Branch3 (Branch2 ll lp lm) lq (Branch2 lr (if_eq or p' p) m') q r | (_, Branch3 rl rp rm rq rr) => Branch3 l (if_eq or p' p) (Branch2 m' q rl) rp (Branch2 rm rq rr))))) | or => (case del (if_eq or None k) r of None \ None | Some(q', (False, r')) => Some(q', (False, Branch3 l p m (if_eq or q' q) r')) | Some(q', (True, r')) => Some(q', (False, case (l, m) of (Branch2 _ _ _, Branch2 ml mp mr) => Branch2 l p (Branch3 ml mp mr (if_eq or q' q) r') | (_, Branch3 ml mp mm mq mr) => Branch3 l p (Branch2 ml mp mm) mq (Branch2 mr (if_eq or q' q) r') | (Branch3 ll lp lm lq lr, Branch2 ml mp mr) => Branch3 (Branch2 ll lp lm) lq (Branch2 lr p ml) mp (Branch2 mr (if_eq or q' q) r')))))" definition del0 :: "key \ 'a tree23 \ 'a tree23" where "del0 k t = (case del (Some k) t of None \ t | Some(_,(_,t')) \ t')" text \Ordered trees\ definition opt_less :: "key option \ key option \ bool" where "opt_less x y = (case x of None \ True | Some a \ (case y of None \ True | Some b \ a < b))" lemma opt_less_simps [simp]: "opt_less None y = True" "opt_less x None = True" "opt_less (Some a) (Some b) = (a < b)" unfolding opt_less_def by (auto simp add: ord_def split: option.split) primrec ord' :: "key option \ 'a tree23 \ key option \ bool" where "ord' x Empty y = opt_less x y" | "ord' x (Branch2 l p r) y = (ord' x l (Some (fst p)) & ord' (Some (fst p)) r y)" | "ord' x (Branch3 l p m q r) y = (ord' x l (Some (fst p)) & ord' (Some (fst p)) m (Some (fst q)) & ord' (Some (fst q)) r y)" definition ord0 :: "'a tree23 \ bool" where "ord0 t = ord' None t None" text \Balanced trees\ inductive full :: "nat \ 'a tree23 \ bool" where "full 0 Empty" | "\full n l; full n r\ \ full (Suc n) (Branch2 l p r)" | "\full n l; full n m; full n r\ \ full (Suc n) (Branch3 l p m q r)" inductive_cases full_elims: "full n Empty" "full n (Branch2 l p r)" "full n (Branch3 l p m q r)" inductive_cases full_0_elim: "full 0 t" inductive_cases full_Suc_elim: "full (Suc n) t" lemma full_0_iff [simp]: "full 0 
t \ t = Empty" by (auto elim: full_0_elim intro: full.intros) lemma full_Empty_iff [simp]: "full n Empty \ n = 0" by (auto elim: full_elims intro: full.intros) lemma full_Suc_Branch2_iff [simp]: "full (Suc n) (Branch2 l p r) \ full n l \ full n r" by (auto elim: full_elims intro: full.intros) lemma full_Suc_Branch3_iff [simp]: "full (Suc n) (Branch3 l p m q r) \ full n l \ full n m \ full n r" by (auto elim: full_elims intro: full.intros) fun height :: "'a tree23 \ nat" where "height Empty = 0" | "height (Branch2 l _ r) = Suc(max (height l) (height r))" | "height (Branch3 l _ m _ r) = Suc(max (height l) (max (height m) (height r)))" text\Is a tree balanced?\ fun bal :: "'a tree23 \ bool" where "bal Empty = True" | "bal (Branch2 l _ r) = (bal l & bal r & height l = height r)" | "bal (Branch3 l _ m _ r) = (bal l & bal m & bal r & height l = height m & height m = height r)" lemma full_imp_height: "full n t \ height t = n" by (induct set: full, simp_all) lemma full_imp_bal: "full n t \ bal t" by (induct set: full, auto dest: full_imp_height) lemma bal_imp_full: "bal t \ full (height t) t" by (induct t, simp_all) lemma bal_iff_full: "bal t \ (\n. full n t)" by (auto elim!: bal_imp_full full_imp_bal) text \The \<^term>\add0\ function either preserves the height of the tree, or increases it by one. The constructor returned by the \<^term>\add\ function determines which: A return value of the form \<^term>\Stay t\ indicates that the height will be the same. 
A value of the form \<^term>\Sprout l p r\ indicates an increase in height.\ primrec gfull :: "nat \ 'a growth \ bool" where "gfull n (Stay t) \ full n t" | "gfull n (Sprout l p r) \ full n l \ full n r" lemma gfull_add: "full n t \ gfull n (add k y t)" by (induct set: full, auto split: ord.split growth.split) text \The \<^term>\add0\ operation preserves balance.\ lemma bal_add0: "bal t \ bal (add0 k y t)" unfolding bal_iff_full add0_def apply (erule exE) apply (drule gfull_add [of _ _ k y]) apply (cases "add k y t") apply (auto intro: full.intros) done text \The \<^term>\add0\ operation preserves order.\ lemma ord_cases: fixes a b :: int obtains "ord a b = LESS" and "a < b" | "ord a b = EQUAL" and "a = b" | "ord a b = GREATER" and "a > b" unfolding ord_def by (rule linorder_cases [of a b]) auto definition gtree :: "'a growth \ 'a tree23" where "gtree g = (case g of Stay t \ t | Sprout l p r \ Branch2 l p r)" lemma gtree_simps [simp]: "gtree (Stay t) = t" "gtree (Sprout l p r) = Branch2 l p r" unfolding gtree_def by simp_all lemma add0: "add0 k y t = gtree (add k y t)" unfolding add0_def by (simp split: growth.split) lemma ord'_add0: "\ord' k1 t k2; opt_less k1 (Some k); opt_less (Some k) k2\ \ ord' k1 (add0 k y t) k2" unfolding add0 apply (induct t arbitrary: k1 k2) apply simp apply clarsimp apply (rule_tac a=k and b=a in ord_cases) apply simp apply (case_tac "add k y t1", simp, simp) apply simp apply simp apply (case_tac "add k y t2", simp, simp) apply clarsimp apply (rule_tac a=k and b=a in ord_cases) apply simp apply (case_tac "add k y t1", simp, simp) apply simp apply simp apply (rule_tac a=k and b=aa in ord_cases) apply simp apply (case_tac "add k y t2", simp, simp) apply simp apply simp apply (case_tac "add k y t3", simp, simp) done lemma ord0_add0: "ord0 t \ ord0 (add0 k y t)" by (simp add: ord0_def ord'_add0) text \The \<^term>\del\ function preserves balance.\ lemma del_extra_simps: "l \ Empty \ r \ Empty \ del k (Branch2 l p r) = (case compare k p of 
LESS => (case del k l of None \ None | Some(p', (False, l')) => Some(p', (False, Branch2 l' p r)) | Some(p', (True, l')) => Some(p', case r of Branch2 rl rp rr => (True, Branch3 l' p rl rp rr) | Branch3 rl rp rm rq rr => (False, Branch2 (Branch2 l' p rl) rp (Branch2 rm rq rr)))) | or => (case del (if_eq or None k) r of None \ None | Some(p', (False, r')) => Some(p', (False, Branch2 l (if_eq or p' p) r')) | Some(p', (True, r')) => Some(p', case l of Branch2 ll lp lr => (True, Branch3 ll lp lr (if_eq or p' p) r') | Branch3 ll lp lm lq lr => (False, Branch2 (Branch2 ll lp lm) lq (Branch2 lr (if_eq or p' p) r')))))" "l \ Empty \ m \ Empty \ r \ Empty \ del k (Branch3 l p m q r) = (case compare k q of LESS => (case compare k p of LESS => (case del k l of None \ None | Some(p', (False, l')) => Some(p', (False, Branch3 l' p m q r)) | Some(p', (True, l')) => Some(p', (False, case (m, r) of (Branch2 ml mp mr, Branch2 _ _ _) => Branch2 (Branch3 l' p ml mp mr) q r | (Branch3 ml mp mm mq mr, _) => Branch3 (Branch2 l' p ml) mp (Branch2 mm mq mr) q r | (Branch2 ml mp mr, Branch3 rl rp rm rq rr) => Branch3 (Branch2 l' p ml) mp (Branch2 mr q rl) rp (Branch2 rm rq rr)))) | or => (case del (if_eq or None k) m of None \ None | Some(p', (False, m')) => Some(p', (False, Branch3 l (if_eq or p' p) m' q r)) | Some(p', (True, m')) => Some(p', (False, case (l, r) of (Branch2 ll lp lr, Branch2 _ _ _) => Branch2 (Branch3 ll lp lr (if_eq or p' p) m') q r | (Branch3 ll lp lm lq lr, _) => Branch3 (Branch2 ll lp lm) lq (Branch2 lr (if_eq or p' p) m') q r | (_, Branch3 rl rp rm rq rr) => Branch3 l (if_eq or p' p) (Branch2 m' q rl) rp (Branch2 rm rq rr))))) | or => (case del (if_eq or None k) r of None \ None | Some(q', (False, r')) => Some(q', (False, Branch3 l p m (if_eq or q' q) r')) | Some(q', (True, r')) => Some(q', (False, case (l, m) of (Branch2 _ _ _, Branch2 ml mp mr) => Branch2 l p (Branch3 ml mp mr (if_eq or q' q) r') | (_, Branch3 ml mp mm mq mr) => Branch3 l p (Branch2 ml mp mm) mq 
(Branch2 mr (if_eq or q' q) r') | (Branch3 ll lp lm lq lr, Branch2 ml mp mr) => Branch3 (Branch2 ll lp lm) lq (Branch2 lr p ml) mp (Branch2 mr (if_eq or q' q) r')))))" apply - apply (cases l, cases r, simp_all only: del.simps, simp) apply (cases l, cases m, cases r, simp_all only: del.simps, simp) done fun dfull where "dfull n None \ True" | "dfull n (Some (p, (True, t'))) \ full n t'" | "dfull n (Some (p, (False, t'))) \ full (Suc n) t'" lemmas dfull_case_intros = ord.exhaust [of y "dfull a (case_ord b c d y)"] option.exhaust [of y "dfull a (case_option b c y)"] prod.exhaust [of y "dfull a (case_prod b y)"] bool.exhaust [of y "dfull a (case_bool b c y)"] tree23.exhaust [of y "dfull a (Some (b, case_tree23 c d e y))"] tree23.exhaust [of y "full a (case_tree23 b c d y)"] for a b c d e y lemma dfull_del: "full (Suc n) t \ dfull n (del k t)" proof - { fix n :: "nat" and p :: "key \ 'a" and l r :: "'a tree23" and k assume "\n. \compare k p = LESS; full (Suc n) l\ \ dfull n (del k l)" and "\n. \compare k p = EQUAL; full (Suc n) r\ \ dfull n (del (if_eq EQUAL None k) r)" and "\n. \compare k p = GREATER; full (Suc n) r\ \ dfull n (del (if_eq GREATER None k) r)" and "full (Suc n) (Branch2 l p r)" hence "dfull n (del k (Branch2 l p r))" apply clarsimp apply (cases n) apply (cases k) apply simp apply (simp split: ord.split) apply simp apply (subst del_extra_simps, force) (* This should work, but it is way too slow! apply (force split: ord.split option.split bool.split tree23.split) *) apply (simp | rule dfull_case_intros)+ done } note A = this { fix n :: "nat" and p q :: "key \ 'a" and l m r :: "'a tree23" and k assume "\n. \compare k q = LESS; compare k p = LESS; full (Suc n) l\ \ dfull n (del k l)" and "\n. \compare k q = LESS; compare k p = EQUAL; full (Suc n) m\ \ dfull n (del (if_eq EQUAL None k) m)" and "\n. \compare k q = LESS; compare k p = GREATER; full (Suc n) m\ \ dfull n (del (if_eq GREATER None k) m)" and "\n. 
\compare k q = EQUAL; full (Suc n) r\ \ dfull n (del (if_eq EQUAL None k) r)" and "\n. \compare k q = GREATER; full (Suc n) r\ \ dfull n (del (if_eq GREATER None k) r)" and "full (Suc n) (Branch3 l p m q r)" hence "dfull n (del k (Branch3 l p m q r))" apply clarsimp apply (cases n) apply (cases k) apply simp apply (simp split: ord.split) apply simp apply (subst del_extra_simps, force) apply (simp | rule dfull_case_intros)+ done } note B = this show "full (Suc n) t \ dfull n (del k t)" proof (induct k t arbitrary: n rule: del.induct, goal_cases) case (1 k n) thus "dfull n (del (Some k) Empty)" by simp next case (2 p n) thus "dfull n (del None (Branch2 Empty p Empty))" by simp next case (3 p q n) thus "dfull n (del None (Branch3 Empty p Empty q Empty))" by simp next case (4 v p n) thus "dfull n (del (Some v) (Branch2 Empty p Empty))" by (simp split: ord.split) next case (5 v p q n) thus "dfull n (del (Some v) (Branch3 Empty p Empty q Empty))" by (simp split: ord.split) next case (26 n) thus ?case by simp qed (fact A | fact B)+ qed lemma bal_del0: "bal t \ bal (del0 k t)" unfolding bal_iff_full del0_def apply (erule exE) apply (case_tac n, simp, simp) apply (frule dfull_del [where k="Some k"]) apply (cases "del (Some k) t", force) apply (rename_tac a, case_tac "a", rename_tac b t', case_tac "b", auto) done text\This is a little test harness and should be commented out once the above functions have been proved correct.\ datatype 'a act = Add int 'a | Del int fun exec where "exec [] t = t" | "exec (Add k x # as) t = exec as (add0 k x t)" | "exec (Del k # as) t = exec as (del0 k t)" text\Some quick checks:\ lemma bal_exec: "bal t \ bal (exec as t)" by (induct as t arbitrary: t rule: exec.induct, simp_all add: bal_add0 bal_del0) lemma "bal(exec as Empty)" by (simp add: bal_exec) lemma "ord0(exec as Empty)" quickcheck oops end
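As a final editorial aside, the shape invariants \height\ and \bal\ of the 2-3 tree theory can be mirrored in a short Python sketch; the tuple encoding and function names are our own, not part of the theory:

```python
# 2-3 trees as nested tuples: None is Empty, ("B2", l, p, r) is Branch2 and
# ("B3", l, p, m, q, r) is Branch3, where p and q are (key, value) pairs.
def height(t):
    if t is None:
        return 0
    subtrees = t[1::2]  # children sit at the odd positions of the tuple
    return 1 + max(height(s) for s in subtrees)

def bal(t):
    """Balanced iff every child is balanced and all children are equally tall."""
    if t is None:
        return True
    subtrees = t[1::2]
    return all(bal(s) for s in subtrees) and len({height(s) for s in subtrees}) == 1

leaf = ("B2", None, (1, "a"), None)
print(bal(leaf))                          # True: both children are Empty
print(bal(("B2", leaf, (2, "b"), None)))  # False: children differ in height
```

This is exactly the property that the lemmas \bal_add0\ and \bal_del0\ show to be preserved by insertion and deletion.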