Significantly reduced ML heap usage

General

  • ML heap usage and stored heap size have been significantly reduced, especially for applications with many definitions in a locale or class context. The shrink factor is usually in the range 1.1 .. 2.0, but factors of 3 .. 10 have been seen in unusual situations. This often allows big applications to return to the "small" 64_32 memory model with its hard limit of 16 GiB, and thus to reduce the heap size by a further factor of 1.8.

This refers to the current changeset f3d19c8445ec: many changesets lead up to this improvement, but the key point is actually a rather small change (f16067da45ef):

changeset:   78048:f16067da45ef
user:        wenzelm
date:        Mon May 15 14:13:58 2023 +0200
files:       src/Pure/Isar/proof_context.ML src/Pure/assumption.ML src/Pure/variable.ML
description:
avoid capture of inner/outer context and thus reduce heaps sizes by 20..40% (see also dd04a8b654fc, e49bf4ebf330, 9c19e15c8548, 71467e35fc3c);

That factor of 1.2 .. 1.4 was merely for HOL and HOL-Analysis. Better factors have been seen in other sessions:

  • HOL-Algebra: 3.6
  • HOL-Probability: 1.6
  • Category3: 5.2
  • MonoidalCategory: 7.6
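
To give a rough idea of what "avoid capture of inner/outer context" means at the ML level, here is a minimal Standard ML sketch with made-up names, not the actual Isabelle/Pure code: a closure that captures a whole context value keeps everything inside it reachable on the heap, while restructuring the function so that only the needed component is captured lets the rest be garbage-collected.

  (* hypothetical sketch: a "context" with one large field *)
  datatype context = Context of {name: string, big_table: int list};

  (* the returned closure captures ctxt as a whole, so big_table stays
     live for as long as the closure itself is stored somewhere *)
  fun make_printer_capturing (ctxt: context) : unit -> string =
    fn () => (case ctxt of Context {name, ...} => "context " ^ name);

  (* project out the small component first: the closure now retains only
     the string, and big_table can be reclaimed by the garbage collector *)
  fun make_printer_lean (Context {name, ...}) : unit -> string =
    fn () => "context " ^ name;

  val ctxt0 = Context {name = "demo", big_table = List.tabulate (1000000, fn i => i)};
  val p1 = make_printer_capturing ctxt0;  (* retains the whole context *)
  val p2 = make_printer_lean ctxt0;       (* retains just the name *)

The changeset above presumably applies this kind of restructuring to src/Pure/Isar/proof_context.ML, src/Pure/assumption.ML and src/Pure/variable.ML, where captured inner/outer contexts would otherwise be retained in the stored heaps.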
Written by makarius on May 30 2023, 12:18 PM.