These notes are meant to try out the workflow, taking advantage of futures whose values are filled in as the computations complete.
In the console, we begin by publishing the core locally:
mill core.jvm.publishLocal
import sys.process._
println("git log -1".!!) // record the commit at which these notes were run
import $ivy.`io.github.siddhartha-gadgil::provingground-core-jvm:0.1.0`
// import $ivy.`io.monix::monix:3.0.0`
import provingground._, interface._, learning._, translation._, HoTT._
We will now explore, with only brief notes.
val tg = TermGenParams()
val terms = FiniteDistribution.unif[Term](Type, Unit, Zero, Star)
val typs = FiniteDistribution.unif[Typ[Term]](Type, Unit, Zero)
val ts = TermState(terms, typs)
val lp = LocalProver(ts)
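For orientation, `FiniteDistribution.unif` presumably assigns equal weights summing to 1 to the given elements. A minimal standalone analogue (hypothetical names, not the ProvingGround implementation):

```scala
// Hypothetical analogue of FiniteDistribution.unif: a uniform
// probability assignment over the given (distinct) elements.
def unif[A](elems: A*): Map[A, Double] =
  elems.map(a => a -> 1.0 / elems.size).toMap

// mirroring the four-element distribution above
val d = unif("Type", "Unit", "Zero", "Star")
// each of the four elements gets weight 0.25, totalling 1.0
```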
Since everything important is supposed to be memoized, we can compute anything and the intermediate steps will be remembered. As a first computation, we obtain theorems.
val thmsT = lp.theoremsByStatement
val thmsF = thmsT.materialize.runToFuture
thmsF.value
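The memoization relied on above is, in essence, the standard compute-once-and-cache pattern. A minimal standalone sketch of that pattern (not the actual Monix-based machinery inside `LocalProver`; the names here are hypothetical):

```scala
// Minimal compute-once cache, illustrating the memoization pattern:
// the first run() evaluates the thunk, later calls reuse the result.
final class Memo[A](thunk: () => A) {
  private var evals = 0
  private lazy val cached: A = { evals += 1; thunk() }
  def run(): A = cached
  def evaluations: Int = evals
}

val thms = new Memo(() => (1 to 100).sum) // stand-in for an expensive step
thms.run()
thms.run() // no recomputation: the cached value is returned
```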
The main result of this experiment was a failure, which should be investigated further. A possible cause is that corrections for equations lead to a slowdown, so we should try without generating equations.
val lp0 = LocalProver(ts, cutoff = 0.01)
val nsT = lp0.nextState
val nsF = nsT.runToFuture
nsF.value
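A note on the `.value` calls in these snippets: a future's `value` is `None` until the computation completes and `Some(Try(result))` afterwards, so in a notebook one may need to re-evaluate the cell. A minimal sketch with a plain standard-library `Future` (Monix's `CancelableFuture` exposes `value` the same way):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import scala.util.Success

val f = Future(21 * 2)        // f.value may still be None here
Await.ready(f, 2.seconds)     // block until the computation finishes
f.value                       // now Some(Success(42))
```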
There was an error even with a low cutoff, so this is clearly a bug. However, the same code ran fine in a REPL, so it appears to be a notebook issue.