Goal

This notebook is meant to try out the workflow, taking advantage of futures to fill in computations as they complete.

Before starting

In the console, we begin with

mill core.jvm.publishLocal

First steps

  • print the git log at HEAD, for reproducibility,
  • import the core library with ivy,
  • import provingground and the main packages,
  • bind to the REPL to avoid verbose output.
In [12]:
import sys.process._
println("git log -1".!!)
commit a6ae52a12ec053bbe0202927e570658e8131e43c
Author: Siddhartha Gadgil <siddhartha.gadgil@gmail.com>
Date:   Fri Sep 13 15:18:38 2019 +0530

    removed lfs

Out[12]:
import sys.process._
In [30]:
import $ivy.`io.github.siddhartha-gadgil::provingground-core-jvm:0.1.0`
// import $ivy.`io.monix::monix:3.0.0`
Out[30]:
import $ivy.$                                                          
// import $ivy.`io.monix::monix:3.0.0`
In [31]:
import provingground._, interface._, learning._, translation._, HoTT._
Out[31]:
import provingground._, interface._, learning._, translation._, HoTT._

We will now explore without notes.

In [32]:
val tg = TermGenParams()
Out[32]:
tg: TermGenParams = TermGenParams(
  0.1,
  0.1,
  0.1,
  0.1,
  0.1,
  0.05,
  0.05,
  0.05,
  0.0,
  0.0,
  0.0,
  0.0,
  0.3,
  0.7,
  0.5,
  0.0,
  0.0,
  0.0
)
In [33]:
val terms = FiniteDistribution.unif[Term](Type, Unit, Zero, Star)
Out[33]:
terms: FiniteDistribution[Term] = FiniteDistribution(
  Vector(
    Weighted(Universe(0), 0.25),
    Weighted(Unit, 0.25),
    Weighted(Zero, 0.25),
    Weighted(Star, 0.25)
  )
)
In [34]:
val typs = FiniteDistribution.unif[Typ[Term]](Type, Unit, Zero)
Out[34]:
typs: FiniteDistribution[Typ[Term]] = FiniteDistribution(
  Vector(
    Weighted(Universe(0), 0.3333333333333333),
    Weighted(Unit, 0.3333333333333333),
    Weighted(Zero, 0.3333333333333333)
  )
)
In [35]:
val ts = TermState(terms, typs)
val lp = LocalProver(ts)
Out[35]:
ts: TermState = TermState(
  FiniteDistribution(
    Vector(
      Weighted(Universe(0), 0.25),
      Weighted(Unit, 0.25),
      Weighted(Zero, 0.25),
      Weighted(Star, 0.25)
    )
  ),
  FiniteDistribution(
    Vector(
      Weighted(Universe(0), 0.3333333333333333),
      Weighted(Unit, 0.3333333333333333),
      Weighted(Zero, 0.3333333333333333)
    )
  ),
  Vector(),
  FiniteDistribution(Vector()),
  FiniteDistribution(Vector()),
  Empty
)
lp: LocalProver = LocalProver(
  TermState(
    FiniteDistribution(
      Vector(
        Weighted(Universe(0), 0.25),
        Weighted(Unit, 0.25),
        Weighted(Zero, 0.25),
        Weighted(Star, 0.25)
      )
    ),
    FiniteDistribution(
      Vector(
        Weighted(Universe(0), 0.3333333333333333),
        Weighted(Unit, 0.3333333333333333),
        Weighted(Zero, 0.3333333333333333)
      )
    ),
    Vector(),
    FiniteDistribution(Vector()),
    FiniteDistribution(Vector()),
    Empty
  ),
  TermGenParams(
    0.1,
    0.1,
    0.1,
    0.1,
    0.1,
    0.05,
    0.05,
    0.05,
    0.0,
    0.0,
    0.0,
    0.0,
    0.3,
    0.7,
    0.5,
    0.0,
...

Since everything important is supposed to be memoized, we can compute any quantity and the intermediate steps will be remembered. So we compute the theorems directly.
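As an illustrative sketch (using only the Scala standard library, not the actual caching in provingground), memoization can be as simple as a concurrent map keyed by the input; `Memo.fib` here is a hypothetical stand-in for an expensive subcomputation:

```scala
import scala.collection.concurrent.TrieMap

// Hypothetical sketch, not the provingground implementation: cache the
// result of each subcomputation so that a repeated request, or a larger
// computation built from the same pieces, reuses the stored value.
object Memo {
  private val cache = TrieMap.empty[Int, BigInt]

  def fib(n: Int): BigInt =
    cache.getOrElseUpdate(
      n,
      if (n < 2) BigInt(n) else fib(n - 1) + fib(n - 2)
    )
}
```

In the notebook, the corresponding mechanism is that derived Tasks such as `lp.theoremsByStatement` are expected to reuse previously computed intermediate states rather than recompute them.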

In [19]:
val thmsT = lp.theoremsByStatement
Out[19]:
thmsT: monix.eval.Task[FiniteDistribution[Typ[Term]]] = Map(
  FlatMap(
    Map(
      Async(<function2>, false, true, false),
      provingground.learning.LocalProver$$Lambda$2484/1295738439@28f0d7b9,
      0
    ),
    provingground.learning.LocalProver$$Lambda$2485/2069605659@5d0be5ed
  ),
  provingground.learning.LocalProverStep$$Lambda$2486/1615995408@28b2db83,
  0
)
In [22]:
val thmsF = thmsT.materialize.runToFuture
thmsF: monix.execution.CancelableFuture[scala.util.Try[FiniteDistribution[Typ[Term]]]] = Success(
  Failure(
    java.util.concurrent.TimeoutException: Task timed-out after 12 minutes of inactivity
  )
)
In [25]:
thmsF.value
Out[25]:
res24: Option[scala.util.Try[scala.util.Try[FiniteDistribution[Typ[Term]]]]] = Some(
  Success(
    Failure(
      java.util.concurrent.TimeoutException: Task timed-out after 12 minutes of inactivity
    )
  )
)

The main result of this experiment was a failure, which should be investigated further. One possibility is that corrections for equations lead to a slowdown; we should try again without generating equations.
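The TimeoutException above can be reproduced in miniature with the standard library alone; `slowStep` is a hypothetical stand-in for the long-running prover Task:

```scala
import scala.concurrent._
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.Try

// Hypothetical sketch: `slowStep` stands in for a long-running
// computation such as lp.theoremsByStatement. Bounding the wait turns
// a hang into an explicit TimeoutException, as in the output above.
def slowStep(): Int = { Thread.sleep(2000); 42 }

val result: Try[Int] = Try(Await.result(Future(slowStep()), 100.millis))
// result is a Failure wrapping a TimeoutException
```

Materializing the Task, as in `thmsT.materialize.runToFuture`, plays a similar role: the failure is captured as a value instead of blowing up the notebook cell.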

In [36]:
val lp0 = LocalProver(ts, cutoff = 0.01)
Out[36]:
lp0: LocalProver = LocalProver(
  TermState(
    FiniteDistribution(
      Vector(
        Weighted(Universe(0), 0.25),
        Weighted(Unit, 0.25),
        Weighted(Zero, 0.25),
        Weighted(Star, 0.25)
      )
    ),
    FiniteDistribution(
      Vector(
        Weighted(Universe(0), 0.3333333333333333),
        Weighted(Unit, 0.3333333333333333),
        Weighted(Zero, 0.3333333333333333)
      )
    ),
    Vector(),
    FiniteDistribution(Vector()),
    FiniteDistribution(Vector()),
    Empty
  ),
  TermGenParams(
    0.1,
    0.1,
    0.1,
    0.1,
    0.1,
    0.05,
    0.05,
    0.05,
    0.0,
    0.0,
    0.0,
    0.0,
    0.3,
    0.7,
    0.5,
    0.0,
...
In [37]:
val nsT = lp0.nextState
Out[37]:
nsT: monix.eval.Task[TermState] = FlatMap(
  Map(
    Async(<function2>, false, true, false),
    provingground.learning.LocalProver$$Lambda$2484/1295738439@28f0d7b9,
    0
  ),
  provingground.learning.LocalProver$$Lambda$2485/2069605659@52900737
)
In [38]:
val nsF = nsT.runToFuture
nsF: monix.execution.CancelableFuture[TermState] = [running future]
In [29]:
nsF.value
Out[29]:
res28: Option[scala.util.Try[TermState]] = None

There was an error even with a low cutoff, so this is clearly a bug. However, the same code ran fine in a plain REPL, so it appears to be a notebook issue.
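Note that the `None` returned by `nsF.value` by itself only means the future had not completed when polled. A minimal standard-library illustration, with a hypothetical computation in place of the prover's next state:

```scala
import scala.concurrent._
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Hypothetical sketch: polling `value` on a still-running Future gives
// None, just as nsF.value did above; once the Future completes, it
// gives Some(Success(...)) (or Some(Failure(...)) on error).
val f = Future { Thread.sleep(200); "done" }
val before = f.value  // None: still running when polled
Await.ready(f, 5.seconds)
val after = f.value   // Some(Success("done")) after completion
```

So distinguishing "still running" from "hung" requires either waiting longer or bounding the run with a timeout, as was done above with `materialize`.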