Goal chomping

  • We run the most basic strategic prover: a goal chomper that works through a succession of goals (see the sketch after this list).
  • It terminates when a goal fails or when all goals are finished.
  • Proving is by generation alone, though generation includes backward reasoning rules.
  • So far the goal chomper has been untested; here we test it.
  • We also test updating displays and related utilities.
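
Conceptually, the chomping loop is a simple recursion: attempt the head goal, record the outcome, and continue with the tail, stopping at the first failure. Below is a minimal sketch, assuming a hypothetical prove helper that runs the prover on a single goal; the actual implementation is StrategicProvers.goalChomper, used later in this notebook.

import monix.eval.Task

// Sketch only: `prove` attempts one goal, yielding Some(success) or None.
def chomp[G, S](
    prove: G => Task[Option[S]],
    goals: Vector[G],
    accum: Vector[S] = Vector()
): Task[(Vector[S], Vector[G])] =
  goals match {
    case Vector()  => Task.now((accum, Vector()))       // all goals finished
    case g +: rest =>
      prove(g).flatMap {
        case Some(s) => chomp(prove, rest, accum :+ s)  // success: move on
        case None    => Task.now((accum, goals))        // failure: stop here
      }
  }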

The goals and proving

  • To generate the goals, we start with just Type.
  • To prove them, we also include Unit, Zero and Star.
In [1]:
import $cp.bin.`provingground-core-jvm-d7193b6a8f.fat.jar`
import provingground._ , interface._, HoTT._, learning._ 
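// route the REPL pretty-printer through FansiShow so HoTT terms render legibly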
repl.pprinter() = {
  val p = repl.pprinter()
  p.copy(
    additionalHandlers = p.additionalHandlers.orElse {
      translation.FansiShow.fansiHandler
    }
  )
}
Out[1]:
import $cp.$                                              

import provingground._ , interface._, HoTT._, learning._ 
In [2]:
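// ts: proving state with terms Unit, Zero, Star and types 𝒰, Unit, Zero;
// ts0: bare state with only the universe, used to generate the goals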
val terms = FiniteDistribution.unif[Term](Unit, Zero, Star)
val typs = FiniteDistribution.unif[Typ[Term]](Type, Unit, Zero)
val ts = TermState(terms, typs)
val ts0 = TermState(FiniteDistribution(), FiniteDistribution.unif(Type))
Out[2]:
terms: FiniteDistribution[Term] = FiniteDistribution(
  Vector(
    Weighted(Unit, 0.3333333333333333),
    Weighted(Zero, 0.3333333333333333),
    Weighted(Star, 0.3333333333333333)
  )
)
typs: FiniteDistribution[Typ[Term]] = FiniteDistribution(
  Vector(
    Weighted(š’° , 0.3333333333333333),
    Weighted(Unit, 0.3333333333333333),
    Weighted(Zero, 0.3333333333333333)
  )
)
ts: TermState = TermState(
  FiniteDistribution(
    Vector(
      Weighted(Unit, 0.3333333333333333),
      Weighted(Zero, 0.3333333333333333),
      Weighted(Star, 0.3333333333333333)
    )
  ),
  FiniteDistribution(
    Vector(
      Weighted(š’° , 0.3333333333333333),
      Weighted(Unit, 0.3333333333333333),
      Weighted(Zero, 0.3333333333333333)
    )
  ),
  Vector(),
  FiniteDistribution(Vector()),
  FiniteDistribution(Vector()),
  Empty
)
ts0: TermState = TermState(
  FiniteDistribution(Vector()),
  FiniteDistribution(Vector(Weighted(𝒰 , 1.0))),
  Vector(),
  FiniteDistribution(Vector()),
  FiniteDistribution(Vector()),
  Empty
)
In [3]:
import monix.execution.Scheduler.Implicits.global
val lp = LocalProver(ts).sharpen(10)
val lp0 = LocalProver(ts0).sharpen(10)
Out[3]:
import monix.execution.Scheduler.Implicits.global

lp: LocalProver = LocalProver(
  TermState(
    FiniteDistribution(
      Vector(
        Weighted(Unit, 0.3333333333333333),
        Weighted(Zero, 0.3333333333333333),
        Weighted(Star, 0.3333333333333333)
      )
    ),
    FiniteDistribution(
      Vector(
        Weighted(š’° , 0.3333333333333333),
        Weighted(Unit, 0.3333333333333333),
        Weighted(Zero, 0.3333333333333333)
      )
    ),
    Vector(),
    FiniteDistribution(Vector()),
    FiniteDistribution(Vector()),
    Empty
  ),
  TermGenParams(
    0.1,
    0.1,
    0.1,
    0.1,
    0.1,
    0.05,
    0.05,
    0.05,
    0.0,
    0.0,
    0.0,
    0.0,
    0.3,
    0.7,
    0.5,
    0.0,
    0.0,
...
lp0: LocalProver = LocalProver(
  TermState(
    FiniteDistribution(Vector()),
    FiniteDistribution(Vector(Weighted(𝒰 , 1.0))),
    Vector(),
    FiniteDistribution(Vector()),
    FiniteDistribution(Vector()),
    Empty
  ),
  TermGenParams(
    0.1,
    0.1,
    0.1,
    0.1,
    0.1,
    0.05,
    0.05,
    0.05,
    0.0,
    0.0,
    0.0,
    0.0,
    0.3,
    0.7,
    0.5,
    0.0,
    0.0,
    0.0
  ),
  1.0E-5,
  12 minutes,
  1.01,
  1.0,
  10000,
  10,
  1.0,
  1.0,
  None,
  false,
...
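
The goals come from the bare prover lp0: the statements it generates but cannot settle are its unknown statements, and listing these in order of entropy gives the goal sequence for chomping.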
In [4]:
val unknownsT = lp0.unknownStatements.map(_.entropyVec.map(_.elem)).memoize
val unF = unknownsT.runToFuture
In [5]:
import StrategicProvers._
Out[5]:
import StrategicProvers._
In [6]:
import almond.display._
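// an updatable display; its content is refreshed once chomping completes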
val chompView = Markdown("## Results from Goal chomping\n")

Goal chomping done

  • chomped: 45
Out[6]:
import almond.display._
In [7]:
val chT = unknownsT.flatMap(typs => goalChomper(lp, typs)).memoize
Out[7]:
chT: monix.eval.Task[(Vector[Successes], Set[EquationNode], Vector[Typ[Term]])] = FlatMap(
  Async(<function2>, false, true, true),
  ammonite.$sess.cmd6$Helper$$Lambda$2791/1503415149@3a6de14a
)
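
The task yields a triple: the successes recorded for each chomped goal, the equation nodes accumulated along the way, and the goals left over; the head of the last component (queried in the final cell) is the goal on which chomping stopped.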
In [8]:
val chF = chT.runToFuture
In [27]:
currentGoal
successes.size
unF.map(_.size)
successes.reverse
In [21]:
chF.foreach(s => chompView.withContent(s"## Goal chomping done\n\n * chomped: ${successes.size}").update())
In [29]:
chF.value
Out[29]:
res28: Option[scala.util.Try[(Vector[Successes], Set[EquationNode], Vector[Typ[Term]])]] = Some(
  Success(
    (
      Vector(
        Vector(
          (
            ∑(@a : 𝒰 ){ (@a → Zero) },
            0.5,
            FiniteDistribution(
              Vector(
                Weighted((Zero , rec_{ Zero ; Zero }), 0.01127994556862484),
                Weighted((Zero , (@a : Zero) ↦ @a), 0.00974783102948558)
              )
            )
          )
        ),
        Vector(
          (
            ∑(@a : 𝒰 ){ @a },
            0.5,
            FiniteDistribution(
              Vector(Weighted((Unit , Star), 0.01970769689870014))
            )
          )
        ),
        Vector(
          (
            ((š’°  ā†’ š’° ) ā†’ š’° ),
            0.5,
            FiniteDistribution(
              Vector(
                Weighted(
                  (@a : (š’°  ā†’ š’° )) ā†¦ @a(@a(Zero)),
                  0.003035898473921898
                ),
                Weighted((@a : (š’°  ā†’ š’° )) ā†¦ Zero, 0.004434521334908531),
                Weighted((@a : (š’°  ā†’ š’° )) ā†¦ Unit, 0.004434521334908531),
                Weighted((@a : (š’°  ā†’ š’° )) ā†¦ @a(Zero), 0.0030950013985154468),
...
In [30]:
chF.map(_._3.headOption)

Conclusions

  • Many goals were chomped before the first failure.
  • The failure could be avoided by strategic backward reasoning.
  • Learning would probably also avoid it.
  • For a better test, we should run a liberal chomper that keeps going until there are no unknowns left, recording both successes and failures (sketched below).
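
A minimal sketch of such a liberal chomper, under the same assumptions as the earlier sketch (a hypothetical prove helper for a single goal; an actual variant would belong in StrategicProvers):

import monix.eval.Task

// Sketch only: chomp past failures, recording both outcomes.
def liberalChomp[G, S](
    prove: G => Task[Option[S]],
    goals: Vector[G],
    successes: Vector[S] = Vector(),
    failures: Vector[G] = Vector()
): Task[(Vector[S], Vector[G])] =
  goals match {
    case Vector()  => Task.now((successes, failures))   // no unknowns left
    case g +: rest =>
      prove(g).flatMap {
        case Some(s) => liberalChomp(prove, rest, successes :+ s, failures)
        case None    => liberalChomp(prove, rest, successes, failures :+ g)
      }
  }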