This is a small experiment to see whether targeting a type as a codomain leads to higher weight for that type. We also need functions that act as distractions.
import $cp.bin.`provingground-core-jvm-9cc21d9ac6.fat.jar`
import provingground._ , interface._, HoTT._, learning._
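// Configure the REPL pretty-printer to render HoTT terms via the FansiShow handlers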
repl.pprinter() = {
  val p = repl.pprinter()
  p.copy(
    additionalHandlers = p.additionalHandlers.orElse {
      translation.FansiShow.fansiHandler
    }
  )
}
val M = "M" :: Type
val mul = "mul" :: (M ->: M ->: M)
val eqM = "eqM" ::(M ->: M ->: Type)
val m = "m" :: M
val n = "n" :: M
val ts = TermState(FiniteDistribution.unif(mul, eqM, m, n), FiniteDistribution.unif(mul.typ, eqM.typ, M))
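The initial state has mul, eqM, m and n as its terms, and M together with the types of mul and eqM as its types.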
val lp1 = LocalProver(ts, tg = TermGenParams()).noIsles
val lp2 = LocalProver(ts, tg = TermGenParams(typAsCodW = 0.1)).noIsles
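The two provers differ only in typAsCodW, which (judging by the name and the experiment) is the weight for generation that targets types as codomains: lp1 uses the default parameters, while lp2 sets this weight to 0.1.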
val fd1T = lp1.varDist(TermGeneratorNodes.termsWithTyp(M))
val fd2T = lp2.varDist(TermGeneratorNodes.termsWithTyp(M))
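Both fd1T and fd2T are monix Tasks computing the distribution of terms with type M in the respective evolved states.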
import monix.execution.Scheduler.Implicits.global
fd1T.map(_.entropyVec).runToFuture
fd2T.map(_.entropyVec).runToFuture
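To view the two results side by side, the tasks can also be combined into one; a minimal sketch, using only the monix Task API already in scope (note that the lazy tasks are re-evaluated when run again):
val comparisonT = for {
  fd1 <- fd1T
  fd2 <- fd2T
} yield (fd1.entropyVec, fd2.entropyVec) // pair of entropy vectors: baseline versus targeted
comparisonT.runToFuture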
Unify.targetCodomain(mul, M)
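Here Unify.targetCodomain(mul, M) presumably witnesses that mul can be applied so that its final codomain is M, which is exactly what the targeting exploits; eqM, whose codomain is Type rather than M, plays the role of the distraction.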
We see that with strong targeting we get a much sharper distribution. We should use such targeting for better instantiation in the Achal problem.