We have seen that functions should be regarded as equal if they are homotopic. As usual in mathematics, we have a corresponding notion of equivalence between types: a function $f: A \to B$ is an equivalence if there is an inverse function with both compositions equal, i.e., homotopic, to the identity. However, if we naturally turn this condition into a type, the result (as we see below) is not well-behaved. So we call this relation quasi-equivalence.
Instead, we define $f$ to be an equivalence if it has separate left and right inverses. As in algebra, this condition is equivalent to having a two-sided inverse. However, it is better behaved in that a proof of the corresponding proposition-as-type is essentially unique in this formulation.
We now see the subtle problem with the relation we called quasi-equivalence. Consider the identity map on a type $A$. Let $g$ be an inverse of the identity, so that both conditions become $g \sim id$. An element of the quasi-equivalence type thus contains, in general, two proofs of $g \sim id$. For $a : A$, these give two paths from $a$ to $g(a)$. Combining one with the inverse of the other, we get a path from $a$ to itself, the monodromy. In type theoretic terms, this is the following.
Thus, for a point $a : A$, an element of $isQuasiEquiv(id_A)$ gives an element of $a = a$, i.e., a loop. If there are non-trivial loops in $A$, we have more than one proof that the identity is an equivalence.
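In type-theoretic notation, the quasi-equivalence condition discussed above can be written as follows (a sketch, using $\Sigma$-types and the homotopy relation $\sim$; the name $isQuasiEquiv$ follows the text):

```latex
isQuasiEquiv(f) :\equiv \Sigma_{g : B \to A}\, \big( (g \circ f \sim id_A) \times (f \circ g \sim id_B) \big)
```

For $f = id_A$, a term of this type contains two homotopies $g \sim id_A$, which is the source of the monodromy described above.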
In homotopy type theory, we interpret a type $A$ as a space $|A|$, and a term $a : A$ as a point $|a| \in |A|$ of the corresponding space. Here space means topological space, but we do not actually need to know what this means. All we need of spaces is:
Subsets of a Euclidean space are spaces.
For spaces $X$ and $Y$, it is meaningful to talk about continuous functions between them.
For spaces $X$ and $Y$, continuous functions between $X$ and $Y$ also form a space $C(X, Y)$.
Currying and uncurrying are continuous.
The interpretation.
As mentioned above types and terms are interpreted as spaces and points. For types $A$ and $B$, we interpret $A \to B$ as the space of continuous functions $C(|A|, |B|)$. We can similarly interpret dependent functions, but we shall not go into details here.
The most crucial ingredient in our interpretation is that of an equality type. For $x, y : A$, the type $x = y$ is interpreted as the space of paths in $|A|$ from $|x|$ to $|y|$, i.e., the space of continuous functions $\alpha: [0, 1] \to |A|$ so that $\alpha(0) = |x|$ and $\alpha(1) = |y|$.
It is a result of Voevodsky and Awodey-Warren that we do have a consistent interpretation of type theory, in the sense we have seen it so far, in which types, terms, (dependent) function types and equality types are mapped in the above fashion.
Given such an interpretation, various topological notions can be defined purely in type theoretic terms. The first of these arises when we ask when two functions are equal.
Equality of functions.
Let $A, B$ be types and $f, g: A \to B$ be functions. Then $f$ and $g$ are equal if and only if there is a path between $|f|$ and $|g|$ in $C(|A|, |B|)$. Such a path is a function $[0, 1] \to |A| \to |B|$, which after uncurrying corresponds to a function $([0, 1] \times |A|) \to |B|$. Flipping the coordinates in the domain and currying gives a function $|A| \to ([0, 1] \to |B|)$, i.e., a function mapping points of $|A|$ to paths in $|B|$. Indeed, using that the path we started with is from $|f|$ to $|g|$, it is easy to deduce that a point $|a|$ is mapped to a path from $|f(a)|$ to $|g(a)|$. Such a path corresponds to a term of the type $f(a) = g(a)$!
Thus, we see that if we have a consistent homotopical interpretation, then the equality type $f = g$ must correspond to the type $\Pi_{a : A}\, f(a) = g(a)$.
Function extensionality was the postulate that these types were equal. We shall not postulate this, but instead introduce and study the homotopy type $f \sim g$ corresponding to such equality. We express this in Agda code. First, we see the definitions for functions and dependent functions.
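A minimal sketch of the homotopy relation in Agda is the following; we assume an identity type `_==_` as introduced elsewhere in these notes, and give only the dependent version, which specializes to ordinary functions when `B` is constant.

```agda
-- homotopy between dependent functions: a pointwise equality
_∼_ : {A : Set} → {B : A → Set} → (f g : (a : A) → B a) → Set
_∼_ {A} f g = (a : A) → f a == g a
```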
While these are simple inductive constructions, in topological terms they are more interesting. Given paths $\alpha, \beta: [0, 1] \to X$, for a space $X$, if $\alpha(1) = \beta(0)$ we can define a path $\alpha * \beta$ which is the path $\alpha$ followed by the path $\beta$. This is the path corresponding to transitivity. Reflexivity corresponds to constant paths. Finally symmetry corresponds to the map associating to a path $\alpha$ the path $\bar{\alpha}: t \mapsto \alpha(1-t)$.
These exercises are for review before the midterm. They are not to be submitted.
Let $\_\le\_$ denote the type family corresponding to the less than or equal to relation. For a function $f : \mathbb{N} \to \mathbb{N}$, what does the type $\Pi_{n : \mathbb{N}}\Pi_{m : \mathbb{N}} (m \le n \to f(n) \le f(m))$ represent?
For fixed types $A$ and $B$, recall that $A \times B$ is an inductive type. For a type $X$, what is the type of $rec_{A \times B, X}$? Relate this to currying functions.
For a function $f : \mathbb{N} \to \mathbb{N}$, define a type corresponding to $f$ being bounded (above), using the type family $\_\le\_$.
Let $S$ be a type. Define an inductive type $W(S)$ whose terms are words, i.e., finite sequences, with each letter an element of $S$ or the formal inverse of an element of $S$. For example, if $S$ is the set (which we view as a type) $S = \{a, b\}$, then a term of $W(S)$ is $ab\bar{a}a\bar{b}$.
At the core of homotopy type theory (and its predecessors) is the idea of propositions as types. Namely, we interpret logical propositions (statements that are either true or false) as types, with a term having a given type being viewed as a proof of the corresponding proposition. The Curry-Howard correspondence lets us embed all of logic into type theory in this manner.
True and False
We begin by representing the two simplest propositions: true, which always holds, and false, which never does.
True and False types
```agda
data True : Set where
  qed : True

data False : Set where
```
The $True$ type has just one constructor, giving an object with type $True$. On the other hand, the $False$ type has no constructors, so there are no objects of type $False$.
There are various ways of building propositions from other propositions. We see how these translate to constructions of types.
Logical implies
If $P$ and $Q$ are propositions, which we identify with their corresponding types, we interpret the proposition $P \implies Q$ as the function type $P \to Q$.
Some deductions
Modus Ponens is the rule of deduction (going back to antiquity) that says that if the proposition $P$ is true, and $P$ implies $Q$, then $Q$ is true. We can prove this in the types interpretation. Namely, Modus Ponens translates to the statement that if we have objects of types $P$ and $P \to Q$, then we have an object of type $Q$. We get an object of type $Q$ by function application.
```agda
mp : {P : Set} → {Q : Set} → P → (P → Q) → Q
mp p imp = imp p
```
Next, we see my favourite method of proof: vacuous implication. This says that a false statement implies everything, i.e., for any proposition $P$, we have $False \implies P$, which in type theory says that $False \to P$ has objects.
As the $False$ type has no cases at all, a function is defined on $False$ by using an absurd pattern, which just says that there are no cases, so no definition is needed.
```agda
vacuous : {A : Set} → False → A
vacuous ()
```
Formally, as the $False$ type has no constructors, the recursion function $rec_{False, A}$ has type $False \to A$. We simply take this as the vacuous implication.
Even and Odd numbers
Next, we define a type family $Even(n)$, for $n : \mathbb{N}$, which is non-empty if and only if $n$ is even. To do this, we use the fact that a number is even if and only if this follows from the rules: $0$ is even, and if $n$ is even then so is $n + 2$.
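These rules translate directly into an inductive type family; the following is a sketch (we assume $\mathbb{N}$ with `succ` and numeric literals), with the constructors named to match the proofs used in these notes.

```agda
-- Even n is inhabited exactly when n is even:
-- 0 is even, and evenness of n gives evenness of n + 2
data Even : ℕ → Set where
  0even : Even 0
  _+2even : {n : ℕ} → Even n → Even (succ (succ n))
```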
We can prove that $4$ is even by applying the second constructor to the first constructor (telling us that $0$ is even) twice.
```agda
4even : Even 4
4even = (0even +2even) +2even
```
We next show that $1$ is odd. This means that we have to show that the type $Even\ 1$ is empty. While rules let us construct objects, and verify their types, there is no rule that tells us directly that a type is empty.
However, there is a nice way of capturing that a type $A$ is empty - if there is a function from $A$ to the empty type $False$, then $A$ must be empty - there is nowhere for an object of type $A$ to be mapped.
Indeed, what we prove is that there is a function from $Even\ 1$ to $False$; we define this using an absurd pattern.
1 is odd
```agda
1odd : Even 1 → False
1odd ()
```
This can be formalized by defining a dependent function on the inductive type family $Even$ with appropriate type.
More types for propositions
We now see the types corresponding to other ways of combining propositions: logical operations and and or.
Firstly, if $A$ and $B$ are types corresponding to propositions, then there are objects of both of these types if and only if there is a pair $(a, b)$ of the pair type $A \times B$.
Next, suppose $A$ and $B$ are types corresponding to propositions and we wish to construct a type corresponding to $A$ or $B$. We then require a type whose elements are elements of $A$ or elements of $B$, or more accurately the images of such elements under constructors. This is the direct sum type.
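In Agda, the direct sum can be sketched as follows; the name and constructor names are illustrative.

```agda
-- A ⊕ B holds exactly when A holds or B holds
data _⊕_ (A B : Set) : Set where
  inl : A → A ⊕ B   -- from a proof of A
  inr : B → A ⊕ B   -- from a proof of B
```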
The above methods let us combine propositions without quantifiers in all the logical ways. In doing so we used type theory, but not dependent types. Dependent types also let us express logical statements with quantifiers.
For all
Suppose we have a type $A$ and a family of types $P(a)$ (which we regard as propositions), with a type associated to each object $a$ of type $A$. Then all the types $P(a)$ have objects (i.e., all corresponding propositions are true) if and only if there is a dependent function mapping each object $a$ of type $A$ to an object of type $P(a)$. Thus, the logical statement $\forall a : A,\ P(a)$
translates to the product type $\Pi_{a : A} P(a)$.
As we have seen, in Agda we represent the product type as $(a : A) \to P(a)$.
Exists
Next, if we are given a collection of types $B(a)$ for objects $a$ of type $A$, the type corresponding to at least one of these types having an element is a $\Sigma$-type, as there is an element of $B(a)$ for at least one $a : A$ if and only if there is a pair term $(a, b)$ with $b : B(a)$.
A proof by induction
If $n$ is a natural number, we can prove by induction that one of $n$ and $n+1$ is even. We shall prove this in its type theoretic form. We will do this in an Agda module to keep notation clean; the module consists of all the indented lines following its definition. In the following code, we define as $P$ (in a submodule) the property that, for all $n$, one of $n$ and $n+1$ is even. We then prove the induction step and use it to prove the theorem.
The definition is just a translation of the logical or into dependent sums and of $\forall$ into dependent functions.
In the induction step, we assume that we have a proof for $n$. Thus $n$ and the corresponding proof are arguments, but $n$ can be inferred. There are two cases - if we have a proof that $n$ is even (i.e., the first alternative for $P(n)$), we obtain a proof that $n+2 = (n+1) + 1$ is even by using the rule $\_+2even$, which is the second alternative for the statement $P(n+1)$. In the case where $n + 1$ is even, we directly obtain the first alternative for $P(n+1)$.
The theorem itself is simply obtained by putting together the base case and the induction step.
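The proof described above can be sketched as follows, ignoring the module structure for brevity; we assume `Even` with constructors `0even` and `_+2even`, and a sum type `_⊕_` with constructors `inl` and `inr`.

```agda
-- P n : one of n and n + 1 is even
P : ℕ → Set
P n = Even n ⊕ Even (succ n)

-- induction step: a proof for n gives a proof for n + 1
step : {n : ℕ} → P n → P (succ n)
step (inl p) = inr (p +2even)   -- n even, so (n + 1) + 1 is even
step (inr p) = inl p            -- n + 1 even: first alternative for P (succ n)

-- the theorem: base case together with the induction step
theorem : (n : ℕ) → P n
theorem zero = inl 0even
theorem (succ n) = step (theorem n)
```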
This proof is remarkable in many ways. First of all, note that it is no longer, nor more complicated, than an informal proof. Further, note that we did not have to invoke the usual induction axiom schema, but instead just used the rules for constructing objects. Most significantly, as most of our types are inductive types (or type families), we get recursive definitions and inductive proofs in all these cases.
Indeed, using recursive definitions for inductive types we get all so-called universal properties. Furthermore, traditionally universal properties require a separate uniqueness statement, but recursion-induction is powerful enough to even give the uniqueness statements for universal properties. This means a great deal of mathematical sophistication (universal algebra, various aspects of category theory) is encapsulated in these simple functions.
Functions with conditions and certificates.
Often functions are meant to be defined only when some conditions are satisfied; for instance, we may wish to define a function $half : \mathbb{N} \to \mathbb{N}$ only for even numbers. Traditionally, the way to deal with this situation is either to check the conditions and declare an error (“throw an exception”) if they fail, or to return some (incorrect) value if the condition fails, perhaps causing serious errors elsewhere.
With propositions as types, there is a better alternative - the function can require, as an additional argument, a proof that a condition is satisfied. For instance, here is such a function.
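A sketch of such a function, assuming `Even` as above: the second argument is a certificate that the first is even.

```agda
-- half n is defined only given a proof that n is even
half : (n : ℕ) → Even n → ℕ
half .0 0even = 0
half .(succ (succ n)) (_+2even {n} p) = succ (half n p)
```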
Our definition also illustrates Agda’s dot notation. The first argument in the first case being .0 means that we can infer from types that its value must be zero.
Note that in the second case we must have a pattern of the form $succ(succ (n))$, from which it can be deduced that $p$ has type $Even\ n$. If we just tried to match against $n$, we could not in general reconcile the terms and types.
We may also wish to assert that the result of functions have some additional property - for instance that a function returns a sorted list. This is best achieved if the value of the function includes a proof that the condition holds. For example, the following function doubles a number and also gives evidence that it is even.
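A minimal sketch of such a doubling function, assuming a $\Sigma$-type (dependent pair) with constructor `_,_` and `Even` as above:

```agda
-- double n returns 2n together with a certificate that 2n is even
double : ℕ → Σ ℕ Even
double zero = (zero , 0even)
double (succ n) with double n
... | (m , p) = (succ (succ m) , p +2even)
```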
As a more complex example, we construct the function governing the Collatz conjecture, which maps a natural number $n$ to $n/2$ if $n$ is even, and $3n + 1$ if $n$ is odd, or equivalently $n+1$ is even. We use our halving function with proof, and the proof that one of $n$ and $n+1$ is even.
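A sketch of the Collatz step along these lines, assuming `half` as above, the proof `theorem : (n : ℕ) → Even n ⊕ Even (succ n)` from the module, and multiplication `_*_` on $\mathbb{N}$:

```agda
-- one step of the Collatz iteration
collatz : ℕ → ℕ
collatz n with theorem n
... | inl p = half n p       -- n even: return n / 2, using the certificate p
... | inr _ = succ (3 * n)   -- n odd: return 3n + 1
```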
Note that we access a term T in the module n|n+1even from outside the module by prefacing it with n|n+1even. to get n|n+1even.T.
The identity type
One of the most fundamental concepts in homotopy type theory is the identity type family, representing equality between objects with a given type. This is an inductive type family, generated by the reflexivity constructor giving an equality between an object and itself.
The identity type
```agda
data _==_ {A : Type} : A → A → Type where
  refl : (a : A) → a == a
```
Note that while this is an inductive type family, for a fixed object $a$ the type $a == a$ is not an inductive type defined by $refl(a)$, i.e., we cannot define (dependent) functions on this type by just defining them on the reflexivity constructor. This is a subtle point, which will become clearer as we look at the topological interpretation. We shall study the identity type extensively.
For now, let us show some basic properties of the identity type. All these are proved by constructing proof terms by induction. These are in a separate module Id.
Firstly, we see that equality (given by the identity type) is symmetric and transitive. Symmetry is straightforward.
```agda
sym : {A : Type} → {x y : A} → x == y → y == x
sym (refl x) = refl x
```
Next, we see transitivity of equality, for which we use $\&\&$ as notation.
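A sketch of transitivity with this notation is the following.

```agda
-- transitivity: paths compose
_&&_ : {A : Type} → {a b c : A} → a == b → b == c → a == c
(refl a) && (refl .a) = refl a
```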
We have once more used the dot notation. Notice that we have a term .a - this says that we can deduce, from the types, that $a$ is the only possibility at its position in the pattern.
If two terms are equal, then the terms resulting from applying a function to them are also equal. We prove this next.
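A sketch of this congruence property in the notation of these notes:

```agda
-- applying a function preserves equality
cong : {A B : Type} → (f : A → B) → {x y : A} → x == y → f x == f y
cong f (refl x) = refl (f x)
```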
As we see, we can express all the usual mathematical statements using types built up using our basic constructions: inductive types, functions and dependent functions. We have also seen that the basic rules for constructing objects are powerful rules of deduction. However, there are some things they cannot deduce, for instance the statement (called the axiom of extensionality) that if $f, g: A \to B$ are functions with $f(a) = g(a)$ for all $a : A$, then $f = g$. Hence, we have to introduce this as a postulate: we just postulate that there is an object (which we give a name) of a given type.
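A sketch of how such a postulate might be stated in the notation of these notes (the name `extensionality` is ours):

```agda
-- postulated: pointwise equal functions are equal
postulate
  extensionality : {A B : Type} → (f g : A → B) →
                   ((a : A) → f a == g a) → f == g
```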
We briefly saw the Fin type family when constructing trees. We clarify and elaborate on its construction.
The Fin type family is indexed by natural numbers $\mathbb{N}$. For $n : \mathbb{N}$, $Fin(n)$ is a finite type with $n$ elements, corresponding to the integers $k$ with $0 \leq k < n$. We denote by $[k]_n$ the element corresponding to the natural number $k$ in $Fin(n)$. Note that $[1]_2$ and $[1]_3$ are not the same term even though they correspond to the same natural number.
We define $Fin$ as an inductive type family with two constructors. In Agda, this is as follows.
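The following is a sketch of the definition: `fzero n` gives $[0]_{n+1}$, and `fsucc` sends $[k]_n$ to $[k+1]_{n+1}$.

```agda
-- Fin n : the integers k with 0 ≤ k < n
data Fin : ℕ → Set where
  fzero : (n : ℕ) → Fin (succ n)
  fsucc : {n : ℕ} → Fin n → Fin (succ n)
```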
The constructors are similar to the $zero$ and $succ$ constructors for natural numbers. However, as remarked earlier, $[0]_2$ and $[0]_3$ are not the same term, so the constructor $fzero$ takes an argument specifying the type of the term constructed. Since $Fin(0)$ does not have a term corresponding to $0$, but $Fin(k)$ does for $k \geq 1$, we set $fzero(n) = [0]_{n +1}$. Formally, as usual, we only specify the type of $fzero$ and declare it to be a constructor.
The second constructor $fsucc$ takes $[k]_n$ to $[ k + 1]_{n +1}$. On the associated natural numbers, this is just the successor function. However it maps the type $Fin(n)$ to the type $Fin(n+1)$ - as one should expect since, given $0 \leq k < n$, we need not have $k + 1 <n$, but always have $0 \leq k+1 \leq n +1$.
Note that if $0 \leq k < n$, then there is a unique way to obtain $[k]_n$ using the given constructors, namely (using powers to denote iterated application) $[k]_n = fsucc^k (fzero (n - k - 1))$.
By a recursive definition, we can define the function $[k]_n \mapsto k$.
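A sketch of this function, assuming `Fin` as above:

```agda
-- the forgetful map sending [k]ₙ to k
toℕ : {n : ℕ} → Fin n → ℕ
toℕ (fzero n) = zero
toℕ (fsucc k) = succ (toℕ k)
```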
We now consider a more complex example, which illustrates the most general form of constructors for inductive types.
A rooted tree with leaf labels of type $A$ is a structure built recursively, being of one of the forms:
a single vertex, which is the root, with a label from $A$, or
a vertex, the root, to which are attached finitely many rooted trees with leaf labels by an edge to their roots.
To model this, specifically to model “finitely many rooted trees”, we introduce the inductive type family associating to a natural number $n: \mathbb{N}$ the set of natural numbers less than $n$.
The Fin type family.
We construct an inductive type family $Fin: \mathbb{N} \to \mathcal{U}$, with $Fin(n)$ consisting of integers $k$ with $0 \leq k < n$. For $n > 0$, this has an element $0$. Further, if $k$ gives an element of $Fin(n)$, then $k + 1$ is an element of $Fin(n + 1)$. It is easy to see that these give the constructors of the type family. Thus, the constructors are $fzero : \Pi_{n : \mathbb{N}} Fin(n + 1)$ and $fsucc : \Pi_{n : \mathbb{N}} (Fin(n) \to Fin(n + 1))$.
Observe that for $n : \mathbb{N}$, a collection of $n$ objects of type $X$ can be viewed as a function $Fin(n) \to X$. To consider all finite trees we consider terms of type $\Pi_{n : \mathbb{N}} Fin(n)\to X$. Using this, we can define the inductive type of rooted trees with leaf labels of type $A$.
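A sketch of the resulting inductive type: a node carries, for some $n$, a family of $n$ subtrees indexed by $Fin(n)$. The names are illustrative.

```agda
-- rooted trees with leaf labels of type A
data Tree (A : Set) : Set where
  leaf : A → Tree A
  node : (n : ℕ) → (Fin n → Tree A) → Tree A
```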
Note that the constructor for rooted trees is not of the earlier forms, but of a more general form. For a type $W$, we can regard a term of $A \to W$ as giving a family of terms of $W$, indexed by $A$. More generally, we can construct families by regarding $W$ itself as the type of a family, and regarding functions, and dependent functions, to families of terms of $W$ as families of terms of $W$. Note that the domains cannot involve $W$.
We add another rule for types of constructors for $W$. This completes our list of constructors, so we now have the full definition of inductive types.
If $T$ is the type of a constructor of $W$, and $V = \dots \to W$ is the type of a family of terms of $W$, then $V \to T$ can be the type of a constructor.
Consider recursive and inductive definitions on rooted trees, or more generally on inductive types with constructors of the new form. The value of a function on a node can depend on its values on each of the rooted trees that constitute it. To formalize this, observe that if $\alpha: A \to W$ is a family of terms of type $W$, and $f : W \to X$, then we obtain a function $f \circ \alpha: A \to X$ by composition. The value of $f$ on the image of the constructor can depend on $f \circ \alpha$.
We thus make the following rule for $R_{W, X}$, which is easy to generalize to arbitrary families and to induction functions.
If $T$ is the type of a constructor of $W$, $A$ is a type and $\alpha : A \to W$ is a variable (or term), then for a constructor $\varphi : (A \to W) \to T$, we have $R_{W, X}(\varphi) = (A \to W) \to (A \to X) \to R_{W, X}(\varphi(\alpha))$.
Exercise:
Define a fold function, which, given $n : \mathbb{N}$ (which can be inferred), $f: Fin(n) \to A$, $\_ *\_ : A \times B \to B$ and $b : B$, for types $A$ and $B$, has result
$f(0) * (f(1) * (\dots * (f(n-1) * b)))$
Define a function that, given a rooted tree with leaf labels in $A$ gives the list of labels.
While we have introduced dependent functions, we do not formally have any way to construct them - those constructed so far used pattern matching. We shall generalize the two ways we have seen to construct functions, lambdas and using recursion functions to constructions of dependent functions. In doing so we also review and clarify the case of ordinary functions.
Lambdas
We can construct a function of type $A \to B$ as a lambda, i.e., we consider a $\lambda$-expression $\lambda a \to b$, which is usually denoted $a \mapsto b$ in mathematics. More precisely,
$a$ is a variable of type $A$. This is just a term with a name, which we temporarily introduce. Note that the type $A$ may not have any terms at all.
$b$ is an expression (judgmentally) of type $B$ which can involve $a$. This means that $b$ is a term formed by the usual rules for forming terms - applying a function to an argument, constructing function types, lambdas etc., except that we can use the term $a : A$ along with other terms previously introduced to the context.
When we apply $\lambda a \to b$ to $a' : A$, we obtain the result of substituting $a'$ for $a$ in $b$. For this to be meaningful, it is to be understood that our rules for forming terms include rules for substitution.
It is clear how to generalize this to obtain dependent functions, with type say $\Pi_{a : A} B(a)$. Namely, we consider terms $\lambda a \to b$ so that the expression $b$ has type $B(a)$.
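Small illustrations of both kinds of lambdas in Agda (the names are ours):

```agda
-- an ordinary lambda
idBool : Bool → Bool
idBool = λ b → b

-- a dependent lambda: the type of the second argument (and of the
-- result) depends on the first argument
depId : (A : Set) → A → A
depId = λ A a → a
```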
Recursion functions revisited.
Functions on inductive types can be constructed using associated recursion functions. We clarify this in the case of the simple inductive types we have considered so far, and extend this to some more general inductive types.
Constructors for $W$
For a type $W$, previously, we considered constructors as terms with type that can be obtained in the following ways.
$W$ itself can be the type of a constructor.
If $T$ is the type of a constructor, then $W \to T$ can also be a type of a constructor for $W$.
If $T$ is the type of a constructor and $A$ is a type not involving $W$, then $A \to T$ can also be a type of a constructor for $W$.
It may seem that any type $X$ should fall into one of the latter two cases, but this is not so. For example $W \to W$ is not in either case, and indeed $(W \to W) \to W$ is not a valid type for a constructor.
Observe that for a constructor $g$ of type $W \to T$, if $w : W$ then $g(w)$ is also a constructor, and we have a similar statement for the type $A \to T$.
We now look at two more ways of obtaining constructors, with functions in the second and third rule above generalized to dependent functions.
If we have a function associating to $w : W$ the type $T(w)$ of a constructor for $W$, then $\Pi_{w : W} T(w)$ can also be the type of a constructor.
If $A$ is a type not involving $W$ and we have a function associating to $a : A$ the type $T(a)$ of a constructor for $W$, then $\Pi_{a : A} T(a)$ can also be the type of a constructor.
Domains of Recursion
Given a constructor $\varphi$ for $W$ and a type $X$, we obtain a type which we call the domain of recursion of $\varphi$ and denote $R_{W, X}(\varphi)$. For constructors obtained using the above rules, we define this as follows. Note that we could have defined this purely in terms of the type of $\varphi$, but we chose the definition to be parallel to the case for dependent functions.
If $\varphi : W$, then $R_{W, X}(\varphi) = X$.
If $\varphi : W \to T$ and $w : W$ is a variable (or term), then $R_{W, X}(\varphi) = W \to X \to R_{W, X}(\varphi(w))$. Note that this does not depend on the choice of $w : W$. Indeed it is determined by the type $T$ of $\varphi(w)$.
If $\varphi : A \to T$ and $a : A$ is a variable (or term), then $R_{W, X}(\varphi) = A \to R_{W, X}(\varphi(a))$. Note that this does not depend on the choice of $a : A$. Indeed it is determined by the type $T$ of $\varphi(a)$.
If $\varphi : \Pi_{w: W} T(w)$, then $R_{W, X}(\varphi) = \Pi_{w: W} (X \to R_{W, X}(\varphi(w))).$
If $\varphi : \Pi_{a: A} T(a)$ with the type $A$ not involving $W$, then $R_{W, X}(\varphi) = \Pi_{a: A} R_{W, X}(\varphi(a)).$
If we have a type such as $W \to W$ that involves $W$ but is not equal to $W$, it is not clear whether the second or third rule applies. We shall eventually allow some such types, but not others. For these we will extend the rules for $W$.
The recursion function
Suppose now that $W$ is given by constructors $\varphi_1$, $\dots$, $\varphi_k$. Then when defining $W$, we introduce for each type $X$ a term $rec_{W, X}$ with type $R_{W, X}(\varphi_1) \to \dots \to R_{W, X}(\varphi_k) \to (W \to X)$.
We make certain judgments, giving definitional equalities involving the recursion function.
Induction functions
It is straightforward to modify the above for definitions of dependent functions. The analogues of recursive definitions for dependent functions are called inductive definitions, and these are interpreted in terms of induction functions.
For natural numbers
First we briefly consider an example, namely dependent functions on $\mathbb{N}$. Suppose $X : \mathbb{N} \to \mathcal{U}$ is a type family. To define inductively a dependent function $f : \Pi_{n : \mathbb{N}} X(n)$, we must specify
$f(0)$, of type $X(0)$.
For all $n$, $f(n + 1) : X(n+1)$ in terms of $n : \mathbb{N}$ and $f(n) : X(n)$.
Thus we require a term of type $X(0)$, and for each $n$, a term of type $X(n+1)$ as a function of a term of type $\mathbb{N}$ and a term of type $X(n)$. The latter data can be viewed as a single term of type $\Pi_{n : \mathbb{N}} (X(n) \to X(n+1))$. Thus, the induction function for natural numbers has the type $X(0) \to \Pi_{n : \mathbb{N}} (X(n) \to X(n+1)) \to \Pi_{n : \mathbb{N}} X(n)$.
Notice that this type involves the expression $n+1 = succ(n)$, so the type depends on the constructor $\varphi$, not just its type.
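A sketch of the induction function for natural numbers described above:

```agda
-- induction: a base case and a step give a dependent function on ℕ
indℕ : (X : ℕ → Set) → X zero →
       ((n : ℕ) → X n → X (succ n)) → (n : ℕ) → X n
indℕ X x₀ step zero = x₀
indℕ X x₀ step (succ n) = step n (indℕ X x₀ step n)
```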
Domains of induction
Given now a type family $X$, we define for constructors $\varphi$ constructed as above a type $I_{W, X}(\varphi)$ as follows.
If $\varphi : W$, then $I_{W, X}(\varphi) = X(\varphi)$.
If $\varphi: W \to T$, then $I_{W, X}(\varphi) = \Pi_{w: W} (X(w) \to I_{W, X}(\varphi(w)))$.
If $\varphi: A \to T$, then $I_{W, X}(\varphi) = \Pi_{a: A} I_{W, X}(\varphi(a))$.
If $\varphi: \Pi_{w: W} T(w)$, then $I_{W, X}(\varphi) = \Pi_{w: W} (X(w) \to I_{W, X}(\varphi(w)))$.
If $\varphi: \Pi_{a : A} T(a)$, then $I_{W, X}(\varphi) = \Pi_{a: A} I_{W, X}(\varphi(a))$.
Observe that the constructors involving dependent functions are essentially the same as those involving ordinary functions.
The induction function
This can now be introduced in a manner similar to the recursion function. Suppose now that $W$ is given by constructors $\varphi_1$, $\dots$, $\varphi_k$. Then when defining $W$, we introduce for each type family $X$ a term $ind_{W, X}$ with type $I_{W, X}(\varphi_1) \to \dots \to I_{W, X}(\varphi_k) \to \Pi_{w : W} X(w)$.
We make certain judgments, giving definitional equalities involving the induction function.
Inductive type families
We now sketch the generalization to inductive type families.
Type families
A type family is one of:
* A type $W : \mathcal{U}$.
* A function from a type $A$ to type families.
Thus, a typical type family is $A \to B \to \mathcal{U}$ or $\Pi_{a : A} (B(a) \to \mathcal{U})$.
Inductive type families
An inductive type family, say $W : A \to \mathcal{U}$, is a type family which is determined by constructors and for which there are recursion and induction functions. The most important point to note about such a family is that even though, for $a : A$, $W(a)$ is a type, it is not an inductive type. For instance, for a type $X$ we do not have a recursion function $rec_{W(a), X}$. Instead we can recursively (and inductively) define (dependent) functions for all values of the index, i.e., we have a recursion function whose type has conclusion $\Pi_{a : A} (W(a) \to X)$, and similarly for induction functions.
Constructors for families.
Consider an inductive type family $W : A \to \mathcal{U}$. It is meaningless to talk of a constructor of type $W$, as $W$ is not a type. Instead, basic constructors for such families are of types of the form $W(a)$ for $a : A$. Similarly, we can extend a constructor type $T$ to $W(a') \to T$.
Definitions of recursion and induction functions for type families from constructors are easy generalizations of the definitions for inductive types.
A function $f$ on a domain $A$, when applied to an element $a$ of type $A$, gives a value $f(a)$. Further, a function is determined by the values it gives, in the sense that if $f$, $g$ are functions with domain $A$ so that $f(a) = g(a)$ for all $a : A$, then $f = g$.
Dependent functions generalize functions, with the above properties continuing to hold. What we no longer require is that $f(a)$ has a fixed type independent of $a$, namely the codomain $B$. Instead we have a family of codomains $B(a)$, so that $f(a)$ has type $B(a)$.
Such objects are common in mathematics (and physics). For example, the velocity of water flowing on a sphere gives a vector field on the sphere. At a point $x$ on the sphere, the value $V(x)$ of the vector field $V$ lies in the tangent space at the point $x$.
Hence it is best to view vector fields as dependent functions. In mathematics, the codomains are generally called fibers, which together form a fiber bundle, and dependent functions are called sections of this bundle.
We can (and often do) regard a dependent function as an ordinary function with codomain the union of the various codomains. But, besides losing information, the function we get is not natural, in the sense that it does not respect the underlying symmetries.
Products and $\Pi$-types
Another situation which is best viewed in terms of dependent functions is a product. Consider a product $A_1 \times A_2 \times \dots \times A_n$. An element of such a product is a tuple $(a_1, a_2, \dots, a_n)$. We would like to view such tuples as functions $a$ on the index set $\{1, 2, \dots, n\}$, but to do so we would have to require $a(i) : A_i$, i.e., a separate codomain for each index. Thus tuples are dependent functions.
More generally, if we have a family of types $B(a)$ indexed by a set $A$, then elements of the product $\prod_{a : A} B(a)$ are exactly dependent functions $f$ on $A$ so that $f(a)$ has type $B(a)$ for $a : A$. Thus a type of dependent functions is denoted as in $\Pi_{a : A} B(a)$ and is often called a $\Pi$-type.
Examples and code
We now turn to some simple examples and code. First we consider type families, which give the collection of codomains for dependent functions. The typical example is vectors of length $n$ of elements of a type $A$. Formally, a type family is just a function with codomain a universe, so the values the function takes are types.
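A sketch of the vector type family in Agda: the element type `A` is a parameter (it appears to the left of the colon), while the natural-number length is an index.

```agda
-- Vec A n : vectors of length n with entries of type A
data Vec (A : Set) : ℕ → Set where
  [] : Vec A zero
  _::_ : {n : ℕ} → A → Vec A n → Vec A (succ n)
```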
This gives a family of types parametrized by $A$ and indexed by natural numbers. The difference between parameters and indices is a bit subtle but crucial. Observe that the Agda syntax treats them quite differently.
Inductive types and inductive type families
We defined Booleans and natural numbers using the data statement, and defined functions on them by pattern matching. More conceptually, these are inductive types, and functions defined on them are defined by applying the recursion function. For instance, in the case of Booleans, the recursion function takes as input a type $A$ and two objects with that type (the values of $true$ and $false$) and gives a function from Booleans to $A$.
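The recursion function for Booleans described above can be sketched as follows.

```agda
-- recursion for Bool: the data is the value at true and at false
recBool : (A : Set) → A → A → Bool → A
recBool A t f true = t
recBool A t f false = f
```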
In the case of lists, for each type $A$, we obtain a corresponding inductive type. Thus we have a family of inductive types, parametrized by the type $A$.
In the case of vectors too, the type $A$ acts as a parameter. Assume that the type $A$ is fixed, so vectors are now a family of types indexed by natural numbers.
However, the vectors of a fixed size (say $7$) do not form an inductive type - we cannot define a function recursively on vectors of length $7$ alone. In this case, this is evident from the definition, as the constructor giving vectors of size $7$ uses vectors of size $6$. So a recursive definition must also involve vectors of size $6$, hence of $5$ etc.
We can, however, recursively define functions on vectors of all sizes, i.e., for all values of the index. For example, here is the function that appends (adds to the end) an element to a vector.
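A sketch of such an append function (the name `_:+_` is illustrative, and we assume the `Vec` family with constructors `[]` and `_::_`):

```agda
-- Append an element at the end of a vector; the length index grows by one.
_:+_ : {A : Set} {n : ℕ} → Vec A n → A → Vec A (succ n)
[] :+ a        = a :: []
(x :: xs) :+ a = x :: (xs :+ a)
```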
Thus, vectors form an inductive type family indexed by natural numbers (and parametrized by $A$). As we remarked, the type for a given index is not an inductive type. Note that even in cases where we can meaningfully write down a recursion rule for the type at a fixed index, such a recursion rule does not hold, i.e., there is no recursion function using which we can translate pattern matching into function application.
Remark: From the point of view of programming languages, there is another sense in which indexing by natural numbers is different from parametrizing by types - the types we construct depend on objects, not just other types. Modern languages usually allow types to depend on other types (sometimes called generics), but most do not allow dependence on objects.
A dependent function
We shall now construct a dependent function countdown that maps a natural number $n$ to the list consisting of natural numbers from $n$ down to $0$. Thus the type of $countdown(n)$ is vectors of natural numbers of length $n+1$.
The definition in terms of pattern matching is similar to recursive definitions of functions. In terms of homotopy type theory, dependent functions on inductive types are constructed by applying a dependent function called the induction function to the data.
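A sketch of this pattern-matching definition, assuming the `Vec` family of vectors:

```agda
-- countdown n is the vector n :: (n - 1) :: … :: 0 :: [], of length n + 1.
countdown : (n : ℕ) → Vec ℕ (succ n)
countdown zero     = zero :: []
countdown (succ n) = succ n :: countdown n
```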
As mentioned earlier, the type of a dependent function is called the product type corresponding to the family of types with base the domain. For instance, the type of $countdown$ is $$\Pi_{n : \mathbb{N}}\ Vec\ \mathbb{N}\ (n + 1).$$
Except for universes (which we will keep in the background as far as possible), we have now seen all the type constructions - inductive types, functions and dependent functions.
Dependent pairs
Besides dependent products, the other fundamental dependent type associated to a type $A$ and a family of types $B: A \to \mathcal{U}$ is the dependent sum type, denoted $\Sigma_{a : A} B(a)$, whose elements are pairs $(a, b)$ with $a : A$ and $b : B(a)$. Like the pair type $A \times B$, this can be defined as an inductive type.
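As an inductive type, the dependent pair type may be sketched as follows (the precise formulation is an assumption):

```agda
-- Dependent pairs (a , b) with a : A and b : B a.
data Σ (A : Set) (B : A → Set) : Set where
  _,_ : (a : A) → B a → Σ A B
```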
A principal use of types in programming is to avoid writing meaningless expressions, by ensuring that such expressions violate the rules for constructing objects and types. Dependent types are even better than this. For instance, consider the zip operation on vectors which associates to a vector of type $A$ with entries $a_i$ and a vector of type $B$ with entries $b_j$ a vector of type $A\times B$ with entries pairs $(a_i, b_i)$. This makes sense only when both vectors have the same length. Using dependent functions and types, we can define the function in such a way that it is defined only on pairs of vectors with the same length.
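A sketch of such a zip on vectors, assuming the `Vec` family and a product type `_×_` with pairing `_,_` (both assumptions here):

```agda
-- Defined only on pairs of vectors with the same length n.
zip : {A B : Set} {n : ℕ} → Vec A n → Vec B n → Vec (A × B) n
zip [] []               = []
zip (a :: as) (b :: bs) = (a , b) :: zip as bs
```

Since both arguments share the length index $n$, applying `zip` to vectors of different lengths is a type error, caught at compile time.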
Note that we could have used lists in place of vectors, but we would then have to give a definition that either can lead to errors at runtime or truncates the longer list, perhaps giving unexpected values.
Exercise:
Define a function $vmap$ on vectors analogous to $map$ on lists.
We have constructed various inductive types, and constructed functions on these types recursively. We have, however, not addressed the question of which definitions of inductive types and recursive functions are valid. We now turn to these questions. The definition of inductive types we now consider is not the most general, but it includes all the examples so far.
Examples recalled
Suppose now that we wish to define an inductive type $W$. We recall two previous types we have defined. For $W = \mathbb{N}$, we used the following definition.
```agda
data ℕ : Set where
  zero : ℕ
  succ : ℕ → ℕ
```
Lists are a parametrized inductive type, which means that for each type $A$ we get a type of lists with entries in $A$. We focus on the case where $A =\mathbb{N}$, so the inductive definition of $List\mathbb{N}=List\ \mathbb{N}$ becomes the following.
```agda
data Listℕ : Type where
  [] : Listℕ
  _::_ : ℕ → Listℕ → Listℕ
```
Constructors.
The inductive type $W$ is defined by constructors, which are curried functions that give an element in $W$, with the arguments possibly in $W$. In the above examples, the constructors have types:
For $W = \mathbb{N}$, $zero$ has type $W$ and $succ$ has type $W \to W$.
For $W = List\mathbb{N}$, $[]$ has type $W$ and $\_::\_$ has type $\mathbb{N}\to W \to W$.
It is clear how to generalize these. The type of a constructor for $W$ can be:
$W$ itself (a constant, which we can think of as a function of no arguments giving an element of $W$);
obtained from a constructor type $T = \dots \to W$ by prepending an argument of type $W$, to get $W \to T = W \to \dots \to W$;
obtained from a constructor type $T = \dots \to W$ by prepending an argument of a type $A$ not involving $W$, to get $A \to T = A \to \dots \to W$.
We shall later see some other ways of building constructors. For the present, observe that all our examples so far, $\mathbb{N}$, $List A$, $Bool$, $A \times B$ and $A \oplus B$ are all of this form. We shall next see what we mean by recursive definitions on such inductive types.
Recursive definitions.
We shall associate with an inductively defined type $W$ and another type $X$ a function $rec_{W, X}$ which has arguments the ingredients of a recursive definition and gives a function from $W$ to $X$. So the type of $rec_{W, X}$ is of the form $\dots \to (W \to X)$.
First, consider the case when $W = \mathbb{N}$. A recursive function $f: W \to X$ is defined by specifying its value on $zero$ and on $succ(n)$ for $n : \mathbb{N}$. These values in turn are determined by terms, whose types we call $D_{zero}$ and $D_{succ}$. In terms of these, the type of $rec_{\mathbb{N}, X}$ is thus $D_{zero} \to D_{succ} \to (\mathbb{N} \to X)$.
The function $f$ is determined on $zero$ by simply specifying its image $f(zero) : X$, so $D_{zero}$ is just $X$. On the other hand, for $n : \mathbb{N}$, $f(succ(n))$ is a function of $n$ and $f(n)$. So specifying the value of $f$ on all numbers of the form $succ(n)$ amounts to giving a function $\mathbb{N} \to X \to X$, with the first argument to be applied to $n$ and the second to $f(n)$.
Thus, the recursion function of $\mathbb{N}$ has the type $$rec_{\mathbb{N}, X} : X \to (\mathbb{N} \to X \to X) \to (\mathbb{N} \to X).$$
Next, consider the case of lists of natural numbers. Again, we define a function $f$ recursively to a type $X$ in terms of the value on the result of each constructor. For the empty list, the value of $f$ is simply given by $f([]) : X$. On the other hand, for a list of the form $head :: tail$, the value of $f$ can be a function of $head$, $tail$ and $f(tail)$. Thus, we have a recursion function with the type $$rec_{List\mathbb{N}, X} : X \to (\mathbb{N} \to List\mathbb{N} \to X \to X) \to (List\mathbb{N} \to X).$$
It is easy to generalize these examples to an inductive type $W$ of the form we are considering. Namely, we associate to a constructor type $T$ a recursion data type $R_X(T)$ as follows:
if $T = W$, $R_X(T) = X$
if $T = W \to T'$, $R_X(T) = W \to X \to R_X(T')$
if $T = A \to T'$ with $A$ independent of $W$, then $R_X(T) = A \to R_X(T')$
These rules tell us the type of the recursion functions. The recursion function satisfies certain definitional equalities, saying that it acts on the image of constructors as specified. For example, in the case of natural numbers, with $d_0 : X$ and $d_s : \mathbb{N} \to X \to X$, we get the identities: $$rec_{\mathbb{N}, X}(d_0)(d_s)(zero) = d_0,$$ $$rec_{\mathbb{N}, X}(d_0)(d_s)(succ(n)) = d_s(n)(rec_{\mathbb{N}, X}(d_0)(d_s)(n)).$$
In homotopy type theory, we view an inductive definition as introducing a type, constructor functions, a recursion function and certain definitional equalities. We shall see later the final ingredients of an inductive type definition, namely a so called induction function and corresponding definitional equalities.
In terms of Agda code, we can simply define the recursion functions. For instance, for natural numbers, we have:
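A sketch of this definition, taking the type $X$ as an implicit argument (the exact form is an assumption):

```agda
-- recℕ d0 ds is the function f with f zero = d0 and f (succ n) = ds n (f n).
recℕ : {X : Set} → X → (ℕ → X → X) → ℕ → X
recℕ d0 ds zero     = d0
recℕ d0 ds (succ n) = ds n (recℕ d0 ds n)
```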
We shall construct the recursion functions in the other cases later. First, we shall see that we can make recursive definitions using the recursion function, without any pattern matching. In terms of Agda code, this means we need to use pattern matching only once - to construct the recursion function. In terms of homotopy type theory, all we use is that we have a recursion function of the appropriate type and that the corresponding definitional equalities hold.
Definitions using the recursion function.
We turn to examples of definitions using recursion functions. First we define the factorial. Note that in this definition $n!$ is just a variable name.
```agda
_! : ℕ → ℕ
_! = recℕ 1 (λ n n! → (n + 1) * n!)
```
Next, we define addition using the recursion function. This is a curried function $\_plus\_ : \mathbb{N} \to \mathbb{N} \to \mathbb{N}$, so $\_plus\_ (0) = 0\ plus\ \_$ is a function, namely the identity. Similarly $\_plus\_ (succ(n)) = (succ (n))\ plus\ \_$ is a function defined in terms of $n$ and $n\ plus\ \_$, where $n\ plus\ \_$ is the function addition by $n$ (we use the variable name $nplus = \_ plus\_ (n)$). Clearly the following is a definition of addition (we have written this using nested lambdas for clarity).
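A sketch of such a definition, assuming a recursion function $recℕ$ of type $X \to (\mathbb{N} \to X \to X) \to (\mathbb{N} \to X)$, applied here with $X = \mathbb{N} \to \mathbb{N}$:

```agda
-- The base case is the identity; the step sends n plus _ to (succ n) plus _,
-- using (succ n) plus m = succ (n plus m).
_plus_ : ℕ → ℕ → ℕ
_plus_ = recℕ (λ m → m) (λ n nplus → (λ m → succ (nplus m)))
```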
As you can observe, we have now defined functions purely in terms of lambdas and recursion functions. Thus, we now have clear foundational rules. Namely,
We have (so far) two ways of constructing types - inductive types and function types.
Terms can be constructed by applying a function $f: A \to B$ to a term $a : A$. We make the judgment $f(a) : B$.
Terms of function types are constructed using lambdas. A $\lambda$-term of type $A \to B$ is of the form $\lambda a \mapsto b$, where $b$ is an expression of type $B$ which can involve the variable (term)
$a : A$ and is otherwise built using the usual rules for forming terms.
An inductive type $W$ is specified by specifying the types of constructors. The type of a constructor is built from $W$ in certain specified ways.
When defining an inductive type, we define constructors as terms of the specified type.
For an inductive type $W$ and a type $X$, we obtain a recursion function $rec_{W, X}$, whose type is determined in terms of the constructors of $W$. We have certain computation rules giving definitional equalities for the action of a recursively defined function on the image of a constructor.
We need a few more rules of a similar nature, mainly concerned with extending rules involving functions to so called dependent functions. Our next goal is to introduce dependent functions.
Exercise
Define the recursion function $recBool$ on Booleans.
Define the recursion function $recList(A)$ as a function of $A : Type$.
Define $not$, $\_ \& \_$ and $\_contains\_$ in terms of the above recursion functions without using any pattern matching.
We now give a program (i.e., a function) that, given a natural number $n$, lists all points in the plane with natural number co-ordinates that are contained in the circle of radius $n$. This illustrates how we can implement a combination of nested loops and conditions using map, flatMap and filter.
We first define a function $leq$, denoting less than or equal to, for natural numbers. We have reserved $\leq$ for something nicer. This is a recursive definition.
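A sketch of this recursive definition (the exact form is an assumption):

```agda
-- leq m n is true exactly when m is at most n.
leq : ℕ → ℕ → Bool
leq zero n            = true
leq (succ m) zero     = false
leq (succ m) (succ n) = leq m n
```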
We can now construct the function we promised. For each $a \in \mathbb{N}$, we define a function associating to $a$ the list obtained by filtering those $b \in \mathbb{N}$ for which the pair $(a, b)$ lies in the circle, and then mapping each such $b$ to the pair $(a, b)$. We flatMap using this list-valued function.
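A sketch of such a definition; the helpers `upto` (listing the numbers from $0$ to $n$), `map`, `flatMap` and `filter` on lists, as well as the name `inCircle`, are assumptions for illustration:

```agda
-- Points (a , b) with natural number coordinates inside the circle of
-- radius n, assuming helpers upto, map, flatMap and filter on lists.
inCircle : ℕ → List (ℕ × ℕ)
inCircle n =
  flatMap
    (λ a → map (λ b → (a , b))
               (filter (λ b → leq (a * a + b * b) (n * n)) (upto n)))
    (upto n)
```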
We have used map, flatMap and filter to replace nested loops and control statements (if in this case) in older programming languages. In imperative code we would have started with an empty list, used a loop for $a$, an inner one for $b$ and added to the list pairs that satisfy the condition of lying in the circle.