One of the themes of this book is predictability
[Predictability as a Theme]. One of our key tools in reasoning about
program behavior before we run it is the static checking of
types. For example, when we write
x :: Number, we mean that x will always hold a
Number, and that all parts of the program that depend on
x can rely on this statement being enforced. As we will see,
types are just one point in a spectrum of invariants we might wish to
state. In this chapter, we will focus especially on static type checking: that is, checking (declared) types before the program even executes. This is an extremely rich and active subject; for further study, we strongly recommend reading Pierce’s Types and Programming Languages. We will explore some of the design space of types and their trade-offs. Finally, though static typing is an especially powerful and important form of invariant enforcement, we will also examine some other techniques that we have available [REF].
fun f(n :: Number) -> Number: n + 3 end
f("x")
fun f(n): n + 3 end
f("x")
How would you test the assertions that one fails before the program executes while the other fails during execution?
fun f(n): n + 3 end
We will begin by introducing a traditional core language of types. Later, we will explore both extensions [REF] and variations [REF].
The essence of any typing mechanism is usually the principle of substitutability: two types A and B “match” when values of one can be used in place of values of the other. Therefore, the design of a type system implicitly forces us to consider when such substitutions are safe (in the sense given by The Central Theorem: Type Soundness).
Of course, the simplest notion of substitutability is simply identity: a type can only be substituted with itself, and nothing else. For instance, if the declared type of a function’s parameter is String, then you can only call it with String-typed values, nothing else. This is known as invariance: the set of values that can be passed into a type cannot “vary” from the set expected by that type. This is so obvious that it might seem to hardly warrant a name! However, it is useful to name because it sets up a contrast with later type systems when we will have richer, non-trivial notions of substitutability (see Subtyping).
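For instance, here is a small Pyret illustration (greet is a hypothetical function of our own, not from the book’s language):

fun greet(name :: String) -> String:
  "hello, " + name
end

greet("world")   # accepted: a String where a String is expected
# greet(42)      # rejected: a Number cannot be substituted for a String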
Before we can define a type checker, we have to fix two things: the syntax of our typed core language and, hand-in-hand with that, the syntax of types themselves.
We’ll begin with our language with functions-as-values [Functions Anywhere]. To this language we have to add type annotations. Conventionally, we don’t impose type annotations on constants or on primitive operations such as addition, because this would be unbearably tedious; instead, we impose them on the boundaries of functions or methods. Over the course of this study we will explore why this is a good locus for annotations.
data TyExprC:
  | numC(n :: Number)
  | plusC(l :: TyExprC, r :: TyExprC)
  | multC(l :: TyExprC, r :: TyExprC)
  | trueC
  | falseC
  | ifC(c :: TyExprC, t :: TyExprC, e :: TyExprC)
  | idC(s :: String)
  | appC(f :: TyExprC, a :: TyExprC)
  | fdC(arg :: String, at :: Type, rt :: Type, body :: TyExprC)
end
Now we have to decide on a language of types. To do so, we follow the tradition that the types abstract over the set of values. In our language, we have three kinds of values. It follows that we should have three kinds of types: one each for numbers, Booleans, and functions.
What information does a number type need to record? In most
languages, there are actually many numeric types, and indeed
there may not even be a single one that represents “numbers”.
However, we have ignored these gradations between numbers
[A First Taste of “Semantics”], so it’s sufficient for us to have just one.
Having decided that, do we record additional information about
which number? We could in principle, but that would mean for
types to check, we would have to be able to decide whether two
expressions compute the same number, a problem that (as the exercise below asks you to show) is equivalent to solving the Halting Problem. Therefore, we record only that a value is a number, not which number it is.
We treat Booleans just like numbers: we ignore which Boolean it is. Here, we perhaps have more value in being precise, because there are only two values we need to track, not an infinite number. That means in some cases, we even know which branch of a conditional we will take, and can examine only that branch (though that may miss a type-error lurking in the other branch: what should we do about that?). However, even the problem of knowing precisely which Boolean we have reduces to the Halting Problem [REF].
Construct an argument for why determining which number or Boolean an arbitrary expression evaluates to is equivalent to solving the Halting Problem.
data Type:
  | numT
  | boolT
  | funT(arg :: Type, ret :: Type)
end
One or both arguments of + is not a number, i.e., does not have type numT.
One or both arguments of * is not a number.
The expression in the function position of an application is not a function, i.e., does not have type funT.
The expression in the function position of an application is a function but the type of the actual argument does not match the type of the formal argument expected by the function.
The expression in the function position of an application is a function but its return type does not match the type expected by the expression that invokes the function?
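To make the first four of these concrete, here are a few core-language terms (a sketch built from the constructors above) that the checker should reject:

plusC(numC(1), trueC)         # an argument of + is not a number
multC(trueC, numC(2))         # an argument of * is not a number
appC(numC(3), numC(4))        # function position is not a function
appC(fdC("x", numT, numT, idC("x")), trueC)
                              # actual argument does not match the declared type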
| numC(n :: Number)
| plusC(l :: TyExprC, r :: TyExprC)
| multC(l :: TyExprC, r :: TyExprC)
| trueC
| falseC
| ifC(c :: TyExprC, t :: TyExprC, e :: TyExprC)
The conditional expression must have type Boolean.
Both branches must have the same type (whatever it may be). Implicit is the idea that we can easily determine when two types are the “same”. We’ll return to this in Subtyping.
| idC(s :: String)
| appC(f :: TyExprC, a :: TyExprC)
| fdC(arg :: String, at :: Type, rt :: Type, body :: TyExprC)
The function position (f) must have a function type (funT).
The type of the actual argument expression (a) must match the argument type (.arg) of the function position.
The type of the body, assuming the formal argument (arg) has been given a value of the declared type (at), matches the type declared (rt) as the return type.
The last few cases we just saw had a very interesting structure. Did you spot it?
fun f(x :: String) -> Number:
  if x == "pi":
    3.14
  else:
    2.78
  end
end

2 + f("pi")
Similarly, when type-checking the application, having looked up the
type of f, we assume that it will indeed return a value
of type Number. We can assume this because that is the
return type annotation of f. We do assume it because
the type-checker will ensure that the body of f, under the assumption that its parameter has its declared type, really does produce a Number.
In short, the treatments of function definition and application are complementary. They are joined together by a method called assume-guarantee reasoning, whereby each side’s assumptions are guaranteed by the other side, and the two stitch together perfectly to give us the desired safe execution (which we elaborate on later: The Central Theorem: Type Soundness).
tc :: TyExprC -> Boolean
Define the types and functions associated with type environments.
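If you’d like to compare against one possible answer, here is a sketch that mirrors the interpreter’s value environments. The names tbind, xtend-t-env, and ty-lookup are the ones the checker below assumes; mt-t-env, the empty environment, is our own addition (note that this ty-lookup returns the bound type, as the final checker requires):

data TyBinding:
  | tbind(name :: String, ty :: Type)
end

type TyEnv = List<TyBinding>

mt-t-env = empty

fun xtend-t-env(b :: TyBinding, tnv :: TyEnv) -> TyEnv:
  link(b, tnv)
end

fun ty-lookup(s :: String, tnv :: TyEnv) -> Type:
  cases (List) tnv:
    | empty => raise("unbound identifier: " + s)
    | link(b, rest) =>
      if b.name == s:
        b.ty
      else:
        ty-lookup(s, rest)
      end
  end
end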
fun tc(e :: TyExprC, tnv :: TyEnv) -> Boolean:
cases (TyExprC) e:
| numC(_) => true
| idC(s) => ty-lookup(s, tnv)
This should make you a little uncomfortable: we seem to be throwing away valuable information about the type of the identifier. Of course, types do throw away information (e.g., which specific number an expression computes). However, the kind of information we’re throwing away here is much more significant: it’s not about a specific value within a type, but the type itself. Nevertheless, let’s push on. It might also bother you that, by only returning a Boolean, we have no means to express what type error occurred. But you might assuage yourself by saying that’s only because we have too weak a return type.
| appC(f, a) =>
f-t = tc(f, tnv)
a-t = tc(a, tnv)
| appC(f, a) =>
  f-t = tc(f, tnv)
  a-t = tc(a, tnv)
  if is-funT(f-t):
    if a-t == f-t.arg:
In other words, what we need is something that will calculate the type of an expression, no matter how complex it is. Of course, such a procedure could only succeed if the expression is well-typed; otherwise it would not be able to provide a coherent answer. Put differently, a type “calculator” has type “checking” as a special case!
That was subtle. Read it again.
We should therefore strengthen the inductive invariant on tc: that it not only tells us whether an expression is typed, but also what its type is. Indeed, by giving any type at all it confirms that the expression types, and otherwise it signals an error.
fun tc(e :: TyExprC, tnv :: TyEnv) -> Type:
cases (TyExprC) e:
| numC(_) => numT
| idC(s) => ty-lookup(s, tnv)
| plusC(l, r) => tc-arith-binop(l, r, tnv)
fun tc-arith-binop(l :: TyExprC, r :: TyExprC, tnv :: TyEnv) -> Type:
  if (tc(l, tnv) == numT) and (tc(r, tnv) == numT):
    numT
  else:
    raise('type error in arithmetic')
  end
end
| multC(l, r) => tc-arith-binop(l, r, tnv)
Did you see what’s different?
That’s right: nothing! That’s because, from the perspective of type-checking (in this type language), there is no difference between addition and multiplication, or indeed between any two operations that consume two numbers and return one. Because we are ignoring the actual numbers, we don’t even need to bother passing tc-arith-binop a function that reflects what to do with the pair of numbers.
Observe another difference between interpreting and type-checking. Both care that the arguments be numbers. The interpreter then returns a precise sum or product, but the type-checker is indifferent to the differences between them: therefore the expression that computes what it returns (numT) is a constant, and the same constant in both cases.
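As a quick sanity check (a sketch, using the mt-t-env from the environment sketch above):

check:
  tc(plusC(numC(1), numC(2)), mt-t-env) is numT
  tc(multC(numC(2), numC(3)), mt-t-env) is numT
end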
| trueC => boolT
| falseC => boolT
| ifC(cnd, thn, els) =>
  cnd-t = tc(cnd, tnv)
  if cnd-t == boolT:
    thn-t = tc(thn, tnv)
    els-t = tc(els, tnv)
    if thn-t == els-t:
      thn-t
    else:
      raise("conditional branches don't match")
    end
  else:
    raise("conditional isn't Boolean")
  end
Consider each of the three earlier decisions. Change each one, and explain the consequences it has for the type-checker.
| appC(f, a) =>
  f-t = tc(f, tnv)
  a-t = tc(a, tnv)
  if is-funT(f-t):
    if a-t == f-t.arg:
      f-t.ret
    else:
      raise("argument type doesn't match declared type")
    end
  else:
    raise("not a function in application position")
  end
| fdC(a, at, rt, b) =>
  bt = tc(b, xtend-t-env(tbind(a, at), tnv))
  if bt == rt:
    funT(at, rt)
  else:
    raise("body type doesn't match declared type")
  end
When confronted with a first-class function, our interpreter created a closure. However, we don’t seem to have any notion of a “closure” in our type-checker, even though we’re using a (type) environment. Why not? In particular, recall that the absence of closures resulted in violation of static scope. Is that happening here? Write some tests to investigate.
Observe a curious difference between the interpreter and type-checker. In the interpreter, application was responsible for evaluating the argument expression, extending the environment, and evaluating the body. Here, the application case does check the argument expression, but leaves the environment alone, and simply returns the type of the body without traversing it. Instead, the body is actually traversed by the checker when checking a function definition, so this is the point at which the environment actually extends.
Why is the time of traversal different between interpretation and type-checking?
p = lam(x :: Number) -> (Number -> Number):
  lam(y :: Number) -> Number:
    x + y
  end
end

When we simply define p, the interpreter does not traverse the interior of these expressions, in particular the x + y. Instead, these are suspended waiting for later use (a feature we actually exploit [REF laziness]). Furthermore, when we apply p to some argument, this evaluates the outer function, resulting in a closure (that closes over the binding of x).
Now instead consider the type-checker. As soon as we are given this definition, it traverses the entire expression, including the innermost sub-expression. Because it knows everything it needs to know about x and y (their types), it can immediately type-check the entire expression. This is why it does not need to create a closure: there is nothing to be put off until application time (indeed, we don’t want to put type-checking off until execution).
Another way to think about it is that the type-checker behaves like substitution does (and substitution did not need closures to provide static scoping, either), but even more eagerly: it can perform substitution with just the program text, without any values at all, because it is substituting types, which are already given. The fact that we use a type environment makes this harder to see, because we may have come to associate environments with closures. However, what matters is when the necessary value is available. Put differently, we used an environment primarily out of convention: here, we could have used (type) substitution just as well.
Write examples to study this. Consider converting the above example as a starting point. Also convert your examples from earlier.
- Consider the following expression:
lam(f :: (Number -> String), n :: Number) -> String:
  f(n)
end

When evaluating the inner f(n), the interpreter has access to actual values for f and n. In contrast, when type-checking it, it does not know which function will be passed in as f. How, then, can it type-check the use?
The answer is that the annotation tells the type-checker everything it needs to know. The annotation says that f must accept numbers; since n is annotated to be a number, the application works. It also says that f will return strings; because that is what the overall function returns, this also passes.
In other words, the annotation (Number -> String) represents not one but an infinite family of all functions of that type, without committing to any one of them. The type checker then checks that any such function will work in this setting. Once it has done its job, it doesn’t matter which function we actually pass in, provided it has this type. Checking that is, of course, the heart of Assume-Guarantee Reasoning.
Instead of using concrete values, it uses only types. Therefore, it cannot check fine gradations inside values.
In return, it works statically: that is, it’s like running a lightweight testing procedure before ever running the program. (We should not underestimate the value of this: programs that depend on interactive or other external input, on specialized hardware, on timing, and so on, can be quite difficult to test. For such programs, especially, obtaining a lightweight form of testing that does not require being able to run it at all is invaluable.)
Testing only covers the parts of a program that are exercised by test cases. In contrast, the type-checker exercises the whole program. Therefore, it can catch lurking errors. Of course, it also means that the entire program has to be type-conformant: you can’t have some parts (e.g., conditional branches) that are not yet conformant, the way code that fails to work correctly can still be ignored by tests that don’t exercise it.
Finally, types provide another very important property: quantification. Recall our earlier example: the type checker has established something about an infinite number of functions!
Now that we’ve obtained a basic programming language, let’s add recursion to it. We saw earlier [Recursion and Non-Termination] that this could be done quite easily. It’ll prove to be a more complex story here.
Let’s now try to express a simple recursive function. We’ve already seen how to write infinite loops for first-order functions. Annotating them introduces no complications.
Confirm that adding types to recursive and non-terminating first-order functions causes no additional problems.
(lam(x): x(x) end)(lam(x): x(x) end)
Recall that this program is formed by applying ω to itself. Of
course, it is not a given that identical terms must have precisely the
same type, because it depends on the context of use. However, the
specific structure of ω means that it is the same term that
ends up in both contexts: as the function being applied and as its argument. Thus the same type must serve for both uses.
Therefore, let’s try to type ω; let’s call this type T. It’s clearly a function type, and the function takes one argument, so it must be of the form A -> B. Now what is that argument? It’s ω itself. That is, the type of the value going into A is itself T. Thus, the type of ω is T, which is A -> B, which is the same as T -> B. This expands into (A -> B) -> B, which is the same as (T -> B) -> B. Therefore, this further expands to ((A -> B) -> B) -> B, and so on. In other words, this type cannot be written as any finite string!
Did you notice the subtle but important leap we just made?
We have just argued that we can’t type ω. But why does it follow that we can’t type Ω?
Because type-checking follows by recurring on sub-terms, to type Ω, we have to be able to type ω and then combine its type to obtain one for Ω. But, as we’ve seen, typing ω seems to run into serious problems. From that, however, we jumped to the conclusion that ω’s type cannot be written as any finite string, for which we’ve given only an intuition, not a proof. In fact, something even stranger is true: in the type system we’ve defined so far, we cannot type Ω at all!
This is a strong statement, but it follows from something even
stronger. The typed language we have so far has a property
called strong normalization: every expression that has a type
will terminate computation after a finite number of steps. In other
words, this special (and peculiar) infinite loop program isn’t the
only one we can’t type; we can’t type any infinite loop (or
even potential infinite loop). A rough intuition that might help is
that any type can be written with only a finite number of ->’s, and each function application discharges one of them, so a typed program can perform only finitely many applications before it must stop.
Why is this not true when we have named first-order functions?
If our language permitted only straight-line programs, this would be unsurprising. However, we have conditionals and even functions being passed around as values, and with those we can encode almost every program we’ve written so far. Yet, we still get this guarantee! That makes this a somewhat astonishing result.
Try to encode lists using functions in the untyped and then in the typed language (see [REF] if you aren’t sure how). What do you see? And what does that tell you about the impact of this type system on the encoding?
This result also says something deeper. It shows that, contrary to
what you may believe, the ability to write an infinite loop is not essential to a useful programming language. Indeed, a guarantee of termination is precisely what we would want in many settings, such as:
A complex scheduling algorithm (the guarantee would ensure that the scheduler completes and that the tasks being scheduled will actually run).
A packet-filter in a router. (Network elements that go into infinite loops put a crimp on utility.)
A compiler. (The program it generates may or may not terminate, but it ought to at least finish generating the program.)
A device initializer. (Modern electronics, such as smartphones and photocopiers, have complex initialization routines. These have to finish so that the device can actually be put to use.)
An event handler in a Web page. (Such handlers must terminate, because otherwise the rest of the page (or even browser) becomes unresponsive.)
A configuration system, such as a build system or a linker. In Standard ML, the linker uses essentially this language for module linking specifications.
Notice also an important difference between types and tests [Type-Checking, Testing, and Coverage]: you can’t test for termination!
(rec (S num (n num)
       (n + (S (n + -1))))
  (S 10))
How do we type such an expression? Clearly, we must have n bound in the body of the function as we type it (but not, of course, in the use of the function, due to static scope); this much we know from typing functions. But what about S? Obviously it must be bound in the type environment when checking the use (S 10), and its type must be num -> num. But it must also be bound, to the same type, when checking the body of the function. (Observe, too, that the type returned by the body must match its declared return type.)
Now we can see how to break the shackles of the finiteness of the type. It is certainly true that we can write only a finite number of ->’s in types in the program source. However, this rule for typing recursion duplicates the -> in the body that refers to itself, thereby ensuring that there is an inexhaustible supply of applications. It’s our infinite quiver of arrows.
| recC(f, v, at, rt, b, c) =>
  extended-env = xtend-t-env(tbind(f, funT(at, rt)), tnv)
  if not(rt == tc(b, xtend-t-env(tbind(v, at), extended-env))):
    raise("rec: function return type not correct")
  else:
    tc(c, extended-env)
  end
We have seen how to type recursive programs, but this doesn’t yet
enable us to create recursive data. We already have one kind of recursive datum, the function type, but it is built in; we have not yet seen how programmers can define their own recursive data. Doing so involves:
Creating a new type.
Letting instances of the new type have one or more fields.
Letting some of these fields refer to instances of the same type.
Allowing non-recursive base-cases for the type.
data BinTree:
  | leaf
  | node(value :: Number, left :: BinTree, right :: BinTree)
end
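For example, the new constructors let us build a small tree:

t = node(5, node(3, leaf, leaf), leaf)
is-node(t)      # true
t.left.value    # 3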
This style of data definition is sometimes also known as a sum of products. At the outer level, the datatype offers a set of choices (a value can be a leaf or a node). This corresponds to disjunction (“or”), which is sometimes written as a sum (the truth table is suggestive). Inside each sum is a set of fields, all of which must be present. These correspond to a conjunction (“and”), which is sometimes written as a product (ditto).
That covers the notation, but we have not explained where this new
type, BinTree, comes from. It is obviously impractical to
pretend that it is baked into our type-checker, because we can’t keep
changing it for each new recursive type definition. Instead, the data definition itself must introduce the new type, along with a collection of new, typed names:
leaf :: BinTree  # a constant, so no arrow
node :: Number, BinTree, BinTree -> BinTree
is-leaf :: BinTree -> Boolean
is-node :: BinTree -> Boolean
.value :: BinTree%(is-node) -> Number
.left :: BinTree%(is-node) -> BinTree
.right :: BinTree%(is-node) -> BinTree
In what two ways are the last three entries above fictitious?
Both constructors create instances of BinTree, not something more refined. We will discuss this design tradeoff later [REF].
Both predicates consume values of type BinTree, not “any” value. This is because the type system can already tell us what type a value is. Thus, we only need to distinguish between the variants of that one type.
The selectors really only work on instances of the relevant variant (e.g., .value can work only on instances of node, not on instances of leaf), but we don’t have a way to express this in the static type system for lack of a suitable static type. Thus, applying these can only result in a dynamic error, not a static one caught by the type system.
.value, .left, and .right are selectors: they select parts of the record by name. But here are the two ways in which they are fictitious. First, syntactically: in most languages with “dotted field access”, there is no such stand-alone operator as .value: e.g., you cannot write .value(...). But even setting aside this syntactic matter (which could be addressed by arguing that writing v.value is just an obscure syntax for applying this operator), the more interesting subtlety is the semantic one.
data Payment:
  | cash(value :: Number)
  | card(number :: Number, value :: Number)
end
.value :: Payment%(is-cash) -> Number
.value :: Payment%(is-card) -> Number
.value :: Payment -> Number
A characteristic of scripting languages is that objects are merely hash tables, and all field access is turned into a hash-table reference on the string representing the field-name. Hence, o.f is just syntactic sugar for looking up the value indexed by "f" in the dictionary associated with o.
- In Racket, the structure definitions such as
(struct cash (value))
(struct card (number value))

generate distinct selectors: in this case, cash-value and card-value, respectively. Now there is no longer any potential for confusion, because they have different names that can each have distinct types.
(define (->value v)
  (cond
    [(node? v) (node-value v)]
    [(cash? v) (cash-value v)]
    [(card? v) (card-value v)]))
cases (BinTree) t:
  | leaf => e1
  | node(v, l, r) => e2
end
if is-leaf(t):
  e1
else if is-node(t):
  v = t.value
  l = t.left
  r = t.right
  e2
end
Except, that’s not quite so easy. Somehow, the desugaring that generates the code above in terms of if needs to know that the three positional selectors for a node are value, left, and right, respectively. This information is explicit in the type definition but only implicitly present in the use of the pattern-matcher (that, indeed, being the point). Somehow this information must be communicated from definition to use. Thus, the desugarer needs something akin to the type environment to accomplish its task.
Observe, furthermore, that expressions such as e1 and e2
cannot be type-checked until the bindings for v, l, and r are in place, which again requires the desugarer to know the names and types of the fields of node.