### 20 Checking Program Invariants Statically: Types

As programs grow larger or more subtle, developers need tools to help them describe and validate statements about program invariants. For example, when we write x :: Number, we mean that x will always hold a Number, and that all parts of the program that depend on x can rely on this statement being enforced. As we will see, types are just one point in a spectrum of invariants we might wish to state, and static type checking—

#### 20.1 Types as a Static Discipline

In this chapter, we will focus especially on static type checking: that is, checking (declared) types before the program even executes. (This is an extremely rich subject of modern investigation; for further study, I strongly recommend reading Pierce’s Types and Programming Languages.) We have already experienced a form of this in our programs by virtue of using a typed programming language. We will explore some of the design space of types and their trade-offs. Finally, though static typing is an especially powerful and important form of invariant enforcement, we will also examine some other techniques that we have available. Consider these two programs, which differ only in the annotations on f:

```
fun f(n :: Number) -> Number:
  n + 3
end

f("x")
```

```
fun f(n):
  n + 3
end

f("x")
```

How would you test the assertions that one fails before the program executes while the other fails during execution?
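The same contrast can be sketched in Python (names here are illustrative): the annotated variant can be rejected by an external static checker such as mypy before the program runs, while the unannotated variant fails only when the bad call is actually executed, because Python itself checks types dynamically.

```python
def f_typed(n: int) -> int:
    # A static checker (e.g., mypy) flags f_typed("x") before execution,
    # because "x" is not an int.
    return n + 3

def f_untyped(n):
    # No annotation: the error can only surface at run time.
    return n + 3

def run_untyped_on_string():
    # Demonstrates the dynamic failure mode of the unannotated version.
    try:
        f_untyped("x")
        return "no error"
    except TypeError:
        return "run-time error"
```

Note that Python does not enforce annotations at run time, so the annotated version fails dynamically too; the static rejection comes only from the external checker.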

```
fun f n:
  n + 3
end
```

#### 20.2 A Classical View of Types

We will begin by introducing a traditional core language of types. Later, we will explore both extensions [REF] and variations [REF].

##### 20.2.1 A Simple Type Checker

Before we can define a type checker, we have to fix two things: the syntax of our typed core language and, hand-in-hand with that, the syntax of types themselves.

To begin with, we’ll return to our language with functions-as-values (Functions Anywhere), before we added mutation and other complications (some of which we’ll return to later). To this language we have to add type annotations. Conventionally, we don’t impose type annotations on constants or on primitive operations such as addition; instead, we impose them on the boundaries of functions or methods. Over the course of this study we will explore why this is a good locus for annotations.

```
data TyExprC:
  | numC(n :: Number)
  | plusC(l :: TyExprC, r :: TyExprC)
  | multC(l :: TyExprC, r :: TyExprC)
  | idC(s :: String)
  | appC(f :: TyExprC, a :: TyExprC)
  | fdC(arg :: String, at :: Type, rt :: Type, body :: TyExprC)
end
```

```
data Value:
  | numV(n :: Number)
  | closV(f :: TyExprC(is-fdC), e :: List<Binding>)
end
```

What information does a number type need to record? In most
languages, there are actually many numeric types, and indeed
there may not even be a single one that represents “numbers”.
However, we have ignored these gradations between numbers
(Changing Representations), so it’s sufficient for us to have just one.
Having decided that, do we record additional information about
which number? We could in principle, but that would mean for
types to check, we would have to be able to decide whether two
expressions compute the same number—

```
data Type:
  | numT
  | funT(arg :: Type, ret :: Type)
end
```

- One or both arguments of + is not a number, i.e., is not a numT.
- One or both arguments of * is not a number.
- The expression in the function position of an application is not a function, i.e., is not a funT.

Any more?

- The expression in the function position of an application is a function, but the type of the actual argument does not match the type of the formal argument expected by the function.

Any more?

- The expression in the function position of an application is a function, but its return type does not match the type expected by the expression that invokes the function?

A natural starting point would be for the type-checker to be a procedure that consumes an expression and returns a boolean value indicating whether or not the expression type-checked. Because we know expressions contain identifiers, it soon becomes clear that we will want a type environment, which maps names to types, analogous to the value environment we have seen so far.

Define the types and functions associated with type environments.
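One possible realization, sketched in Python rather than the chapter's Pyret; TyBinding, xtnd_ty_env, and ty_lookup transliterate the names the chapter's code uses.

```python
from dataclasses import dataclass

@dataclass
class TyBinding:
    name: str
    type: object  # a type value such as "numT" or a funT-like pair

def xtnd_ty_env(binding, tnv):
    # Extend by consing onto the front; shadowing falls out of lookup order.
    return [binding] + tnv

def ty_lookup(name, tnv):
    # Return the type of the first (innermost) binding for name.
    for b in tnv:
        if b.name == name:
            return b.type
    raise Exception("unbound identifier: " + name)
```

Representing the environment as a list (rather than a mutable dictionary) mirrors the value environments used earlier: extension is non-destructive, so checking a function body cannot pollute the enclosing scope.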

```
fun tc(e :: TyExprC, tnv :: List<TyBinding>) -> Bool:
  cases (TyExprC) e:
    | numC(n) => true
    | idC(s) =>
      if ty-lookup(s, tnv):
        true
      else:
        raise("unbound identifier: " + s);
    | appC(f, a) =>
      ft = tc(f, tnv)
      ...
  end
end
```

We should therefore strengthen the inductive invariant on tc: that it not only tells us whether an expression is typed, but also what its type is. Indeed, by giving any type at all it confirms that the expression types, and otherwise it signals an error.

```
fun tc(e :: TyExprC, tnv :: List<TyBinding>) -> Type:
  cases (TyExprC) e:
    | numC(n) => numT
    | idC(s) => ty-lookup(s, tnv)
    | plusC(l, r) =>
      lt = tc(l, tnv)
      rt = tc(r, tnv)
      if is-numT(lt) and is-numT(rt):
        numT
      else:
        raise("addition: must both be numbers");
    | multC(l, r) =>
      lt = tc(l, tnv)
      rt = tc(r, tnv)
      if is-numT(lt) and is-numT(rt):
        numT
      else:
        raise("multiplication: must both be numbers");
```

Did you see what’s different?

That’s right: practically nothing! That’s because, from the perspective of type-checking (in this type language), there is no difference between addition and multiplication, or indeed between any two operations that consume two numbers and return one.

Observe another difference between interpreting and type-checking. Both care that the arguments be numbers. The interpreter then returns a precise sum or product, but the type-checker is indifferent to the differences between them: therefore the expression that computes what it returns (numT) is a constant, and the same constant in both cases.

```
    | appC(f, a) =>
      ft = tc(f, tnv)
      at = tc(a, tnv)
      if not(is-funT(ft)):
        raise("application: must be a function")
      else if not(ft.arg == at):
        raise("application: argument types must match")
      else:
        ft.ret;
```

That leaves function definitions. The function has a formal parameter; unless this is bound in the type environment, any use of that parameter in the body would result in a type error. Thus we have to extend the type environment with the formal name bound to its type, and in that extended environment type-check the body. Whatever type this computes must be the same as the declared type of the body. If that is so, then the function itself has a function type from the type of the argument to the type of the body.

```
    | fdC(a, at, rt, b) =>
      body-type = tc(b, xtnd-ty-env(tybind(a, at), tnv))
      if body-type == rt:
        funT(at, rt)
      else:
        raise("function: body does not match declared return type");
```
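The whole checker can be condensed into a runnable Python sketch; the constructor and helper names mirror the Pyret above, but this is an illustration, not the chapter's code.

```python
from dataclasses import dataclass

# --- Types ---
@dataclass(frozen=True)
class NumT:
    pass

@dataclass(frozen=True)
class FunT:
    arg: object
    ret: object

# --- Expressions (mirroring TyExprC) ---
@dataclass
class NumC:
    n: float

@dataclass
class PlusC:
    l: object
    r: object

@dataclass
class MultC:
    l: object
    r: object

@dataclass
class IdC:
    s: str

@dataclass
class AppC:
    f: object
    a: object

@dataclass
class FdC:
    arg: str
    at: object
    rt: object
    body: object

def tc(e, tnv):
    # tnv is a list of (name, type) pairs.
    if isinstance(e, NumC):
        return NumT()
    if isinstance(e, IdC):
        for (name, t) in tnv:
            if name == e.s:
                return t
        raise Exception("unbound identifier: " + e.s)
    if isinstance(e, (PlusC, MultC)):
        # Addition and multiplication are indistinguishable to the checker.
        lt, rt = tc(e.l, tnv), tc(e.r, tnv)
        if isinstance(lt, NumT) and isinstance(rt, NumT):
            return NumT()
        raise Exception("arithmetic: must both be numbers")
    if isinstance(e, AppC):
        ft, at = tc(e.f, tnv), tc(e.a, tnv)
        if not isinstance(ft, FunT):
            raise Exception("application: must be a function")
        if ft.arg != at:
            raise Exception("application: argument types must match")
        return ft.ret
    if isinstance(e, FdC):
        body_type = tc(e.body, [(e.arg, e.at)] + tnv)
        if body_type == e.rt:
            return FunT(e.at, e.rt)
        raise Exception("function: body does not match declared return type")
    raise Exception("unknown expression")
```

Note how the application case returns ft.ret without ever looking at the function's body: the body was already checked, once, where the fdC was defined.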

Observe another curious difference between the interpreter and type-checker. In the interpreter, application was responsible for evaluating the argument expression, extending the environment, and evaluating the body. Here, the application case does check the argument expression, but leaves the environment alone, and simply returns the type of the body without traversing it. Instead, the body is actually traversed by the checker when checking a function definition, so this is the point at which the environment actually extends.

Why is the time of traversal different between interpretation and type-checking?

##### 20.2.2 Type-Checking Conditionals

Suppose we extend the above language with conditionals. Even the humble if introduces several design decisions. We’ll discuss two here, and return to one of them later [REF].

What should be the type of the test expression? In some languages it must evaluate to a boolean value, in which case we have to enrich the type language to include booleans (which would probably be a good idea anyway). In other languages it can be any value, and some values are considered “truthy” while others “falsy”.

What should be the relationship between the then- and else-branches? In some languages they must be of the same type, so that there is a single, unambiguous type for the overall expression (which is that one type). In other languages the two branches can have distinct types, which greatly changes the design of the type-language and -checker, but also of the nature of the programming language itself.

Add booleans to the type language. What does this entail at a minimum, and what else might be expected in a typical language?

Add a type rule for conditionals, where the test expression is expected to evaluate to a boolean and both then- and else-branches must have the same type, which is the type of the overall expression.
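One way the two exercises might be discharged, sketched in Python with illustrative constructor names (BoolT, TrueC, IfC): the test must have boolean type, and the two branches must agree, their common type becoming the type of the whole conditional.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NumT:
    pass

@dataclass(frozen=True)
class BoolT:
    pass

@dataclass
class TrueC:
    pass

@dataclass
class NumC:
    n: float

@dataclass
class IfC:
    test: object
    then: object
    els: object

def tc(e, tnv):
    if isinstance(e, NumC):
        return NumT()
    if isinstance(e, TrueC):
        return BoolT()
    if isinstance(e, IfC):
        if tc(e.test, tnv) != BoolT():
            raise Exception("if: test must be a boolean")
        tt = tc(e.then, tnv)
        et = tc(e.els, tnv)
        if tt != et:
            raise Exception("if: branches must have the same type")
        return tt  # the single, unambiguous type of the whole expression
    raise Exception("unknown expression")
```

At a minimum, adding booleans means a new type (BoolT) and constants that inhabit it; a typical language would also add comparison and boolean operators that produce and consume it.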

##### 20.2.3 Recursion in Code

Now that we’ve obtained a basic programming language, let’s add recursion to it. We saw earlier [REF] that this could be done easily through desugaring. It’ll prove to be a more complex story here.

##### 20.2.3.1 A First Attempt at Typing Recursion

```
(fun(x): x(x) end)(fun(x): x(x) end)
```

Why does this construct an infinite loop? What subtle dependency is it making about the nature of function calls?

Now that we have a typed language, and one that forces us to annotate all functions, let’s annotate it. For simplicity, from now on we’ll assume we’re writing programs in a typed surface syntax, and that desugaring takes care of constructing core language terms.

Observe, first, that we have two identical terms being applied to each other. Historically, the overall term is called Ω (capital omega in Greek) and each of the identical sub-terms is called ω (lower-case omega in Greek). It is not a given that identical terms must have precisely the same type, because it depends on what invariants we want to assert of the context of use. In this case, however, observe that x binds to ω, so the second ω goes into both the first and second positions. As a result, typing one effectively types both.

Therefore, let’s try to type ω; let’s call this type γ. It’s clearly a function type, and the function takes one argument, so it must be of the form φ -> ψ. Now what is that argument? It’s ω itself. That is, the type of the value going into φ is itself γ. Thus, the type of ω is γ, which is φ -> ψ, which expands into (φ -> ψ) -> ψ, which further expands to ((φ -> ψ) -> ψ) -> ψ, and so on. In other words, this type cannot be written as any finite string!
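The impossibility the text gestures at is what a checker detects mechanically with an occurs check: before accepting a solution to an equation like γ = γ -> ψ, it verifies that the variable being solved for does not occur inside the right-hand side. A minimal Python sketch (arrow types represented as (argument, result) pairs; all names illustrative):

```python
def occurs_in(var, ty):
    # ty is either a type-variable name (a string) or an (arg, ret) pair.
    if isinstance(ty, str):
        return ty == var
    arg, ret = ty
    return occurs_in(var, arg) or occurs_in(var, ret)

def solve(var, ty):
    # Try to solve var = ty; refuse if var occurs inside ty, since the
    # only "solution" would be an infinite type.
    if ty != var and occurs_in(var, ty):
        raise Exception("no finite type: " + var + " occurs in its own definition")
    return (var, ty)
```

Here solving γ = ψ succeeds, while γ = (γ -> ψ) is rejected: exactly the expansion (φ -> ψ) -> ψ -> ... that can never be written down.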

Did you notice the subtle but important leap we just made?

##### 20.2.3.2 Program Termination

We observed that the obvious typing of Ω, which entails typing γ, seems to run into serious problems. From that, however, we jumped to the conclusion that this type cannot be written as any finite string, for which we’ve given only an intuition, not a proof. In fact, something even stranger is true: in the type system we’ve defined so far, we cannot type Ω at all!

This is a strong statement, but we can actually say something much
stronger. The typed language we have so far has a property
called strong normalization: every expression that has a type
will terminate computation after a finite number of steps. In other
words, this special (and peculiar) infinite loop program isn’t the
only one we can’t type; we can’t type any infinite loop (or
even potential infinite loop). A rough intuition that might help is
that any type—

If our language permitted only straight-line programs, this would be unsurprising. However, we have conditionals and even functions being passed around as values, and with those we can encode almost every program we’ve written so far. Yet, we still get this guarantee! That makes this a somewhat astonishing result.

Try to encode lists using functions in the untyped and then in the typed language. What do you see? And what does that tell you about the impact of this type system on the encoding?

This result also says something deeper. It shows that, contrary to
what you may believe—

What possible use is a language in which all programs terminate? For
general-purpose programming, none, of course. But in many specialized
domains, it’s a tremendously useful guarantee to have. For instance,
suppose you are implementing a complex scheduling algorithm; you would
like to know that your scheduler is guaranteed to terminate so that
the tasks being scheduled will actually run. There are many other
domains, too, where we would benefit from such a guarantee: a
packet-filter in a router; a real-time event processor; a device
initializer; a configuration file; the callbacks in single-threaded
JavaScript; and even a compiler or linker. In each case, we have an
almost unstated expectation that these programs will always terminate.
And now we have a language that can offer this guarantee—

##### 20.2.3.3 Typing Recursion

```
(rec (Σ num (n num)
       (if0 n
            0
            (n + (Σ (n + -1)))))
  (Σ 10))
```

How do we type such an expression? Clearly, we must have n bound in the body of the function as we type it (but not of course, in the use of the function, due to static scope); this much we know from typing functions. But what about Σ? Obviously it must be bound in the type environment when checking the use ((Σ 10)), and its type must be num -> num. But it must also be bound, to the same type, when checking the body of the function. (Observe, too, that the type returned by the body must match its declared return type.)

Now we can see how to break the shackles of the finiteness of the type. It is certainly true that we can write only a finite number of ->’s in types in the program source. However, this rule for typing recursion duplicates the -> in the body that refers to itself, thereby ensuring that there is an inexhaustible supply of applications. (It’s our infinite quiver of arrows.)

```
    | recC(f, v, at, rt, b, c) =>
      extended-env = xtnd-ty-env(tybind(f, funT(at, rt)), tnv)
      if not(rt == tc(b, xtnd-ty-env(tybind(v, at), extended-env))):
        raise("rec: function return type not correct")
      else:
        tc(c, extended-env);
```
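The same rule can be rendered as a runnable Python sketch, with types as strings or ("funT", arg, ret) tuples and just enough expression forms to type the summation example. All names are illustrative; the point is that the function's own declared arrow type is added to the environment before either the body or the use is checked.

```python
from dataclasses import dataclass

@dataclass
class RecC:
    f: str        # the function's name
    v: str        # the parameter name
    at: str       # declared argument type, e.g. "numT"
    rt: str       # declared return type
    body: object
    c: object     # the expression in which the function is used

def tc(e, tnv):
    # Numbers type as "numT"; bare strings are identifiers.
    if isinstance(e, (int, float)):
        return "numT"
    if isinstance(e, str):
        for (name, t) in tnv:
            if name == e:
                return t
        raise Exception("unbound identifier: " + e)
    if isinstance(e, tuple) and e[0] == "plus":   # ("plus", l, r)
        if tc(e[1], tnv) == "numT" and tc(e[2], tnv) == "numT":
            return "numT"
        raise Exception("addition: must both be numbers")
    if isinstance(e, tuple) and e[0] == "app":    # ("app", f, a)
        ft, at = tc(e[1], tnv), tc(e[2], tnv)
        if not (isinstance(ft, tuple) and ft[0] == "funT"):
            raise Exception("application: must be a function")
        if ft[1] != at:
            raise Exception("application: argument types must match")
        return ft[2]
    if isinstance(e, RecC):
        # Bind f to its declared arrow type for BOTH the body and the use.
        extended = [(e.f, ("funT", e.at, e.rt))] + tnv
        if e.rt != tc(e.body, [(e.v, e.at)] + extended):
            raise Exception("rec: function return type not correct")
        return tc(e.c, extended)
    raise Exception("unknown expression")
```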

##### 20.2.4 Recursion in Data

We have seen how to type recursive programs, but this doesn’t yet
enable us to create recursive data. We already have one kind of
recursive datum—

##### 20.2.4.1 Recursive Datatype Definitions

- Creating a new type.
- Letting instances of the new type have one or more fields.
- Letting some of these fields refer to instances of the same type.
- Allowing non-recursive base-cases for the type.

```
data BinTree:
  | leaf
  | node(value :: Number,
         left :: BinTree,
         right :: BinTree)
end
```

This concludes our initial presentation of recursive types, but we
have not actually explained where this new type, BinTree, comes
from. It is obviously impractical to pretend that it is baked into
our type-checker, because we can’t keep changing it for each new
recursive type definition—

This style of data definition is sometimes also known as a sum of products. “Product” refers to the way fields combine in one variant: for instance, the legal values of a node are the cross-product of legal values in each of the fields, supplied to the node constructor. The “sum” is the aggregate of all these variants: any given BinTree value is just one of these. (Think of “product” as being “and”, and “sum” as being “or”.)
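The sum-of-products reading can be made concrete in Python, using dataclasses in place of Pyret's data: each variant is a product of its fields, and BinTree is the sum ("or") of the two variants. This is only a sketch of the idea, not the chapter's code.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Leaf:
    pass

@dataclass
class Node:
    # The product: a legal Node is any (value, left, right) combination.
    value: float
    left: "BinTree"
    right: "BinTree"

# The sum: any given BinTree value is a Leaf OR a Node.
BinTree = Union[Leaf, Node]

def tree_sum(t: BinTree) -> float:
    # Structural recursion follows the shape of the definition.
    if isinstance(t, Leaf):
        return 0
    return t.value + tree_sum(t.left) + tree_sum(t.right)
```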

##### 20.2.4.2 Introduced Types

```
leaf :: BinTree  # a constant

node :: Number, BinTree, BinTree -> BinTree

is-leaf :: BinTree -> Bool
is-node :: BinTree -> Bool

.value :: BinTree(is-node) -> Number
.left :: BinTree(is-node) -> BinTree
.right :: BinTree(is-node) -> BinTree
```

In what two ways are the last three entries above fictitious?

- Both the constructors create instances of BinTree, not something more refined. We will discuss this design tradeoff later [REF].
- Both predicates consume values of type BinTree, not “any” value. This is because the type system can already tell us what type a value is; thus, we only need to distinguish between the variants of that one type.
- The selectors really only work on instances of the relevant variant—e.g., .value can work only on instances of node, not on instances of leaf—but we don’t have a way to express this in the static type system for lack of a suitable static type. Thus, applying these can only result in a dynamic error, not a static one caught by the type system.

##### 20.2.4.3 Selectors

.value, .left, and .right are selectors: they select parts of the record by name. But here are the two ways in which they are fictitious. First, syntactically: in most languages with “dotted field access”, there is no such stand-alone operator as .value: e.g., you cannot write .value(...). But even setting aside this syntactic matter (which could be addressed by arguing that writing v.value is just an obscure syntax for applying this operator) the more interesting subtlety is the semantic one.

```
data Payment:
  | cash(value :: Number)
  | card(number :: Number, value :: Number)
end
```

```
.value :: Payment(is-cash) -> Number
.value :: Payment(is-card) -> Number

.value :: Payment -> Number
```

A characteristic of scripting languages is that objects are merely hash tables, and all field access is turned into a hash-table reference on the string representing the field-name. Hence, o.f is just syntactic sugar for looking up the value indexed by "f" in the dictionary associated with o.

- In Racket, structure definitions such as

  ```
  (struct cash (value))
  (struct card (number value))
  ```

  generate distinct selectors: in this case, cash-value and card-value, respectively. Now there is no longer any potential for confusion, because they have different names that can each have distinct types.

```
(define (->value v)
  (cond [(node? v) (node-value v)]
        [(cash? v) (cash-value v)]
        [(card? v) (card-value v)]))
```

##### 20.2.4.4 Pattern-Matching and Desugaring

```
cases (BinTree) t:
  | leaf => e1
  | node(v, l, r) => e2
end
```

desugars into:

```
if is-leaf(t):
  e1
else if is-node(t):
  v = t.value
  l = t.left
  r = t.right
  e2
end
```

Except, that’s not quite so easy. Somehow, the desugaring that generates the code above in terms of if needs to know that the three positional selectors for a node are value, left, and right, respectively. This information is explicit in the type definition but only implicitly present in the use of the pattern-matcher (that, indeed, being the point). Thus, somehow this information must be communicated from definition to use. Thus the desugarer needs something akin to the type environment to accomplish its task.
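To make the needed communication concrete, here is a Python sketch of that table and of desugaring a single branch. All names are illustrative, and the output is Pyret-like source text represented as strings; a real desugarer would of course produce syntax trees.

```python
# Built when the `data BinTree` definition is processed; this is the
# "type environment"-like information the desugarer needs.
VARIANT_FIELDS = {
    "leaf": [],
    "node": ["value", "left", "right"],
}

def desugar_branch(variant, pattern_vars, body_lines):
    # Turn `| node(v, l, r) => e2` into `v = t.value`, `l = t.left`, etc.
    fields = VARIANT_FIELDS[variant]
    if len(fields) != len(pattern_vars):
        raise Exception("arity mismatch in pattern for " + variant)
    bindings = [f"{var} = t.{fld}" for var, fld in zip(pattern_vars, fields)]
    return ([f"else if is-{variant}(t):"]
            + ["  " + b for b in bindings]
            + ["  " + line for line in body_lines])
```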

Observe, furthermore, that expressions such as e1 and e2
cannot be type-checked—

##### 20.2.5 Types, Time, and Space

Types bestow a performance benefit in safe languages. That is because
the checks that would have been performed at run-time—

This is at some cost to the developer, who must convince the static type system that their program does not induce type errors; due to the limitations of decidability, even programs that might have run without error might run afoul of the type system. Nevertheless, for programs for which this can be done, types provide a notable execution time saving.

Now let’s discuss space. Until now, the language run-time system has needed to store information attached to every value indicating what its type is. This is how it can implement type-level predicates such as is-number, which may be used both by developers and by primitives. If those predicates disappear, so does the space needed to hold information to implement them. Thus, type-tags are no longer necessary. (They would, however, still be needed by the garbage collector, though other representations such as BIBOP [REF] can greatly reduce their space impact.)

```
if is-leaf(t):
  e1
else if is-node(t):
  v = t.value-unsafe
  l = t.left-unsafe
  r = t.right-unsafe
  e2
end
```

The net result, however, is that the run-time representation must still store enough information to accurately answer these questions. However, previously it needed to use enough bits to record every possible type (and variant). Now, because the types have been statically segregated, for a type with no variants (e.g., there is only one kind of string), there is no need to store any variant information at all; that means the run-time system can use all available bits to store actual dynamic values. (I must reiterate that in practice this representational frugality is reduced by a garbage collector’s need for tags.)

In contrast, when variants are present, the run-time system must sacrifice bits to distinguish between the variants, but the number of variants within a type is obviously far smaller than the number of variants and types across all types. In the BinTree example above, there are only two variants, so the run-time system needs to use only one bit to record which variant of BinTree a value represents.

Observe, in particular, that the type system’s segregation prevents confusion. If there are two different datatypes that each have two variants, in the untyped world all four variants require distinct representations. In contrast, in the typed world their representations can overlap across types, because the static type system will ensure that one type’s variants are never confused for those of another. Thus, types give programs a genuine performance benefit in both space (smaller representations) and time (eliminated run-time checks).

##### 20.2.6 Types and Mutation

```
block:
  var x = 10
  x := 5
  x := "something"
end
```

```
block:
  var x = 10
  if is-even(read-number("Enter a number")):
    x := 5
  else:
    x := "something"
  end
end
```

To avoid this morass, traditional type checkers adopt a simple policy:
types must be invariant across mutation. That is, a mutation
operation—

In short, mutation is easy to account for in a traditional type system because its rule is simply that, while the value can change in ways below the level of specificity of the type system, the type cannot change. In the case of an operation like := (or our core language’s setC (Interpreting Variables)) this means the type of the expression whose value is being assigned must match that of the mutant. In the case of structure mutation, such as boxes, it means the assigned value must match the box’s contained type.
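The invariance rule can be sketched as one extra checker case: look up the declared type of the mutant and demand that the assigned expression has exactly that type. A Python illustration (names are hypothetical):

```python
def tc_assign(var_name, rhs_type, tnv):
    # tnv maps names to their declared (invariant) types.
    declared = tnv.get(var_name)
    if declared is None:
        raise Exception("unbound identifier: " + var_name)
    if rhs_type != declared:
        raise Exception("assignment: cannot change the type of " + var_name)
    # Mutation leaves the variable's type unchanged.
    return declared
```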

#### 20.3 The Central Theorem: Type Soundness

We have seen earlier (Program Termination) that certain
type languages can offer very strong theorems about their programs:
for instance, that all programs in the language terminate. In
general, of course, we cannot obtain such a guarantee (indeed, we
added general recursion precisely to let ourselves write unbounded
loops). However, a meaningful type system—

What theorem might we want of a type system? Remember that the type checker runs over the static program, before execution. In doing so, it is essentially making a prediction about the program’s behavior: for instance, when it states that a particular complex term has type Number, it is predicting that when run, that term will produce a numeric value. How do we know this prediction is sound, i.e., that the type checker never lies? Every type system should be accompanied by a theorem that proves this.

- The type checker only sees program text, whereas the evaluator runs over actual stores.
- The type environment binds identifiers to types, whereas the evaluator’s environment binds identifiers to values or locations.
- The type checker compresses (even infinite) sets of values into types, whereas the evaluator treats the elements of these sets distinctly.
- The type checker always terminates, whereas the evaluator might not.
- The type checker passes over the body of each expression only once, whereas the evaluator might pass over each body anywhere from zero to infinite times.

The central result we wish to have for a given type-system is called soundness. It says this. Suppose we are given an expression (or program) e. We type-check it and conclude that its type is t. When we run e, let us say we obtain the value v. Then v will also have type t.

The standard way of proving this theorem is to prove it in two parts, known as progress and preservation. Progress says that if a term passes the type-checker, it will be able to make a step of evaluation (unless it is already a value); preservation says that the result of this step will have the same type as the original. If we interleave these steps (first progress, then preservation; rinse and repeat), we can conclude that the final answer will indeed have the same type as the original, so the type system is indeed sound.

For instance, consider this expression: 5 + (2 * 3). It has
the type Number. In a sound type system, progress offers a
proof that, because this term types, and is not already a value, it
can take a step of execution—

The program may not produce an answer at all; it might loop forever. In this case, the theorem strictly speaking does not apply. However, we can still observe that every intermediate term has the same type as the whole expression, so the program is computing meaningfully even if it isn’t producing a value.
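For the arithmetic fragment, this interleaving can be simulated directly: a small-step reducer plays the role of progress, and re-running a (toy) type checker after every step confirms preservation. A Python sketch, with expressions as nested ("+"/"*", left, right) tuples:

```python
def tc(e):
    # Numbers and binary arithmetic over numbers all have type "num".
    if isinstance(e, (int, float)):
        return "num"
    op, l, r = e
    if tc(l) == "num" and tc(r) == "num":
        return "num"
    raise Exception("ill-typed")

def step(e):
    # Progress: a typed non-value can always take one step.
    op, l, r = e
    if not isinstance(l, (int, float)):
        return (op, step(l), r)
    if not isinstance(r, (int, float)):
        return (op, l, step(r))
    return l + r if op == "+" else l * r

def eval_preserving(e):
    t = tc(e)
    while not isinstance(e, (int, float)):
        e = step(e)          # progress: take one step
        assert tc(e) == t    # preservation: the type is unchanged
    return e
```

Running it on 5 + (2 * 3) passes through 5 + 6 and then 11, with every intermediate term retaining the type Number, just as the theorem promises.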

Any rich enough language has properties that cannot be decided statically (and others that perhaps could be, but the language designer chose to put off until run-time to reduce the burden on the programmer to make programs pass the type-checker). When one of these properties fails—e.g., the array index being within bounds—there is no meaningful type for the program. Thus, implicit in every type soundness theorem is some set of published, permitted exceptions or error conditions that may occur. The developer who uses a type system implicitly signs on to accepting this set.

The latter caveat looks like a cop-out. In fact, it is easy to forget
that it is really a statement about what cannot happen at
run-time: any exception not in this set will provably not be raised.
Of course, in languages designed with static types in the first place,
it is not clear (except by loose analogy) what these exceptions might
be, because there would be no need to define them. But when we
retrofit a type system onto an existing programming
language—

#### 20.4 Union Types

```
data Animal:
  | armadillo(is-alive :: Bool)
  | boa(length :: Number)
end
```

```
data LivingThings:
  | animal(a :: Animal)
  | plant(p :: Plant)
end
```

```
data Constraint:
  | eq-con(lhs :: Term, rhs :: Term)
end
```

Either way, the point of a union type is to represent a disjunction, or “or”. A value’s type is one of the types in the union. A value usually belongs to only one of the types in the union, though this is a function of precisely how the union types are defined, whether there are rules for normalizing them, and so on.

##### 20.4.1 Structures as Types

```
(struct armadillo (alive?))
(struct boa (length))
```

```
;; An Animal is either
;; - (armadillo <boolean>)
;; - (boa <number>)
```

```
(struct: armadillo ([alive? : Boolean]))
(struct: boa ([length : Real])) ;; feet
```

```
;; http://en.wikipedia.org/wiki/Boa_constrictor#Size_and_weight
(define: (big-one? [b : boa]) : Boolean
  (> (boa-length b) 8))
```

```
(define-type Animal (U armadillo boa))
```

```
(define: (safe-to-transport? [a : Animal]) : Boolean
  (cond
    [(boa? a) (not (big-one? a))]
    [(armadillo? a) (armadillo-alive? a)]))
```
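The same define-and-discriminate pattern can be written in Python, where isinstance plays the role of boa? and armadillo? (a static checker such as mypy narrows the union at each branch, much as Typed Racket does). The names are direct transliterations of the Racket above.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Armadillo:
    alive: bool

@dataclass
class Boa:
    length: float  # feet

# The union: an Animal is an Armadillo or a Boa.
Animal = Union[Armadillo, Boa]

def is_big_one(b: Boa) -> bool:
    return b.length > 8

def safe_to_transport(a: Animal) -> bool:
    if isinstance(a, Boa):       # discriminates the union
        return not is_big_one(a)
    return a.alive               # here, a must be an Armadillo
```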

##### 20.4.2 Untagged Unions

```
(define-type MaybeNumber (U Number Boolean))
```

```
(define-type MaybeNumber (U Number False))
```

```
(struct: none ())
(define-type (Maybeof T) (U T none))
```

```
(All (a) (a (Listof a) -> (U False (Listof a))))
```

```
> (member 2 (list 1 2 3))
'(2 3)
```

```
(define: (t) (in-list? [e : t] [l : (Listof t)]) : (Maybeof (Listof t))
  (let ([v (member e l)])
    (if v v (none))))
```

```
> (in-list? 2 (list 1 2 3))
'(2 3)
```

##### 20.4.3 Discriminating Untagged Unions

```
fun is-safe-to-transport(a :: Animal) -> Bool:
  cases (Animal) a:
    | armadillo(is-a) => is-a
    | boa(l) => not(is-big-one(l))
  end
end
```

```
if is-boa(a):
  not(is-big-one(a))
```

##### 20.4.4 Retrofitting Types

It is unsurprising that Typed Racket uses union types. They are especially useful when retrofitting types onto existing programming languages whose programs were not defined with an ML-like type discipline in mind, such as in scripting languages. A common principle of such retrofitted types is to statically catch as many dynamic exceptions as possible. Of course, the checker must ultimately reject some programs (unless it implements an interesting idea called soft typing, which rejects no programs but instead provides information about points where the program would not have been typeable), and if it rejects too many programs that would have run without an error, developers are unlikely to adopt it. Because these programs were written without type-checking in mind, the type checker may therefore need to go to heroic lengths to accept what are considered reasonable idioms in the language.

```
var slice = function (arr, start, stop) {
  var result = [];
  for (var i = 0; i <= stop - start; i++) {
    result[i] = arr[start + i];
  }
  return result;
}
```

```
var slice = function (arr, start, stop) {
  if (typeof stop == "undefined")
    stop = arr.length - 1;
  var result = [];
  for (var i = 0; i <= stop - start; i++) {
    result[i] = arr[start + i];
  }
  return result;
}
```

```
∀ t : (Array[t] * Int * (Int U Undefined) -> Array[t])
```
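The union Int U Undefined in this retrofitted type corresponds closely to Python's Optional[int]: the parameter is either an int or None until a check narrows it. A sketch of the same idiom (slice_list is an illustrative name, not a real API):

```python
from typing import List, Optional, TypeVar

T = TypeVar("T")

def slice_list(arr: List[T], start: int, stop: Optional[int] = None) -> List[T]:
    # `stop` plays the role of JavaScript's possibly-undefined argument:
    # its type is the union of int and None until this check narrows it.
    if stop is None:
        stop = len(arr) - 1
    return [arr[start + i] for i in range(stop - start + 1)]
```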

##### 20.4.5 Design Choices

- Stand-alone structure types (often represented using classes), rather than datatypes with variants.
- Ad hoc collections of structures to represent particular types.
- The use of sentinel values to represent failure.

Let’s tackle sentinels first. In many cases, sentinels ought to be replaced with exceptions, but in many languages, exceptions can be very costly. Thus, developers prefer to make a distinction between truly exceptional situations—that ought not occur—and situations that are expected in the normal course of operation. Checking whether an element is in a list and failing to find it is clearly in the latter category (if we already knew the element was or wasn’t present, there would be no need to run this predicate). In the latter case, using sentinels is reasonable.

However, we must square this with the observation that failure to check for exceptional sentinel values is a common source of error—and indeed, security flaws—in C programs. This is easy to reconcile. In C, the sentinel is of the same type (or at least, effectively the same type) as the regular return value, and furthermore, there are no run-time checks. Therefore, the sentinel can be used as a legitimate value without a type error. As a result, a sentinel of 0 can be treated as an address into which to allocate data, thus potentially crashing the system. In contrast, our sentinel is of a truly new type that cannot be used in any computation. We can easily reason about this by observing that no existing functions in our language consume values of type none.

Setting aside the use of “ad hoc”, which is pejorative, are different groupings of a set of structures a good idea? In fact, such groupings occur even in programs using an ML-like discipline, when programmers want to carve different sub-universes of a larger one. For instance, Pyret programmers would need to use a type like
```
data SExp:
  | num-sexp(n :: Number)
  | str-sexp(s :: String)
  | list-sexp(l :: List<SExp>)
end
```

to represent s-expressions. If a function now operates on just some subset of these terms—say, just numbers and lists of numbers—they must create a fresh type, and convert values between the two types even though their underlying representations are essentially identical. As another example, consider the set of CPS expressions. This is clearly a subset of all possible expressions, but if we were to create a fresh datatype for it, we would not be able to use any existing programs that process expressions—such as the interpreter.

```
(if (phase-of-the-moon) 10 true)
```

#### 20.5 Nominal Versus Structural Systems

In our initial type-checker, two types were considered equivalent if they had the same structure.

Where was this notion of equality exercised?

In fact, we offered no mechanism for naming types at all, so it is not clear what alternative we had.

```
(define-type NB1 (U Number Boolean))
(define-type NB2 (U Number Boolean))
```

```
(define: v : NB1 5)

(define: (f [x : NB2]) : NB2 x)

(f v)
```

There are two perfectly reasonable interpretations. One is to say that v was declared to be of type NB1, which is a different name than NB2, and hence should be considered a different type, so the above application should result in an error. Such a system is called nominal, because the name of a type is paramount for determining type equality.

In contrast, another interpretation is that because the
structure of NB1 and NB2 are identical, there is
no way for a developer to write a program that behaves differently on
values of these two types, so these two types should be considered
identical.If you want to get especially careful, you
would note that there is a difference between being considered the
same and actually being the same. We won’t go into this issue here,
but consider the implication for a compiler writer choosing
representations of values, especially in a language that allows
run-time inspection of the static types of values. Such a type
system is called structural, and would successfully type the above
expression. (Typed Racket follows a structural discipline, again to
reduce the burden of importing existing untyped code, which—

The difference between nominal and structural typing is most commonly
contentious in object-oriented languages, and we will return to this
issue briefly later [REF]. However, the point of this section is to
illustrate that these questions are not intrinsically about
“objects”. Any language that permits types to be named must contend with this question.
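Because TypeScript uses a structural discipline, the Typed Racket example above can be re-expressed in it directly. A minimal sketch (the names NB1, NB2, v, and f mirror the example; this is an illustration, not the original Typed Racket code):

```typescript
// Two type aliases with identical structure.
type NB1 = number | boolean;
type NB2 = number | boolean;

const v: NB1 = 5;

// f is declared over NB2, but in a structural system the name is
// irrelevant: NB1 and NB2 denote the same set of values.
function f(x: NB2): NB2 { return x; }

const result = f(v); // type-checks: equality is structural, not nominal
console.log(result); // 5
```

In a nominal system, by contrast, the same program would be rejected unless v's declared type matched f's parameter type by name.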

#### 20.6` `Intersection Types

Since we’ve just explored union types, you may naturally wonder whether there are also intersection types. Indeed there are.

If a union type means that a value (of that type) belongs to one of the types in the union, an intersection type clearly means the value belongs to all the types in the intersection: a conjunction, or “and”. This might seem strange: how can a value belong to more than one type?

(U (Number, Number -> Number)
   (String, String -> String))

((Number U String), (Number U String) -> (Number U String))

In what way does this type lose precision?

Observe that with this type, the return type on all invocations is (Number U String). Thus, on every return we must distinguish between numeric and string returns, or else we will get a type error. Hence, even though we know that if given two numeric arguments we will get a numeric result, this information is lost to the type system.

More subtly, this type permits each argument’s type to be chosen independently of the other. Thus, according to this type, the invocation 3 + "x" is perfectly valid (and produces a value of type (Number U String)). But of course the addition operation we have specified is not defined for these inputs at all!

(∧ (Number, Number -> Number)
   (String, String -> String))
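An intersection type of exactly this shape can be written in TypeScript, whose & operator intersects call signatures. A hedged sketch (the name plus and its run-time dispatch are ours):

```typescript
// An intersection of two function types: the value must be usable
// both as (Number, Number -> Number) and as (String, String -> String).
type Plus = ((a: number, b: number) => number) &
            ((a: string, b: string) => string);

// A single implementation inhabits both arms: JavaScript's + adds
// numbers and concatenates strings, so one body serves both types.
const plus: Plus = (a: any, b: any): any => a + b;

console.log(plus(2, 3));     // 5, via the numeric arm
console.log(plus("a", "b")); // "ab", via the string arm
// plus(3, "x") would be rejected: no arm accepts mixed arguments.
```

Note how the intersection recovers exactly the precision the union type lost: each arm's result type is tied to its argument types.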

#### 20.7` `Recursive Types

leaf U node(Number, BinTree, BinTree)

BinTree = leaf U node(Number, BinTree, BinTree)

μ BinTree : leaf U node(Number, BinTree, BinTree)

BinTree = μ BinTree : leaf U node(Number, BinTree, BinTree)

BinTree = μ T : leaf U node(Number, T, T)

NumL = μ T : empty U link(Number, T)

μ T : empty U link(Number, T)
= empty U link(Number, μ T : empty U link(Number, T))
= empty U link(Number, empty)
    U link(Number, link(Number, μ T : empty U link(Number, T)))
= ...
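These μ-types correspond to ordinary recursive type declarations, in which the type's name appears in its own body. A sketch in TypeScript (the constructor names mirror the BinTree example; the size function is ours):

```typescript
// BinTree = μ T : leaf U node(Number, T, T), written as a recursive
// union: BinTree appears in its own definition.
type BinTree =
  | { kind: "leaf" }
  | { kind: "node"; v: number; l: BinTree; r: BinTree };

const leaf: BinTree = { kind: "leaf" };
function node(v: number, l: BinTree, r: BinTree): BinTree {
  return { kind: "node", v, l, r };
}

// A consumer recurs exactly where the type does.
function size(t: BinTree): number {
  return t.kind === "leaf" ? 0 : 1 + size(t.l) + size(t.r);
}

console.log(size(node(5, node(3, leaf, leaf), leaf))); // 2
```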

Observe that even with this informal understanding of μ, we can now provide a type to ω, and hence to Ω.

Ascribe types to ω and Ω.

#### 20.8` `Subtyping

(struct: mt ())
(struct: nd ([v : Number] [l : BT] [r : BT]))
(define-type BT (U mt nd))

> (mt)
- : mt
#<mt>
> (nd 5 (mt) (mt))
- : nd
#<nd>

Obviously, it is not coincidental that we have defined BT in terms of mt and nd. However, it does indicate that when type-checking, we cannot simply be checking for type equality, at least not as we have so far. Instead, we must be checking that one type “fits into” the other. This notion of fitting is called subtyping (and the act of being fit, subsumption).

The essence of subtyping is to define a relation, usually denoted by
<:, that relates pairs of types. We say S <: T if a
value of type S can be given where a value of type T is
expected: in other words, subtyping formalizes the notion of
substitutability (i.e., anywhere a value of type T was
expected, it can be replaced with a value of type S).

Subtyping has a pervasive effect on the type system. We have to
reexamine every kind of type and understand its interaction with
subtyping. For base types, this is usually quite obvious: disjoint
types like Number, String, etc., are all unrelated to
each other. (In languages where one base type is used to represent
another—for instance, booleans encoded as numbers—matters are subtler.)

In fact, even our very diction about types has to change. Suppose we have an expression of type T. Normally, we would say that it produces values of type T. Now, we should be careful to say that it produces values of up to or at most T, because it may only produce values of a subtype of T. Thus every reference to a type should implicitly be cloaked in a reference to the potential for subtyping. To avoid pestering you I will refrain from doing this, but be wary that it is possible to make reasoning errors by not keeping this implicit interpretation in mind.

##### 20.8.1` `Subtyping Unions

mt <: BT
nd <: BT

S <: (S U T)

T <: (S U T)
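These rules say that each arm fits into the union. A small TypeScript illustration (the function describe is ours):

```typescript
type S = number;
type T = string;

// A consumer of the union accepts values of either arm.
function describe(x: S | T): string {
  return typeof x === "number" ? "num" : "str";
}

// S <: (S U T): a plain number is supplied where S | T is expected.
console.log(describe(42));  // "num"
// T <: (S U T): a plain string likewise subsumes into the union.
console.log(describe("x")); // "str"
```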

##### 20.8.2` `Subtyping Intersections

(S ∧ T) <: S

(S ∧ T) <: T

Why are the following two not valid subsumptions?

(S U T) <: S

T <: (S ∧ T)

The first is not valid because a value of type T is a perfectly valid element of type (S U T). For instance, a number is a member of type (string U number). However, a number cannot be supplied where a value of type string is expected.

As for the second, in general, a value of type T is not also a value of type S. Any consumer of a (S ∧ T) value is expecting to be able to treat it as both a T and an S, and the latter is not justified. For instance, given our overloaded + from before, if T is (Number, Number -> Number), then a function of this type will not know how to operate on strings.

##### 20.8.3` `Subtyping Functions

Suppose f1 has type (S1 -> T1) and f2 has type (S2 -> T2); when is it safe to supply an f1 where an f2 is expected, i.e., when is

(S1 -> T1) <: (S2 -> T2)

Consider a use of the f2 type. It returns values of type T2. Thus, the context surrounding the function application is satisfied with values of type T2. Clearly, if T1 is the same as T2, the use of f2 would continue to type; similarly, if T1 consists of a subset of T2 values, it would still be fine. The only problem is if T1 has more values than T2, because the context would then encounter unexpected values that would result in undefined behavior. In other words, we need that T1 <: T2. Observe that the “direction” of containment is the same as that for the entire function type; this is called covariance (both vary in the same direction). This is perhaps precisely what you expected.

By the same token, you might expect covariance in the argument position as well: namely, that S1 <: S2. This would be predictable, and wrong. Let’s see why.

An application of a function with f2 type is providing
parameter values of type S2. Suppose we instead substitute the
function with one of type f1. If we had that S1 <: S2,
that would mean that the new function accepts only values of
type S1, a subset of S2; parameter values in S2 but outside
S1 would then fall outside the function's domain, resulting in
undefined behavior. Instead we need the opposite, S2 <: S1, so
that the substituted function accepts every value the application
might supply. The direction of containment is inverted relative to
that of the whole function type; this is called contravariance.

(S2 <: S1) and (T1 <: T2) => (S1 -> T1) <: (S2 -> T2)
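TypeScript (under strictFunctionTypes) enforces exactly this rule, so we can watch it in action. A sketch with hypothetical Animal and Cat types:

```typescript
interface Animal { name: string }
interface Cat extends Animal { purrs: true }   // Cat <: Animal

// The expected type f2: consumes Cats, produces Animals.
type CatToAnimal = (c: Cat) => Animal;

// f1 consumes any Animal (S2 <: S1, contravariant) and produces a
// Cat (T1 <: T2, covariant), so (Animal -> Cat) <: (Cat -> Animal).
const f1 = (a: Animal): Cat => ({ name: a.name, purrs: true });

const f2: CatToAnimal = f1; // legal by the subtyping rule above

const felix: Cat = { name: "Felix", purrs: true };
console.log(f2(felix).name); // "Felix"
```

The reverse assignment, a (Cat) => Animal function used as (Animal) => Cat, is rejected: it could receive a non-Cat Animal it does not know how to handle.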

##### 20.8.4` `Implementing Subtyping

Until now all of our type rules have been syntax-driven, which is what enabled us to write a recursive-descent type-checker. Now, however, we have a rule that applies to all expressions, so we can no longer be sure when to apply it.

There could be many levels of subtyping. As a result, it is no longer obvious when to “stop” subtyping. In particular, whereas before type-checking was able to calculate the type of an expression, now we have many possible types for each expression; if we return the “wrong” one, we might get a type error (due to that not being the type expected by the context) even though there exists some other type that was the one expected by the context.

#### 20.9` `Object Types

addsub = {add1 :: Number -> Number, sub1 :: Number -> Number}

Self-reference. What is the type of self? It must be the same type as the object itself, since any operation that can be applied to the object from the “outside” can also be applied to it from the “inside” using self. This means object types are recursive types.

Access controls: private, public, and other restrictions. These lead to a distinction in the type of an object from “outside” and “inside”.

Inheritance. Not only do we have to give a type to the parent object(s), what is visible along inheritance paths may, again, differ from what is visible from the “outside”.

The interplay between multiple-inheritance and subtyping.

The relationship between classes and interfaces in languages like Java, which has a run-time cost.

Mutation.

Casts.

A full exposition of these issues will take much more room than we have here. For now, we will limit ourselves to one interesting question. Remember that we said subtyping forces us to consider every type constructor? The structural typing of objects introduces one more: the object type constructor. We therefore have to understand its interaction with subtyping.

Before we do, let’s make sure we understand what an object type even means. Consider the type addsub above, which lists two methods. What objects can be given this type? Obviously, an object with just those two methods, with precisely those two types, is eligible. Equally obviously, an object with only one and not the other of those two methods, no matter what else it has, is not. But the phrase “no matter what else it has” is meant to be leading. What if an object represents an arithmetic package that also contains methods add and mult, in addition to the above two (all of the appropriate type)? In that case we certainly have an object that can supply those two methods, so the arithmetic package certainly has type addsub. Its other methods are simply inaccessible using type addsub.

{add1 :: Number -> Number,
 sub1 :: Number -> Number,
 add :: Number, Number -> Number,
 mult :: Number, Number -> Number}

{add1 :: Number -> Number,
 sub1 :: Number -> Number,
 add :: Number, Number -> Number,
 mult :: Number, Number -> Number}
<:
{add1 :: Number -> Number,
 sub1 :: Number -> Number}

To understand why this is sound, it helps to develop the intuition that the “larger” the type, the fewer values it can have. Every object that has the four methods on the left clearly also has the two methods on the right. However, there are many objects that have the two methods on the right that fail to have all four on the left. If we think of a type as a constraint on acceptable value shapes, the “bigger” type imposes more constraints and hence admits fewer values. Thus, though the types may appear to be of the wrong sizes, everything is well because the sets of values they describe are of the expected sizes.

More generally, this says that by dropping fields from an object’s type, we obtain a supertype. This is called width subtyping, because the subtype is “wider”, and we move up the subtyping hierarchy by adjusting the object’s “width”. We see this even in the nominal world of Java: as we go up the inheritance chain a class has fewer and fewer methods and fields, until we reach Object, the supertype of all classes, which has the fewest. Thus for all class types C in Java, C <: Object.Somewhat confusingly, the terms narrowing and widening are sometimes used, but with what some might consider the opposite meaning. To widen is to go from subtype to supertype, because it goes from a “narrower” (smaller) to a “wider” (bigger) set. These terms evolved independently, but unfortunately not consistently.
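TypeScript's structural object types exhibit width subtyping directly. A sketch using the arithmetic-package example (names are illustrative):

```typescript
// The "smaller" type: just the two methods.
type AddSub = {
  add1(n: number): number;
  sub1(n: number): number;
};

// An object with four methods: a richer arithmetic package.
const arith = {
  add1: (n: number) => n + 1,
  sub1: (n: number) => n - 1,
  add:  (a: number, b: number) => a + b,
  mult: (a: number, b: number) => a * b,
};

// Width subtyping: dropping fields yields a supertype, so the
// four-method object subsumes to the two-method type. Through as1,
// add and mult are simply inaccessible.
const as1: AddSub = arith;
console.log(as1.add1(4)); // 5
```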

As you might expect, there is another important form of subtyping, which is within a given member. This simply says that any particular member can be subsumed to a supertype in its corresponding position. For obvious reasons, this form is called depth subtyping.

Construct two examples of depth subtyping. In one, give the field itself an object type, and use width subtyping to subtype that field. In the other, give the field a function type.

Java has limited depth subtyping, preferring types to be invariant down the object hierarchy because this is a safe option for conventional mutation.

The combination of width and depth subtyping covers the most interesting cases of object subtyping. A type system that implemented only these two would, however, needlessly annoy programmers. Other convenient (and mathematically necessary) rules include the ability to permute names, reflexivity (every type is a subtype of itself, because it is more convenient to interpret the subtype relationship as ⊆), and transitivity. Languages like Typed JavaScript employ all these features to provide maximum flexibility to programmers.

#### 20.10` `Explicit Parametric Polymorphism

List<String>

List<String>

List<String>

Actually, none of these is quite the same. But the first and third are very alike, because the first is in Java and the third in typed Pyret, whereas the second, in C++, is different. All clear? No? Good, read on!

##### 20.10.1` `Parameterized Types

((A -> B), List<A> -> List<B>)

((Number -> String), List<Number> -> List<String>)

((Number -> (Number -> Number)), List<Number> -> List<(Number -> Number)>)

((String -> String), List<String> -> List<String>)

Obviously, it is impossible to load all these functions into our standard library: there’s an infinite number of these! We’d rather have a way to obtain each of these functions on demand. Our naming convention offers a hint: it is as if map takes two parameters, which are types. Given the pair of types as arguments, we can then obtain a map that is customized to that particular type. This kind of parameterization over types is called parametric polymorphism.Not to be confused with the “polymorphism” of objects, which we will discuss below [REF].
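With explicit type parameters, the entire infinite family collapses into one definition. A TypeScript sketch of ∀ A, B : ((A -> B), List<A> -> List<B>) (named myMap to avoid clashing with the built-in):

```typescript
// One polymorphic map covers the whole infinite family of
// monomorphic maps: A and B are chosen afresh at each use site.
function myMap<A, B>(f: (a: A) => B, l: A[]): B[] {
  const out: B[] = [];
  for (const x of l) out.push(f(x));
  return out;
}

// Instantiated at ((Number -> String), List<Number> -> List<String>):
console.log(myMap(n => String(n), [1, 2, 3])); // ["1", "2", "3"]
// Instantiated at ((String -> String), List<String> -> List<String>):
console.log(myMap(s => s + "!", ["a", "b"])); // ["a!", "b!"]
```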

##### 20.10.2` `Making Parameters Explicit

fun map(A :: ???, B :: ???, f :: (A -> B), l :: List<A>) -> List<B>:
  ...;

Observe that once we start parameterizing, more code than we expect
ends up being parameterized. For instance, consider the type of the
humble link. Its type really is parametric over the type of
values in the list (even though it doesn’t actually depend on those
values!—

##### 20.10.3` `Rank-1 Polymorphism

Instead, we will limit ourselves to one particularly useful and tractable point in this space, which is the type system of Standard ML, of earlier versions of Haskell, roughly that of Java and C# with generics, and roughly that obtained using templates in C++. This language defines what is called predicative, rank-1, or prenex polymorphism.

∀ A, B : ((A -> B), List<A> -> List<B>)

In rank-1 polymorphism, the type variables can only be substituted with monotypes. (Furthermore, these can only be concrete types, because there would be nothing left to substitute any remaining type variables.) As a result, we obtain a clear separation between the type variable-parameters and regular parameters. We don’t need to provide a “type annotation” for the type variables because we know precisely what kind of thing they can be. This produces a relatively clean language that still offers considerable expressive power.Impredicative languages erase the distinction between monotypes and polytypes, so a type variable can be instantiated with another polymorphic type.

fun<T> id(x :: T) -> T: x;

##### 20.10.4` `Interpreting Rank-1 Polymorphism as Desugaring

id-num = id<Number>

id-str = id<String>
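The desugaring view can be sketched in TypeScript: one polymorphic id, with monomorphic “instantiations” obtained by fixing the type at the use site (idNum and idStr play the roles of id-num and id-str):

```typescript
// Rank-1 polymorphic identity: T ranges over monotypes only.
function id<T>(x: T): T { return x; }

// "Instantiation": the annotation fixes the type parameter, yielding
// the monomorphic function a desugarer would generate.
const idNum: (x: number) => number = id;
const idStr: (x: string) => string = id;

console.log(idNum(7));    // 7
console.log(idStr("hi")); // "hi"
```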

However, this approach has two important limitations.

- Let’s try to define a recursive polymorphic function, such as filter. Earlier we said that we ought to instantiate every single polymorphic value (such as even cons and empty) with types, but to keep our code concise we’ll rely on the fact that the underlying typed language already does this, and focus just on type parameters for filter. Here’s the code:
fun<T> filter(pred :: (T -> Bool), l :: List<T>) -> List<T>:
  cases (List) l:
    | empty => empty
    | link(f, r) =>
      if pred(f):
        link(f, filter<T>(pred, r))
      else:
        filter<T>(pred, r);
  end
end

Observe that at the recursive uses of filter, we must instantiate it with the appropriate type. This is a perfectly good definition. There’s just one problem. If we try to use it—e.g.,

filter-num = filter<Number>

—the implementation will not terminate. This is because the desugarer is repeatedly trying to make new copies of the code of filter at each recursive call.

Exercise: If, in contrast, we define a local helper function that performs the recursion, this problem can be made to disappear. Can you figure out that version?

Consider two instantiations of the identity function. They would necessarily be different because they are two different pieces of code residing at different locations in memory. (Indeed, the use of parametric polymorphism in C++ is notorious for creating code bloat.) However, all this duplication is unnecessary! There’s absolutely nothing in the body of id, for instance, that actually depends on the type of the argument. Indeed, the entire infinite family of id functions can share just one implementation. The simple desugaring strategy fails to provide this.

In other words, the desugaring based strategy, which is essentially an implementation by substitution, has largely the same problems we saw earlier with regards to substitution as an implementation of parameter instantiation (From Substitution to Environments). However, in other cases substitution also gives us a ground truth for what we expect as the program’s behavior. The same will be true with polymorphism, as we will soon see [REF].

Observe that one virtue to the desugaring strategy is that it does not
require our type checker to “know” about polymorphism. Rather, the
core type language can continue to be monomorphic, and all the
(rank-1) polymorphism is handled entirely through expansion. This
offers a cheap strategy for adding polymorphism to a language,
though—

Finally, though we have only focused on functions, the preceding discussion applies equally well to data structures.

##### 20.10.5` `Alternate Implementations

There are other implementation strategies that don’t suffer from these problems. We won’t go into them here, but the essence is to memoize expansion ([REF]). Because we can be certain that, for a given set of type parameters, we will always get the same typed body, we never need to instantiate a polymorphic function at the same type twice. This avoids the infinite loop. If we type-check the instantiated body once, we can avoid checking at other instantiations of the same type (because the body will not have changed). Furthermore, we do not need to retain the instantiated sources: once we have checked the expanded program, we can dispose of the expanded terms and retain just one copy at run-time. This avoids all the problems discussed in the pure desugaring strategy shown above, while retaining the benefits.
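The essence of the memoizing strategy can be sketched in a few lines: key the cache on the type arguments, so each instantiation is created once and shared thereafter. This is a toy illustration, not a real implementation (all names are ours):

```typescript
// A toy instantiation cache: maps a type-argument key to the single
// expanded copy of a polymorphic definition.
const cache = new Map<string, unknown>();

function instantiate<F>(name: string, typeArgs: string[], make: () => F): F {
  const key = name + "<" + typeArgs.join(",") + ">";
  if (!cache.has(key)) {
    cache.set(key, make()); // expand (and type-check) at most once
  }
  return cache.get(key) as F;
}

// Two requests for filter<Number> share one copy, so a recursive
// reference to filter<Number> no longer triggers re-expansion.
const f1 = instantiate("filter", ["Number"], () => (n: number) => n > 0);
const f2 = instantiate("filter", ["Number"], () => (n: number) => n > 0);
console.log(f1 === f2); // true
```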

Actually, we are being a little too glib. One of the benefits of
static types is that they enable us to pick more precise run-time
representations. For instance, in most languages a static type can
tell us whether we have a 32-bit or 64-bit number, or for that matter
a 32-bit value or a 1-bit value (effectively, a boolean). A compiler
can then generate specialized code for each representation, taking
advantage of how the bits are laid out (for example, 32 booleans can
use a ☛ packed representation to fit into a single
32-bit word). Thus, after type-checking at each used type, the
polymorphic instantiator may keep track of all the special types at
which a function or data structure was used, and provide this
information to the compiler for code-generation. This will then
result in several copies of the function, but only as many as those
for which the compiler can generate distinct, efficient
representations—

##### 20.10.6` `Relational Parametricity

There’s one last detail we must address regarding polymorphism.

We earlier said that a function like cons doesn’t depend on the specific values of its arguments. This is also true of map, filter, and so on. When map and filter want to operate on individual elements, they take as a parameter another function which in turn is responsible for making decisions about how to treat the elements; map and filter themselves simply obey their parameter functions.

One way to “test” whether this is true is to substitute some different values in the argument list, and a correspondingly different parameter function. That is, imagine we have a relation between two sets of values; we convert the list elements according to the relation, and the parameter function as well. The question is, will the output from map and filter also be predictable by the relation? If, for some input, this was not true of the output of map, then it must be that map inspected the actual values and did something with that information. But in fact this won’t happen for map, or indeed most of the standard polymorphic functions.

Functions that obey this relational rule are called relationally parametric. (Read Wadler’s Theorems for Free! and Reynolds’s Types, Abstraction and Parametric Polymorphism.) This is another very powerful property that types give us, because they tell us there is a strong limit on the kinds of operations such polymorphic functions can perform: essentially, that they can drop, duplicate, and rearrange elements, but not directly inspect and make decisions on them.

At first this sounds very impressive (and it is!), but on inspection
you might realize this doesn’t square with your experience. In Java,
for instance, a polymorphic method can still use instanceof to
check which particular kind of value it obtained at run-time, and
change its behavior accordingly. Such a method would indeed not be
relationally parametric!On the Web, you will often find
this property described as the inability of a function to inspect the
argument—