
20 Checking Program Invariants Statically: Types

    20.1 Types as a Static Discipline

    20.2 A Classical View of Types

      20.2.1 A Simple Type Checker

      20.2.2 Type-Checking Conditionals

      20.2.3 Recursion in Code (A First Attempt at Typing Recursion; Program Termination; Typing Recursion)

      20.2.4 Recursion in Data (Recursive Datatype Definitions; Introduced Types; Selectors; Pattern-Matching and Desugaring)

      20.2.5 Types, Time, and Space

      20.2.6 Types and Mutation

    20.3 The Central Theorem: Type Soundness

    20.4 Union Types

      20.4.1 Structures as Types

      20.4.2 Untagged Unions

      20.4.3 Discriminating Untagged Unions

      20.4.4 Retrofitting Types

      20.4.5 Design Choices

    20.5 Nominal Versus Structural Systems

    20.6 Intersection Types

    20.7 Recursive Types

    20.8 Subtyping

      20.8.1 Subtyping Unions

      20.8.2 Subtyping Intersections

      20.8.3 Subtyping Functions

      20.8.4 Implementing Subtyping

    20.9 Object Types

    20.10 Explicit Parametric Polymorphism

      20.10.1 Parameterized Types

      20.10.2 Making Parameters Explicit

      20.10.3 Rank-1 Polymorphism

      20.10.4 Interpreting Rank-1 Polymorphism as Desugaring

      20.10.5 Alternate Implementations

      20.10.6 Relational Parametricity

As programs grow larger or more subtle, developers need tools to help them describe and validate statements about program invariants. For example, when we write x :: Number, we mean that x will always hold a Number, and that all parts of the program that depend on x can rely on this statement being enforced. As we will see, types are just one point in a spectrum of invariants we might wish to state, and static type checking—itself a diverse family of techniques—is also a point in a spectrum of methods we can use to enforce the invariants.

20.1 Types as a Static Discipline

In this chapter, we will focus especially on static type checking: that is, checking (declared) types before the program even executes. This is an extremely rich subject of modern investigation; for further study, I strongly recommend reading Pierce’s Types and Programming Languages. We have already experienced a form of this in our programs by virtue of using a typed programming language. We will explore some of the design space of types and their trade-offs. Finally, though static typing is an especially powerful and important form of invariant enforcement, we will also examine some other techniques that we have available.

Consider this Pyret program:

fun f(n :: Number) -> Number:
  n + 3
end

f("x")
We would like to receive a type error before the program begins execution. (Pyret does not currently perform static type checking, but this will soon change.) The same program (without the type annotations) can fail only at run-time:

fun f(n):
  n + 3
end

f("x")

Exercise

How would you test the assertions that one fails before the program executes while the other fails during execution?

Now consider the following Pyret program:

fun f n:
  n + 3
end


This too fails before program execution begins, with a parse error. Though we think of parsing as being somehow distinct from type-checking—usually because a type-checker assumes it has a parsed program to begin with—it can be useful to think of parsing as being simply the very simplest kind of type-checking: determining (typically) whether the program obeys a context-free syntax. Type-checking then asks whether it obeys a context-sensitive (or richer) syntax. In short, type-checking is a generalization of parsing, in that both are concerned with syntactic methods for enforcing disciplines on programs.

20.2 A Classical View of Types

We will begin by introducing a traditional core language of types. Later, we will explore both extensions [REF] and variations [REF].

20.2.1 A Simple Type Checker

Before we can define a type checker, we have to fix two things: the syntax of our typed core language and, hand-in-hand with that, the syntax of types themselves.

To begin with, we’ll return to our language with functions-as-values (Functions Anywhere), before we added mutation and other complications (some of which we’ll return to later). To this language we have to add type annotations. Conventionally, we don’t impose type annotations on constants or on primitive operations such as addition; instead, we impose them on the boundaries of functions or methods. Over the course of this study we will explore why this is a good locus for annotations.

Given this decision, our typed core language becomes:

data TyExprC:
  | numC (n :: Number)
  | plusC (l :: TyExprC, r :: TyExprC)
  | multC (l :: TyExprC, r :: TyExprC)
  | idC (s :: String)
  | appC (f :: TyExprC, a :: TyExprC)
  | fdC (arg :: String, at :: Type, rt :: Type, body :: TyExprC)
end


That is, every procedure is annotated with the type of argument it expects and the type of result it purports to produce.

Now we have to decide on a language of types. To do so, we follow the tradition that the types abstract over the set of values. In our language, we have two kinds of values:

data Value:
  | numV (n :: Number)
  | closV (f :: TyExprC(is-fdC), e :: List<Binding>)
end


It follows that we should have two kinds of types: one for numbers and the other for functions.

What information does a number type need to record? In most languages, there are actually many numeric types, and indeed there may not even be a single one that represents “numbers”. However, we have ignored these gradations between numbers (Changing Representations), so it’s sufficient for us to have just one. Having decided that, do we record additional information about which number? We could in principle, but that would mean that for types to check, we would have to be able to decide whether two expressions compute the same number—a problem that reduces to the Halting Problem. (In some specialized type systems, however, we do record some information about the number.)

As for functions, we have more information: the type of the expected argument, and the type of the claimed result. We might as well record this information until and unless it proves not to be useful. Combining these, we obtain the following abstract language of types:

data Type:
  | numT
  | funT (arg :: Type, ret :: Type)
end


Now that we’ve fixed both the term and type structure of the language, let’s make sure we agree on what constitute type errors in our language (and, by fiat, everything not a type error must pass the type checker). There are three obvious forms of type errors:
  • One or both arguments of + is not a number, i.e., is not a numT.

  • One or both arguments of * is not a number.

  • The expression in the function position of an application is not a function, i.e., is not a funT.

Do Now!

Any more?

We’re actually missing one:
  • The expression in the function position of an application is a function but the type of the actual argument does not match the type of the formal argument expected by the function.

Do Now!

Any more?

What about:
  • The expression in the function position of an application is a function but its return type does not match the type expected by the expression that invokes the function?

It seems clear all other programs in our language ought to type-check.

A natural starting type for the type-checker would be that it is a procedure that consumes an expression and returns a boolean value indicating whether or not the expression type-checked. Because we know expressions contain identifiers, it soon becomes clear that we will want a type environment, which maps names to types, analogous to the value environment we have seen so far.


Exercise

Define the types and functions associated with type environments.

Thus, we might begin our program as follows:
<tc-take-1> ::=

    fun tc(e :: TyExprC, tnv :: List<TyBinding>) -> Bool:
      cases (TyExprC) e:
        ...


As the abbreviated set of cases above suggests, this approach will not work out. We’ll soon see why.

Let’s begin with the easy case: numbers. Does a number type-check? Well, on its own, of course it does; it may be that the surrounding context is not expecting a number, but that error would be signaled elsewhere. Thus:
<tc-take-1-numC> ::=

    | numC(n) => true

Now let’s handle identifiers. Is an identifier well-typed? Again, on its own it would appear to be, provided it is actually a bound identifier; it may not be what the context desires, but hopefully that too would be handled elsewhere. Thus we might write
<tc-take-1-idC> ::=

    | idC(s) =>
      if ty-lookup(s, tnv):
        true
      else:
        raise("unbound identifier: " + s);

This should make you a little uncomfortable: ty-lookup already signals an error if an identifier isn’t bound, so there’s no need to repeat it (indeed, we will never get to this raise). But let’s push on.

Now we tackle applications. We should type-check both the function part, to make sure it’s a function, then ensure that the actual argument’s type is consistent with what the function declares to be the type of its formal argument. How does the code look?
<tc-take-1-appC> ::=

    | appC(f, a) =>
      ft = tc(f, tnv)
      ...


The recursive call to tc can only tell us whether the function expression type-checks or not. If it does, how do we know what type it has? If we have an immediate function, we could reach into its syntax and pull out the argument (and return) types. But if we have a complex expression, we need some procedure that will calculate the resulting type of that expression. Of course, such a procedure could only provide a type if the expression is well-typed; otherwise it would not be able to provide a coherent answer. In other words, our type “calculator” has type “checking” as a special case!

We should therefore strengthen the inductive invariant on tc: that it not only tells us whether an expression is typed, but also what its type is. Indeed, by giving any type at all it confirms that the expression types, and otherwise it signals an error.

Let’s now define this richer notion of a type “checker”.
<tc> ::=

    fun tc(e :: TyExprC, tnv :: List<TyBinding>) -> Type:
      cases (TyExprC) e:
        ...

Now let’s fill in the pieces. Numbers are easy: they have the numeric type.
<tc-numC> ::=

    | numC(n) => numT

Similarly, identifiers have whatever type the environment says they do (and if they aren’t bound, looking them up signals an error).
<tc-idC> ::=

    | idC(s) => ty-lookup(s, tnv)

Observe, so far, the similarity to and difference from interpreting: in the identifier case we did essentially the same thing (except we returned a type rather than an actual value), whereas in the numeric case we returned the abstract “number” (numT) rather than indicate which specific number it was.

Let’s now examine addition. We must make sure both sub-expressions have numeric type; only if they do will the overall expression evaluate to a number itself.
<tc-plusC> ::=

    | plusC(l, r) =>
      lt = tc(l, tnv)
      rt = tc(r, tnv)
      if is-numT(lt) and is-numT(rt):
        numT
      else:
        raise("addition: must both be numbers");

We’ve usually glossed over multiplication after considering addition, but now it will be instructive to handle it explicitly:
<tc-multC> ::=

    | multC(l, r) =>
      lt = tc(l, tnv)
      rt = tc(r, tnv)
      if is-numT(lt) and is-numT(rt):
        numT
      else:
        raise("multiplication: must both be numbers");

Do Now!

Did you see what’s different?

That’s right: practically nothing! That’s because, from the perspective of type-checking (in this type language), there is no difference between addition and multiplication, or indeed between any two operations that consume two numbers and return one.

Observe another difference between interpreting and type-checking. Both care that the arguments be numbers. The interpreter then returns a precise sum or product, but the type-checker is indifferent to the differences between them: therefore the expression that computes what it returns (numT) is a constant, and the same constant in both cases.

Finally, the two hard cases: application and functions. We’ve already discussed what application must do: compute the types of the function and argument expressions; ensure the function expression has function type; and check that the argument expression is of compatible type. If all this holds up, then the type of the overall application is whatever type the function body would return (because the value that eventually returns at run-time is the result of evaluating the function’s body).
<tc-appC> ::=

    | appC(f, a) =>
      ft = tc(f, tnv)
      at = tc(a, tnv)
      if not(is-funT(ft)):
        raise("application: must be a function")
      else if not(ft.arg == at):
        raise("application: argument types must match")
      else:
        ft.ret;

That leaves function definitions. The function has a formal parameter; unless this is bound in the type environment, any use of that parameter in the body would result in a type error. Thus we have to extend the type environment with the formal name bound to its type, and in that extended environment type-check the body. Whatever type this computes must be the same as the declared type of the body. If that is so, then the function itself has a function type from the type of the argument to the type of the body.

<tc-fdC> ::=

    | fdC(a, at, rt, b) =>
      body-type = tc(b, xtnd-ty-env(tybind(a, at), tnv))
      if body-type == rt:
        funT(at, rt)
      else:
        raise("function: body does not match declared return type");

Observe another curious difference between the interpreter and type-checker. In the interpreter, application was responsible for evaluating the argument expression, extending the environment, and evaluating the body. Here, the application case does check the argument expression, but leaves the environment alone, and simply returns the type of the body without traversing it. Instead, the body is actually traversed by the checker when checking a function definition, so this is the point at which the environment actually extends.
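
The checker is now complete enough to transcribe whole. The following is a Python sketch of the same algorithm (the book’s code is in Pyret; the encoding as Python dataclasses, and names like `NumT` and `FdC`, are my own rendering of the text’s `numT`, `fdC`, and friends):

```python
from dataclasses import dataclass

# Types: numT and funT from the text
@dataclass(frozen=True)
class NumT: pass

@dataclass(frozen=True)
class FunT:
    arg: object
    ret: object

# Terms: a subset of TyExprC
@dataclass
class NumC:
    n: int

@dataclass
class IdC:
    s: str

@dataclass
class PlusC:
    l: object
    r: object

@dataclass
class AppC:
    f: object
    a: object

@dataclass
class FdC:
    arg: str
    at: object
    rt: object
    body: object

def tc(e, tnv):
    """Return the type of e under type environment tnv (a dict), or raise."""
    if isinstance(e, NumC):
        return NumT()
    elif isinstance(e, IdC):
        if e.s not in tnv:
            raise TypeError("unbound identifier: " + e.s)
        return tnv[e.s]
    elif isinstance(e, PlusC):
        if tc(e.l, tnv) == NumT() and tc(e.r, tnv) == NumT():
            return NumT()
        raise TypeError("addition: must both be numbers")
    elif isinstance(e, AppC):
        ft = tc(e.f, tnv)
        at = tc(e.a, tnv)
        if not isinstance(ft, FunT):
            raise TypeError("application: must be a function")
        if ft.arg != at:
            raise TypeError("application: argument types must match")
        return ft.ret        # the application has the function's return type
    elif isinstance(e, FdC):
        # the environment is extended here, while traversing the body
        body_type = tc(e.body, {**tnv, e.arg: e.at})
        if body_type != e.rt:
            raise TypeError("function: body does not match declared return type")
        return FunT(e.at, e.rt)
    raise TypeError("unknown expression")
```

For instance, checking the application of a `Number -> Number` function to a number yields `NumT()` without ever performing the addition.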


Exercise

Why is the time of traversal different between interpretation and type-checking?

20.2.2 Type-Checking Conditionals

Suppose we extend the above language with conditionals. Even the humble if introduces several design decisions. We’ll discuss two here, and return to one of them later [REF].

  1. What should be the type of the test expression? In some languages it must evaluate to a boolean value, in which case we have to enrich the type language to include booleans (which would probably be a good idea anyway). In other languages it can be any value, and some values are considered “truthy” while others “falsy”.

  2. What should be the relationship between the then- and else-branches? In some languages they must be of the same type, so that there is a single, unambiguous type for the overall expression (which is that one type). In other languages the two branches can have distinct types, which greatly changes the design of the type-language and -checker, but also of the nature of the programming language itself.


Exercise

Add booleans to the type language. What does this entail at a minimum, and what else might be expected in a typical language?


Exercise

Add a type rule for conditionals, where the test expression is expected to evaluate to a boolean and both then- and else-branches must have the same type, which is the type of the overall expression.
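
One way this rule can be carried out is sketched below in Python (names like `BoolT`, `TrueC`, and `IfC` are invented here for illustration; only the conditional case is interesting):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NumT: pass

@dataclass(frozen=True)
class BoolT: pass            # the boolean type this rule requires

@dataclass
class TrueC: pass            # a boolean literal

@dataclass
class NumC:
    n: int

@dataclass
class IfC:
    test: object
    then: object
    els: object

def tc(e, tnv):
    if isinstance(e, TrueC):
        return BoolT()
    elif isinstance(e, NumC):
        return NumT()
    elif isinstance(e, IfC):
        if tc(e.test, tnv) != BoolT():
            raise TypeError("if: test must be a boolean")
        then_t = tc(e.then, tnv)
        else_t = tc(e.els, tnv)
        if then_t != else_t:
            raise TypeError("if: branches must have the same type")
        return then_t        # the single, unambiguous type of the conditional
    raise TypeError("unknown expression")
```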

20.2.3 Recursion in Code

Now that we’ve obtained a basic programming language, let’s add recursion to it. We saw earlier [REF] that this could be done easily through desugaring. It’ll prove to be a more complex story here.

A First Attempt at Typing Recursion

Let’s now try to express a simple recursive function. The simplest is, of course, one that loops forever. Can we write an infinite loop with just functions? We could already, with this program—

(fun(x): x(x) end)(fun(x): x(x) end)

which we know we can represent in our language with functions as values.


Exercise

Why does this construct an infinite loop? What subtle assumption is it making about the nature of function calls?

Now that we have a typed language, and one that forces us to annotate all functions, let’s annotate it. For simplicity, from now on we’ll assume we’re writing programs in a typed surface syntax, and that desugaring takes care of constructing core language terms.

Observe, first, that we have two identical terms being applied to each other. Historically, the overall term is called Ω (capital omega in Greek) and each of the identical sub-terms is called ω (lower-case omega in Greek). It is not a given that identical terms must have precisely the same type, because it depends on what invariants we want to assert of the context of use. In this case, however, observe that x binds to ω, so the second ω goes into both the first and second positions. As a result, typing one effectively types both.

Therefore, let’s try to type ω; let’s call this type γ. It’s clearly a function type, and the function takes one argument, so it must be of the form φ -> ψ. Now what is that argument? It’s ω itself. That is, the type of the value going into φ is itself γ. Thus, the type of ω is γ, which is φ -> ψ, which expands into (φ -> ψ) -> ψ, which further expands to ((φ -> ψ) -> ψ) -> ψ, and so on. In other words, this type cannot be written as any finite string!

Do Now!

Did you notice the subtle but important leap we just made?

Program Termination

We observed that the obvious typing of Ω, which entails typing γ, seems to run into serious problems. From that, however, we jumped to the conclusion that this type cannot be written as any finite string, for which we’ve given only an intuition, not a proof. In fact, something even stranger is true: in the type system we’ve defined so far, we cannot type Ω at all!

This is a strong statement, but we can actually say something much stronger. The typed language we have so far has a property called strong normalization: every expression that has a type will terminate computation after a finite number of steps. In other words, this special (and peculiar) infinite loop program isn’t the only one we can’t type; we can’t type any infinite loop (or even potential infinite loop). A rough intuition that might help is that any type—which must be a finite string—can have only a finite number of ->’s in it, and each application discharges one, so we can perform only a finite number of applications.

If our language permitted only straight-line programs, this would be unsurprising. However, we have conditionals and even functions being passed around as values, and with those we can encode almost every program we’ve written so far. Yet, we still get this guarantee! That makes this a somewhat astonishing result.


Exercise

Try to encode lists using functions in the untyped and then in the typed language. What do you see? And what does that tell you about the impact of this type system on the encoding?

This result also says something deeper. It shows that, contrary to what you may believe—that a type system only prevents a few buggy programs from running—a type system can change the semantics of a language. Whereas previously we could write an infinite loop in just one to two lines, now we cannot write one at all. It also shows that the type system can establish invariants not just about a particular program, but about the language itself. If we want to absolutely ensure that a program will terminate, we simply need to write it in this language and pass the type checker, and the guarantee is ours!

What possible use is a language in which all programs terminate? For general-purpose programming, none, of course. But in many specialized domains, it’s a tremendously useful guarantee to have. For instance, suppose you are implementing a complex scheduling algorithm; you would like to know that your scheduler is guaranteed to terminate so that the tasks being scheduled will actually run. There are many other domains, too, where we would benefit from such a guarantee: a packet-filter in a router; a real-time event processor; a device initializer; a configuration file; the callbacks in single-threaded JavaScript; and even a compiler or linker. In each case, we have an almost unstated expectation that these programs will always terminate. And now we have a language that can offer this guarantee—something it is impossible to test for, no less! (These are not hypothetical examples. In Standard ML, the language for linking modules uses essentially this typed language for writing module linking specifications. This means developers can write quite sophisticated abstractions—they have functions-as-values, after all!—while still being guaranteed that linking will always terminate, producing a program.)

Typing Recursion

What this says is, whereas before we were able to handle rec entirely through desugaring, now we must make it an explicit part of the typed language. For simplicity, we will consider a special case of rec—which nevertheless covers the common uses—whereby the recursive identifier is bound to a function. Thus, in the surface syntax, one might write

(rec (Σ num (n num)
        (if0 n
             0
             (n + (Σ (n + -1)))))
  (Σ 10))

for a summation function, where Σ is the name of the function, n its argument, and num the type consumed by and returned from the function. The expression (Σ 10) represents the use of this function to sum the numbers from 10 until 0.

How do we type such an expression? Clearly, we must have n bound in the body of the function as we type it (but not of course, in the use of the function, due to static scope); this much we know from typing functions. But what about Σ? Obviously it must be bound in the type environment when checking the use ((Σ 10)), and its type must be num -> num. But it must also be bound, to the same type, when checking the body of the function. (Observe, too, that the type returned by the body must match its declared return type.)

Now we can see how to break the shackles of the finiteness of the type. It is certainly true that we can write only a finite number of ->’s in types in the program source. However, this rule for typing recursion duplicates the -> in the body that refers to itself, thereby ensuring that there is an inexhaustible supply of applications. (It’s our infinite quiver of arrows.)

The code to implement this rule would be as follows, where f is bound to the function’s name, v to its formal parameter, at is the function’s argument type, rt its return type, b the function’s body, and c the expression that uses the function:
<tc-recC> ::=

    | recC (f, v, at, rt, b, c) =>
      extended-env = xtnd-ty-env(tybind(f, funT(at, rt)), tnv)
      if not(rt == tc(b, xtnd-ty-env(tybind(v, at), extended-env))):
        raise("rec: function return type not correct")
      else:
        tc(c, extended-env);
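
The essence of this rule—binding the recursive name both while checking the body and while checking the use—can be sketched in Python (an invented encoding; application and arithmetic cases are elided for brevity):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NumT: pass

@dataclass(frozen=True)
class FunT:
    arg: object
    ret: object

@dataclass
class IdC:
    s: str

@dataclass
class RecC:
    f: str        # function name
    v: str        # parameter name
    at: object    # argument type
    rt: object    # return type
    body: object  # function body
    use: object   # expression that uses the function

def tc(e, tnv):
    if isinstance(e, IdC):
        if e.s not in tnv:
            raise TypeError("unbound identifier: " + e.s)
        return tnv[e.s]
    elif isinstance(e, RecC):
        # f is bound to its (finite) function type...
        extended = {**tnv, e.f: FunT(e.at, e.rt)}
        # ...both while checking the body (with v also bound)...
        if e.rt != tc(e.body, {**extended, e.v: e.at}):
            raise TypeError("rec: function return type not correct")
        # ...and while checking the use.
        return tc(e.use, extended)
    raise TypeError("unknown expression")
```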

20.2.4 Recursion in Data

We have seen how to type recursive programs, but this doesn’t yet enable us to create recursive data. We already have one kind of recursive datum—the function type—but this is built-in. We haven’t yet seen how developers can create their own recursive datatypes.

Recursive Datatype Definitions

When we speak of allowing programmers to create recursive data, we are actually talking about three different facilities at once:
  • Creating a new type.

  • Letting instances of the new type have one or more fields.

  • Letting some of these fields refer to instances of the same type.

In fact, once we allow the third, we must allow one more:
  • Allowing non-recursive base-cases for the type.

This confluence of design criteria leads to what is commonly called an algebraic datatype, such as the types supported by our typed language. For instance, consider the following definition of a binary tree of numbers (later [REF], we will discuss how types can be parameterized):

data BinTree:
  | leaf
  | node (value :: Number,
          left :: BinTree,
          right :: BinTree)
end


Observe that without a name for the new datatype, BinTree, we would not have been able to refer back to it in node. Similarly, without the ability to have more than one kind of BinTree, we would not have been able to define leaf, and thus wouldn’t have been able to terminate the recursion. Finally, of course, we need multiple fields (as in node) to construct useful and interesting data. In other words, all three mechanisms are packaged together because they are most useful in conjunction. (However, some languages do permit the definition of stand-alone structures. We will return to the impact of this design decision on the type system later [REF].)

This concludes our initial presentation of recursive types, but we have not actually explained where this new type, BinTree, comes from. It is obviously impractical to pretend that it is baked into our type-checker, because we can’t keep changing it for each new recursive type definition—it would be like modifying our interpreter each time the program contains a recursive function! Instead, we need to find a way to make such definitions intrinsic to the type language. We will return to this problem later [REF].

This style of data definition is sometimes also known as a sum of products. “Product” refers to the way fields combine in one variant: for instance, the legal values of a node are the cross-product of legal values in each of the fields, supplied to the node constructor. The “sum” is the aggregate of all these variants: any given BinTree value is just one of these. (Think of “product” as being “and”, and “sum” as being “or”.)

Introduced Types

Now, what impact does a datatype definition have? First, it introduces a new type; then it uses this to define several constructors, predicates, and selectors. For instance, in the above example, it first introduces BinTree, then uses it to ascribe the following types:

leaf :: BinTree  # a constant
node :: Number, BinTree, BinTree -> BinTree
is-leaf :: BinTree -> Bool
is-node :: BinTree -> Bool
.value :: BinTree(is-node) -> Number
.left :: BinTree(is-node) -> BinTree
.right :: BinTree(is-node) -> BinTree

Do Now!

In what two ways are the last three entries above fictitious?

Observe a few salient facts:
  • Both the constructors create instances of BinTree, not something more refined. We will discuss this design tradeoff later [REF].

  • Both predicates consume values of type BinTree, not “any” value. This is because the type system can already tell us what type a value is. Thus, we only need to distinguish between the variants of that one type.

  • The selectors really only work on instances of the relevant variant—e.g., .value can work only on instances of node, not on instances of leaf—but we don’t have a way to express this in the static type system for lack of a suitable static type. Thus, applying these can only result in a dynamic error, not a static one caught by the type system.
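
These observations can be made concrete with a Python rendering of BinTree (the class-based encoding is my own; the selectors become ordinary attribute access):

```python
from dataclasses import dataclass

class BinTree: pass                   # the one new type the definition introduces

@dataclass
class Leaf(BinTree): pass

@dataclass
class Node(BinTree):
    value: float
    left: BinTree
    right: BinTree

leaf = Leaf()                         # leaf :: BinTree, a constant

def is_leaf(t: BinTree) -> bool:      # consumes a BinTree, not "any" value
    return isinstance(t, Leaf)

def is_node(t: BinTree) -> bool:
    return isinstance(t, Node)

def tree_sum(t: BinTree) -> float:
    # The selectors (.value, .left, .right) only make sense on a Node;
    # using them on a Leaf is a dynamic error, not one caught statically here.
    if is_leaf(t):
        return 0
    return t.value + tree_sum(t.left) + tree_sum(t.right)
```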

There is more to say about recursive types, which we will return to shortly [REF].

Selectors

.value, .left, and .right are selectors: they select parts of the record by name. But here are the two ways in which they are fictitious. First, syntactically: in most languages with “dotted field access”, there is no such stand-alone operator as .value: e.g., you cannot write .value(...). But even setting aside this syntactic matter (which could be addressed by arguing that writing v.value is just an obscure syntax for applying this operator) the more interesting subtlety is the semantic one.

Above, we have given a very particular type to .value. Suppose, however, that this datatype was also defined in the same program:

data Payment:
  | cash(value :: Number)
  | card(number :: Number, value :: Number)
end

This too appears to define a .value operator with the type:

.value :: Payment(is-cash) -> Number
.value :: Payment(is-card) -> Number

or equivalently,

.value :: Payment -> Number

Will the real .value please stand up? How many .value operations are there? Indeed, it would appear that this “operator” freely cross-cuts every datatype definition, and even every module boundary! (These issues are not really specific to types: the cross-cutting nature of field access is independent of them. However, ascribing types forces us to confront these issues, because we cannot ignore the difficulty of typing the operation.)

To put this in perspective, consider two other very different styles of handling selectors:
  • A characteristic of scripting languages is that objects are merely hash tables, and all field access is turned into a hash-table reference on the string representing the field-name. Hence, o.f is just syntactic sugar for looking up the value indexed by "f" in the dictionary associated with o.

  • In Racket, the structure definitions such as
    (struct cash (value))
    (struct card (number value))
    generate distinct selectors: in this case, cash-value and card-value, respectively. Now there is no longer any potential for confusion, because they have different names that can each have distinct types.
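
The contrast between these two styles can be seen in a few lines of Python (an illustrative encoding only; the dictionaries stand in for scripting-language objects):

```python
# Scripting-language style: an object is a hash table, and o.f is
# sugar for looking up the string "f" in it.
cash_payment = {"tag": "cash", "value": 10}
card_payment = {"tag": "card", "number": 1234, "value": 20}

def get_field(o, f):
    return o[f]                       # o.value  ==>  get_field(o, "value")

# Racket-structure style: each variant gets its own, distinctly named
# selector, so the two "value" fields can never be confused.
def cash_value(p):
    assert p["tag"] == "cash"
    return p["value"]

def card_value(p):
    assert p["tag"] == "card"
    return p["value"]
```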

Compiling between these languages then highlights these distinctions. Compiling from, say, Pyret or Java to, say, JavaScript is easy, because all field dereferences turn into dictionary lookups. Compiling from (untyped) Pyret to Racket is especially easy because the languages are so similar—until we get to dotted access. Then, assuming we wish to compile Pyret data definitions to Racket’s corresponding structure definitions, the compiler would have to traverse the Pyret program to gather up all fields with a common name, and turn them into a discriminating selector: for instance, v.value might compile to Racket’s (->value v), where ->value is defined as (given the above two data definitions):
(define (->value v)
  (cond
    [(node? v) (node-value v)]
    [(cash? v) (cash-value v)]
    [(card? v) (card-value v)]))
In contrast, going in the other direction is easy: (node-value v) would check that v is indeed a node, and then access v.value.

Pattern-Matching and Desugaring

Once we have understood the names introduced by datatype definitions, and the nature of selectors, the only thing left is to provide an account of pattern-matching. For instance, we can write the expression

cases (BinTree) t:
  | leaf => e1
  | node(v, l, r) => e2
end


This simply expands into uses of the above predicates, and binding the pieces:

if is-leaf(t):
  e1
else if is-node(t):
  v = t.value
  l = t.left
  r = t.right
  e2
end

In short, this can be done by desugaring, so pattern-matching does not need to be in the core language. This, in turn, means that one language can have many different pattern-matching mechanisms.

Except, that’s not quite so easy. Somehow, the desugaring that generates the code above in terms of if needs to know that the three positional selectors for a node are value, left, and right, respectively. This information is explicit in the type definition but only implicitly present in the use of the pattern-matcher (that, indeed, being the point). Thus, somehow this information must be communicated from definition to use. The desugarer therefore needs something akin to the type environment to accomplish its task.

Observe, furthermore, that expressions such as e1 and e2 cannot be type-checked—indeed, cannot even be reliably identified as expressions—until desugaring expands the use of cases. Thus, desugaring depends on the type environment, while type-checking depends on the result of desugaring. In other words, the two are symbiotic and need to happen, not quite in “parallel”, but rather in lock-step. Thus, building desugaring for a typed language when the syntactic sugar makes assumptions about types is a little more intricate than doing so for an untyped language.

20.2.5 Types, Time, and Space

Types bestow a performance benefit in safe languages. That is because the checks that would have been performed at run-time—e.g., + checking that both its arguments are indeed numbers—are now performed statically. In a typed language, an annotation like :: Number already answers the question of whether or not something is of a particular type; there is nothing to check at run-time. As a result, these type-level predicates can disappear entirely, and with them any need to use them in programs.

This is at some cost to the developer, who must convince the static type system that their program does not induce type errors; due to the limitations of decidability, even programs that might have run without error might run afoul of the type system. Nevertheless, for programs for which this can be done, types provide a notable execution time saving.

Now let’s discuss space. Until now, the language run-time system has needed to store information attached to every value indicating what its type is. This is how it can implement type-level predicates such as is-number, which may be used both by developers and by primitives. If those predicates disappear, so does the space needed to hold information to implement them. Thus, type-tags are no longer necessary. (They would, however, still be needed by the garbage collector, though other representations such as BIBOP [REF] can greatly reduce their space impact.)

The type-like predicates still left are those for variants: is-leaf and is-node, in the example above. These must indeed be applied at run-time. For instance, as we have noted, selectors like .value must perform this check. Of course, some more optimizations are possible. Consider the code generated by desugaring the pattern-matcher: there is no need for the three selectors to implement this check, because control could only have gotten to them after is-node returned a true value. Thus, the run-time system could provide just the desugaring level access to special unsafe primitives that do not perform the check, resulting in generated code such as this:

if is-leaf(t):

  e1

else if is-node(t):

  v = t.value-unsafe

  l = t.left-unsafe

  r = t.right-unsafe

  e2

end

The net result, however, is that the run-time representation must still store enough information to accurately answer these questions. However, previously it needed to use enough bits to record every possible type (and variant). Now, because the types have been statically segregated, for a type with no variants (e.g., there is only one kind of string), there is no need to store any variant information at all; that means the run-time system can use all available bits to store actual dynamic values. (I must reiterate that in practice this representational frugality is reduced by a garbage collector’s need for tags.)

In contrast, when variants are present, the run-time system must sacrifice bits to distinguish between the variants, but the number of variants within a type is obviously far smaller than the number of variants and types across all types. In the BinTree example above, there are only two variants, so the run-time system needs to use only one bit to record which variant of BinTree a value represents.

Observe, in particular, that the type system’s segregation prevents confusion. If there are two different datatypes that each have two variants, in the untyped world all four variants require distinct representations. In contrast, in the typed world their representations can overlap across types, because the static type system will ensure one type’s variants are never confused for those of another. Thus, types give programs a genuine performance benefit in both space (saving representation) and time (eliminating run-time checks).

20.2.6 Types and Mutation

We have now covered most of the basic features of our core language other than mutation. In some ways, types have a simple interaction with mutation, and this is because in a classical setting, they don’t interact at all. Consider, for instance, the following untyped program:


  var x = 10

  x := 5

  x := "something"


What is “the type” of x? It doesn’t really have one: for some time it’s a number, and later (note the temporal word) it’s a string. We simply can’t give it a type. In general, type checking is an atemporal activity: it is done once, before the program runs, and must hence be independent of the specific order in which programs execute. Keeping track of the precise values in the store is hence beyond the reach of a type-checker.

The example above is, of course, easy to statically understand, but we should never be misled by simple examples. Suppose instead we had a program like


  var x = 10

  if is-even(read-number("Enter a number")):

    x := 5

  else:

    x := "something"

  end



Now it is literally impossible to reach any static conclusion about the type of x after the conditional finishes, because only at run-time can we know what the user might have entered.

To avoid this morass, traditional type checkers adopt a simple policy: types must be invariant across mutation. That is, a mutation operation—whether variable mutation or structure mutation—cannot change the type of the mutant. Thus, the examples above would not type. How much flexibility this gives the programmer is, however, a function of the type language. For instance, if we were to admit a more flexible type that stands for “number or string”, then the examples above would type, but x would always have this, less precise, type, and all uses of x would have to contend with its reduced specificity, an issue we will return to later [REF].
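This policy can be seen directly in TypeScript, sketched here as a stand-in for the typed language under discussion: a variable’s type is fixed at its declaration, so every assignment must preserve it, and a union type buys flexibility only at the cost of having to discriminate at each use.

```typescript
// Types are invariant across mutation: x is a number forever.
let x: number = 10;
x = 5;              // fine: still a number
// x = "something"; // static error: string is not assignable to number

// With the more flexible "number or string" type, both assignments
// are allowed, but every use must contend with the reduced specificity.
let y: number | string = 10;
y = "something";    // fine under the union type
const len = typeof y === "string" ? y.length : y; // must discriminate first
```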

In short, mutation is easy to account for in a traditional type system because its rule is simply that, while the value can change in ways below the level of specificity of the type system, the type cannot change. In the case of an operation like := (or our core language’s setC (Interpreting Variables)) this means the type of the expression whose value is being assigned must match that of the mutant. In the case of structure mutation, such as boxes, it means the assigned value must match the box’s contained type.

20.3 The Central Theorem: Type Soundness

We have seen earlier (Program Termination) that certain type languages can offer very strong theorems about their programs: for instance, that all programs in the language terminate. In general, of course, we cannot obtain such a guarantee (indeed, we added general recursion precisely to let ourselves write unbounded loops). However, a meaningful type system—indeed, anything to which we wish to bestow the noble title of a type system—ought to provide some kind of meaningful guarantee that all typed programs enjoy. This is the payoff for the programmer: by typing this program, she can be certain that certain bad things will certainly not happen. Short of this, we have just a bug-finder; while it may be useful, it is not a sufficient basis for building any higher-level tools (e.g., for obtaining security or privacy or robustness guarantees).

We have repeatedly used the term “type system”. A type system is usually a combination of three components: a language of types, a set of type rules, and an algorithm that applies these rules to programs. By largely presenting our type rules embedded in a function, we have blurred the distinction between the second and third of these, but they should still be thought of as intellectually distinct, because they are often presented that way for languages: the former provides a declarative description while the latter an executable one. The distinction becomes relevant in Implementing Subtyping.

What theorem might we want of a type system? Remember that the type checker runs over the static program, before execution. In doing so, it is essentially making a prediction about the program’s behavior: for instance, when it states that a particular complex term has type Number, it is predicting that when run, that term will produce a numeric value. How do we know this prediction is sound, i.e., that the type checker never lies? Every type system should be accompanied by a theorem that proves this.

There is a good reason to be suspicious of a type system, beyond general skepticism. There are many differences between the way a type checker and a program evaluator work:
  • The type checker only sees program text, whereas the evaluator runs over actual stores.

  • The type environment binds identifiers to types, whereas the evaluator’s environment binds identifiers to values or locations.

  • The type checker compresses (even infinite) sets of values into types, whereas the evaluator treats the elements of these sets distinctly.

  • The type checker always terminates, whereas the evaluator might not.

  • The type checker passes over the body of each expression only once, whereas the evaluator might pass over each body anywhere from zero to infinite times.

Therefore, it is unwise to assume that these two will correspond, and historically, they have often failed to do so.

The central result we wish to have for a given type-system is called soundness. It says this. Suppose we are given an expression (or program) e. We type-check it and conclude that its type is t. When we run e, let us say we obtain the value v. Then v will also have type t.

The standard way of proving this theorem is to prove it in two parts, known as progress and preservation. Progress says that if a term passes the type-checker, it will be able to make a step of evaluation (unless it is already a value); preservation says that the result of this step will have the same type as the original. If we interleave these steps (first progress, then preservation; rinse and repeat), we can conclude that the final answer will indeed have the same type as the original, so the type system is indeed sound.

For instance, consider this expression: 5 + (2 * 3). It has the type Number. In a sound type system, progress offers a proof that, because this term types, and is not already a value, it can take a step of execution—which it clearly can. After one step the program reduces to 5 + 6. Sure enough, as preservation proves, this has the same type as the original: Number. Progress again says this can take a step, producing 11. Preservation again shows that this has the same type as the previous (intermediate) expressions: Number. Now progress finds that we are at an answer, so there are no steps left to be taken, and our answer is of the same type as that given for the original expression.

However, this isn’t the entire story. There are two caveats:
  1. The program may not produce an answer at all; it might loop forever. In this case, the theorem strictly speaking does not apply. However, we can still observe that every intermediate term has the same type as the whole expression, so the program is computing meaningfully even if it isn’t producing a value.

  2. Any rich enough language has properties that cannot be decided statically (and others that perhaps could be, but the language designer chose to put off until run-time to reduce the burden on the programmer to make programs pass the type-checker). When one of these properties fails—e.g., the array index being within bounds—there is no meaningful type for the program. Thus, implicit in every type soundness theorem is some set of published, permitted exceptions or error conditions that may occur. The developer who uses a type system implicitly signs on to accepting this set.

As an example of the latter set, the user of a typical typed language acknowledges that vector dereference, list indexing, and so on may all yield exceptions. (A different type system design might make this set a parameter.)

The latter caveat looks like a cop-out. In fact, it is easy to forget that it is really a statement about what cannot happen at run-time: any exception not in this set will provably not be raised. Of course, in languages designed with static types in the first place, it is not clear (except by loose analogy) what these exceptions might be, because there would be no need to define them. But when we retrofit a type system onto an existing programming language—especially languages with only dynamic enforcement, such as Racket or Python—then there is already a well-defined set of exceptions, and the type-checker is explicitly stating that some set of those exceptions (such as “non-function found in application position” or “method not found”) will simply never occur. This is therefore the payoff that the programmer receives in return for accepting the type system’s syntactic restrictions.

20.4 Union Types

Suppose we want to construct a list of zoo animals, of which there are many kinds: armadillos, boa constrictors, and so on. (“In Texas, there ain’t nothing in the middle of the road but a yellow line and dead armadillos.”—Jim Hightower) Currently, we are forced to create a new datatype:

data Animal:

  | armadillo (is-alive :: Bool)

  | boa (length :: Number)


and make a list of these: List<Animal>. The type Animal therefore represents a “union” of armadillo and boa, except the only way to construct such unions is to make a new type every time: if we want to represent the union of animals and plants, we need

data LivingThings:

  | animal (a :: Animal)

  | plant (p :: Plant)


so an actual animal is now one extra “level” deep (wrapped by the constructor a). These datatypes are called tagged unions or discriminated unions, because we must introduce explicit tags (or discriminators)—represented in the program by constructors—such as animal and plant, to tell them apart. In turn, a structure can only reside inside a datatype declaration; we sometimes have to create datatypes with just one variant, such as

data Constraint:

  | eq-con (lhs :: Term) (rhs :: Term)


and everywhere we would have to use the type Constraint, because eq-con is not itself a type, only a variant that can be distinguished at run-time.

Either way, the point of a union type is to represent a disjunction, or “or”. A value’s type is one of the types in the union. A value usually belongs to only one of the types in the union, though this is a function of precisely how the union types are defined, whether there are rules for normalizing them, and so on.

20.4.1 Structures as Types

A natural reaction to this might be, why not lift this restriction? Why not allow each structure to exist on its own, and define a type to be a union of some collection of structures? After all, in languages ranging from C to Racket, programmers can define stand-alone structures without having to wrap them in some other type with a tag constructor! For instance, in raw Racket, we can write
(struct armadillo (alive?))
(struct boa (length))
and a comment that says
;; An Animal is either
;; - (armadillo <boolean>)
;; - (boa <number>)
but without enforced static types, the comparison is messy. However, we can more directly compare with Typed Racket, a typed form of Racket that is built into DrRacket. Here is the same typed code:
(struct: armadillo ([alive? : Boolean]))
(struct: boa ([length : Real])) ;; feet
We can now define functions that consume values of type boa without any reference to armadillos:
;; http://en.wikipedia.org/wiki/Boa_constrictor#Size_and_weight
(define: (big-one? [b : boa]) : Boolean
  (> (boa-length b) 8))
In fact, if we apply this function to any other type, including an armadillo, as in (big-one? (armadillo true)), we get a static error. This is because armadillos are no more related to boas than numbers or strings are.

Of course, we can still define a union of these types:
(define-type Animal (U armadillo boa))
and functions over it:
(define: (safe-to-transport? [a : Animal]) : Boolean
  (cond
    [(boa? a) (not (big-one? a))]
    [(armadillo? a) (armadillo-alive? a)]))
Whereas before we had one type with two variants, now we have three types. It just so happens that two of the types form a union of convenience to define a third.
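The same shape can be sketched in TypeScript, which shares this design: armadillo and boa are stand-alone structure types, functions can consume one of them with no reference to the other, and Animal is a union of convenience defined afterward. All names here are illustrative translations, not the book’s code.

```typescript
// Two stand-alone structure types: no shared datatype wraps them.
type Armadillo = { kind: "armadillo"; alive: boolean };
type Boa = { kind: "boa"; length: number }; // feet

// A function over boas alone, with no reference to armadillos.
function bigOne(b: Boa): boolean {
  return b.length > 8;
}

// The union of convenience, defined from the two existing types.
type Animal = Armadillo | Boa;

function safeToTransport(a: Animal): boolean {
  return a.kind === "boa" ? !bigOne(a) : a.alive;
}
```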

20.4.2 Untagged Unions

It might appear that we still need to have discriminative tags, but we don’t. In languages with union types, the effect of the Option type constructor is often obtained by combining the intended return type with a disjoint one representing failure or none-ness. For instance, here is the Typed Racket moral equivalent of Pyret’s Option<Number>:
(define-type MaybeNumber (U Number Boolean))
For that matter, Boolean may itself be a union of True and False, as it is in Typed Racket, so a more accurate simulation of the option type would be:

(define-type MaybeNumber (U Number False))

More generally, in Typed Racket we could define
(struct: none ())
(define-type (Maybeof T) (U T none))
which would work for all types, because none is a new, distinct type that cannot be confused for any other. This gives us the same benefit as the Option type, except the value we want is not buried one level deep inside a some structure, but is rather available immediately. For instance, consider member, which has this Typed Racket type:
(All (a) (a (Listof a) -> (U False (Listof a))))
If the element is not found, member returns false. Otherwise, it returns the list starting from the element onward (i.e., the first element of the list will be the desired element):

> (member 2 (list 1 2 3))

'(2 3)

To convert this to use Maybeof, we can write
(define: (t) (in-list? [e : t] [l : (Listof t)]) : (Maybeof (Listof t))
  (let ([v (member e l)])
    (if v
        v
        (none))))
which, if the element is not found, returns the value (none), but if it is found, still returns a list

> (in-list? 2 (list 1 2 3))

'(2 3)

so that there is no need to remove the list from a some wrapper.
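The Maybeof idea carries over directly to TypeScript, using null as the distinct “none” type: the found suffix comes back immediately, with no wrapper to unpack. The function name inList is a hypothetical translation of the in-list? above.

```typescript
// Returns the suffix of the list from the element onward, or null
// when the element is absent: an untagged union, no `some` wrapper.
function inList<T>(e: T, l: T[]): T[] | null {
  const i = l.indexOf(e);
  return i === -1 ? null : l.slice(i);
}
```

For example, inList(2, [1, 2, 3]) evaluates to [2, 3], and inList(5, [1, 2, 3]) to null.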

20.4.3 Discriminating Untagged Unions

It’s one thing to put values into unions; we have to also consider how to take them out, in a well-typed manner. In Pyret, we use cases to identify and pull apart the pieces. In particular, when we write

fun is-safe-to-transport(a :: Animal) -> Bool:

  cases (Animal) a:

    | armadillo(is-a) => is-a

    | boa(l) => not(is-big-one(l))



the type of a remains the same in the entire expression. The identifiers is-a and l are bound to a boolean and numeric value, respectively, and is-big-one must be written to consume a number, not a boa. Put in different terms, we cannot have a function is-big-one that consumes boas, because there is no such type.

In contrast, with union types, we do have the boa type. Therefore, we follow the principle that the act of applying predicates to a value narrows the type. For instance, in a hypothetical Pyret, if a programmer were to write

if is-boa(a):

  is-big-one(a)

though a begins as type Animal, after it passes the is-boa test, the type checker would be expected to narrow its type to just the boa branch, so that the application of is-big-one is well-typed. In turn, in the rest of the conditional its type is not boa; in this case, that leaves only one possibility, armadillo. This puts greater pressure on the type-checker’s ability to test and recognize certain patterns, known as if-splitting, without which it would be impossible to program with union types. (See the more complex if-splitting patterns that languages like JavaScript, Python, and Ruby demand.) But it can always default to recognizing just those patterns that the ML-like system would have recognized, such as pattern-matching or type-case.
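TypeScript performs precisely this kind of if-splitting, so the hypothetical Pyret above can be rendered concretely; the types and predicates below are illustrative stand-ins for the text’s names.

```typescript
type Armadillo = { kind: "armadillo"; alive: boolean };
type Boa = { kind: "boa"; length: number };
type Animal = Armadillo | Boa;

// A user-defined predicate whose success narrows the type.
function isBoa(a: Animal): a is Boa { return a.kind === "boa"; }
function isBigOne(b: Boa): boolean { return b.length > 8; }

function describe(a: Animal): string {
  if (isBoa(a)) {
    // Here a : Boa, so applying isBigOne is well-typed.
    return isBigOne(a) ? "big boa" : "small boa";
  } else {
    // In the rest of the conditional, a is not Boa; only
    // Armadillo remains, so a.alive is well-typed.
    return a.alive ? "live armadillo" : "dead armadillo";
  }
}
```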

20.4.4 Retrofitting Types

It is unsurprising that Typed Racket uses union types. They are especially useful when retrofitting types onto existing programming languages whose programs were not defined with an ML-like type discipline in mind, such as in scripting languages. A common principle of such retrofitted types is to statically catch as many dynamic exceptions as possible. Of course, the checker must ultimately reject some programs (unless it implements an interesting idea called soft typing, which rejects no programs but provides information about points where the program would not have been typeable), and if it rejects too many programs that would have run without an error, developers are unlikely to adopt it. Because these programs were written without type-checking in mind, the type checker may therefore need to go to heroic lengths to accept what are considered reasonable idioms in the language.

Consider the following JavaScript function:

var slice = function (arr, start, stop) {

  var result = [];

  for (var i = 0; i <= stop - start; i++) {

    result[i] = arr[start + i];

  }

  return result;

}
It consumes an array and two indices, and produces the sub-array between those indices. For instance, slice([5, 7, 11, 13], 0, 2) produces [5, 7, 11].

In JavaScript, however, developers are free to leave out any or all trailing arguments to a function. Every elided argument is given a special value, undefined, and it is up to the function to cope with this. For instance, a typical implementation of slice would let the user drop the third argument; the following definition

var slice = function (arr, start, stop) {

  if (typeof stop == "undefined")

    stop = arr.length - 1;

  var result = [];

  for (var i = 0; i <= stop - start; i++) {

    result[i] = arr[start + i];

  }

  return result;

}
automatically returns the subarray until the end of the array: thus, slice([5, 7, 11, 13], 2) returns [11, 13].

In Typed JavaScript (built at Brown by Arjun Guha and others; see our Web site), a programmer can explicitly indicate a function’s willingness to accept fewer arguments by including Undefined in a parameter’s union type; slice would then have the type

∀ t : (Array[t] * Int * (Int U Undefined) -> Array[t])

In principle, this means there is a potential type error at the expression stop - start, because stop may not be a number. However, the assignment to stop sets it to a numeric type precisely when it was elided by the user. In other words, in all control paths, stop will eventually have a numeric type before the subtraction occurs, so this function is well-typed. Of course, this requires the type-checker to be able to reason about both control-flow (through the conditional) and state (through the assignment) to ensure that this function is well-typed; but Typed JavaScript can, and can thus bless functions such as this.
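TypeScript performs the same combination of control-flow and state reasoning, so the pattern can be shown directly; this is a sketch in TypeScript, not the actual Typed JavaScript system described above.

```typescript
// An elided trailing argument arrives as undefined. The checker sees
// that the assignment runs exactly when stop is undefined, so on all
// control paths stop is numeric by the time the subtraction occurs.
function slice<T>(arr: T[], start: number, stop: number | undefined): T[] {
  if (typeof stop === "undefined") {
    stop = arr.length - 1; // after this, stop : number on every path
  }
  const result: T[] = [];
  for (let i = 0; i <= stop - start; i++) {
    result[i] = arr[start + i];
  }
  return result;
}
```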

20.4.5 Design Choices

In languages with union types, it is common to have
  • Stand-alone structure types (often represented using classes), rather than datatypes with variants.

  • Ad hoc collections of structures to represent particular types.

  • The use of sentinel values to represent failure.

To convert programs written in this style to an ML-like type discipline would be extremely onerous. Therefore, many retrofitted type systems adopt union types to ease the process of typing.

Of the three properties above, the first seems morally neutral, but the other two warrant more discussion. We will address them in reverse order.
  • Let’s tackle sentinels first. In many cases, sentinels ought to be replaced with exceptions, but in many languages, exceptions can be very costly. Thus, developers prefer to make a distinction between truly exceptional situations—that ought not occur—and situations that are expected in the normal course of operation. Checking whether an element is in a list and failing to find it is clearly in the latter category (if we already knew the element was or wasn’t present, there would be no need to run this predicate). In the latter case, using sentinels is reasonable.

    However, we must square this with the observation that failure to check for exceptional sentinel values is a common source of error—and indeed, security flaws—in C programs. This is easy to reconcile. In C, the sentinel is of the same type (or at least, effectively the same type) as the regular return value, and furthermore, there are no run-time checks. Therefore, the sentinel can be used as a legitimate value without a type error. As a result, a sentinel of 0 can be treated as an address into which to allocate data, thus potentially crashing the system. In contrast, our sentinel is of a truly new type that cannot be used in any computation. We can easily reason about this by observing that no existing functions in our language consume values of type none.

  • Setting aside the use of “ad hoc”, which is pejorative, are different groupings of a set of structures a good idea? In fact, such groupings occur even in programs using an ML-like discipline, when programmers want to carve different sub-universes of a larger one. For instance, Pyret programmers would need to use a type like

    data SExp:

      | num-sexp (n :: Number)

      | str-sexp (s :: String)

      | list-sexp (l :: List<SExp>)


    to represent s-expressions. If a function now operates on just some subset of these terms—say just numbers and lists of numbers—they must create a fresh type, and convert values between the two types even though their underlying representations are essentially identical. As another example, consider the set of CPS expressions. This is clearly a subset of all possible expressions, but if we were to create a fresh datatype for it, we would not be able to use any existing programs that process expressions—such as the interpreter.

In other words, union types appear to be a reasonable variation on the ML-style type system we have seen earlier. However, even within union types there are design variations, and these have consequences. For instance, can the type system create new unions, or are user-defined (and named) unions permitted? That is, can a Racket expression like this
(if (phase-of-the-moon)
    2
    true)
be allowed to type (to (U Number Boolean)), or is it a type error to introduce unions that have not previously been named and explicitly identified? Typed Racket provides the former: it will construct truly ad hoc unions. This is arguably better for importing existing code into a typed setting, because it is more flexible. However, it is less clear whether this is a good design for writing new code, because unions the programmer did not intend can occur and there is no way to prevent them. This offers an unexplored corner in the design space of programming languages.

20.5 Nominal Versus Structural Systems

In our initial type-checker, two types were considered equivalent if they had the same structure.


Where was this notion of equality exercised?

In fact, we offered no mechanism for naming types at all, so it is not clear what alternative we had.

Now consider Typed Racket. A developer can write
(define-type NB1 (U Number Boolean))
(define-type NB2 (U Number Boolean))
followed by
(define: v : NB1 5)
Suppose the developer also defines the function
(define: (f [x : NB2]) : NB2 x)
and tries to apply f to v, i.e., (f v): should this application type or not?

There are two perfectly reasonable interpretations. One is to say that v was declared to be of type NB1, which is a different name than NB2, and hence should be considered a different type, so the above application should result in an error. Such a system is called nominal, because the name of a type is paramount for determining type equality.

In contrast, another interpretation is that because the structure of NB1 and NB2 are identical, there is no way for a developer to write a program that behaves differently on values of these two types, so these two types should be considered identical. (If you want to get especially careful, you would note that there is a difference between being considered the same and actually being the same. We won’t go into this issue here, but consider the implication for a compiler writer choosing representations of values, especially in a language that allows run-time inspection of the static types of values.) Such a type system is called structural, and would successfully type the above expression. Typed Racket follows a structural discipline, again to reduce the burden of importing existing untyped code, which—in Racket—is usually written with a structural interpretation in mind. In fact, Typed Racket not only types (f v), it prints the result as having type NB1, despite the return type annotation on f!

The difference between nominal and structural typing is most commonly contentious in object-oriented languages, and we will return to this issue briefly later [REF]. However, the point of this section is to illustrate that these questions are not intrinsically about “objects”. Any language that permits types to be named—as all must, for programmer sanity—must contend with this question: is naming merely a convenience, or are the choices of names intended to be meaningful? Choosing the former answer leads to structural typing, while choosing the latter leads down the nominal path.
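TypeScript, like Typed Racket, answers this question structurally, so the NB1/NB2 scenario can be replayed in it verbatim: the two aliases name the same structure, and values flow freely between them.

```typescript
// Two differently named aliases for the same structure.
type NB1 = number | boolean;
type NB2 = number | boolean;

const v: NB1 = 5;

function f(x: NB2): NB2 { return x; }

// Well-typed in a structural system: the names differ,
// but the structures are identical.
const result = f(v);
```

In a nominal system, the same application would be rejected, because NB1 and NB2 are distinct names.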

20.6 Intersection Types

Since we’ve just explored union types, you must naturally wonder whether there are also intersection types. Indeed there are.

If a union type means that a value (of that type) belongs to one of the types in the union, an intersection type clearly means the value belongs to all the types in the intersection: a conjunction, or “and”. This might seem strange: how can a value belong to more than one type?

As a concrete answer, consider overloaded functions. For instance, in some languages + operates on both numbers and strings; given two numbers it produces a number, and given two strings it produces a string. In such a language, what is the proper type for +? It is not (Number, Number -> Number) alone, because that would reject its use on strings. By the same reasoning, it is not (String, String -> String) alone either. It is not even

(U (Number, Number -> Number)

   (String, String -> String))

because + is not just one of these functions: it truly is both of them. We could ascribe the type

((Number U String), (Number U String) -> (Number U String))

reflecting the fact that each argument, and the result, can be only one of these types, not both. Doing so, however, leads to a loss of precision.

Do Now!

In what way does this type lose precision?

Observe that with this type, the return type on all invocations is (Number U String). Thus, on every return we must distinguish between numeric and string returns, or else we will get a type error. Thus, even though we know that if given two numeric arguments we will get a numeric result, this information is lost to the type system.

More subtly, this type permits each argument’s type to be chosen independently of the other. Thus, according to this type, the invocation 3 + "x" is perfectly valid (and produces a value of type (Number U String)). But of course the addition operation we have specified is not defined for these inputs at all!

Thus the proper type to ascribe this form of addition is

(∧ (Number, Number -> Number)

   (String, String -> String))

where ∧ should be reminiscent of the conjunction operator in logic. This permits invocation with two numbers or two strings; an invocation with two numbers has a numeric result type; one with two strings has a string result type; and nothing else. This corresponds precisely to our intended behavior for overloading (sometimes also called ad hoc polymorphism). Observe that this only handles a finite number of overloaded cases.
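TypeScript supports exactly this construction: an intersection of function types is callable at each conjunct, and each invocation gets the precise result type the ∧ type promises. A sketch, with plus as an illustrative name:

```typescript
// The intersection type from the text: both a function on two
// numbers and a function on two strings.
type Plus = ((a: number, b: number) => number) &
            ((a: string, b: string) => string);

const plus: Plus = (a: any, b: any) => a + b;

const n = plus(2, 3);     // n : number, not (number | string)
const s = plus("a", "b"); // s : string
// plus(3, "x") would be a static error: no conjunct accepts that pair.
```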

20.7 Recursive Types

Now that we’ve seen union types, it pays to return to our original recursive datatype formulation. If we accept the variants as type constructors, can we write the recursive type as a union over these? For instance, returning to BinTree, shouldn’t we be able to describe it as equivalent to

leaf U node(Number, BinTree, BinTree)

thereby showing that node takes three parameters? Except, what are the types of those three parameters? In the type we’ve written above, BinTree is either built into the type language (which is unsatisfactory) or unbound. Perhaps we mean

BinTree = leaf U node(Number, BinTree, BinTree)

Except now we have an equation that has no obvious solution (remember ω (Program Termination)?).

This situation should be familiar from recursion in values (Syntactic Sugar for Recursive Function Definitions). Then, we invented a recursive function constructor (and showed its implementation) to circumvent this problem. We similarly need a recursive type constructor. This is conventionally called μ (the Greek letter “mu”). With it, we would write the above type as

μ BinTree : leaf U node(Number, BinTree, BinTree)

μ is a binding construct; it binds BinTree to the entire type written after it, including the recursive binding of BinTree itself. In practice, of course, this entire recursive type is the one we wish to call BinTree:

BinTree = μ BinTree : leaf U node(Number, BinTree, BinTree)

Though this looks like a circular definition, notice that the name BinTree on the right does not depend on the one to the left of the equation: e.g., we could rewrite this as

BinTree = μ T : leaf U node(Number, T, T)

In other words, this definition of BinTree truly can be thought of as syntactic sugar and replaced everywhere in the program without fear of infinite regress.

At a semantic level, there are usually two very different ways of thinking about the meaning of types bound by μ: they can be interpreted as isorecursive or equirecursive. The distinction between these is, however, subtle and beyond the scope of this chapter. It suffices to note that a recursive type can be treated as equivalent to its unfolding. For instance, if we define a numeric list type as

NumL = μ T : empty U link(Number, T)

then NumL is equivalent to each of its unfoldings:
  μ T : empty U link(Number, T)

= empty U link(Number, μ T : empty U link(Number, T))

= empty U link(Number, empty)

        U link(Number, link(Number, μ T : empty U link(Number, T)))

= ...

and so on (iso- and equi-recursiveness differ in precisely what the notion of equality is: definitional equality or isomorphism). At each step we simply replace the T parameter with the entire type. As with value recursion, this means we can “get another” NumL constructor upon demand. Put differently, the type of a list can be written as the union of zero or arbitrarily many elements; this is the same as the type that consists of zero, one, or arbitrarily many elements; and so on. Any list of numbers fits all (and precisely) these types.
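TypeScript's recursive type aliases play the role of μ here, so NumL can be transcribed almost literally (the names link and sum are hypothetical helpers for this illustration):

```typescript
// NumL = μ T : empty U link(Number, T), sketched as a recursive alias.
type NumL = "empty" | { first: number; rest: NumL };

const link = (first: number, rest: NumL): NumL => ({ first, rest });

// Because NumL is equivalent to its unfolding, a consumer can keep
// peeling off link constructors on demand.
function sum(l: NumL): number {
  return l === "empty" ? 0 : l.first + sum(l.rest);
}

const threeList: NumL = link(1, link(2, link(3, "empty")));
```

Each recursive use of NumL inside the alias corresponds to one application of the unfolding shown above.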

Observe that even with this informal understanding of μ, we can now provide a type to ω, and hence to Ω.


Exercise

Ascribe types to ω and Ω.

20.8 Subtyping

Imagine we have a typical binary tree definition; for simplicity, we’ll assume that all the values are numbers. We will write this in Typed Racket to illustrate a point:
(struct: mt ())
(struct: nd ([v : Number] [l : BT] [r : BT]))
(define-type BT (U mt nd))
Now consider some concrete tree values:

> (mt)

- : mt


> (nd 5 (mt) (mt))

- : nd


Observe that each structure constructor makes a value of its own type, not a value of type BT. But consider the expression (nd 5 (mt) (mt)): the definition of nd declares that the sub-trees must be of type BT, and yet we are able to successfully give it values of type mt.

Obviously, it is not coincidental that we have defined BT in terms of mt and nd. However, it does indicate that when type-checking, we cannot simply be checking for type equality, at least not as we have so far. Instead, we must be checking that one type “fits into” the other. This notion of fitting is called subtyping (and the act of being fit, subsumption).

The essence of subtyping is to define a relation, usually denoted by <:, that relates pairs of types. We say S <: T if a value of type S can be given where a value of type T is expected: in other words, subtyping formalizes the notion of substitutability (i.e., anywhere a value of type T was expected, it can be replaced with—substituted by—a value of type S). When this holds, S is called the subtype and T the supertype. It is useful (and usually accurate) to take a subset interpretation: if the values of S are a subset of T, then an expression expecting T values will not be unpleasantly surprised to receive only S values.

Subtyping has a pervasive effect on the type system. We have to reexamine every kind of type and understand its interaction with subtyping. For base types, this is usually quite obvious: disjoint types like Number, String, etc., are all unrelated to each other. (In languages where one base type is used to represent another—for instance, in some scripting languages numbers are merely strings written with a special syntax, and in other languages, booleans are merely numbers—there might be subtyping relationships even between base types, but these are not common.) However, we do have to consider how subtyping interacts with every single compound type constructor.

In fact, even our very diction about types has to change. Suppose we have an expression of type T. Normally, we would say that it produces values of type T. Now, we should be careful to say that it produces values of up to or at most T, because it may only produce values of a subtype of T. Thus every reference to a type should implicitly be cloaked in a reference to the potential for subtyping. To avoid pestering you I will refrain from doing this, but be wary that it is possible to make reasoning errors by not keeping this implicit interpretation in mind.

20.8.1 Subtyping Unions

Let us see how unions interact with subtyping. Clearly, every sub-union is a subtype of the entire union. In our running example, clearly every mt value is also a BT; likewise for nd. Thus,
mt <: BT
nd <: BT
As a result, (mt) also has type BT, thus enabling the expression (nd 5 (mt) (mt)) to itself type, and to have the type nd (and hence, also the type BT). In general,

S <: (S U T)

T <: (S U T)

(we write what seems to be the same rule twice just to make clear it doesn’t matter which “side” of the union the subtype is on). This says that a value of S can be thought of as a value of S U T, because any expression of type S U T can indeed contain a value of type S.
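These two rules can be seen at work in TypeScript, whose | is an untagged union (the function name describe is a hypothetical helper for this illustration):

```typescript
// Number <: (Number U String) and String <: (Number U String):
type NS = number | string;

function describe(v: NS): string {
  return typeof v === "number" ? "num" : "str";
}

const n: number = 5;
const s: string = "x";

// Both calls type-check by subsumption into the union.
const a = describe(n);
const b = describe(s);
```

The reverse direction fails, as the “Do Now!” below explores: a variable of type NS cannot be passed where a plain number is expected without first discriminating it.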

20.8.2 Subtyping Intersections

While we’re at it, we should also briefly visit intersections. As you might imagine, intersections behave dually:

(S ∧ T) <: S

(S ∧ T) <: T

To convince yourself of this, take the subset interpretation: if a value is both an S and a T, then it can certainly be used as either one of them.

Do Now!

Why are the following two not valid subsumptions?
  1. (S U T) <: S

  2. T <: (S ∧ T)

The first is not valid because a value of type T is a perfectly valid element of type (S U T). For instance, a number is a member of type (string U number). However, a number cannot be supplied where a value of type string is expected.

As for the second, in general, a value of type T is not also a value of type S. Any consumer of a (S ∧ T) value is expecting to be able to treat it as both a T and a S, and the latter is not justified. For instance, given our overloaded + from before, if T is Number, Number -> Number, then a function of this type will not know how to operate on strings.

20.8.3 Subtyping Functions

We have seen one more constructor: functions. (We have also seen parametric datatypes [REF]; in this edition, exploring subtyping for them is left as an exercise for the reader.) We must therefore determine the rules for subtyping when either type can be a function. Since we usually assume functions are disjoint from all other types, we only need to consider when one function type is a subtype of another: i.e., when is

(S1 -> T1) <: (S2 -> T2)

? For convenience, let us call the type (S1 -> T1) f1, and (S2 -> T2) f2. The question then is: if an expression is expecting functions of the f2 type, when can we safely give it functions of the f1 type? It is easiest to think this through using the subset interpretation.

Consider a use of the f2 type. It returns values of type T2. Thus, the context surrounding the function application is satisfied with values of type T2. Clearly, if T1 is the same as T2, the use of f2 would continue to type; similarly, if T1 consists of a subset of T2 values, it would still be fine. The only problem is if T1 has more values than T2, because the context would then encounter unexpected values that would result in undefined behavior. In other words, we need that T1 <: T2. Observe that the “direction” of containment is the same as that for the entire function type; this is called covariance (both vary in the same direction). This is perhaps precisely what you expected.

By the same token, you might expect covariance in the argument position as well: namely, that S1 <: S2. This would be predictable, and wrong. Let’s see why.

An application of a function with the f2 type provides parameter values of type S2. Suppose we instead substitute the function with one of type f1. If we had that S1 <: S2, that would mean that the new function accepts only values of type S1, a strictly smaller set. That means there may be some inputs (specifically, those in S2 that are not in S1) that the application is free to provide, on which the substituted function is not defined, again resulting in undefined behavior. To avoid this, we have to make the subsumption go in the other direction: the substituting function should accept at least as many inputs as the one it replaces. Thus we need S2 <: S1, and we say the argument position is contravariant: it goes against the direction of subtyping.

Putting together these two observations, we obtain a subtyping rule for functions (and hence also methods):

(S2 <: S1) and (T1 <: T2) => (S1 -> T1) <: (S2 -> T2)

20.8.4 Implementing Subtyping

Of course, these rules assume that we have modified the type-checker to respect subtyping. The essence of subtyping is a rule that says, if an expression e is of type S, and S <: T, then e also has type T. While this sounds intuitive, it is also immediately problematic for two reasons:
  • Until now all of our type rules have been syntax-driven, which is what enabled us to write a recursive-descent type-checker. Now, however, we have a rule that applies to all expressions, so we can no longer be sure when to apply it.

  • There could be many levels of subtyping. As a result, it is no longer obvious when to “stop” subtyping. In particular, whereas before type-checking was able to calculate the type of an expression, now we have many possible types for each expression; if we return the “wrong” one, we might get a type error (due to that not being the type expected by the context) even though there exists some other type that was the one expected by the context.

What these two issues point to is that the description of subtyping we are giving here is fundamentally declarative: we are saying what must be true, but not showing how to turn it into an algorithm. For each actual type language, there is a less or more interesting problem in turning this into algorithmic subtyping: an actual algorithm that realizes a type-checker (ideally one that types exactly those programs that would have typed under the declarative regime, i.e., one that is both sound and complete).

20.9 Object Types

As we’ve mentioned earlier, types for objects are typically riven into two camps: nominal and structural. Nominal types are familiar to most programmers through Java, so I won’t say much about them here. Structural types for objects dictate that an object’s type is itself a structured object, consisting of names of fields and their types. For instance, an object with two methods, add1 and sub1 [REF], would have the type

{add1 :: Number -> Number, sub1 :: Number -> Number}

(For future reference, let’s call this type addsub.) Type-checking would then follow along predictable lines: for field access we would simply ensure the field exists and would use its declared type for the dereference expression; for method invocation we would have to ensure not only that the member exists but that it has a function type. So far, so straightforward.

Object types become complicated for many reasons. (Whole books are therefore devoted to this topic. Abadi and Cardelli’s A Theory of Objects is important but now somewhat dated. Bruce’s Foundations of Object-Oriented Languages: Types and Semantics is more modern, and also offers more gentle exposition. Pierce covers all the necessary theory beautifully.) The complications include:
  • Self-reference. What is the type of self? It must be the same type as the object itself, since any operation that can be applied to the object from the “outside” can also be applied to it from the “inside” using self. This means object types are recursive types.

  • Access controls: private, public, and other restrictions. These lead to a distinction in the type of an object from “outside” and “inside”.

  • Inheritance. Not only do we have to give a type to the parent object(s), what is visible along inheritance paths may, again, differ from what is visible from the “outside”.

  • The interplay between multiple-inheritance and subtyping.

  • The relationship between classes and interfaces in languages like Java, which has a run-time cost.

  • Mutation.

  • Casts.

and so on. Some of these problems simplify in the presence of nominal types, because given a type’s name we can determine everything about its behavior (the type declarations effectively become a dictionary through which the object’s description can be looked up on demand), which is one argument in favor of nominal typing. (Note that Java’s approach is not the only way to build a nominal type system. We have already argued that Java’s class system needlessly restricts the expressive power of programmers [REF]; in turn, Java’s nominal type system needlessly conflates types (which are interface descriptions) and implementations. It is, therefore, possible to have much better nominal type systems than Java’s. Scala, for instance, takes significant steps in this direction.)

A full exposition of these issues will take much more room than we have here. For now, we will limit ourselves to one interesting question. Remember that we said subtyping forces us to consider every type constructor? The structural typing of objects introduces one more: the object type constructor. We therefore have to understand its interaction with subtyping.

Before we do, let’s make sure we understand what an object type even means. Consider the type addsub above, which lists two methods. What objects can be given this type? Obviously, an object with just those two methods, with precisely those two types, is eligible. Equally obviously, an object with only one and not the other of those two methods, no matter what else it has, is not. But the phrase “no matter what else it has” is meant to be leading. What if an object represents an arithmetic package that also contains methods add and mult, in addition to the above two (all of the appropriate type)? In that case we certainly have an object that can supply those two methods, so the arithmetic package certainly has type addsub. Its other methods are simply inaccessible using type addsub.

Let us write out the type of this package, in full, and call this type asam:

{add1 :: Number -> Number,

 sub1 :: Number -> Number,

 add  :: Number, Number -> Number,

 mult :: Number, Number -> Number}

What we have just argued is that an object of type asam should also be allowed to claim the type addsub, which means it can be substituted in any context expecting a value of type addsub. In other words, we have just said that we want asam <: addsub:

{add1 :: Number -> Number,            {add1 :: Number -> Number,

 sub1 :: Number -> Number,         <:  sub1 :: Number -> Number}

 add  :: Number, Number -> Number,

 mult :: Number, Number -> Number}

This may momentarily look confusing: we’ve said that subtyping follows set inclusion, so we would expect the smaller set on the left and the larger set on the right. Yet, it looks like we have a “larger type” (certainly in terms of character count) on the left and a “smaller type” on the right.

To understand why this is sound, it helps to develop the intuition that the “larger” the type, the fewer values it can have. Every object that has the four methods on the left clearly also has the two methods on the right. However, there are many objects that have the two methods on the right that fail to have all four on the left. If we think of a type as a constraint on acceptable value shapes, the “bigger” type imposes more constraints and hence admits fewer values. Thus, though the types may appear to be of the wrong sizes, everything is well because the sets of values they describe are of the expected sizes.

More generally, this says that by dropping fields from an object’s type, we obtain a supertype. This is called width subtyping, because the subtype is “wider”, and we move up the subtyping hierarchy by adjusting the object’s “width”. We see this even in the nominal world of Java: as we go up the inheritance chain a class has fewer and fewer methods and fields, until we reach Object, the supertype of all classes, which has the fewest. Thus for all class types C in Java, C <: Object. (Somewhat confusingly, the terms narrowing and widening are sometimes used, but with what some might consider the opposite meaning. To widen is to go from subtype to supertype, because it goes from a “narrower” (smaller) to a “wider” (bigger) set. These terms evolved independently, but unfortunately not consistently.)
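Width subtyping is exactly how TypeScript's structural object types behave, so the asam/addsub example can be run directly (the member implementations below are hypothetical; the chapter only specifies their types):

```typescript
// addsub: the two-method type.
type AddSub = {
  add1: (n: number) => number;
  sub1: (n: number) => number;
};

// asam: an arithmetic package with four methods.
const asam = {
  add1: (n: number) => n + 1,
  sub1: (n: number) => n - 1,
  add:  (a: number, b: number) => a + b,
  mult: (a: number, b: number) => a * b,
};

// asam <: addsub by width subtyping: the wider object is accepted
// where the narrower type is expected; add and mult simply become
// inaccessible through the AddSub view.
function bump(o: AddSub, n: number): number {
  return o.add1(n);
}

const r = bump(asam, 4);
```

Inside bump, attempting o.add would be a static error: the extra members exist at run-time but are hidden by the supertype.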

As you might expect, there is another important form of subtyping, which is within a given member. This simply says that any particular member can be subsumed to a supertype in its corresponding position. For obvious reasons, this form is called depth subtyping.


Exercise

Construct two examples of depth subtyping. In one, give the field itself an object type, and use width subtyping to subtype that field. In the other, give the field a function type.

Java has limited depth subtyping, preferring types to be invariant down the object hierarchy because this is a safe option for conventional mutation.

The combination of width and depth subtyping covers the most interesting cases of object subtyping. A type system that implemented only these two would, however, needlessly annoy programmers. Other convenient (and mathematically necessary) rules include the ability to permute names, reflexivity (every type is a subtype of itself, because it is more convenient to interpret the subtype relationship as ⊆), and transitivity. Languages like Typed JavaScript employ all these features to provide maximum flexibility to programmers.

20.10 Explicit Parametric Polymorphism

Which of these is the same?
  • List<String>

  • List<String>

  • List<String>

Actually, none of these is quite the same. But the first and third are very alike, because the first is in Java and the third in typed Pyret, whereas the second, in C++, is different. All clear? No? Good, read on!

20.10.1 Parameterized Types

The language we have been programming in already demonstrates the value of parametric polymorphism. For instance, the type of map is given as

((A -> B), List<A> -> List<B>)

which says that for all types A and B, map consumes a function that generates B values from A values, and a list of A values, and generates the corresponding list of B values. Here, A and B are not concrete types; rather, each is a type variable (in our terminology, these should properly be called “type identifiers” because they don’t change within the course of an instantiation; however, we will stick to the traditional terminology).

A different way to understand this is that there is actually an infinite family of map functions. For instance, there is a map that has this type:

((Number -> String), List<Number> -> List<String>)

and another one of this type (nothing says the types have to be base types):

((Number -> (Number -> Number)),

 List<Number> -> List<(Number -> Number)>)

and yet another one of this type (nothing says A and B can’t be the same):

((String -> String), List<String> -> List<String>)

and so on. Because they have different types, they would need different names: map-num-str, map-num-num-to-num, map-str-str, and so on. But that would make them different functions, so we’d always have to refer to a specific map rather than the single, generic one.

Obviously, it is impossible to load all these functions into our standard library: there’s an infinite number of them! We’d rather have a way to obtain each of these functions on demand. Our naming convention offers a hint: it is as if map takes two parameters, which are types. Given the pair of types as arguments, we can then obtain a map that is customized to that particular type. This kind of parameterization over types is called parametric polymorphism. (Not to be confused with the “polymorphism” of objects, which we will discuss below [REF].)

20.10.2 Making Parameters Explicit

In other words, we’re effectively saying that map is actually a function of perhaps four arguments, two of them types and two of them actual values (a function and a list). In a language with explicit types, we might try to write

fun map(A :: ???, B :: ???, f :: (A -> B), l :: List<A>)

    -> List<B>:


but this raises some questions. First, what goes in place of the ???s? These are the types that are going to take the place of A and B on an actual use. But if A and B are bound to types, then what is their type? Do we really want to call map with four arguments every time we invoke it? Do we want to be passing types at the same time as values? If these are types but they are only provided at run-time invocation, how can we type-check clients, who need to know what kind of list they are getting? The answers to these questions actually lead to a very rich space of polymorphic type systems, most of which we will not explore here.

Observe that once we start parameterizing, more code than we expect ends up being parameterized. For instance, consider the type of the humble link. Its type really is parametric over the type of values in the list (even though it doesn’t actually depend on those values!—more on that in a bit [REF]) so every use of link must be instantiated at the appropriate type. For that matter, even empty must be instantiated to create an empty list of the correct type! Of course, Java and C++ programmers are familiar with this pain.

20.10.3 Rank-1 Polymorphism

Instead, we will limit ourselves to one particularly useful and tractable point in this space, which is the type system of Standard ML, of earlier versions of Haskell, roughly that of Java and C# with generics, and roughly that obtained using templates in C++. This language defines what is called predicative, rank-1, or prenex polymorphism.

We first divide the world of types into two groups. The first group consists of the type language we’ve used until now, but extended to include type variables; these are called monotypes. The second group, known as polytypes, consists of parameterized types; these are conventionally written with a ∀ prefix, a list of type variables, and then a type expression that might use these variables. Thus, the type of map would be:

∀ A, B : ((A -> B), List<A> -> List<B>)

Since “∀” is the logic symbol for “for all”, you would read this as: “for all types A and B, the type of map is...”.

In rank-1 polymorphism, the type variables can only be substituted with monotypes. (Furthermore, these can only be concrete types, because there would be nothing left to substitute any remaining type variables.) As a result, we obtain a clear separation between the type variable-parameters and regular parameters. We don’t need to provide a “type annotation” for the type variables because we know precisely what kind of thing they can be. This produces a relatively clean language that still offers considerable expressive power. (Impredicative languages erase the distinction between monotypes and polytypes, so a type variable can be instantiated with another polymorphic type.)

Observe that because type variables can only be replaced with monotypes, they are all independent of each other. As a result, all type parameters can be brought to the front of the parameter list. In Pyret, for instance, the following defines a polymorphic identity function:
<pyret-poly-id> ::=

    fun<T> id(x :: T) -> T: x;

where T is the type parameter. At every use, we separate the provision of type parameters from value parameters by using <...> for the type parameters and (...) for the values. In general, then, we can write types in the form ∀ tv, ... : t where the tv are type variables and t is a monotype (that might refer to those variables). This justifies not only the syntax but also the name “prenex”. It will prove to also be useful in the implementation.
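TypeScript generics are likewise rank-1, so the same identity function can be written and instantiated there (the names idNum and idStr are hypothetical, mirroring the instantiations discussed in the next section):

```typescript
// ∀ T : (T -> T), with the quantifier at the front (prenex form).
function id<T>(x: T): T {
  return x;
}

// Instantiations at particular monotypes: each picks one member of
// the infinite family of identity functions.
const idNum: (x: number) => number = id;
const idStr: (x: string) => string = id;

const a = idNum(5);
const b = idStr("x");
// idNum("x") would be a static type error, not a run-time failure.
```

Note that T can itself only stand for a monotype here: there is no way to pass a ∀-type as the type argument, which is precisely the rank-1 restriction.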

20.10.4 Interpreting Rank-1 Polymorphism as Desugaring

The simplest implementation of this feature is to view it as a form of desugaring: this is essentially the interpretation taken by C++. (Put differently, because C++ has a macro system in the form of templates, by a happy accident it obtains a form of rank-1 polymorphism through the use of templates.) Consider the above polymorphic identity function (<pyret-poly-id>). Suppose the implementation is that, on every provision of a type to the name, it replaces the type variable with the given type in the body. Then, given a concrete type for T, the above definition yields a procedure of one argument of type (T -> T) (where T is appropriately substituted). Thus we can instantiate id at many different types—

id-num = id<Number>

id-str = id<String>

thereby obtaining identity functions at each of those types:

check:

  id-num(5) is 5

  id-str("x") is "x"

end

In contrast, expressions like

id-num("x")

id-str(5)

will, as we would expect, fail to type-check (rather than fail at run-time).

However, this approach has two important limitations.

  1. Let’s try to define a recursive polymorphic function, such as filter. Earlier we have said that we ought to instantiate every single polymorphic value (such as even cons and empty) with types, but to keep our code concise we’ll rely on the fact that the underlying typed language already does this, and focus just on type parameters for filter. Here’s the code:

    fun<T> filter(pred :: (T -> Bool), l :: List<T>) -> List<T>:

      cases (List) l:

        | empty => empty

        | link(f, r) =>

          if pred(f):

            link(f, filter<T>(pred, r))

          else:

            filter<T>(pred, r);



    Observe that at the recursive uses of filter, we must instantiate it with the appropriate type.

    This is a perfectly good definition. There’s just one problem. If we try to use it—e.g.,

    filter-num = filter<Number>

    the implementation will not terminate. This is because the desugarer is repeatedly trying to make new copies of the code of filter at each recursive call.


    Exercise

    If, in contrast, we define a local helper function that performs the recursion, this problem can be made to disappear. Can you figure out that version?

  2. Consider two instantiations of the identity function. They would necessarily be different because they are two different pieces of code residing at different locations in memory.Indeed, the use of parametric polymorphism in C++ is notorious for creating code bloat. However, all this duplication is unnecessary! There’s absolutely nothing in the body of id, for instance, that actually depends on the type of the argument. Indeed, the entire infinite family of id functions can share just one implementation. The simple desugaring strategy fails to provide this.

In other words, the desugaring-based strategy, which is essentially an implementation by substitution, has largely the same problems we saw earlier with regard to substitution as an implementation of parameter instantiation (From Substitution to Environments). However, in other cases substitution also gives us a ground truth for what we expect as the program’s behavior. The same will be true with polymorphism, as we will soon see [REF].

Observe that one virtue to the desugaring strategy is that it does not require our type checker to “know” about polymorphism. Rather, the core type language can continue to be monomorphic, and all the (rank-1) polymorphism is handled entirely through expansion. This offers a cheap strategy for adding polymorphism to a language, though—as C++ shows—it also introduces significant overheads.

Finally, though we have only focused on functions, the preceding discussion applies equally well to data structures.

20.10.5 Alternate Implementations

There are other implementation strategies that don’t suffer from these problems. We won’t go into them here, but the essence is to memoize expansion ([REF]). Because we can be certain that, for a given set of type parameters, we will always get the same typed body, we never need to instantiate a polymorphic function at the same type twice. This avoids the infinite loop. If we type-check the instantiated body once, we can avoid checking at other instantiations of the same type (because the body will not have changed). Furthermore, we do not need to retain the instantiated sources: once we have checked the expanded program, we can dispose of the expanded terms and retain just one copy at run-time. This avoids all the problems discussed in the pure desugaring strategy shown above, while retaining the benefits.
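The memoization idea can be sketched in a few lines. The names below (instantiate, the cache, the stand-in body) are hypothetical scaffolding for this illustration, not part of any real compiler; the point is only that each distinct type key triggers expansion and checking at most once, which is what breaks the infinite loop for recursive functions like filter:

```typescript
// Memoized instantiation of a polymorphic definition: one expansion
// per distinct type argument, looked up by a type key.
const cache = new Map<string, (x: unknown) => unknown>();
let expansions = 0;  // counts how many times we "expand and type-check"

function instantiate(typeKey: string): (x: unknown) => unknown {
  const hit = cache.get(typeKey);
  if (hit !== undefined) return hit;  // already expanded at this type

  expansions += 1;                    // expansion + type-checking happen here
  const body = (x: unknown) => x;     // stand-in for the instantiated body
  cache.set(typeKey, body);
  return body;
}

instantiate("Number");
instantiate("Number");  // cache hit: no second expansion
instantiate("String");  // a genuinely new type: one more expansion
```

A recursive call such as filter<T> inside filter's own body asks for the same type key, hits the cache, and therefore does not trigger another round of expansion.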

Actually, we are being a little too glib. One of the benefits of static types is that they enable us to pick more precise run-time representations. For instance, in most languages a static type can tell us whether we have a 32-bit or 64-bit number, or for that matter a 32-bit value or a 1-bit value (effectively, a boolean). A compiler can then generate specialized code for each representation, taking advantage of how the bits are laid out (for example, 32 booleans can use a packed representation to fit into a single 32-bit word). Thus, after type-checking at each used type, the polymorphic instantiator may keep track of all the special types at which a function or data structure was used, and provide this information to the compiler for code-generation. This will then result in several copies of the function, but only as many as those for which the compiler can generate distinct, efficient representations—which is usually fixed, and far fewer than the total number of types a program can use. Furthermore, the decision to make these copies reflects a space-time tradeoff.

20.10.6 Relational Parametricity

There’s one last detail we must address regarding polymorphism.

We earlier said that a function like link doesn’t depend on the specific values of its arguments. This is also true of map, filter, and so on. When map and filter want to operate on individual elements, they take as a parameter another function which in turn is responsible for making decisions about how to treat the elements; map and filter themselves simply obey their parameter functions.

One way to “test” whether this is true is to substitute some different values in the argument list, and a correspondingly different parameter function. That is, imagine we have a relation between two sets of values; we convert the list elements according to the relation, and the parameter function as well. The question is, will the output from map and filter also be predictable by the relation? If, for some input, this was not true of the output of map, then it must be that map inspected the actual values and did something with that information. But in fact this won’t happen for map, or indeed most of the standard polymorphic functions.

Functions that obey this relational rule are called relationally parametric. (Read Wadler’s Theorems for Free! and Reynolds’s Types, Abstraction and Parametric Polymorphism.) This is another very powerful property that types give us, because it tells us there is a strong limit on the kinds of operations such polymorphic functions can perform: essentially, they can drop, duplicate, and rearrange elements, but not directly inspect and make decisions based on them.
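One instance of such a “free theorem” can be checked concretely. For map, relating elements by a conversion function g means that converting after mapping f must agree with mapping the composed function; a parametric map has no way to tell the difference, since it never inspects elements. The functions f and g below are arbitrary hypothetical choices:

```typescript
// A parametric map: it can only pass elements to f, never inspect them.
function map<A, B>(f: (a: A) => B, l: A[]): B[] {
  return l.map(f);
}

const f = (n: number) => n * 2;       // arbitrary element transformer
const g = (n: number) => String(n);   // the "relation" as a conversion

const l = [1, 2, 3];

// Free-theorem instance for map: mapping (g ∘ f) in one pass equals
// mapping f and then mapping g.
const lhs = map((n: number) => g(f(n)), l);
const rhs = map(g, map(f, l));
```

A non-parametric map, say one that checked typeof on its elements and special-cased numbers, could violate this equation, which is exactly the instanceof problem discussed below.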

At first this sounds very impressive (and it is!), but on inspection you might realize this doesn’t square with your experience. In Java, for instance, a polymorphic method can still use instanceof to check which particular kind of value it obtained at run-time, and change its behavior accordingly. Such a method would indeed not be relationally parametric! (On the Web, you will often find this property described as the inability of a function to inspect the argument, which is not quite right.) Indeed, relational parametricity can equally be viewed as a statement of the weakness of the language: that it permits only a very limited set of operations. (You could still inspect the type, but not act upon what you learned, which makes the inspection pointless. Therefore, a run-time system that wants to simulate relational parametricity would have to remove operations like instanceof as well as various proxies to it: for instance, adding 1 to a value and catching exceptions would reveal whether the value is a number.) Nevertheless, it is a very elegant and surprising result, and shows the power of program reasoning possible with rich type systems.