Actually, none of these is quite the same. But the first and third are very alike, because the first is in Java and the third in ML, whereas the second, in C++, is different. All clear? No? Good, read on!
((A -> B), List<A> -> List<B>)
((Number -> String), List<Number> -> List<String>)
((Number -> (Number -> Number)), List<Number> -> List<(Number -> Number)>)
((String -> String), List<String> -> List<String>)
Obviously, it is impossible to load all these functions into our standard library: there’s an infinite number of them! We’d rather have a way to obtain each of these functions on demand. Our naming convention offers a hint: it is as if map takes two type parameters in addition to its two regular value ones. Given the pair of types as arguments, we can then obtain a map that is customized to that particular type. This kind of parameterization over types is called parametric polymorphism. (Not to be confused with the “polymorphism” of objects, which we will discuss separately [REF].)
fun map(A :: ???, B :: ???, f :: (A -> B), l :: List<A>) -> List<B>: ...;
What goes in place of the ???s? These are the types that are going to take the place of A and B on an actual use. But if A and B are bound to types, then what is their type?
- Do we really want to call map with four arguments every time we invoke it?
- Do we want to be passing types, which are static, at the same time as dynamic values?
- If these are types but they are only provided at run-time invocation, how can we type-check clients, who need to know what kind of list they are getting?
Observe that once we start parameterizing, more code than we expect ends up being parameterized. For instance, consider the type of the humble link. Its type really is parametric over the type of values in the list (even though it doesn’t actually depend on those values).
Instead, we will limit ourselves to one particularly useful and tractable point in this space, which is the type system of Standard ML, of earlier versions of Haskell, roughly that of Java and C# with generics, and roughly that obtained using templates in C++. This language defines what is called predicative, rank-1, or prenex polymorphism.
∀ A, B : ((A -> B), List<A> -> List<B>)
In rank-1 polymorphism, the type variables can only be substituted with monotypes. (Furthermore, these can only be concrete types, because there would be nothing left to substitute any remaining type variables.) As a result, we obtain a clear separation between the type parameters and the regular value parameters. We don’t need to provide a “type annotation” for the type variables because we know precisely what kind of thing they can be. This produces a relatively clean language that still offers considerable expressive power. (Impredicative languages, in contrast, erase the distinction between monotypes and polytypes, so a type variable can be instantiated with another polymorphic type.)
fun<T> id(x :: T) -> T: x;
id-num = id<Number>
id-str = id<String>

check:
  id-num(5) is 5
  id-str("x") is "x"
end
However, this approach has two important limitations.
- Let’s try to define a recursive polymorphic function, such as filter. We said earlier that we ought to instantiate every single polymorphic value (even cons and empty) with types, but to keep our code concise we’ll focus just on the type parameters of filter. Here’s the code:
fun<T> filter(pred :: (T -> Bool), l :: List<T>) -> List<T>:
  cases (List) l:
    | empty => empty
    | link(f, r) =>
      if pred(f):
        link(f, filter<T>(pred, r))
      else:
        filter<T>(pred, r);
  end
end

Observe that at the recursive uses of filter, we must instantiate it with the appropriate type. This is a perfectly good definition. There’s just one problem. If we try to use it, as in

filter-num = filter<Number>

the implementation will not terminate. This is because the desugarer repeatedly tries to make new copies of the code of filter at each recursive call.
If, in contrast, we define a local helper function that performs the recursion, this problem can be made to disappear. Can you figure out that version?
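As a sketch of that version (in TypeScript rather than the chapter’s language, with hypothetical names): the type parameter is bound once at the outer function, and a local helper performs all the recursion, so no recursive instantiation ever occurs.

```typescript
// Sketch: the type parameter T is bound once by the outer function;
// the inner helper recurs without ever re-instantiating filter.
function filter<T>(pred: (x: T) => boolean, l: T[]): T[] {
  function go(rest: T[]): T[] {
    if (rest.length === 0) {
      return [];
    }
    const [first, ...r] = rest;
    return pred(first) ? [first, ...go(r)] : go(r);
  }
  return go(l);
}
```

Because only the monomorphic helper recurs, a desugarer needs to expand the polymorphic definition just once per instantiation.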
Consider two instantiations of the identity function. Under this desugaring strategy they would necessarily be different, because they are two different pieces of code residing at different locations in memory. (Indeed, the use of parametric polymorphism in C++ is notorious for creating code bloat.) However, all this duplication is unnecessary! There’s absolutely nothing in the body of id, for instance, that actually depends on the type of the argument. Indeed, the entire infinite family of id functions can share just one implementation. The simple desugaring strategy fails to provide this.
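Languages that erase type parameters at run time show this sharing directly. A sketch in TypeScript (not the chapter’s language): every “instantiation” of a generic function is just a typed view of one shared function object.

```typescript
function id<T>(x: T): T {
  return x;
}

// Generics are erased at run time: "instantiating" id at a type produces
// a typed alias for the single shared implementation, not a fresh copy.
const idNum: (x: number) => number = id;
const idStr: (x: string) => string = id;
```

At run time, idNum and idStr are the very same function object; only the type checker distinguishes them.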
In other words, the desugaring-based strategy, which is essentially an implementation by substitution, has largely the same problems we saw earlier with substitution as an implementation of parameter instantiation (From Substitution to Environments). However, substitution also gave us a ground truth for what we expect as the program’s behavior, and the same will be true with polymorphism, as we will soon see.
Observe that one virtue of the desugaring strategy is that it does not require our type checker to “know” about polymorphism. Rather, the core type language can continue to be monomorphic, and all the (rank-1) polymorphism is handled entirely through expansion. This offers a cheap strategy for adding polymorphism to a language, though it suffers from the problems we have just described.
Finally, though we have only focused on functions, the preceding discussion applies equally well to data structures.
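For instance, a polymorphic list datatype is parameterized exactly the way a polymorphic function is. A sketch in TypeScript, with hypothetical names:

```typescript
// A linked list parameterized over its element type, mirroring List<A>.
type List<A> = { tag: "empty" } | { tag: "link"; first: A; rest: List<A> };

// empty works at every element type, just like the polymorphic constant.
const empty: List<never> = { tag: "empty" };

function link<A>(first: A, rest: List<A>): List<A> {
  return { tag: "link", first, rest };
}

// Functions over the datatype inherit its type parameter.
function listLength<A>(l: List<A>): number {
  return l.tag === "empty" ? 0 : 1 + listLength(l.rest);
}
```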
There are other implementation strategies that don’t suffer from these problems. We won’t go into them here, but the essence is to memoize ([REF]) expansion. Because we can be certain that, for a given set of type parameters, we will always get the same typed body, we never need to instantiate a polymorphic function at the same type twice. This avoids the infinite loop. If we type-check the instantiated body once, we can avoid checking at other instantiations of the same type (because the body will not have changed). Furthermore, we do not need to retain the instantiated sources: once we have checked the expanded program, we can dispose of the expanded terms and retain just one copy at run-time. This avoids all the problems discussed in the pure desugaring strategy shown above, while retaining the benefits.
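The memoization can be sketched as a cache keyed by the type arguments. In this TypeScript sketch (hypothetical names), re-requesting the same type key returns the cached instantiation instead of expanding the body again, which is what breaks the infinite regress for recursive definitions:

```typescript
// Sketch: instantiate a polymorphic definition at most once per type key.
const instantiations = new Map<string, Function>();

function instantiate(name: string, typeKey: string, expand: () => Function): Function {
  const key = `${name}@${typeKey}`;
  if (!instantiations.has(key)) {
    // Only the first request actually expands (and would type-check) the body.
    instantiations.set(key, expand());
  }
  return instantiations.get(key)!;
}
```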
Actually, we are being a little too glib. One of the benefits of
static types is that they enable us to pick more precise run-time
representations. For instance, in most languages a static type can
tell us whether we have a 32-bit or 64-bit number, or for that matter
a 32-bit value or a 1-bit value (effectively, a boolean). A compiler
can then generate specialized code for each representation, taking
advantage of how the bits are laid out (for example, 32 booleans can
use a ☛ packed representation to fit into a single
32-bit word). Thus, after type-checking at each used type, the
polymorphic instantiator may keep track of all the special types at
which a function or data structure was used, and provide this
information to the compiler for code-generation. This will then
result in several copies of the function, but only as many as those
for which the compiler can generate distinct, efficient representations.
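To make the packing example concrete, here is a sketch in TypeScript of fitting up to 32 booleans into a single 32-bit word:

```typescript
// Sketch: 32 booleans packed into one 32-bit word. A compiler that knows
// the element type statically can pick this representation instead of
// spending a whole word on each boolean.
function pack(bits: boolean[]): number {
  let word = 0;
  bits.forEach((b, i) => {
    if (b) {
      word |= 1 << i;
    }
  });
  return word >>> 0; // keep it an unsigned 32-bit value
}

function unpack(word: number, n: number): boolean[] {
  return Array.from({ length: n }, (_, i) => ((word >>> i) & 1) === 1);
}
```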
There’s one last detail we must address regarding polymorphism.
We earlier said that a function like cons doesn’t depend on the specific values of its arguments. This is also true of map, filter, and so on. When map and filter want to operate on individual elements, they take as a parameter another function which in turn is responsible for making decisions about how to treat the elements; map and filter themselves simply obey their parameter functions.
One way to “test” whether this is true is to substitute some different values in the argument list, and a correspondingly different parameter function. That is, imagine we have a relation between two sets of values; we convert the list elements according to the relation, and the parameter function as well. The question is, will the output from map and filter also be predictable by the relation? If, for some input, this was not true of the output of map, then it must be that map somehow affected the value itself, not just letting the function do it. But in fact this won’t happen for map, or indeed most of the standard polymorphic functions.
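This “test” can be sketched concretely (in TypeScript, using the built-in Array.map): relate numbers to their string forms, convert both the list and the parameter function, and check that the two outputs are related in the same way.

```typescript
// conv is the relation between the two worlds: each number is related
// to its string form.
const conv = (n: number): string => String(n);

const f = (n: number): number => n + 1;               // parameter function on numbers
const g = (s: string): string => conv(Number(s) + 1); // the related function on strings: g(conv(n)) === conv(f(n))

const xs = [1, 2, 3];

// Because map never inspects elements, mapping then converting equals
// converting then mapping:
const lhs = xs.map(f).map(conv);
const rhs = xs.map(conv).map(g);
```

Both sides come out the same, exactly as the relational rule predicts.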
Functions that obey this relational rule are called relationally parametric. (Read Wadler’s Theorems for Free! and Reynolds’s Types, Abstraction and Parametric Polymorphism.) This is another very powerful property that types give us, because they tell us there is a strong limit on the kinds of operations such polymorphic functions can perform: essentially, they can drop, duplicate, and rearrange elements, but not directly inspect and make decisions on them.
At first this sounds very impressive (and it is!), but on inspection
you might realize this doesn’t square with your experience. In Java,
for instance, a polymorphic method can still use instanceof to
check which particular kind of value it obtained at run-time, and
change its behavior accordingly. Such a method would not be
relationally parametric! (On the Web, you will often find this
property described as the inability of a function to inspect its
arguments.)