### 26 Interpreting Functions

#### 26.1 Adding Functions to the Language

Now that we have basic expressions and conditionals, let’s grow to have a complete programming language by adding functions.

##### 26.1.1 Defining Data Representations

Imagine we’re modeling a simple programming environment. The developer defines functions in a definitions window, and uses them in an interactions window (which provides a prompt at which they can run expressions). For historic reasons, the interactions window is also called a REPL or “read-eval-print loop”. For now, let’s assume all definitions go in the definitions window only (we’ll relax this soon: Functions Anywhere), and all stand-alone expressions in the interactions window only. Thus, running a program simply loads definitions. Our interpreter will correspond to the interactions window prompt and assume it has been supplied with a set of definitions. (A set of definitions suggests no ordering, which means, presumably, any definition can refer to any other. That’s what I intend here, but when you are designing your own language, be sure to think about this.)

To keep things simple, let’s just consider functions of one argument. Here are some Pyret examples:
fun double(x): x + x end

fun quad(x): double(double(x)) end

fun const5(_): 5 end

Exercise

When a function has multiple arguments, what simple but important criterion governs the names of those arguments?

What are the parts of a function definition? It has a name (above, double, quad, and const5), which we’ll represent as a string ("double", etc.); its formal parameter or argument has a name (e.g., x), which too we can model as a string ("x"); and it has a body. We’ll determine the body’s representation in stages, but let’s start to lay out a datatype for function definitions:
data FunDefC:
| fdC(name :: String, arg :: String, body :: ExprC)
end

What is the body? Clearly, it has the form of an arithmetic expression, and sometimes it can even be represented using the existing ArithC language: for instance, the body of const5 can be represented as numC(5). But representing the body of double requires something more: not just addition (which we have), but also “x”. You are probably used to calling this a variable, but we will not use that term for now. Instead, we will call it an identifier. (We’ve discussed this terminological difference in From Identifiers to Variables.)

Do Now!

Anything else?

Finally, let’s look at the body of quad. It has yet another new construct: a function application. Be very careful to distinguish between a function definition, which describes what the function is, and an application, which uses it. The argument (or actual parameter) in the inner application of double is x; the argument in the outer application is double(x). Thus, the argument can be any complex expression.

Let’s commit all this to a crisp datatype. Clearly we’re extending what we had before (because we still want all of arithmetic). We’ll give a new name to our datatype to signify that it’s growing up:
<datatype> ::=
 data ExprC:
   | numC(n :: Number)
   | plusC(l :: ExprC, r :: ExprC)
   | multC(l :: ExprC, r :: ExprC)
   | trueC
   | falseC
   | ifC(c :: ExprC, t :: ExprC, e :: ExprC)
 end

Identifiers are closely related to formal parameters. When we apply a function by giving it a value for its parameter, we are in effect asking it to replace all instances of that formal parameter in the body—i.e., the identifiers with the same name as the formal parameter—with that value. (Observe that we are being coy about a few issues: what kind of “value” [REF] and when to replace [REF].) To simplify this process of search-and-replace, we might as well use the same datatype to represent both. We’ve already chosen strings to represent formal parameters, so:
<idC-dt> ::=
 | idC(s :: String)

Finally, applications. They have two parts: the function’s name, and its argument. We’ve already agreed that the argument can be any full-fledged expression (including identifiers and other applications). As for the function name, it again makes sense to use the same datatype as we did when giving the function its name in a function definition. Thus:
<appC-dt> ::=
 | appC(f :: String, a :: ExprC)
identifying which function to apply, and providing its argument.

Using these definitions, it’s instructive to write out the representations of the examples we defined above:
• fdC("double", "x", plusC(idC("x"), idC("x")))

• fdC("quad", "x", appC("double", appC("double", idC("x"))))

• fdC("const5", "_", numC(5))
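For readers who prefer a mainstream notation, the same representations can be sketched in Python with dataclasses. This is purely illustrative; the class and field names are chosen to mirror the Pyret datatypes and are not part of the book’s code.

```python
from dataclasses import dataclass

class ExprC:
    """Base class mirroring the Pyret ExprC datatype."""

@dataclass
class numC(ExprC):  n: float
@dataclass
class plusC(ExprC):
    l: ExprC
    r: ExprC
@dataclass
class multC(ExprC):
    l: ExprC
    r: ExprC
@dataclass
class idC(ExprC):   s: str
@dataclass
class appC(ExprC):
    f: str        # the name of the function being applied
    a: ExprC      # the argument, itself an arbitrary expression

@dataclass
class fdC:
    """A function definition: name, formal parameter, body."""
    name: str
    arg: str
    body: ExprC

# The three example definitions from above
double = fdC("double", "x", plusC(idC("x"), idC("x")))
quad   = fdC("quad",   "x", appC("double", appC("double", idC("x"))))
const5 = fdC("const5", "_", numC(5))
```

Dataclass equality makes these representations easy to test: two trees are equal exactly when they are built from the same constructors and fields.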

We also need to choose a representation for a set of function definitions. It’s convenient to represent these by a list. (Look out! Did you notice that we spoke of a set of function definitions, but chose a list representation? That means we’re using an ordered collection of data to represent an unordered entity. At the very least, then, when testing, we should use any and all permutations of definitions to ensure we haven’t subtly built in a dependence on the order.)

Exercise

Extend desugar with support for identifiers and applications.

##### 26.1.2 Growing the Interpreter

Now we’re ready to tackle the interpreter proper. First, let’s remind ourselves of what it needs to consume. Previously, it consumed only an expression to evaluate. Now it also needs to take a list of function definitions:
<fof-interp> ::=
 fun interp(e :: ExprC, fds :: List) -> Value:
   cases (ExprC) e:
   end
 end

Let’s revisit our old interpreter. In the case of numbers, clearly we still return the number as the answer. In the addition and multiplication case, we still need to recur (because the sub-expressions might be complex), but which set of function definitions do we use? Because the act of evaluating an expression neither adds nor removes function definitions, the set of definitions remains the same, and should just be passed along unchanged in the recursive calls. Similarly for conditionals.
<fof-interp-body> ::=
 | numC(n) => numV(n)
 | plusC(l, r) => arith-binop(lam(x, y): x + y end, l, r, fds)
 | multC(l, r) => arith-binop(lam(x, y): x * y end, l, r, fds)
 | trueC => boolV(true)
 | falseC => boolV(false)
 | ifC(cnd, thn, els) =>
   ic = interp(cnd, fds)
   if is-boolV(ic):
     if ic.b:
       interp(thn, fds)
     else:
       interp(els, fds)
     end
   else:
     raise('not a boolean')
   end

Exercise

Modify arith-binop to pass along fds unchanged in recursive calls.

Now let’s tackle application. First we have to look up the function definition, for which we’ll assume we have a helper function of this type available:
<get-fundef> ::=
 fun get-fundef(name :: String, fds :: List) -> FunDefC: end
Assuming we find a function of the given name, we need to evaluate its body. However, remember what we said about identifiers and parameters? We must “search-and-replace”, a process you have seen before in school algebra called substitution. This is sufficiently important that we should talk first about substitution before returning to the interpreter (The Interpreter, Resumed).

##### 26.1.3 Substitution

Substitution is the act of replacing a name (in this case, that of the formal parameter) in an expression (in this case, the body of the function) with another expression (in this case, the actual parameter). Its header is:
<subst> ::=
 fun subst(w :: ExprC, at :: String, in :: ExprC) -> ExprC: end
The first argument is what we want to replace the name with; the second is at what name we want to perform substitution; and the third is in which expression we want to do it.

Do Now!

Suppose we want to substitute 3 for the identifier x in the bodies of the three example functions above. What should it produce?

In double, this should produce 3 + 3; in quad, it should produce double(double(3)); and in const5, it should produce 5 (i.e., no substitution happens because there are no instances of x in the body). A common mistake is to assume that the result of substituting, e.g., 3 for x in double is fun double(x): 3 + 3 end. This is incorrect. We only substitute at the point when we apply the function, at which point the function’s invocation is replaced by its body. The header enables us to find the function and ascertain the name of its parameter; but only its body participates in evaluation. Examine the use of substitution in the interpreter to see how returning a function definition would result in a type error.

These examples already tell us what to do in almost all the cases. Given a number, there’s nothing to substitute. If it’s an identifier, we have to replace the identifier if it’s the one we’re trying to substitute, otherwise leave it alone. In the other cases, descend into the sub-expressions, performing substitution.

Before we turn this into code, there’s an important case to consider. Suppose the name we are substituting happens to be the name of a function. Then what should happen?

Do Now!

What, indeed, should happen?

There are many ways to approach this question. One is from a design perspective: function names live in their own “world”, distinct from ordinary program identifiers. Some languages (such as C and Common Lisp, in slightly different ways) take this perspective, and partition identifiers into different namespaces depending on how they are used. In other languages, there is no such distinction; indeed, we will examine such languages soon (Functions Anywhere).

For now, we will take a pragmatic viewpoint. If we evaluate a function name, it would result in a number or Boolean. However, these cannot name functions. Therefore, it makes no sense to substitute in that position, and we should leave the function name unmolested irrespective of its relationship to the variable being substituted. (Thus, a function could have a parameter named x as well as refer to another function called x, and these would be kept distinct.)

Now we’ve made all our decisions, and we can provide the body:
<subst-body> ::=
 cases (ExprC) in:
   | numC(n) => in
   | plusC(l, r) => plusC(subst(w, at, l), subst(w, at, r))
   | multC(l, r) => multC(subst(w, at, l), subst(w, at, r))
   | trueC => trueC
   | falseC => falseC
   | ifC(cnd, thn, els) =>
     ifC(subst(w, at, cnd), subst(w, at, thn), subst(w, at, els))
   | appC(f, a) => appC(f, subst(w, at, a))
   | idC(s) =>
     if s == at:
       w
     else:
       in
     end
 end

Exercise

Observe that, whereas in the numC case the interpreter returned numV(n), substitution returns in (i.e., the original expression, equivalent at that point to writing numC(n)). Why?
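The same substitution algorithm can be cross-checked in Python over an analogous expression representation. This is an illustrative sketch (Booleans and conditionals omitted for brevity; the names mirror the Pyret code).

```python
from dataclasses import dataclass

class ExprC: pass

@dataclass
class numC(ExprC):  n: float
@dataclass
class plusC(ExprC):
    l: ExprC
    r: ExprC
@dataclass
class idC(ExprC):   s: str
@dataclass
class appC(ExprC):
    f: str
    a: ExprC

def subst(w: ExprC, at: str, in_: ExprC) -> ExprC:
    """Replace every occurrence of the identifier `at` in `in_` with `w`."""
    if isinstance(in_, numC):
        return in_                    # nothing to substitute in a number
    elif isinstance(in_, plusC):
        return plusC(subst(w, at, in_.l), subst(w, at, in_.r))
    elif isinstance(in_, appC):
        # the function *name* is left alone; only the argument is visited
        return appC(in_.f, subst(w, at, in_.a))
    elif isinstance(in_, idC):
        return w if in_.s == at else in_
    raise TypeError(in_)

# substituting 3 for x in double's body, x + x, yields 3 + 3
double_body = plusC(idC("x"), idC("x"))
assert subst(numC(3), "x", double_body) == plusC(numC(3), numC(3))
```

Notice how the appC case embodies the decision above: the function name position is never substituted, so a parameter named x and a function named x stay distinct.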

##### 26.1.4 The Interpreter, Resumed

Phew! Now that we’ve completed the definition of substitution (or so we think), let’s complete the interpreter. Substitution was a heavyweight step, but it also does much of the work involved in applying a function. It is tempting to write
<fof-interp-appC/alt> ::=
 | appC(f, a) =>
   fd = get-fundef(f, fds)
   subst(a, fd.arg, fd.body)

Tempting, but wrong.

Do Now!

Do you see why?

Reason from the types. What does the interpreter return? Values. What does substitution return? Oh, that’s right, expressions! For instance, when we substituted in the body of double, we got back the representation of 5 + 5. This is not a valid answer for the interpreter. Instead, it must be reduced to an answer. That, of course, is precisely what the interpreter does:
<fof-interp-appC> ::=
 | appC(f, a) =>
   fd = get-fundef(f, fds)
   interp(subst(a, fd.arg, fd.body), fds)

Okay, that leaves only one case: identifiers. What could possibly be complicated about them? They should be just about as simple as numbers! And yet we’ve put them off to the very end, suggesting something subtle or complex is afoot.

Do Now!

Work through some examples to understand what the interpreter should do in the identifier case.

Let’s suppose we had defined double as follows:
fun double(x): x + y end

When we substitute 5 for x, this produces the expression 5 + y. So far so good, but what do we substitute for y? As a matter of fact, it should be clear from the very outset that this definition of double is erroneous. The identifier y is said to be free, an adjective that in this setting has negative connotations.

In other words, the interpreter should never confront an identifier. All identifiers ought to be parameters that have already been substituted (known as bound identifiers—here, a positive connotation) before the interpreter ever sees them. As a result, there is only one possible response given an identifier:
<fof-interp-idC> ::=
 | idC(s) => raise("unbound identifier")

And that’s it!

Finally, to complete our interpreter, we should define get-fundef:
<get-fundef-body> ::=
 cases (List) fds:
   | empty => raise("couldn't find function")
   | link(f, r) =>
     if f.name == name:
       f
     else:
       get-fundef(name, r)
     end
 end
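To see all the pieces working together, here is a compact Python sketch of the whole substitution-based interpreter (numbers only, with Booleans and conditionals omitted for brevity; all names mirror the Pyret code and are illustrative, not the book’s implementation).

```python
from dataclasses import dataclass

class ExprC: pass

@dataclass
class numC(ExprC):  n: float
@dataclass
class plusC(ExprC):
    l: ExprC
    r: ExprC
@dataclass
class multC(ExprC):
    l: ExprC
    r: ExprC
@dataclass
class idC(ExprC):   s: str
@dataclass
class appC(ExprC):
    f: str
    a: ExprC

@dataclass
class fdC:
    name: str
    arg: str
    body: ExprC

def get_fundef(name, fds):
    """Linear search through the list of definitions, like the Pyret version."""
    for fd in fds:
        if fd.name == name:
            return fd
    raise LookupError("couldn't find function: " + name)

def subst(w, at, in_):
    if isinstance(in_, numC):  return in_
    if isinstance(in_, plusC): return plusC(subst(w, at, in_.l), subst(w, at, in_.r))
    if isinstance(in_, multC): return multC(subst(w, at, in_.l), subst(w, at, in_.r))
    if isinstance(in_, appC):  return appC(in_.f, subst(w, at, in_.a))
    if isinstance(in_, idC):   return w if in_.s == at else in_
    raise TypeError(in_)

def interp(e, fds):
    if isinstance(e, numC):  return e.n
    if isinstance(e, plusC): return interp(e.l, fds) + interp(e.r, fds)
    if isinstance(e, multC): return interp(e.l, fds) * interp(e.r, fds)
    if isinstance(e, appC):
        fd = get_fundef(e.f, fds)
        # substitute first, then reduce the resulting *expression* to a value
        return interp(subst(e.a, fd.arg, fd.body), fds)
    if isinstance(e, idC):
        raise LookupError("unbound identifier: " + e.s)
    raise TypeError(e)

fds = [fdC("double", "x", plusC(idC("x"), idC("x"))),
       fdC("quad",   "x", appC("double", appC("double", idC("x")))),
       fdC("const5", "_", numC(5))]
```

Note that this sketch substitutes the argument expression, not its value, matching the interpreter as developed so far; the next section revisits that choice.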

##### 26.1.5 Oh Wait, There’s More!

Earlier, we declared subst as:
fun subst(w :: ExprC, at :: String, in :: ExprC) -> ExprC:
...
end

Sticking to surface syntax for brevity, suppose we apply double to 1 + 2. This would substitute 1 + 2 for each x, resulting in the expression (1 + 2) + (1 + 2) for interpretation. Is this necessarily what we want?

When you learned algebra in school, you may have been taught to do this differently: first reduce the argument to an answer (in this case, 3), then substitute the answer for the parameter. This notion of substitution might have the following type instead:
fun subst(w :: Value, at :: String, in :: ExprC) -> ExprC:
...
end
In fact, we don’t even have substitution quite right! The version of substitution we have doesn’t scale past this language due to a subtle problem known as “name capture”. Fixing substitution is complex, subtle, and an exciting intellectual endeavor, but it’s not the direction I want to go in here. We’ll instead sidestep this problem in this book. If you’re interested, however, read about the lambda calculus [CITE], which provides the tools for defining substitution correctly.

We’ve actually stumbled on a profound distinction in programming languages. The act of evaluating arguments before substituting them in functions is called eager application, while that of deferring evaluation is called lazy (and has some variations). For now, we will prefer the eager semantics, because this is what most mainstream languages adopt. Later [REF], we will return to the lazy application semantics and its implications.
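One way to make this contrast concrete is to count how much of the argument’s work gets duplicated under each substitution discipline. The following Python sketch is illustrative: it substitutes into double’s body both the argument expression (the lazy-style type above) and its value (the eager-style type), and counts the addition nodes left to evaluate.

```python
from dataclasses import dataclass

class ExprC: pass

@dataclass
class numC(ExprC):  n: float
@dataclass
class plusC(ExprC):
    l: ExprC
    r: ExprC
@dataclass
class idC(ExprC):   s: str

def subst(w, at, in_):
    if isinstance(in_, numC):  return in_
    if isinstance(in_, plusC): return plusC(subst(w, at, in_.l), subst(w, at, in_.r))
    if isinstance(in_, idC):   return w if in_.s == at else in_

def count_plus(e):
    """How many additions remain to be evaluated in this expression?"""
    if isinstance(e, plusC):
        return 1 + count_plus(e.l) + count_plus(e.r)
    return 0

double_body = plusC(idC("x"), idC("x"))
arg = plusC(numC(1), numC(2))          # the expression 1 + 2

lazy_style  = subst(arg, "x", double_body)      # (1 + 2) + (1 + 2)
eager_style = subst(numC(3), "x", double_body)  # 3 + 3, argument reduced first

assert count_plus(lazy_style) == 3    # the argument's addition is duplicated
assert count_plus(eager_style) == 1
```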

#### 26.2 From Substitution to Environments

Though we have a working definition of functions, you may feel a slight unease about it. When the interpreter sees an identifier, you might have had a sense that it needs to “look it up”. Not only did it not look up anything, we defined its behavior to be an error! While absolutely correct, this is also a little surprising. More importantly, we write interpreters to understand and explain languages, and this implementation might strike you as not doing that, because it doesn’t match our intuition.

There’s another difficulty with using substitution, which is the number of times we traverse the source program. It would be nice to have to traverse only those parts of the program that are actually evaluated, and then, only when necessary. But substitution traverses everything—unvisited branches of conditionals, for instance—and forces the program to be traversed once for substitution and once again for interpretation.

Exercise

Does substitution have implications for the time complexity of evaluation?

There’s yet another problem with substitution, which is that it is defined in terms of representations of the program source. Obviously, our interpreter has and needs access to the source, to interpret it. However, other implementations—such as compilers—have no need to store it for that purpose. (Compilers might store versions of or information about the source for other reasons, such as reporting runtime errors, and JITs may need it to re-compile on demand.) It would be nice to employ a mechanism that is more portable across implementation strategies.

##### 26.2.1 Introducing the Environment

The intuition that addresses the first concern is to have the interpreter “look up” an identifier in some sort of directory. The intuition that addresses the second concern is to defer the substitution. Fortunately, these converge nicely in a way that also addresses the third. The directory records the intent to substitute, without actually rewriting the program source; by recording the intent, rather than substituting immediately, we can defer substitution; and the resulting data structure, which is called an environment, avoids the need for source-to-source rewriting and maps nicely to low-level machine representations. Each name association in the environment is called a binding. (This does not mean our study of substitution was useless; to the contrary, many tools that work over programs—such as compilers and analyzers—use substitution. Just not for the purpose of evaluating the program at run-time.)

Observe carefully that what we are changing is the implementation strategy for the programming language, not the language itself. Therefore, none of our datatypes for representing programs should change; nor—and this is the critical part—should the answers that the interpreter provides. As a result, we should think of the previous interpreter as a “reference implementation” that the one we’re about to write should match. Indeed, we should create a generator that creates lots of tests, runs them through both interpreters, and makes sure their answers are the same: i.e., the previous implementation is an oracle (Oracles for Testing). Ideally, we should prove that the two interpreters behave the same, which is a good topic for advanced study. (One subtlety is in defining precisely what “the same” means, especially with regards to failure.)

Let’s first define our environment data structure. An environment is a collection of names associated with...what?

Do Now!

A natural question to ask here might be what the environment maps names to. But a better, more fundamental, question is: how do we determine the answer to that “natural” question?

Remember that our environment was created to defer substitutions. Therefore, the answer lies in substitution. We discussed earlier (Oh Wait, There’s More!) that we want substitution to map names to answers, corresponding to an eager function application strategy. Therefore, the environment should map names to answers.

data Binding:
| bind(name :: String, value :: Value)
end

type Environment = List<Binding>
mt-env = empty
xtnd-env = link

##### 26.2.2 Interpreting with Environments

Now we can tackle the interpreter. One case is easy, but we should revisit all the others:

<fof-env-interp> ::=
 fun interp(e :: ExprC, nv :: Environment, fds :: List) -> Value:
   cases (ExprC) e:
   end
 end

The arithmetic operations are easiest. Recall that before, the interpreter recurred without performing any new substitutions. As a result, there are no new deferred substitutions to perform either, which means the environment does not change:
<fof-env-interp-arith> ::=
 | numC(n) => numV(n)
 | plusC(l, r) => arith-binop(lam(x, y): x + y end, l, r, nv, fds)
 | multC(l, r) => arith-binop(lam(x, y): x * y end, l, r, nv, fds)
Conditionals are similarly straightforward:
<fof-env-interp-cond> ::=
 | trueC => boolV(true)
 | falseC => boolV(false)
 | ifC(cnd, thn, els) =>
   ic = interp(cnd, nv, fds)
   if is-boolV(ic):
     if ic.b:
       interp(thn, nv, fds)
     else:
       interp(els, nv, fds)
     end
   else:
     raise('not a boolean')
   end

Now let’s handle identifiers. Clearly, encountering an identifier is no longer an error: this was the very motivation for this change. Instead, we must look up its value in the directory:
<fof-env-interp-idC> ::=
 | idC(s) => lookup(s, nv)

Do Now!

Implement lookup.

Finally, application. Observe that in the substitution interpreter, the only case that caused new substitutions to occur was application. Therefore, this should be the case that constructs bindings. Let’s first extract the function definition, just as before:
<fof-env-interp-appC> ::=
 | appC(f, a) =>
   fd = get-fundef(f, fds)
Previously, we substituted, then interpreted. Because we have no substitution step, we can proceed with interpretation, so long as we record the deferral of substitution. Let’s also evaluate the argument:
<fof-env-interp-appC-rest> ::=
 arg-val = interp(a, nv, fds)
 interp(fd.body, <fof-env-interp-appC-rest-xtnd>, fds)
That is, the set of function definitions remains unchanged; we’re interpreting the body of the function, as before; but we have to do it in an environment that binds the formal parameter. Let’s now define that binding process:
<fof-env-interp-appC-rest-xtnd> ::=
 xtnd-env(bind(fd.arg, arg-val), nv)
But we’ll return to this. The name being bound is the formal parameter (the same name that was substituted for, before). It is bound to the result of interpreting the argument (because we’ve decided on an eager application semantics). And finally, this extends the environment we already have. Type-checking this helps to make sure we got all the little pieces right.

Once we have a definition for lookup, we’d have a full interpreter. So here’s one:
<fof-env-interp-lookup> ::=
 fun lookup(s :: String, nv :: Environment) -> Value:
   cases (List) nv:
     | empty => raise("unbound identifier: " + s)
     | link(f, r) =>
       if s == f.name:
         f.value
       else:
         lookup(s, r)
       end
   end
 end

Observe that looking up a free identifier still produces an error, but it has moved from the interpreter—which is by itself unable to determine whether or not an identifier is free—to lookup, which determines this based on the content of the environment.
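For comparison, the same environment representation and lookup can be sketched in Python as a linked list of bindings (the names mirror the Pyret code; this is illustrative).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class bind:
    name: str
    value: object

@dataclass
class Env:
    """A cons cell: the most recent binding, then the rest of the environment."""
    first: bind
    rest: "Optional[Env]"

mt_env = None                         # the empty environment

def xtnd_env(b: bind, nv: "Optional[Env]") -> Env:
    return Env(b, nv)

def lookup(s: str, nv: "Optional[Env]"):
    if nv is None:
        raise LookupError("unbound identifier: " + s)
    if nv.first.name == s:
        return nv.first.value         # the most recent binding wins
    return lookup(s, nv.rest)

# extending twice with the same name: the inner binding shadows the outer
nv = xtnd_env(bind("x", 3), xtnd_env(bind("x", 1), mt_env))
assert lookup("x", nv) == 3
```

Because extension conses onto the front and lookup scans from the front, the most recently added binding for a name shadows any earlier ones, which is exactly the behavior substitution would have produced.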

Now we have a full interpreter. You should of course test it to make sure it works as you’d expect. Let’s first set up some support code for testing:
<fof-env-interp-tests-setup> ::=
 check:
   f1 = fdC("double", "x", plusC(idC("x"), idC("x")))
   f2 = fdC("quad", "x", appC("double", appC("double", idC("x"))))
   f3 = fdC("const5", "_", numC(5))
   f4 = fdC("f4", "x", s2p2d("(if x 1 0)"))
   funs = [list: f1, f2, f3, f4]
   fun i(e): interp(e, mt-env, funs) end
For instance, these tests pass:
<fof-env-interp-tests> ::=
 i(plusC(numC(5), appC("quad", numC(3)))) is numV(17)
 i(multC(appC("const5", numC(3)), numC(4))) is numV(20)
 i(plusC(numC(10), appC("const5", numC(10)))) is numV(10 + 5)
 i(plusC(numC(10), appC("double", plusC(numC(1), numC(2))))) is numV(10 + 3 + 3)
 i(plusC(numC(10), appC("quad", plusC(numC(1), numC(2))))) is numV(10 + 3 + 3 + 3 + 3)
So we’re done, right?

Do Now!

Spot the bug.

##### 26.2.3 Deferring Correctly

Here’s another test. (The raises testing form is explained earlier: Testing Erroneous Programs.)
<fof-env-interp-another-test> ::=
 interp(appC("f1", numC(3)),
   mt-env,
   [list: fdC("f1", "x", appC("f2", numC(4))),
          fdC("f2", "y", plusC(idC("x"), idC("y")))])
   raises "unbound identifier: x"
In our environment-based interpreter, this instead evaluates to numV(7), so the test fails. Should it?

Translated into Pyret, this test corresponds to the following two definitions and expression:
fun f1(x): f2(4) end
fun f2(y): x + y end

f1(3)
What should this produce? f1(3) substitutes x with 3 in the body of f1, which then invokes f2(4). But notably, in f2, the identifier x is not bound! Sure enough, Pyret will produce an error.

In fact, so will our substitution-based interpreter!

Why does the substitution process result in an error? It’s because, when we replace the representation of x with the representation of 3 in the representation of f1, we do so in f1 only. (This “the representation of” is getting a little annoying, isn’t it? Therefore, I’ll stop saying that, but do make sure you understand why I had to say it. It’s an important bit of pedantry.) Obviously, x is f1’s parameter; even if another function had a parameter named x, that’s a different x. Thus, when we get to evaluating the body of f2, its x hasn’t been substituted, resulting in the error.

What went wrong when we switched to environments? Watch carefully: this is subtle. We can focus on applications, because only they affect the environment. When we substituted the formal for the value of the actual, we did so by extending the current environment. In terms of our example, we asked the interpreter to perform not only f2’s substitution in f2’s body, but also the current ones (those for the caller, f1), and indeed all past ones as well. That is, the environment only grows; it never shrinks.

Because we agreed that environments are only an alternate implementation strategy for substitution—and in particular, that the language’s meaning should not change—we have to alter the interpreter. Concretely, we should not ask it to carry around all past deferred substitution requests, but instead make it start afresh for every new function, just as the substitution-based interpreter does. This is an easy change:
<fof-env-interp-appC-rest-xtnd-2> ::=
 xtnd-env(bind(fd.arg, arg-val), mt-env)
Now we have truly reproduced the behavior of the substitution interpreter.
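The difference between the broken and the fixed rule is a single line. The following Python sketch (illustrative; numbers and additions only) interprets the f1/f2 example under both rules: extending the caller’s environment yields 7, while starting afresh from the empty environment correctly signals an unbound identifier.

```python
from dataclasses import dataclass

class ExprC: pass

@dataclass
class numC(ExprC):  n: float
@dataclass
class plusC(ExprC):
    l: ExprC
    r: ExprC
@dataclass
class idC(ExprC):   s: str
@dataclass
class appC(ExprC):
    f: str
    a: ExprC

@dataclass
class fdC:
    name: str
    arg: str
    body: ExprC

def get_fundef(name, fds):
    for fd in fds:
        if fd.name == name:
            return fd
    raise LookupError("couldn't find function: " + name)

def lookup(s, nv):
    for (name, value) in nv:
        if name == s:
            return value
    raise LookupError("unbound identifier: " + s)

def interp(e, nv, fds, dynamic=False):
    if isinstance(e, numC):  return e.n
    if isinstance(e, plusC): return interp(e.l, nv, fds, dynamic) + interp(e.r, nv, fds, dynamic)
    if isinstance(e, idC):   return lookup(e.s, nv)
    if isinstance(e, appC):
        fd = get_fundef(e.f, fds)
        arg_val = interp(e.a, nv, fds, dynamic)
        # broken rule: extend the *current* environment
        # fixed rule: start afresh from the empty environment
        base = nv if dynamic else []
        return interp(fd.body, [(fd.arg, arg_val)] + base, fds, dynamic)
    raise TypeError(e)

fds = [fdC("f1", "x", appC("f2", numC(4))),
       fdC("f2", "y", plusC(idC("x"), idC("y")))]

assert interp(appC("f1", numC(3)), [], fds, dynamic=True) == 7
```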

##### 26.2.4 Scope

The broken environment interpreter above implements what is known as dynamic scope. This means the environment accumulates bindings as the program executes. As a result, whether an identifier is even bound depends on the history of program execution. We should regard this unambiguously as a flaw of programming language design. It adversely affects all tools that read and process programs: compilers, IDEs, and humans.

In contrast, substitution—and environments, done correctly—give us lexical scope or static scope. “Lexical” in this context means “as determined from the source program”, while “static” in computer science means “without running the program”, so these are appealing to the same intuition. When we examine an identifier, we want to know two things: (1) Is it bound? (2) If so, where? By “where” we mean: if there are multiple bindings for the same name, which one governs this identifier? Put differently, which one’s substitution will give a value to this identifier? In general, these questions cannot be answered statically in a dynamically-scoped language: so your IDE, for instance, cannot overlay arrows to show you this information (the way an IDE like DrRacket does). (A different way to think about it is that in a dynamically-scoped language, the answer to these questions is the same for all identifiers, and it simply refers to the dynamic environment. In other words, it provides no useful information.) Thus, even though the rules of scope become more complex as the space of names becomes richer (e.g., objects, threads, etc.), we should always strive to preserve the spirit of static scoping.

You might look at our running example and wonder whether we’re creating a tempest in a teapot. In return, you should consider two situations:
1. To understand the binding structure of your program, you may need to look at the whole program. No matter how much you’ve decomposed your program into small, understandable fragments, it doesn’t matter if you have a free identifier anywhere.

2. Understanding the binding structure is not only a function of the size of the program but also of the complexity of its control flow. Imagine an interactive program with numerous callbacks; you’d have to track through every one of them, too, to know which binding governs an identifier.

Need a little more of a nudge? Let’s replace the expression of our example program with this one:
if moon-visible():
f1(10)
else:
f2(10)
end
Suppose moon-visible is a function that evaluates to false on new-moon nights, and true at other times. Then, this program will evaluate to an answer except on new-moon nights, when it will fail with an unbound identifier error.

Exercise

What happens on cloudy nights?

##### 26.2.6 The Top-Level Scope

Matters become more complex when we contemplate top-level definitions in many languages. For instance, some versions of Scheme (which is a paragon of lexical scoping) allow you to write this:
 (define y 1)
 (define (f x) (+ x y))
which seems to pretty clearly suggest where the y in the body of f will come from, except:
 (define y 1)
 (define (f x) (+ x y))
 (define y 2)
is legal and (f 10) produces 12. Wait, you might think, always take the last one! But consider:
 (define y 1)
 (define f
   (let ((z y))
     (lambda (x) (+ x y z))))
 (define y 2)
Here, z is bound to the first value of y whereas the inner y is bound to the second value. (Most “scripting” languages exhibit similar problems. As a result, on the Web you will find enormous confusion about whether a certain language is statically- or dynamically-scoped, when in fact readers are comparing behavior inside functions (often static) against the top-level (usually dynamic). Beware!) There is actually a valid explanation of this behavior in terms of lexical scope, but it can become convoluted, and perhaps a more sensible option is to prevent such redefinition. Pyret does precisely this, thereby offering the convenience of a top-level without its pain.
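Python’s top level behaves much like these Scheme examples: a function body looks up a global name when the function is called, whereas a value copied at definition time is fixed. A quick illustration:

```python
# mirrors: (define y 1) (define (f x) (+ x y)) (define y 2)
y = 1
def f(x):
    return x + y     # y is looked up when f is *called*, not when it is defined
y = 2
assert f(10) == 12   # the later definition of y wins

# mirrors the let-bound z example: z grabs y's value at definition time
y = 1
z = y                # z is bound to the *first* value of y
def g(x):
    return x + y + z
y = 2
assert g(10) == 13   # the inner y sees 2, but z remains 1
```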

##### 26.2.7 Exposing the Environment

If we were building the implementation for others to use, it would be wise and a courtesy for the exported interpreter to take only an expression and list of function definitions, and invoke our defined interp with the empty environment. This both spares users an implementation detail, and avoids the use of an interpreter with an incorrect environment. In some contexts, however, it can be useful to expose the environment parameter. For instance, the environment can represent a set of pre-defined bindings: e.g., if the language wishes to provide pi automatically bound to 3.2 (in Indiana).

#### 26.3 Functions Anywhere

The introduction to the Scheme programming language definition establishes this design principle:

Programming languages should be designed not by piling feature on top of feature, but by removing the weaknesses and restrictions that make additional features appear necessary. [REF]

As design principles go, this one is hard to argue with. (Some restrictions, of course, have good reason to exist (Functions and Predictability), but this principle forces us to argue for them, not admit them by default.) Let’s now apply this to functions.

One of the things we stayed coy about when introducing functions (Adding Functions to the Language) is exactly where functions go. We suggested we’re following the model of an idealized programming environment, with definitions and their uses kept separate. But, inspired by the Scheme design principle, let’s examine how necessary that is.

Why can’t function definitions be expressions? In our current arithmetic-centric language we face the uncomfortable question “What value does a function definition represent?”, to which we don’t really have a good answer. But a real programming language obviously computes more than numbers and Booleans, so we no longer need to confront the question in this form; indeed, the answer to the above can just as well be, “A function value”. Let’s see how that might work out.

What can we do with functions as values? Clearly, functions are a different kind of value from numbers, so we cannot, for instance, add them. But there is one evident thing we can do: apply them to arguments! Thus, we can allow function values to appear in the function position of an application. The behavior would, naturally, be to apply the function. We are therefore proposing a language where the following would be a valid program (where I’ve used brackets so we can easily identify the function, and made up a syntax for it):
 (+ 2 ([deffun f x (* x 3)] 4))
This would evaluate to (+ 2 (* 4 3)), or 14. (Did you see that I just used substitution?)

##### 26.3.1 Functions as Expressions and Values

Let’s first define the core language to include function definitions:
<hof-named-dd> ::=
 data ExprC:
   | numC(n :: Number)
   | plusC(l :: ExprC, r :: ExprC)
   | multC(l :: ExprC, r :: ExprC)
   | trueC
   | falseC
   | ifC(c :: ExprC, t :: ExprC, e :: ExprC)
   | idC(s :: String)
 end
For now, we’ll simply copy function definitions into the expression language. We’re free to change this if necessary as we go along, but for now it at least allows us to reuse our existing test cases.
<hof-named-dd-fdC/1> ::=
 | fdC(name :: String, arg :: String, body :: ExprC)
This enables us to now get rid of FunDefC.

We also need to determine what an application looks like. What goes in the function position of an application? We want to allow an entire function definition, not just its name. Because we’ve lumped function definitions in with all other expressions, we need the annotation to be ExprC, but we can add a refinement annotation to make clear it has to be a function definition:
<hof-named-dd-appC> ::=
 | appC(f :: ExprC%(is-fdC), a :: ExprC)
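To make the representation concrete, here is a small Python sketch (the book’s code is in Pyret) of the example program from the start of this section, using invented namedtuple constructors that mirror ExprC:

```python
from collections import namedtuple

# Hypothetical Python mirrors of the ExprC constructors from the text.
numC = namedtuple("numC", "n")
plusC = namedtuple("plusC", "l r")
multC = namedtuple("multC", "l r")
idC = namedtuple("idC", "s")
fdC = namedtuple("fdC", "name arg body")
appC = namedtuple("appC", "f a")

# (+ 2 ([deffun f x (* x 3)] 4)): the function definition sits directly
# in the function position of the application.
prog = plusC(numC(2),
             appC(fdC("f", "x", multC(idC("x"), numC(3))),
                  numC(4)))
```

The nesting makes the point visually: the fdC term is just another ExprC, so it can appear wherever an expression can.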

With this definition of application, we no longer have to look up functions by name, so the interpreter can get rid of the list of function definitions. If we need it we can restore it later, but for now let’s just explore what happens when function definitions are written at the point of application: so-called immediate functions. Thus our interpreter looks like this:
<hof-named-interp/1> ::=
fun interp(e :: ExprC, nv :: Environment):
  # removed return annotation of Value because fdC is not a Value!
  cases (ExprC) e:
    | numC(n) => numV(n)
    | plusC(l, r) => arith-binop(lam(x, y): x + y end, l, r, nv)
    | multC(l, r) => arith-binop(lam(x, y): x * y end, l, r, nv)
    | trueC => boolV(true)
    | falseC => boolV(false)
    | ifC(cnd, thn, els) =>
      ic = interp(cnd, nv)
      if is-boolV(ic):
        if ic.b:
          interp(thn, nv)
        else:
          interp(els, nv)
        end
      else:
        raise('not a boolean')
      end
    | idC(s) => lookup(s, nv)

Do Now!

Observe that we’ve left out the return annotation on interp. Why do you think this is? Run some examples to figure it out.

We need to add a case to the interpreter for function definitions, and this is a good candidate:
<hof-named-interp-fun/1> ::=
 | fdC(_, _, _) => e
The interpreter no longer returns just Values: it can also return function definitions. We could update our definition of Value (and thus restore the annotation), but we’ll soon find that we need to think this through a little more than we have.

When we need to evaluate an application, we can simply evaluate the function position to obtain a function definition, and the rest of the evaluation process can remain unchanged:
<hof-named-interp-app/1> ::=
| appC(f, a) =>
  fun-val = interp(f, nv)
  arg-val = interp(a, nv)
  interp(fun-val.body, xtnd-env(bind(fun-val.arg, arg-val), mt-env))
With that, our former examples work just fine:
check:
f1 = fdC("double", "x", plusC(idC("x"), idC("x")))
f2 = fdC("quad", "x", appC(f1, appC(f1, idC("x"))))
f3 = fdC("const5", "_", numC(5))
f4 = fdC("f4", "x", s2p2d("(if x 1 0)"))
fun i(e): interp(e, mt-env) end

i(plusC(numC(5), appC(f2, numC(3)))) is numV(17)
i(multC(appC(f3, numC(3)), numC(4))) is numV(20)
i(plusC(numC(10), appC(f3, numC(10)))) is numV(10 + 5)
i(plusC(numC(10), appC(f1, plusC(numC(1), numC(2)))))
is numV(10 + 3 + 3)
i(plusC(numC(10), appC(f2, plusC(numC(1), numC(2)))))
is numV(10 + 3 + 3 + 3 + 3)
end

##### 26.3.2A Small Improvement

Do Now!

Is there any part of our interpreter definition that we never use?

Yes there is: the name field of a function definition is never used. This is because we no longer look up functions by name: we obtain their definition through evaluation. Therefore, a simpler definition suffices:
<hof-fun/2> ::=
 | fdC(arg :: String, body :: ExprC)

Do Now!

Do you see what else you need to change?

In addition to the test cases, you also need to alter the interpreter fragment that handles definitions:
<hof-interp-fun/2> ::=
 | fdC(_, _) => e
In other words, our functions are now anonymous.

##### 26.3.3Nesting Functions

The body of a function definition is an arbitrary expression. A function definition is itself an expression. That means a function definition can contain a...function definition. For instance:
inner-fun = fdC("x", plusC(idC("x"), idC("x")))
outer-fun = fdC("x", inner-fun)
which evaluates to
fdC("x", fdC("x", plusC(idC("x"), idC("x"))))
Applying this to numC(4) results in
fdC("x", plusC(idC("x"), idC("x")))
We might try to apply this to a number—which it should double—but we run afoul of the refinement annotation on the function position of an application, which envisioned only immediate functions, not expressions that can evaluate to functions. Therefore, we should remove this restriction:

...

Suppose, however, we use a slightly different function definition:
appC(fdC("x", fdC("y", plusC(idC("x"), idC("y")))), numC(4))
which evaluates to
fdC("y", plusC(idC("x"), idC("y")))
Now we have a clear problem, because x is no longer bound, even though it clearly was in an outer scope. Indeed, if we apply it to any value, we get an error because of the unbound identifier.

##### 26.3.4Nested Functions and Substitution

Consider the last two examples with a substitution-based interpreter instead. If we evaluate the application
appC(fdC("x", fdC("x", plusC(idC("x"), idC("x")))), numC(4))
using substitution, the inner binding masks the outer one, so no substitutions should take place, giving the same result:
fdC("x", plusC(idC("x"), idC("x")))
In the other example—
appC(fdC("x", fdC("y", plusC(idC("x"), idC("y")))), numC(4))
however, substitution would replace the outer identifier, resulting in
fdC("y", plusC(numC(4), idC("y")))
So once again, if we take substitution as our definition of correctness, we see that our interpreter produces the wrong answer.
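The masking behavior can be captured in a short Python sketch of substitution over a toy AST (constructor names invented for illustration); it reproduces both results above:

```python
from collections import namedtuple

# Hypothetical AST constructors mirroring the text's numC/plusC/idC/fdC.
Num = namedtuple("Num", "n")
Plus = namedtuple("Plus", "l r")
Id = namedtuple("Id", "s")
Fd = namedtuple("Fd", "arg body")

def subst(what, for_name, in_expr):
    """Replace free occurrences of for_name by what in in_expr."""
    e = in_expr
    if isinstance(e, Num):
        return e
    elif isinstance(e, Plus):
        return Plus(subst(what, for_name, e.l), subst(what, for_name, e.r))
    elif isinstance(e, Id):
        return what if e.s == for_name else e
    elif isinstance(e, Fd):
        if e.arg == for_name:
            return e  # the inner binding masks the outer one: stop here
        return Fd(e.arg, subst(what, for_name, e.body))

# Inner binding of x masks the substitution, so the body is untouched:
assert subst(Num(4), "x", Fd("x", Plus(Id("x"), Id("x")))) == \
       Fd("x", Plus(Id("x"), Id("x")))
# But a free x under a different binder does get replaced:
assert subst(Num(4), "x", Fd("y", Plus(Id("x"), Id("y")))) == \
       Fd("y", Plus(Num(4), Id("y")))
```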

In other words, we’re again failing to faithfully capture what substitution would have done. A function value needs to remember the substitutions that have already been applied to it. Because we’re representing substitutions using an environment, a function value therefore needs to be bundled with an environment. This resulting data structure is called a closure.“Save the environment! Create a closure today!”—Cormac Flanagan
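Python’s own functions are closures in exactly this sense, so a two-line example can illustrate the idea: the returned function remembers the binding of x that was in effect when it was created:

```python
def make_adder(x):
    # The returned function closes over x: it carries the environment
    # from its point of creation, not from its point of use.
    return lambda y: x + y

add4 = make_adder(4)
```

Even though make_adder has long since returned, add4(5) still sees x bound to 4.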

##### 26.3.5Updating Values

In other words, a function can’t just evaluate to its body: it must evaluate to a closure:
<hof-value> ::=
data Value:
  | numV(n :: Number)
  | boolV(b :: Boolean)
  | closV(f :: ExprC%(is-fdC), e :: Environment)
end
The refinement annotation reflects that we are expecting a very specific kind of expression—that representing a function definition—in a closure.

The interpreter now uses it. (Look, we got our return value annotation back!) Most cases are unchanged from before:
<hof-interp> ::=
fun interp(e :: ExprC, nv :: Environment) -> Value:
  cases (ExprC) e:
  end
end
There are just two interesting cases: closure construction and closure use.

Do Now!

Write out these two cases.

When evaluating a function, we have to create a closure that records the environment at the time of function creation:“[Closures] are relegated to relative obscurity until Java makes them popular by not having them.”—James Iry
<hof-interp-fdC> ::=
 | fdC(_, _) => closV(e, nv)
This leaves function applications. Now the function position could be any expression, so we have to evaluate it first. That produces a value that we expect is an instance of closV. From it we can therefore extract the function’s body (.f.body) and argument name (.f.arg), and we evaluate the body in the environment taken from the closure (clos.e):
<hof-interp-appC> ::=
| appC(f, a) =>
  clos = interp(f, nv)
  arg-val = interp(a, nv)
  interp(clos.f.body, xtnd-env(bind(clos.f.arg, arg-val), clos.e))

Exercise

Observe that the argument to interp is clos.e rather than mt-env. Write a program that illustrates the difference.

This now computes the same answer we would have gotten through substitution.
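As a cross-check, here is a minimal Python sketch of the closure-based interpreter (names invented; only the cases needed here are covered). Applying the nested function from above to 4, and the resulting closure to 5, yields 9, exactly as substitution predicts:

```python
from collections import namedtuple

# Hypothetical mirrors of the text's datatypes.
Num = namedtuple("Num", "n")
Plus = namedtuple("Plus", "l r")
Id = namedtuple("Id", "s")
Fd = namedtuple("Fd", "arg body")      # anonymous function definition
App = namedtuple("App", "f a")

NumV = namedtuple("NumV", "n")
ClosV = namedtuple("ClosV", "f env")   # function definition + its environment

def interp(e, env):
    if isinstance(e, Num):
        return NumV(e.n)
    elif isinstance(e, Plus):
        return NumV(interp(e.l, env).n + interp(e.r, env).n)
    elif isinstance(e, Id):
        if e.s in env:
            return env[e.s]
        raise Exception("unbound identifier: " + e.s)
    elif isinstance(e, Fd):
        return ClosV(e, env)           # capture the creation-time environment
    elif isinstance(e, App):
        clos = interp(e.f, env)
        arg_val = interp(e.a, env)
        # extend the *closure's* environment, not the current one
        new_env = dict(clos.env)
        new_env[clos.f.arg] = arg_val
        return interp(clos.f.body, new_env)

# ((lam x: (lam y: x + y))(4))(5)
prog = App(App(Fd("x", Fd("y", Plus(Id("x"), Id("y")))), Num(4)), Num(5))
```

The inner closure carries x bound to 4, so the final application finds both x and y.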

Do Now!

If we now switch back to using substitution, will we encounter any problems?

Yes, we will. We’ve defined substitution to replace program text in other program text. Strictly speaking we can no longer do this, because Value terms cannot be contained inside ExprC ones. That is, substitution is predicated on the assumption that the type of answers is a form of syntax. It is actually possible to carry through a study of programming under this assumption, but we won’t take that path here.

##### 26.3.6Sugaring Over Anonymity

Now let’s get back to the idea of naming functions, which has evident value for program understanding. Observe that we do have a way of naming things: by passing them to functions, where they acquire a local name (that of the formal parameter). Anywhere within that function’s body, we can refer to that entity using the formal parameter name.

Therefore, we can name a function definition using another...function definition. For instance, the Pyret code
fun double(x): x + x end
double(10)
could first be rewritten as the equivalent
double = lam(x): x + x end
double(10)
which by substitution evaluates to (lam(x): x + x end)(10) or 20.
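The same rewriting works in Python, where lam corresponds to lambda; the second line shows the “let as application” shape directly:

```python
# Naming an anonymous function by binding it, then applying it:
double = lambda x: x + x

# The desugared form: the name "double" is acquired as a formal parameter.
let_style = (lambda double: double(10))(lambda x: x + x)
```

Both compute 20, by exactly the substitution argument given above.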

Indeed, this pattern is a local naming mechanism, and virtually every language has it in some form or another. In languages like Lisp and ML variants, it is usually called let.Note that in different languages, let has different scope rules: in some cases it permits recursive definitions, and in others it doesn’t. For instance, in Racket:
 (let ([double (lambda (x) (+ x x))]) (double 10))
In Pyret, as in several other languages like Java, there is no explicitly named construct of this sort, but any definition block permits local definitions such as this:
fun something():
double = lam(x): x + x end
double(10)
end

Here’s a more complex example, written in Racket to illustrate a point about scope:
(define (double x) (+ x x))
(define (quad x) (double (double x)))
(quad 10)
This could be rewritten as
(let ([double (lambda (x) (+ x x))])
  (let ([quad (lambda (x) (double (double x)))])
    (quad 10)))
which works just as we’d expect; but if we change the order, it no longer works—
(let ([quad (lambda (x) (double (double x)))])
  (let ([double (lambda (x) (+ x x))])
    (quad 10)))
because quad can’t “see” double. So we see that top-level binding is different from local binding: essentially, the top-level has “infinite scope”. This is the source of both its power and problems.
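Python’s top level has a similar flavor: global names are resolved at call time, so a function may mention another that is defined only later in the file, as long as it is not called before then:

```python
# quad refers to double before double exists; this is fine because the
# top-level name is only looked up when quad is actually called.
def quad(x):
    return double(double(x))

def double(x):
    return x + x
```

Once both definitions have been loaded, quad(10) evaluates to 40.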

#### 26.4Recursion and Non-Termination

Hopefully you can convince yourself that our pure expression language—with only arithmetic and conditionals—could not create non-terminating programs. Why? Because its interpreter is purely structural over a non-cyclic datatype. In contrast, even our very first function interpreter is generative, which therefore opens up the possibility that it can have non-terminating computation.

Do Now!

Construct a non-terminating program for that interpreter.

And, indeed, it can. Here’s a function definition:
il = fdC("inf-loop", "x", appC("inf-loop", numC(0)))
and we just need to get it started:
interp(appC("inf-loop", numC(0)), [list: il])

Exercise

Precisely identify the generative recursion that enables this.

Do Now!

Why does this work? Why is this an infinite loop?

What’s happening here is actually somewhat subtle. The initial call to interp results in the interpreter finding a function and interpreting its body, which results in another call to interp: which finds the function and interprets its body, which results...and so on. If for some reason Pyret did not support recursion (which, historically, some languages did not!), then this would not work. Indeed, there is still something we are leaving to Pyret:

Do Now!

Does this program truly run for “ever” (meaning, as long as the computer is functioning properly), or does it run out of stack space?

Okay, that was easy. Now let’s consider our most recent interpreter. What can it do?

Consider this simple infinite loop in Pyret:
fun loop-forever(): loop-forever() end
loop-forever()
Let’s convert it to use an anonymous function:
loop-forever = lam(): loop-forever() end
loop-forever()
Seems fine, right? Use the let desugaring above:
 (lam(loop-forever): loop-forever() end)(lam(): loop-forever() end)
And now, loop-forever on the last line isn’t bound!

Therefore, Pyret’s = is clearly doing something more than just textual substitution: it is also “tying the loop” for recursive definitions.

Do Now!

Can we try anything else that might succeed?

Actually, we can. Here it is. To make it more readable we’ll first give the important intermediate term a name (and then see that the name isn’t necessary):
little-omega = lam(x): x(x) end
Given this, we can then define:
omega = little-omega(little-omega)

Exercise

Why does this run forever? Consider using substitution to explain why.

Note that we could have written the whole thing without any names at all:
(lam(x): x(x) end)(lam(x): x(x) end)
As the names above suggest, the function is conventionally called ω (little omega in Greek), and the bigger term Ω (capital omega). To understand how we could have arrived at this magical term, see [EMPTY].
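We can transcribe ω into Python directly. Mechanically, Python’s call stack overflows rather than looping forever, but the point stands: the term never reaches a value:

```python
import sys

# omega: a function that applies its argument to itself
little_omega = lambda x: x(x)

sys.setrecursionlimit(1000)  # keep the inevitable overflow quick
try:
    little_omega(little_omega)   # this is the term Omega
    terminated = True            # never reached
except RecursionError:
    terminated = False           # the stack overflows instead of looping forever
```

Each application of ω to itself immediately produces the same application again, so no step of evaluation makes progress toward a value.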

#### 26.5Functions and Predictability

We began (Adding Functions to the Language) with a language where at all application points, we knew exactly which function was going to be invoked (because we knew its name, and the name referred to one of a fixed global set). These are known as first-order functions. In contrast, we later moved to a language (Functions Anywhere) with first-class functions: those that had the same status as any other value in the language.

This transition gave us a great deal of new flexibility. For instance, we saw (Sugaring Over Anonymity) that some seemingly necessary language features could instead be implemented just as syntactic sugar; indeed, with true first-class functions, we can define all of computation ([EMPTY]). So what’s not to like?

The subtle problem is that whenever we increase our expressive power, we correspondingly weaken our predictive power. In particular, when confronted with a particular function application in a program, the question is, can we tell precisely which function is going to be invoked at this point? With first-order functions, yes; with higher-order functions, this is undecidable. Having this predictive power has many important consequences: a compiler can choose to inline (almost) every function application; a programming environment can give substantial help about which function is being called at that point; a security analyzer can definitively rule out known bad functions, thereby reducing the number of useless alerts it generates. Of course, with higher-order functions, all these operations are still sometimes possible; but they are not always possible, and how possible they are depends on the structure of the program and the cleverness of tools.
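A tiny Python example (names invented) shows why prediction is hard with first-class functions: the callee at the call site f(n) depends on runtime data, so no static tool can name it in general:

```python
def double(x):
    return x + x

def square(x):
    return x * x

def apply_unknown(n):
    # The function bound to f is chosen at run time, so a static analysis
    # of the call f(n) cannot, in general, name a single callee.
    f = double if n % 2 == 0 else square
    return f(n)
```

With first-order functions, every call site names its target; here the target is a value flowing through the program.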

Exercise

With higher-order functions, why is determining the precise function at an application undecidable?

Exercise

Why does the above reference to inlining say “almost”?