15 Evaluating Functions

    15.1 Adding Functions to the Language

      15.1.1 Defining Data Representations

      15.1.2 Growing the Interpreter

      15.1.3 Substitution

      15.1.4 The Interpreter, Resumed

      15.1.5 Oh Wait, There’s More!

    15.2 From Substitution to Environments

      15.2.1 Introducing the Environment

      15.2.2 Interpreting with Environments

      15.2.3 Deferring Correctly

      15.2.4 Scope

      15.2.5 How Bad Is It?

      15.2.6 The Top-Level Scope

      15.2.7 Exposing the Environment

    15.3 Functions Anywhere

      15.3.1 Functions as Expressions and Values

      15.3.2 A Small Improvement

      15.3.3 Nesting Functions

      15.3.4 Nested Functions and Substitution

      15.3.5 An Answer Type

      15.3.6 Sugaring Over Anonymity

    15.4 Functions and Predictability

15.1 Adding Functions to the Language

Let’s start creating a real programming language. We could add intermediate features such as conditionals, but to do almost anything interesting we’re going to need functions or their moral equivalent, so let’s get to it.

Exercise

Add conditionals to your language. You can either add boolean datatypes or, if you want to do something quicker, add a conditional that treats 0 as false and everything else as true.

What are the important test cases you should write?

15.1.1 Defining Data Representations

Imagine we’re modeling a simple programming environment. The developer defines functions in a definitions window, and uses them in an interactions window, which provides a prompt at which they can run expressions. (For historic reasons, the interactions window is also called a REPL or “read-eval-print loop”.) For now, let’s assume all definitions go in the definitions window only (we’ll relax this soon [REF]), and all stand-alone expressions in the interactions window only. Thus, running a program simply loads definitions. Our interpreter will correspond to the interactions window prompt; therefore, assume it is supplied with a set of definitions.

A set of definitions suggests no ordering, which means, presumably, any definition can refer to any other. That’s what I intend here, but when you are designing your own language, be sure to think about this.

To keep things simple, let’s just consider functions of one argument. Here are some Pyret examples:

fun double(x): x + x end

 

fun quadruple(x): double(double(x)) end

 

fun const5(_): 5 end

Exercise

When a function has multiple arguments, what simple but important criterion governs the names of those arguments?

What are the parts of a function definition? It has a name (above, double, quadruple, and const5), which we’ll represent as a string ("double", etc.); its formal parameter or argument has a name (e.g., x), which too we can model as a string ("x"); and it has a body. We’ll determine the body’s representation in stages, but let’s start to lay out a datatype for function definitions:

data FunDefC:

  | fdC (name :: String, arg :: String, body :: ExprC)

end

What is the body? Clearly, it has the form of an arithmetic expression, and sometimes it can even be represented using the existing ArithC language: for instance, the body of const5 can be represented as numC(5). But representing the body of double requires something more: not just addition (which we have), but also “x”. You are probably used to calling this a variable, but we will not use that term for now. Instead, we will call it an identifier. (I promise we’ll return to this issue of nomenclature later [REF].)

Do Now!

Anything else?

Finally, let’s look at the body of quadruple. It has yet another new construct: a function application. Be very careful to distinguish between a function definition, which describes what the function is, and an application, which uses it. The argument (or actual parameter) in the inner application of double is x; the argument in the outer application is double(x). Thus, the argument can be any complex expression.

Let’s commit all this to a crisp datatype. Clearly we’re extending what we had before (because we still want all of arithmetic). We’ll give a new name to our datatype to signify that it’s growing up:
<datatype> ::=

    data ExprC:

      | numC (n :: Number)

      | plusC (l :: ExprC, r :: ExprC)

      | multC (l :: ExprC, r :: ExprC)

      | <idC-dt>

      | <appC-dt>

    end

Identifiers are closely related to formal parameters. When we apply a function by giving it a value for its parameter, we are in effect asking it to replace all instances of that formal parameter in the body—i.e., the identifiers with the same name as the formal parameter—with that value. (Observe that we are being coy about a few issues: what kind of “value” [REF] and when to replace [REF].) To simplify this process of search-and-replace, we might as well use the same datatype to represent both. We’ve already chosen strings to represent formal parameters, so:
<idC-dt> ::=

    | idC (s :: String)

Finally, applications. They have two parts: the function’s name, and its argument. We’ve already agreed that the argument can be any full-fledged expression (including identifiers and other applications). As for the function name, it again makes sense to use the same datatype as we did when giving the function its name in a function definition. Thus:
<appC-dt> ::=

    | appC (f :: String, a :: ExprC)

identifying which function to apply, and providing its argument.

Using these definitions, it’s instructive to write out the representations of the examples we defined above:
  • fdC("double", "x", plusC(idC("x"), idC("x")))

  • fdC("quadruple", "x", appC("double", appC("double", idC("x"))))

  • fdC("const5", "_", numC(5))

We also need to choose a representation for a set of function definitions. It’s convenient to represent these by a list.

Look out! Did you notice that we spoke of a set of function definitions, but chose a list representation? That means we’re using an ordered collection of data to represent an unordered entity. At the very least, then, when testing, we should use any and all permutations of definitions to ensure we haven’t subtly built in a dependence on the order.
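
For instance, once the interpreter of the following subsections is in place, an order-independence test might look like this (a sketch; it assumes the interp, subst, and get-fundef defined below):

check:
  dbl = fdC("double", "x", plusC(idC("x"), idC("x")))
  qdr = fdC("quadruple", "x", appC("double", appC("double", idC("x"))))
  e = appC("quadruple", numC(3))
  # the answer should not depend on the order of the definitions
  interp(e, [dbl, qdr]) is 12
  interp(e, [qdr, dbl]) is 12
end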

15.1.2 Growing the Interpreter

Now we’re ready to tackle the interpreter proper. First, let’s remind ourselves of what it needs to consume. Previously, it consumed only an expression to evaluate. Now it also needs to take a list of function definitions:
<subst-interp> ::=

    fun interp(e :: ExprC, fds :: List<FunDefC>) -> Number:

      <interp-body>

Let’s revisit our old interpreter (A First Look at Interpretation). In the case of numbers, clearly we still return the number as the answer. In the addition and multiplication case, we still need to recur (because the sub-expressions might be complex), but which set of function definitions do we use? Because the act of evaluating an expression neither adds nor removes function definitions, the set of definitions remains the same, and should just be passed along unchanged in the recursive calls.
<interp-body> ::=

    cases (ExprC) e:

      | numC(n) => n

      | plusC(l, r) => interp(l, fds) + interp(r, fds)

      | multC(l, r) => interp(l, fds) * interp(r, fds)

      | idC(s) => <idC-interp-subst>

      | appC(f, a) => <appC-interp-subst>

Now let’s tackle application. First we have to look up the function definition, for which we’ll assume we have a helper function of this type available:
<get-fundef> ::=

    fun get-fundef(name :: String, fds :: List<FunDefC>)

        -> FunDefC:

      <get-fundef-body>

    end

Assuming we find a function of the given name, we need to evaluate its body. However, remember what we said about identifiers and parameters? We must “search-and-replace”, a process you have seen before in school algebra called substitution. This is sufficiently important that we should talk first about substitution before returning to the interpreter (The Interpreter, Resumed).

15.1.3 Substitution

Substitution is the act of replacing a name (in this case, that of the formal parameter) in an expression (in this case, the body of the function) with another expression (in this case, the actual parameter). Its header, with meaningful parameter names, would be:
<subst> ::=

    fun subst(with :: ExprC, at :: String, in :: ExprC)

        -> ExprC:

      <subst-body>

    end

The first argument is what we want to replace the name with; the second is at what name we want to perform substitution; and the third is in which expression we want to do it.

Do Now!

Suppose we want to substitute 3 for the identifier x in the bodies of the three example functions above. What should it produce?

In double, this should produce 3 + 3; in quadruple, it should produce double(double(3)); and in const5, it should produce 5 (i.e., no substitution happens because there are no instances of x in the body).
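
These expectations translate directly into tests for the subst whose body we are about to write (a sketch, using the representations given earlier):

check:
  subst(numC(3), "x", plusC(idC("x"), idC("x"))) is plusC(numC(3), numC(3))
  subst(numC(3), "x", appC("double", appC("double", idC("x"))))
    is appC("double", appC("double", numC(3)))
  subst(numC(3), "x", numC(5)) is numC(5)
end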

A common mistake is to assume that the result of substituting, e.g., 3 for x in double is fun double(x): 3 + 3 end. This is incorrect. We only substitute at the point when we apply the function, at which point the function’s invocation is replaced by its body. The header enables us to find the function and ascertain the name of its parameter; but only its body participates in evaluation. Examine the use of substitution in the interpreter to see how returning a function definition would result in a type error.

These examples already tell us what to do in almost all the cases. Given a number, there’s nothing to substitute. Given an identifier whose name differs from the one being substituted, we haven’t seen such an example, but you have probably guessed what should happen: it stays unchanged. In the other cases, descend into the sub-expressions, performing substitution.

Before we turn this into code, there’s an important case to consider. Suppose the name we are substituting happens to be the name of a function. Then what should happen?

Do Now!

What, indeed, should happen?

There are many ways to approach this question. One is from a design perspective: function names live in their own “world”, distinct from ordinary program identifiers. Some languages (such as C and Common Lisp, in slightly different ways) take this perspective, and partition identifiers into different namespaces depending on how they are used. In other languages, there is no such distinction; indeed, we will examine such languages soon [REF].

For now, we will take a pragmatic viewpoint. Because expressions evaluate to numbers, a function name could turn into a number. However, numbers cannot name functions; only names (which we represent as strings) can. Therefore, it makes no sense to substitute in that position, and we should leave the function name unmolested irrespective of its relationship to the variable being substituted. (Thus, a function could have a parameter named x as well as refer to another function called x, and these would be kept distinct.)

Now we’ve made all our decisions, and we can provide the body:
<subst-body> ::=

    cases (ExprC) in:

      | numC(n) => in

      | plusC(l, r) => plusC(subst(with, at, l), subst(with, at, r))

      | multC(l, r) => multC(subst(with, at, l), subst(with, at, r))

      | appC(f, a) => appC(f, subst(with, at, a))

      | idC(s) =>

        if s == at:

          with

        else:

          in

        end

    end

Exercise

Observe that, whereas in the numC case the interpreter returned n, substitution returns in (i.e., the original expression, equivalent at that point to writing numC(n)). Why?

15.1.4 The Interpreter, Resumed

Phew! Now that we’ve completed the definition of substitution (or so we think), let’s complete the interpreter. Substitution was a heavyweight step, but it also does much of the work involved in applying a function. It is tempting to write
<appC-interp/alt> ::=

    | appC(f, a) =>

      fd = get-fundef(f, fds)

      subst(a, fd.arg, fd.body)

Tempting, but wrong.

Do Now!

Do you see why?

Reason from the types. What does the interpreter return? Numbers. What does substitution return? Oh, that’s right, expressions! For instance, when we substituted in the body of double, we got back the representation of 5 + 5. This is not a valid answer for the interpreter. Instead, it must be reduced to an answer. That, of course, is precisely what the interpreter does:
<appC-interp-subst> ::=

    | appC(f, a) =>

      fd = get-fundef(f, fds)

      interp(subst(a, fd.arg, fd.body), fds)

    

Okay, that leaves only one case: identifiers. What could possibly be complicated about them? They should be just about as simple as numbers! And yet we’ve put them off to the very end, suggesting something subtle or complex is afoot.

Do Now!

Work through some examples to understand what the interpreter should do in the identifier case.

Let’s suppose we had defined double as follows:

fun double(x): x + y end

When we substitute 5 for x, this produces the expression 5 + y. So far so good, but what are we supposed to substitute for y? As a matter of fact, it should be clear from the very outset that this definition of double is erroneous. The identifier y is said to be free, an adjective that in this setting has negative connotations.

In other words, the interpreter should never confront an identifier. All identifiers ought to be parameters that have already been substituted (known as bound identifiers—here, a positive connotation) before the interpreter ever sees them. As a result, there is only one possible response given an identifier:
<idC-interp-subst> ::=

    | idC(s) => raise("unbound identifier")

And that’s it!

Finally, to complete our interpreter, we should define get-fundef:
<get-fundef-body> ::=

    cases (List<FunDefC>) fds:

      | empty => raise("couldn't find function")

      | link(f, r) =>

        if f.name == name:

          f

        else:

          get-fundef(name, r)

        end

    end
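
It is worth checking get-fundef both when it finds a definition and when it does not (a small sketch, reusing two of the example definitions; the name “halve” is deliberately undefined):

check:
  dbl = fdC("double", "x", plusC(idC("x"), idC("x")))
  c5 = fdC("const5", "_", numC(5))
  get-fundef("const5", [dbl, c5]) is c5
  get-fundef("halve", [dbl, c5]) raises "couldn't find function"
end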

15.1.5 Oh Wait, There’s More!

Earlier, we declared subst as:

fun subst(with :: ExprC, at :: String, in :: ExprC)

    -> ExprC:

  ...

end

Sticking to surface syntax for brevity, suppose we apply double to 1 + 2. This would substitute 1 + 2 for each x, resulting in the expression (1 + 2) + (1 + 2) for interpretation. Is this necessarily what we want?

When you learned algebra in school, you may have been taught to do this differently: first reduce the argument to an answer (in this case, 3), then substitute the answer for the parameter. This notion of substitution might have the following type instead:

fun subst(with :: Number, at :: String, in :: ExprC)

    -> ExprC:

  ...

end

Careful now: we can’t put raw numbers inside expressions, so we’d have to constantly wrap the number in an invocation of numC. Thus, it would make sense for subst to have a helper that it invokes after wrapping the first parameter. (In fact, our existing subst would be a perfectly good candidate: because it accepts any ExprC in the first parameter, it will certainly work just fine with a numC.)
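
One possible shape for this arrangement is sketched below; the name subst-num is purely illustrative:

fun subst-num(w :: Number, at :: String, in :: ExprC) -> ExprC:
  # illustrative helper: wrap the number, then reuse the existing subst
  subst(numC(w), at, in)
end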

In fact, we don’t even have substitution quite right! The version of substitution we have doesn’t scale past this language due to a subtle problem known as “name capture”. Fixing substitution is complex, subtle, and an exciting intellectual endeavor, but it’s not the direction I want to go in here. We’ll instead sidestep this problem in this book. If you’re interested, however, read about the lambda calculus [CITE], which provides the tools for defining substitution correctly.

Exercise

Modify your interpreter to substitute names with answers, not expressions.

We’ve actually stumbled on a profound distinction in programming languages. The act of evaluating arguments before substituting them in functions is called eager application, while that of deferring evaluation is called lazy (and has some variations). For now, we will prefer the eager semantics, because this is what most mainstream languages adopt. Later [REF], we will return to the lazy application semantics and its implications.

15.2 From Substitution to Environments

Though we have a working definition of functions, you may feel a slight unease about it. When the interpreter sees an identifier, you might have had a sense that it needs to “look it up”. Not only did it not look up anything, we defined its behavior to be an error! While absolutely correct, this is also a little surprising. More importantly, we write interpreters to understand and explain languages, and this implementation might strike you as not doing that, because it doesn’t match our intuition.

There’s another difficulty with using substitution, which is the number of times we traverse the source program. It would be nice to have to traverse only those parts of the program that are actually evaluated, and then, only when necessary. But substitution traverses everything—unvisited branches of conditionals, for instance—and forces the program to be traversed once for substitution and once again for interpretation.

Exercise

Does substitution have implications for the time complexity of evaluation?

There’s yet another problem with substitution, which is that it is defined in terms of representations of the program source. Obviously, our interpreter has and needs access to the source, to interpret it. However, other implementations—such as compilers—have no need to store it for that purpose. (Compilers might store versions of or information about the source for other reasons, such as reporting runtime errors, and JITs may need it to re-compile on demand.) It would be nice to employ a mechanism that is more portable across implementation strategies.

15.2.1 Introducing the Environment

The intuition that addresses the first concern is to have the interpreter “look up” an identifier in some sort of directory. The intuition that addresses the second concern is to defer the substitution. Fortunately, these converge nicely in a way that also addresses the third. The directory records the intent to substitute, without actually rewriting the program source; by recording the intent, rather than substituting immediately, we can defer substitution; and the resulting data structure, which is called an environment, avoids the need for source-to-source rewriting and maps nicely to low-level machine representations. Each name association in the environment is called a binding. (This does not mean our study of substitution was useless; to the contrary, many tools that work over programs—such as compilers and analyzers—use substitution, just not for the purpose of evaluating programs at run-time.)

Observe carefully that what we are changing is the implementation strategy for the programming language, not the language itself. Therefore, none of our datatypes for representing programs should change, nor even should the answers that the interpreter provides. As a result, we should think of the previous interpreter as a “reference implementation” that the one we’re about to write should match. Indeed, we should create a generator that creates lots of tests, runs them through both interpreters, and makes sure their answers are the same: i.e., the previous implementation is an oracle [REF]. Ideally, we should prove that the two interpreters behave the same, which is a good topic for advanced study.

One subtlety is in defining precisely what “the same” means, especially with regards to failure.

Let’s first define our environment data structure. An environment is a collection of names associated with...what?

Do Now!

A natural question to ask here might be what the environment maps names to. But a better, more fundamental, question is: How to determine the answer to the “natural” question?

Remember that our environment was created to defer substitutions. Therefore, the answer lies in substitution. We discussed earlier (Oh Wait, There’s More!) that we want substitution to map names to answers, corresponding to an eager function application strategy. Therefore, the environment should map names to answers.

data Binding:

  | bind (name :: String, value :: Number)

end

 

# An Environment is a List<Binding>

mt-env = []

xtnd-env = link
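
For example, an environment that binds x to 5, on top of one that binds y to 6, is built like this:

# binds x to 5 in an environment that already binds y to 6
xtnd-env(bind("x", 5), xtnd-env(bind("y", 6), mt-env))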

15.2.2 Interpreting with Environments

Now we can tackle the interpreter. One case is easy, but we should revisit all the others:

<interp-env> ::=

    fun interp(e :: ExprC, nv :: List<Binding>, fds :: List<FunDefC>)

        -> Number:

      cases (ExprC) e:

        | numC(n) => n

        <plusC/multC-interp>

        <idC-interp-fof>

        <appC-interp-fof>

      end

    end

The arithmetic operations are easiest. Recall that before, the interpreter recurred without performing any new substitutions. As a result, there are no new deferred substitutions to perform either, which means the environment does not change:
<plusC/multC-interp> ::=

    | plusC(l, r) => interp(l, nv, fds) + interp(r, nv, fds)

    | multC(l, r) => interp(l, nv, fds) * interp(r, nv, fds)

Now let’s handle identifiers. Clearly, encountering an identifier is no longer an error: this was the very motivation for this change. Instead, we must look up its value in the directory:
<idC-interp-fof> ::=

    | idC(s) => lookup(s, nv)

Do Now!

Implement lookup.

Finally, application. Observe that in the substitution interpreter, the only case that caused new substitutions to occur was application. Therefore, this should be the case that constructs bindings. Let’s first extract the function definition, just as before:
<appC-interp-fof> ::=

    | appC(f, a) =>

      fd = get-fundef(f, fds)

      <appC-interp-fof-rest>

Previously, we substituted, then interpreted. Because we have no substitution step, we can proceed with interpretation, so long as we record the deferral of substitution. Let’s also evaluate the argument:
<appC-interp-fof-rest> ::=

    arg-val = interp(a, nv, fds)

    interp(fd.body, <appC-interp-fof-env>, fds)

That is, the set of function definitions remains unchanged; we’re interpreting the body of the function, as before; but we have to do it in an environment that binds the formal parameter. Let’s now define that binding process:
<appC-interp-fof-env/alt> ::=

    xtnd-env(bind(fd.arg, arg-val), nv)

The name being bound is the formal parameter (the same name that was substituted for, before). It is bound to the result of interpreting the argument (because we’ve decided on an eager application semantics). And finally, this extends the environment we already have. Type-checking this helps to make sure we got all the little pieces right.

Once we have a definition for lookup, we’d have a full interpreter. So here’s one:
<lookup> ::=

    fun lookup(s :: String, nv :: List<Binding>) -> Number:

      cases (List<Binding>) nv:

        | empty => raise("unbound identifier: " + s)

        | link(f, r) =>

          if s == f.name:

            f.value

          else:

            lookup(s, r)

          end

      end

    end

Observe that looking up a free identifier still produces an error, but it has moved from the interpreter—which is by itself unable to determine whether or not an identifier is free—to lookup, which determines this based on the content of the environment.
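
A few tests make the behavior of lookup concrete, including the fact that the most recent binding of a name is the one found (a small sketch over the definitions above):

check:
  lookup("x", xtnd-env(bind("x", 5), mt-env)) is 5
  lookup("x", xtnd-env(bind("x", 5), xtnd-env(bind("x", 6), mt-env))) is 5
  lookup("y", xtnd-env(bind("x", 5), mt-env)) raises "unbound identifier"
end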

Now we have a full interpreter. You should of course test it to make sure it works as you’d expect. Let’s first set up some support code for testing:
<interp-tests-setup> ::=

    check:

      f1 = fdC("double", "x", plusC(idC("x"), idC("x")))

      f2 = fdC("quadruple", "x",

               appC("double", appC("double", idC("x"))))

      f3 = fdC("const5", "_", numC(5))

      funs = [f1, f2, f3]

      fun i(e): interp(e, mt-env, funs) end

      <interp-tests>

For instance, these tests pass:
<interp-tests> ::=

    i(plusC(numC(5), appC("quadruple", numC(3)))) is 17

    i(multC(appC("const5", numC(3)), numC(4))) is 20

    i(plusC(numC(10), appC("const5", numC(10)))) is (10 + 5)

    i(plusC(numC(10), appC("double", plusC(numC(1), numC(2)))))

      is (10 + 3 + 3)

    i(plusC(numC(10), appC("quadruple", plusC(numC(1), numC(2)))))

      is (10 + 3 + 3 + 3 + 3)

    <another-test>

So we’re done, right?

Do Now!

Spot the bug.

15.2.3 Deferring Correctly

Here’s another test:
<another-test> ::=

    interp(appC("f1", numC(3)), mt-env,

      [fdC("f1", "x", appC("f2", numC(4))),

        fdC("f2", "y", plusC(idC("x"), idC("y")))])

      raises "unbound identifier: y"

In our interpreter, this evaluates to 7. Should it?

Translated into Pyret, this test corresponds to the following two definitions and expression:

fun f1(x): f2(4) end

fun f2(y): x + y end

 

f1(3)

What should this produce? f1(3) substitutes x with 3 in the body of f1, which then invokes f2(4). But notably, in f2, the identifier x is not bound! Sure enough, Pyret will produce an error.

In fact, so will our substitution-based interpreter!

Why does the substitution process result in an error? It’s because, when we replace the representation of x with the representation of 3 in the representation of f1, we do so in f1 only. (This “the representation of” is getting a little annoying, isn’t it? Therefore, I’ll stop saying that, but do make sure you understand why I had to say it; it’s an important bit of pedantry.) Obviously, x is f1’s parameter; even if another function had a parameter named x, that’s a different x. Thus, when we get to evaluating the body of f2, its x hasn’t been substituted, resulting in the error.

What went wrong when we switched to environments? Watch carefully: this is subtle. We can focus on applications, because only they affect the environment. When we substituted the formal for the value of the actual, we did so by extending the current environment. In terms of our example, we asked the interpreter to perform not only f2’s substitution in f2’s body, but also the current ones (those for the caller, f1), and indeed all past ones as well. That is, the environment only grows; it never shrinks.

Because we agreed that environments are only an alternate implementation strategy for substitution—and in particular, that the language’s meaning should not change—we have to alter the interpreter. Concretely, we should not ask it to carry around all past deferred substitution requests, but instead make it start afresh for every new function, just as the substitution-based interpreter does. This is an easy change:
<appC-interp-fof-env> ::=

    xtnd-env(bind(fd.arg, arg-val), mt-env)

Now we have truly reproduced the behavior of the substitution interpreter. (Use raises to write tests for expressions that are expected to raise an error.)

15.2.4 Scope

The broken environment interpreter above implements what is known as dynamic scope. This means the environment accumulates bindings as the program executes. As a result, whether an identifier is even bound depends on the history of program execution. We should regard this unambiguously as a flaw of programming language design. It adversely affects all tools that read and process programs: compilers, IDEs, and humans.

In contrast, substitution—and environments, done correctly—give us lexical scope or static scope. “Lexical” in this context means “as determined from the source program”, while “static” in computer science means “without running the program”, so these two terms appeal to the same intuition. When we examine an identifier, we want to know two things: (1) Is it bound? (2) If so, where? By “where” we mean: if there are multiple bindings for the same name, which one governs this identifier? Put differently, which one’s substitution will give a value to this identifier? In general, these questions cannot be answered statically in a dynamically-scoped language: so your IDE, for instance, cannot overlay arrows to show you this information (the way an IDE like DrRacket does). (A different way to think about it is that in a dynamically-scoped language, the answer to these questions is the same for all identifiers, and it simply refers to the dynamic environment. In other words, it provides no useful information.) Thus, even though the rules of scope become more complex as the space of names becomes richer (e.g., objects, threads, etc.), we should always strive to preserve the spirit of static scoping.

15.2.5 How Bad Is It?

You might look at our running example and wonder whether we’re creating a tempest in a teapot. In return, you should consider two situations:
  1. To understand the binding structure of your program, you may need to look at the whole program. No matter how much you’ve decomposed your program into small, understandable fragments, that decomposition doesn’t help if there is a free identifier anywhere.

  2. Understanding the binding structure is not only a function of the size of the program but also of the complexity of its control flow. Imagine an interactive program with numerous callbacks; you’d have to track through every one of them, too, to know which binding governs an identifier.

Need a little more of a nudge? Let’s replace the expression of our example program with this one:

if moon-visible():

  f1(10)

else:

  f2(10)

end

Suppose moon-visible is a function that evaluates to false on new-moon nights, and true at other times. Then, this program will evaluate to an answer except on new-moon nights, when it will fail with an unbound identifier error.

Exercise

What happens on cloudy nights?

15.2.6 The Top-Level Scope

Matters become more complex when we contemplate top-level definitions in many languages. For instance, some versions of Scheme (which is a paragon of lexical scoping) allow you to write this:
(define y 1)
(define (f x) (+ x y))
which seems to pretty clearly suggest where the y in the body of f will come from, except:
(define y 1)
(define (f x) (+ x y))
(define y 2)
is legal and (f 10) produces 12. Wait, you might think, always take the last one! But consider:
(define y 1)
(define f (let ((z y)) (lambda (x) (+ x y z))))
(define y 2)
Here, z is bound to the first value of y whereas the inner y is bound to the second value. (Most “scripting” languages exhibit similar problems. As a result, on the Web you will find enormous confusion about whether a certain language is statically- or dynamically-scoped, when in fact readers are comparing behavior inside functions (often static) against the top-level (usually dynamic). Beware!) There is actually a valid explanation of this behavior in terms of lexical scope, but it can become convoluted, and perhaps a more sensible option is to prevent such redefinition. Pyret does precisely this, thereby offering the convenience of a top-level without its pain.
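
For instance, the following Pyret sketch shows the restriction at work: uncommenting the second definition of y is rejected rather than silently rebinding the name.

y = 1
fun f(x): x + y end
# y = 2   # rejected by Pyret: y is already defined in this scope
f(10)     # evaluates to 11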

15.2.7 Exposing the Environment

If we were building the implementation for others to use, it would be wise, and a courtesy, for the exported interpreter to take only an expression and a list of function definitions, and to invoke our defined interp with the empty environment. This both spares users an implementation detail and prevents the interpreter from being invoked with an incorrect environment. In some contexts, however, it can be useful to expose the environment parameter. For instance, the environment can represent a set of pre-defined bindings: e.g., if the language wishes to provide pi automatically bound to 3.2 (in Indiana).
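
A minimal sketch of such an exported entry point (the name interp-prog is illustrative):

# illustrative wrapper: users supply only the program pieces,
# and the environment stays an internal detail
fun interp-prog(e :: ExprC, fds :: List<FunDefC>) -> Number:
  interp(e, mt-env, fds)
end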

15.3 Functions Anywhere

The introduction to the Scheme programming language definition establishes this design principle:

Programming languages should be designed not by piling feature on top of feature, but by removing the weaknesses and restrictions that make additional features appear necessary. [REF]

As design principles go, this one is hard to argue with. (Some restrictions, of course, have good reason to exist (Functions and Predictability), but this principle forces us to argue for them, not admit them by default.) Let’s now apply this to functions.

One of the things we stayed coy about when introducing functions (Adding Functions to the Language) is exactly where functions go. We may have suggested we’re following the model of an idealized programming environment, with definitions and their uses kept separate. But, inspired by the Scheme design principle, let’s examine how necessary that is.

Why can’t function definitions be expressions? In our current arithmetic-centric language we face the uncomfortable question “What value does a function definition represent?”, to which we don’t really have a good answer. But a real programming language obviously computes more than numbers, so we no longer need to confront the question in this form; indeed, the answer can just as well be, “A function value”. Let’s see how that might work out.

What can we do with functions as values? Clearly, functions are a distinct kind of value from numbers, so we cannot, for instance, add them. But there is one evident thing we can do: apply them to arguments! Thus, we can allow function values to appear in the function position of an application. The behavior would, naturally, be to apply the function. Thus, we’re proposing a language where the following would be a valid program (where I’ve used brackets so we can easily identify the function):
(+ 2 ([define (f x) (* x 3)] 4))
and would evaluate to (+ 2 (* 4 3)), or 14. (Did you see that I just used substitution?)

15.3.1 Functions as Expressions and Values

Let’s first define the core language to include function definitions:
<expr-type> ::=

    data ExprC:

      | numC (n :: Number)

      | plusC (l :: ExprC, r :: ExprC)

      | multC (l :: ExprC, r :: ExprC)

      | idC (s :: String)

      <hof-fun/1>

      <hof-app>

    end

For now, we’ll simply copy function definitions into the expression language. We’re free to change this if necessary as we go along, but for now it at least allows us to reuse our existing test cases.
<hof-fun/1> ::=

    | fdC (name :: String, arg :: String, body :: ExprC)

We also need to determine what an application looks like. What goes in the function position of an application? We want to allow an entire function definition, not just its name. Because we’ve lumped function definitions in with all other expressions, let’s allow an arbitrary expression here, but with the understanding that we want only function definition expressions:

We might consider more refined datatypes that split function definitions apart from other kinds of expressions. This amounts to trying to classify different kinds of expressions, which we will return to when we study types (Checking Program Invariants Statically: Types).

<hof-app> ::=

    | appC (f :: ExprC, a :: ExprC)

With this definition of application, we no longer have to look up functions by name, so the interpreter can get rid of the list of function definitions. If we need it we can restore it later, but for now let’s just explore what happens when function definitions are written at the point of application: so-called immediate functions. Thus our interpreter looks like this:
<hof-interp/1> ::=

    fun interp(e :: ExprC, nv :: List<Binding>):

      cases (ExprC) e:

        | numC(n) => n

        | plusC(l, r) => interp(l, nv) + interp(r, nv)

        | multC(l, r) => interp(l, nv) * interp(r, nv)

        | idC(s) => lookup(s, nv)

        <hof-interp-fun/1>

        <hof-interp-app/1>

      end

    end

We need to add a case to the interpreter for function definitions, and this is a good candidate:
<hof-interp-fun/1> ::=

    | fdC(_, _, _) => e

Do Now!

What impact does this have on the interpreter’s return type?

The interpreter no longer returns just numbers; now it also returns function definitions. Therefore, when we need to evaluate an application, we can simply evaluate the function position to obtain a function definition, and the rest of the evaluation process can remain unchanged:
<hof-interp-app/1> ::=

    | appC(f, a) =>

      fd = interp(f, nv)

      arg-val = interp(a, nv)

      interp(fd.body, xtnd-env(bind(fd.arg, arg-val), mt-env))

With that, our former examples work just fine:

check:

  dbl = fdC("dbl", "x", plusC(idC("x"), idC("x")))

  quad = fdC("quad", "x", appC(dbl, appC(dbl, idC("x"))))

  c5 = fdC("c5", "_", numC(5))

  fun i(e): interp(e, mt-env) end

  i(plusC(numC(5), appC(quad, numC(3)))) is 17

  i(multC(appC(c5, numC(3)), numC(4))) is 20

  i(plusC(numC(10), appC(c5, numC(10)))) is 15

  i(plusC(numC(10), appC(dbl, plusC(numC(1), numC(2)))))

    is 16

  i(plusC(numC(10), appC(quad, plusC(numC(1), numC(2)))))

    is 22

end

15.3.2 A Small Improvement

Do Now!

Is there any part of our interpreter definition that we never use?

Yes there is: the name field of a function definition is never used. This is because we no longer look up functions by name: we obtain their definition through evaluation. Therefore, a simpler definition suffices:
<hof-fun/2> ::=

    | fdC (arg :: String, body :: ExprC)

Do Now!

Do you see what else you need to change?

In addition to the test cases, you also need to alter the interpreter fragment that handles definitions:
<hof-interp-fun/2> ::=

    | fdC(_, _) => e

In other words, our functions are now anonymous.
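
For instance, the earlier examples would now be written without names; a sketch of the revised setup:

check:
  dbl = fdC("x", plusC(idC("x"), idC("x")))
  quad = fdC("x", appC(dbl, appC(dbl, idC("x"))))
  fun i(e): interp(e, mt-env) end
  i(plusC(numC(5), appC(quad, numC(3)))) is 17
end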

Exercise

The type of our environment is wrong, as is that of lookup. Construct an example that demonstrates this.

15.3.3 Nesting Functions

The body of a function definition is an arbitrary expression. A function definition is itself an expression. That means a function definition can contain a...function definition. For instance:

appC(fdC("x", fdC("x", plusC(idC("x"), idC("x")))), numC(4))

This just evaluates to

fdC("x", plusC(idC("x"), idC("x")))

which, if applied to a number, will double it. Suppose, however, we use a slightly different function definition:

appC(fdC("x", fdC("y", plusC(idC("x"), idC("y")))), numC(4))

which evaluates to

fdC("y", plusC(idC("x"), idC("y")))

Now we have a clear problem, because x is no longer bound, even though it clearly was in an outer scope. Indeed, if we apply it to any value, we get an error because of the unbound identifier.
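
A quick check confirms this, assuming the interpreter of this section (the one that starts every function body from mt-env):

check:
  e = appC(appC(fdC("x", fdC("y", plusC(idC("x"), idC("y")))), numC(4)), numC(5))
  # the inner application loses the binding of x, so lookup fails
  interp(e, mt-env) raises "unbound identifier"
end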

15.3.4 Nested Functions and Substitution

Consider the last two examples with a substitution-based interpreter instead. If we evaluate the application

appC(fdC("x", fdC("x", plusC(idC("x"), idC("x")))), numC(4))

using substitution, the inner binding masks the outer one, so no substitutions should take place, giving the same result:

fdC("x", plusC(idC("x"), idC("x")))

In the other example—

appC(fdC("x", fdC("y", plusC(idC("x"), idC("y")))), numC(4))

however, substitution would replace the outer identifier, resulting in

fdC("y", plusC(numC(4), idC("y")))

So once again, if we take substitution as our definition of correctness, we see that our interpreter produces the wrong answer.

In other words, we’re again failing to faithfully capture what substitution would have done. A function value needs to remember the substitutions that have already been applied to it. Because we’re representing substitutions using an environment, a function value therefore needs to be bundled with an environment. This resulting data structure is called a closure.

15.3.5 An Answer Type

Let us therefore define a type to represent answers. The interpreter returns either numbers or closures:

data Value:

  | numV (n :: Number)

  | closV (f :: ExprC, e :: List<Binding>) # ExprC must be an fdC

end

The interpreter now uses it:
<hof-interp> ::=

    fun interp(e :: ExprC, nv :: List<Binding>) -> Value:

      cases (ExprC) e:

        <hof-interp-numC>

        <hof-interp-plusC/multC>

        <hof-interp-idC>

        <hof-interp-fdC>

        <hof-interp-appC>

      end

    end

When the expression is a number the answer is still the same number, except we have to represent it using the value type:
<hof-interp-numC> ::=

    | numC(n) => numV(n)

Similarly, arithmetic operations must handle this type:
<hof-interp-plusC/multC> ::=

    | plusC(l, r) => plus-v(interp(l, nv), interp(r, nv))

    | multC(l, r) => mult-v(interp(l, nv), interp(r, nv))

This is easily done by helper functions:

fun plus-v(v1, v2): numV(v1.n + v2.n) end

fun mult-v(v1, v2): numV(v1.n * v2.n) end

Looking up an identifier remains unchanged:
<hof-interp-idC> ::=

    | idC(s) => lookup(s, nv)

When evaluating a function, we have to create a closure:
<hof-interp-fdC> ::=

    | fdC(_, _) => closV(e, nv)

This leaves function applications.
Do Now!

Write the interpreter for function application.

<hof-interp-appC> ::=

    | appC(f, a) =>

      clos = interp(f, nv)

      arg-val = interp(a, nv)

      interp(clos.f.body,

        xtnd-env(bind(clos.f.arg, arg-val),

        clos.e))

Exercise

Observe that the environment passed to interp extends clos.e rather than mt-env. Write a program that illustrates the difference.

This now computes the same answer we would have gotten through substitution.
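
For instance, the nested-function example that previously failed now produces the expected sum, provided the value field of Binding and the return type of lookup have been generalized from Number to Value (as the exercise in A Small Improvement anticipated):

check:
  # assumes bind now holds a Value and lookup returns a Value
  e = appC(appC(fdC("x", fdC("y", plusC(idC("x"), idC("y")))), numC(4)), numC(5))
  interp(e, mt-env) is numV(9)
end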

Do Now!

If we now switch back to using substitution, will we encounter any problems?

Yes, we will. We’ve defined substitution to replace program text in other program text. Strictly speaking we can no longer do this, because Value terms cannot be contained inside ExprC ones. That is, substitution is predicated on the assumption that the type of answers is a form of syntax. It is actually possible to carry through a study of programming under this assumption, but we won’t take that path here.

15.3.6 Sugaring Over Anonymity

Now let’s get back to the idea of naming functions, which has evident value for program understanding. Observe that we do have a way of naming things: by passing them to functions, where they acquire a local name (that of the formal parameter). Anywhere within that function’s body, we can refer to that entity using the formal parameter name.

Therefore, we can name a function definition using another...function definition. For instance, the Pyret code

fun double(x): x + x end

double(10)

could first be rewritten as the equivalent

double = fun (x): x + x end

double(10)

which by substitution evaluates to (fun (x): x + x end)(10) or 20.

Indeed, this pattern is a local naming mechanism, and virtually every language has it in some form or another. In languages like Lisp and ML variants, it is usually called let. (Note that in different languages, let has different scope rules: in some cases it permits recursive definitions, and in others it doesn’t.) For instance, in Racket:
(let ([double (lambda (x) (+ x x))])
  (double 10))
In Pyret, as in several other languages like Java, there is no explicitly named construct of this sort, but any definition block permits local definitions such as this:

fun something():

  double = fun (x): x + x end

  double(10)

end

Here’s a more complex example, written in Racket to illustrate a point about scope:
(define (double x) (+ x x))
(define (quadruple x) (double (double x)))
(quadruple 10)
This could be rewritten as
(let ([double (lambda (x) (+ x x))])
  (let ([quadruple (lambda (x) (double (double x)))])
    (quadruple 10)))
which works just as we’d expect; but if we change the order, it no longer works—
(let ([quadruple (lambda (x) (double (double x)))])
  (let ([double (lambda (x) (+ x x))])
    (quadruple 10)))
because quadruple can’t “see” double. So we see that top-level binding is different from local binding: essentially, the top-level has “infinite scope”. This is the source of both its power and problems.

There is another, subtler, problem: it has to do with recursion. Consider this simple infinite loop in Pyret:

fun loop-forever(): loop-forever() end

loop-forever()

Let’s convert it to use an anonymous function:
loop-forever = fun(): loop-forever() end
loop-forever()
Seems fine, right? Use the proposed desugaring above:
(fun (loop-forever): loop-forever() end)(fun (): loop-forever() end)
And now, loop-forever on the last line isn’t bound!

Therefore, Pyret’s = is clearly doing something more than just textual substitution: it is also “tying the loop” for recursive definitions. We can understand this either in terms of the Y-combinator (Shrinking the Language [EMPTY]) or by studying recursion directly (Recursion and Cycles).

15.4 Functions and Predictability

We began (Adding Functions to the Language) with a language where at all application points, we knew exactly which function was going to be invoked (because we knew its name, and the name referred to one of a fixed global set). These are known as first-order functions. In contrast, we later moved to a language (Functions Anywhere) with first-class functions: those that had the same status as any other value in the language.

This transition gave us a great deal of new flexibility. For instance, we saw (Sugaring Over Anonymity) that some seemingly necessary language features could instead be implemented just as syntactic sugar; indeed, with true first-class functions, we can define all of computation (Shrinking the Language [EMPTY]). So what’s not to like?

The subtle problem is that whenever we increase our expressive power, we correspondingly weaken our predictive power. In particular, when confronted with a particular function application in a program, the question is, can we tell precisely which function is going to be invoked at this point? With first-order functions, yes; with higher-order functions, this is undecidable. Having this predictive power has many important consequences: a compiler can choose to inline (almost) every function application; a programming environment can give substantial help about which function is being called at that point; a security analyzer can definitively rule out known bad functions, thereby reducing the number of useless alerts it generates. Of course, with higher-order functions, all these operations are still sometimes possible; but they are not always possible, and how possible they are depends on the structure of the program and the cleverness of tools.

Exercise

With higher-order functions, why is determining the precise function at an application undecidable?

Exercise

Why does the above reference to inlining say “almost”?