# Leaving the Nest: variables with interleaving scopes

A paper [PDF] with this title, co-written with Jamie Gabbay and Daniela Petrişan, will appear at Computer Science Logic (CSL 2015). Actually, the full title is Leaving the Nest: Nominal techniques for variables with interleaving scopes. The “nominal techniques” part refers to a new mathematics of names pioneered by Gabbay and Pitts in the late 1990s. In this post I will give a very brief and informal taster/introduction to nominal techniques, then tell you why our paper is actually quite interesting.

Names are a peculiar data structure. There are only two things you can do with them: given two names you can compare them, and given a finite set of names you can find a fresh name relative to them, i.e. a name that is not in that set. There is a perhaps subtle distinction to be made between names and concrete representations of names, such as strings or integers. A concrete representation might allow other operations as well, such as concatenation or addition, but qua names they support only equality testing and fresh-name generation.
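To make this concrete, here is a minimal OCaml sketch (my own illustration; the `Name` module is hypothetical) in which names support only equality testing and fresh generation, with the integer representation hidden behind the signature. A global counter trivially produces a name fresh with respect to every name generated so far.

```ocaml
(* A minimal sketch of names as an abstract type: the only operations
   are equality testing and fresh-name generation.  The integer
   representation is an implementation detail hidden by the signature. *)
module Name : sig
  type t
  val equal : t -> t -> bool
  val fresh : unit -> t
end = struct
  type t = int
  let equal = Int.equal
  let counter = ref 0
  let fresh () = incr counter; !counter
end

let () =
  let a = Name.fresh () in
  let b = Name.fresh () in
  assert (Name.equal a a);       (* a name equals itself *)
  assert (not (Name.equal a b))  (* fresh names are pairwise distinct *)
```

Because `Name.t` is abstract, client code cannot concatenate, add, or otherwise inspect names, which is exactly the discipline described above.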

Because there are so few things we can do with names and to names, nominal structures, i.e. data structures incorporating names, have an interesting property called equivariance, which means informally that all meaningful properties of such structures are preserved by name permutations. What does this really mean? Consider the following OCaml program:

`let x = ref () in let y = ref () in x == y;; `

A new unit reference (a pointer) is created and assigned to `x`, then another one to `y`, and then they are compared for physical equality. This little program is always equivalent to `false`: no matter what pointer values are returned by the memory-management runtime of OCaml, they will always be different. The actual values of the pointers, qua names, must be irrelevant! This is why the invariance is under permutations: a permutation can change names, but it will never map distinct names to equal names. This simple and elegant principle leads to a powerful theory of names, about which you can read in Andy Pitts's excellent book.
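The permutation idea can be illustrated with a tiny sketch of my own (integers stand in for names here, purely for illustration): swapping two names changes which names occur in a structure, but preserves every pattern of equality between its positions.

```ocaml
(* Equivariance, sketched with integers standing in for names:
   a permutation (here, a single swap) changes which names appear
   but preserves all equalities between positions. *)
let swap a b n = if n = a then b else if n = b then a else n

let () =
  let xs = [1; 2; 1] in               (* a structure containing names *)
  let ys = List.map (swap 1 3) xs in  (* apply the swap 1 <-> 3 *)
  (* the equality pattern is unchanged: positions 0 and 2 still hold
     the same name, and position 1 holds a different one *)
  assert (ys = [3; 2; 3])
```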

Nominal techniques really come into their own when dealing with syntax which uses locally-scoped names, such as that of the lambda calculus (e.g. λx.x) or predicate logic (e.g. ∀x.x=x). In a syntactic context, the principle of equivariance leads to an equivalence of terms up-to-renaming, commonly known as alpha-equivalence. So terms such as λx.x and λy.y are equivalent. Moreover, this equivalence is also a congruence (it is preserved by syntax), so we can work with equivalence classes under alpha-equivalence instead. This is often done informally in the study of syntax, but a solid mathematical understanding of names is actually required to ensure that recursive definitions over nominal structures — i.e. something as banal as substitution — are actually well defined.
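A standard way to decide alpha-equivalence is to compare binding structure rather than bound names; the following sketch (my own, not taken from the paper) does this for the lambda calculus by recording the depth of each binder, de Bruijn style.

```ocaml
(* A sketch of alpha-equivalence for the lambda calculus: two terms
   are compared by the depths of their binders, not by bound names. *)
type term = Var of string | Lam of string * term | App of term * term

let alpha_eq s t =
  (* env1/env2 map each bound variable to the depth of its binder;
     consing on rebinding makes shadowing resolve to the innermost one *)
  let rec go env1 env2 d s t =
    match s, t with
    | Var x, Var y ->
      (match List.assoc_opt x env1, List.assoc_opt y env2 with
       | Some i, Some j -> i = j   (* bound: same binder depth *)
       | None, None -> x = y       (* free: same name *)
       | _ -> false)
    | Lam (x, s'), Lam (y, t') ->
      go ((x, d) :: env1) ((y, d) :: env2) (d + 1) s' t'
    | App (s1, s2), App (t1, t2) ->
      go env1 env2 d s1 t1 && go env1 env2 d s2 t2
    | _ -> false
  in
  go [] [] 0 s t

let () =
  assert (alpha_eq (Lam ("x", Var "x")) (Lam ("y", Var "y")));
  assert (not (alpha_eq (Lam ("x", Var "x")) (Lam ("y", Var "z"))))
```

So λx.x and λy.y come out equal, while a free variable must match on the nose.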

A common theme in the literature, motivated by syntax, has been the use of names in inductively-defined structures, where name abstraction is a term-forming operation. By definition, these names are always well scoped, which means that their scopes are well nested. However, names can be used in a different way, not in the language itself but in the semantics, at the meta level, to mediate access to resources. Memory locations, file handles, network sockets, etc. are all examples of such resources which, much like names, can only be tested for equality and created: they are names. However, unlike syntactic names they can be not just created but also destroyed, and their patterns of creation and destruction need not be well nested anymore.

For example, this is a sequence in which the scopes of a, b, c are interleaved (we mark name-creation with ⟨ and name-destruction with ⟩): ⟨a ⟨b a⟩ ⟨c b⟩ c⟩. The name b is introduced after a is created, and when a is destroyed b is still “active”. With the (obvious) intuition of scope and alpha-equivalence, we should expect the sequence above to be equivalent to ⟨a ⟨c a⟩ ⟨b c⟩ b⟩.
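One way to make this intuition concrete is to canonicalise such sequences by renaming, sketched below in OCaml (my own illustration, not the paper's formal definition): each opening bracket receives the next natural number, and each closing bracket receives the number of the innermost unmatched opener of its name. Two well-formed sequences are then alpha-equivalent exactly when their canonical forms coincide.

```ocaml
(* Sequences with interleaved scopes, e.g. <a <b a> <c b> c>, encoded
   as lists of open/close tokens.  Alpha-equivalence is decided by a
   canonical renaming: every Open gets the next natural number, and a
   Close refers to the most recent unmatched Open of the same name
   (so shadowed names resolve to their innermost opener).
   Assumes well-formed sequences: every Close has a matching Open. *)
type 'a tok = Open of 'a | Close of 'a

let canonical seq =
  let next = ref 0 in
  let bindings = Hashtbl.create 16 in  (* name -> stack of numbers *)
  List.map
    (function
      | Open x ->
        let n = !next in
        incr next;
        Hashtbl.add bindings x n;       (* push: Hashtbl.add shadows *)
        Open n
      | Close x ->
        let n = Hashtbl.find bindings x in  (* innermost binding *)
        Hashtbl.remove bindings x;          (* pop it *)
        Close n)
    seq

let alpha_eq s t = canonical s = canonical t

let () =
  (* <a <b a> <c b> c>  =  <a <c a> <b c> b> *)
  let s = [Open "a"; Open "b"; Close "a"; Open "c"; Close "b"; Close "c"] in
  let t = [Open "a"; Open "c"; Close "a"; Open "b"; Close "c"; Close "b"] in
  assert (alpha_eq s t)
```

The `Hashtbl.add`/`Hashtbl.remove` pair implements a per-name stack, which is what makes shadowed names resolve to their innermost opener.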

In our paper we give a mathematical treatment of this phenomenon, focussing on what we believe to be the most interesting aspects: What does it mean for a variable to be free or bound in this setting? What is the right notion of alpha-equivalence when scopes are not well nested? How can we define proper functions on sequences that use names with interleaved scopes? We give syntactic, relational and algebraic models for alpha-equivalence, and they all coincide; this is reassuring evidence that we gave the right definitions.

An example which illustrates alpha-equivalence and name capture quite well is the interleaving of sequences with names. For example, what happens if we want to interleave the sequence ⟨a a⟩ with itself? The standard definition of sequence interleaving gives us two sequences: ⟨a a⟩ ⟨a a⟩ and ⟨a ⟨a a⟩ a⟩. However, taking into account the fact that ⟨a a⟩ = ⟨b b⟩, this should be the same as interleaving ⟨a a⟩ with ⟨b b⟩, which seems to give us six sequences: ⟨a a⟩ ⟨b b⟩, ⟨a ⟨b b⟩ a⟩, ⟨a ⟨b a⟩ b⟩, ⟨b b⟩ ⟨a a⟩, ⟨b ⟨a a⟩ b⟩ and ⟨b ⟨a b⟩ a⟩. But quotienting by alpha-equivalence (in the obvious way) we are left with three: ⟨a a⟩ ⟨b b⟩, ⟨a ⟨b b⟩ a⟩ and ⟨a ⟨b a⟩ b⟩, which is still not the same as two. What is going on here?

• ⟨a a⟩ ⟨b b⟩ = ⟨a a⟩ ⟨a a⟩ – we can reuse the bound name ‘a’ instead of ‘b’
• ⟨a ⟨b b⟩ a⟩ = ⟨a ⟨a a⟩ a⟩ – we can reuse ‘a’ instead of ‘b’, noting that some name-shadowing is going on

So we are left with ⟨a ⟨b a⟩ b⟩, the only interleaving which genuinely requires two names, in a situation akin to variable capture in the lambda calculus. With a proper notion of alpha-equivalence our definition of interleaving just works, and we can skip such tedious analyses.
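The count above can be checked mechanically. Here is a sketch of my own (not code from the paper), which enumerates all interleavings of ⟨a a⟩ and ⟨b b⟩ and quotients them by alpha-equivalence via a canonical renaming; the convention that a close matches the innermost unmatched opener of its name is an assumption on my part.

```ocaml
(* Checking the count in the text: interleaving <a a> with <b b> and
   quotienting by alpha-equivalence leaves exactly three sequences.
   Assumes well-formed sequences, with a close matching the innermost
   unmatched opener of its name. *)
type 'a tok = Open of 'a | Close of 'a

(* all ways of interleaving two sequences, preserving internal order *)
let rec interleave xs ys =
  match xs, ys with
  | [], _ -> [ ys ]
  | _, [] -> [ xs ]
  | x :: xs', y :: ys' ->
    List.map (fun r -> x :: r) (interleave xs' ys)
    @ List.map (fun r -> y :: r) (interleave xs ys')

(* canonical renaming: each Open gets the next number, each Close the
   number of the innermost unmatched Open of the same name *)
let canonical seq =
  let next = ref 0 in
  let bindings = Hashtbl.create 16 in
  List.map
    (function
      | Open x ->
        let n = !next in
        incr next;
        Hashtbl.add bindings x n;
        Open n
      | Close x ->
        let n = Hashtbl.find bindings x in
        Hashtbl.remove bindings x;
        Close n)
    seq

let () =
  let classes =
    List.sort_uniq compare
      (List.map canonical
         (interleave [ Open "a"; Close "a" ] [ Open "b"; Close "b" ]))
  in
  (* six interleavings collapse to three alpha-equivalence classes *)
  assert (List.length classes = 3)
```

The six raw interleavings collapse to the three classes listed above, matching the hand analysis.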

This has a neat diagrammatic representation (if you have read this blog you may have guessed that I like diagrams!). The sequence ⟨a a⟩ is drawn as a diagram whose two endpoints delimit the scope. This is nice because the name ‘a’ no longer appears, so it is “obvious” that:

• ⟨a a⟩ ⟨b b⟩ = ⟨a a⟩ ⟨a a⟩ (diagram omitted)
• ⟨a ⟨b b⟩ a⟩ = ⟨a ⟨a a⟩ a⟩ (diagram omitted)
• ⟨a ⟨b a⟩ b⟩ (diagram omitted)

In terms of more serious applications, formalisations of game semantics and of other trace semantics that rely on names with interleaved scopes should now be easier to carry out.

Before I sign off, a shout out to my student Bertie Wheen for proving some of the syntactic results in Agda.