# The case for a system-level semantics

In an earlier post I was deploring the way research in programming language semantics fetishises syntax. Because adding anything other than syntactic sugar to a language changes contextual equivalence, the semantic model induced by syntactic contexts is (sometimes fundamentally) different before and after the extension. Defining a language becomes a neat game of matching a sensible syntax with a sensible semantics; this meet-in-the-middle approach is the most productive route to full abstraction results. I cannot think of a better example than the many versions of Idealized ALGOL: with or without “snap-back state”, with or without side effects in expressions, with or without “bad variables”. The syntactic differences are tiny, yet the models are wildly different. This is not great.

But what is the alternative? Can we have a meaningful notion of context which does away with syntax?

Let’s go back to the conventional definition of equality for terms:

$M\cong N$ iff $\forall C[-].\; C[M]\equiv C[N],$

This definition works because $C[M]$ and $C[N]$ are programs, i.e. closed terms, which can be executed in the operational semantics: each will either reduce to some ground-type value, which can be compared directly, or diverge in a non-terminating computation, so $\equiv$ on programs is just equality of observable outcomes. The quantification over all contexts is just a way to get around the fact that we cannot execute open terms $M, N$.
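As a hedged illustration, in Python rather than a formal calculus: here are two terms that agree under pure contexts but are separated by a context with state, which is exactly why enriching the contexts changes the equivalence. The names `M`, `N`, and `make_counter` are mine, not from any particular calculus.

```python
# Two open terms: both expect a thunk f. Under pure contexts they
# agree; a stateful context tells them apart. (Names M, N are mine.)
M = lambda f: (f(), f())[1]   # call f twice, return the second result
N = lambda f: f()             # call f once

# A pure context cannot distinguish them:
assert M(lambda: 42) == N(lambda: 42)

# A context with state can: supply a side-effecting argument.
def make_counter():
    calls = [0]
    def f():
        calls[0] += 1
        return calls[0]
    return f

assert M(make_counter()) == 2   # f ran twice
assert N(make_counter()) == 1   # f ran once
```

The same pair would be equivalent in a language whose contexts are all pure, which is the Idealized ALGOL phenomenon in miniature: what counts as "the same term" depends on what the contexts can do.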

My claim is that we can execute open terms. Real-life software is rarely self-contained. We can certainly compile open terms: this is how we get libraries. Then we link open object code against other open object code to get executables, and this linking can even be done at run time if we use dynamic libraries. So the system (compiler + linker + RTS + operating system + computer) has a way to run open terms. Why doesn’t the semantics match the behaviour of the system?
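A minimal sketch of the idea, using Python’s `compile`/`eval` as a stand-in for compiling an open term and then linking it against an environment at run time; the analogy to object code and dynamic libraries is mine.

```python
# An "open term": code with a free variable x. It cannot run on its
# own; the environment closes it at run time, much as a dynamic
# linker resolves an undefined symbol. (The analogy is mine.)
open_term = compile("x + 1", "<term>", "eval")

# "Linking" against an environment that supplies x:
assert eval(open_term, {"x": 41}) == 42

# Without the environment, evaluation gets stuck:
try:
    eval(open_term, {})
except NameError:
    pass  # unresolved free variable; like an undefined symbol
```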

Curiously enough, the technical apparatus for “executing open terms” has been developed. Not one, but several such techniques exist: trace semantics, operational game semantics, environmental bisimulation and of course game semantics. What they all have in common is that they model the interaction between an open term and the environment, usually in an effective way — you can build an interpreter out of the description. These are all very powerful techniques and they can describe very powerful, expressive environments.

What usually happens in such papers (and this is not meant as harsh criticism, because I have written a couple of papers just like that) is that a lot of technical effort is dedicated to restricting the power of the environment, by adding various rules about what it can and cannot do, so that its expressive power exactly matches the discriminating power of the syntactic context: full abstraction!

As a technical tour de force, full abstraction is usually a great accomplishment. In terms of usefulness, not so much, for reasons already discussed.

An alternative approach is to take the semantic power of the environment at face value and consider that it is a model of the computational environment. This is what the Dolev-Yao model of security is: a purely semantic description of the environment, i.e. the attacker. Precisely the same idea is eminently suitable for a programming language system-level context. The model is unrestricted computationally, only informationally (or epistemically); it can do whatever it wants with the information at hand but you can still hide things from it.
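A toy sketch of this flavour of restriction, in Python: the attacker’s derivation power over observed messages is unrestricted (here, an iterated closure computation), yet a ciphertext hides its payload unless the key is also known. The `enc` and `knowledge_closure` names are illustrative, not from the Dolev-Yao literature.

```python
# Toy Dolev-Yao-style attacker (illustrative): computationally
# unrestricted over what it has observed, but unable to open a
# ciphertext without the key; the hiding is informational.
def enc(key, msg):
    return ("enc", key, msg)  # symbolic, opaque ciphertext

def knowledge_closure(observed):
    """Everything the attacker can derive: here, decryption with
    any key it already knows, iterated to a fixed point."""
    derived = set(observed)
    changed = True
    while changed:
        changed = False
        for m in list(derived):
            if isinstance(m, tuple) and m[0] == "enc" and m[1] in derived:
                if m[2] not in derived:
                    derived.add(m[2])
                    changed = True
    return derived

seen = {enc("k", "secret")}
assert "secret" not in knowledge_closure(seen)        # key hidden
assert "secret" in knowledge_closure(seen | {"k"})    # key leaked
```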

So when you specify a programming language you must explain not only what computations occur, as is normally the case in operational semantic specifications, but also what information is leaked to the environment and what sanity checks are built into the language when handling input from the environment. A specification for a language must be tied in with a high-level specification for a compiler and run-time system! This may sound like a lot, and it does involve more work than the usual operational semantics, but the result is a realistic basis for verifying language properties, especially against possibly antagonistic environments. More details in our paper and in this lecture.

One final comment: is a syntactic context ever appropriate? Yes! If we are talking about a calculus, which is not meant to be compiled and executed, but just studied in the abstract, a syntactic context is a perfect way to study its equational theory. In fact I would go as far as to say that the key difference between a calculus and a programming language is that the right notion of context for the former is syntactic and for the latter is systemic.

## About Dan Ghica

Reader in Semantics of Programming Languages // University of Birmingham // https://twitter.com/danghica // https://www.facebook.com/dan.ghica
This entry was posted in system level semantics.

### 4 Responses to The case for a system-level semantics

1. Dan Ghica says:

Uday: The system-level context is the context in which your program (term) operates, whether you like it or not. When it interacts with the ambient system, that ambient system is not restricted behaviourally by the syntax of your language, or of any language. Of course a highly constrained context will validate a richer equational theory, but that equational theory is a fiction: the restricted contexts which you would like to assume cannot be enforced. To the extent that you want to reason about realistic security or safety properties of programs, the system context is the sum total of all the assumptions you can make.

And indeed, more often than not the system context gives a simpler semantics than the syntax context. This is perhaps because the syntactic context is contrived. Anyway, I don’t see something being simpler as a disadvantage, quite the opposite.

As for the challenge, you can take it as either a scientific or engineering one, depending on whether you want to read a system-level semantics as descriptive or prescriptive. In both cases, it is actually substantial.

• Uday Reddy says:

Being simpler for the theoretician doesn’t make it simpler for the programmer. If you make the entire program open to inspection by the context, then two programs would be equivalent if and only if they are identical. That makes the theoretician’s problem trivial. But it doesn’t make life any simpler for the programmer. That is not the kind of simplicity we shoot for in semantics.

If the system contexts open up too much of the program structure, then we should be telling the system builders about how many equivalences they are breaking, not celebrating them!

• Dan Ghica says:

That is actually false in a reasonable system level semantics, and that is the point of the paper! You can have nontrivial equivalences under a reasonable SLS. Of course, you can also have a trivial SLS in which what you say is true. And conversely you could perhaps have in some cases a SLS which coincides with the fully abstract syntactic-context semantics. The main questions are whether particular SLS’s are achievable and whether they are sensible. We give criteria for the latter, the former is work in progress.

2. Uday Reddy says:

Unfortunately, these rants are not hitting any buttons for me. Full abstraction is a way to measure the level of abstraction achieved by a semantic model. If the model is fully abstract for the original language, it is well and good. If not, we can try to measure how much it falls short by extending the language and proving full abstraction for the extended language. For example, by saying that the Scott semantics would be fully abstract for PCF if we added parallel-or, we are saying that it falls short in modelling sequentiality. If we ignore sequentiality, then it is as abstract as it could be. That tells us something about what Scott semantics achieves and also about what it doesn’t achieve. We also see that, by enlarging the language and, thereby, widening the contexts, we simplify the problem; we don’t make it any harder.

So, if you want to say we should widen the contexts even further by considering system-level contexts not expressible in the programming language, that is fine. There could be good reasons for doing so. But I don’t see you posing any challenges. Widening the contexts simplifies the problem.

Pretty often, I want to go the opposite way. Even if the real-life language allows more contexts, I want to cut the contexts down to certain well-behaved ones, so that I can come up with a more abstract semantics that works under those contexts. (For example, the language might allow side-effecting expressions, but I might consider only side-effect-free ones. The language might have jumps, but I might eliminate them.) What is the point of that? It allows any programmer who restricts herself to the well-behaved contexts to use the more abstract semantics. As soon as the programmer goes outside the well-behaved subset, fewer equivalences would hold and the semantics would become unsound.

So, I don’t see what is the big deal about widening contexts to system-level contexts.