Global state considered a fiction

Here is what my colleague Uday Reddy says in his paper “Global State Considered Unnecessary” [pdf]:

The traditional semantics of imperative features is based on the notion of global store. A variable is an index into the store and the store maps such indices to classical static values. The problem with this approach is that the store becomes a conceptual bottleneck (recall the “von Neumann bottleneck” of [Backus'78]). Different portions of the program work on different parts of the store, possibly extending them in separate ways, and threading a single global store through all of them becomes a heavy burden. In our approach, we will decompose the store into tiny pieces (as small as individual variables) and these pieces can be manipulated independently.

This paper was ahead of its time and its view of the semantics of state was visionary. It preceded the game-semantic models of state which perfected, conceptually, this very idea, producing fully abstract models of imperative computation [Abramsky, McCusker '96]. As an aside, Reddy’s model was in fact also fully abstract, although this was only proved later [McCusker '02].

Note here that global state was considered “unnecessary”, and in fact cumbersome, as a concept: it was claimed to be difficult (or impossible) to prove full abstraction of imperative computation using global state. To some extent this consideration was mistaken, as explained by my other colleague Paul Levy in his paper “Global state considered helpful” [Levy '08], which shows a simplified game semantics for imperative computation using global state.

But “unnecessary” and “helpful” are two notions that don’t contradict each other, quite the opposite! Even though it is helpful to use global state as a concept, Reddy (and subsequently game semantics) show that we can do without it. And we must do without it! Here is why.

In contemporary computer architectures global state has disappeared, or rather disintegrated into a hierarchy of memories. To the extent that this hierarchy is deliberate, it is meant to compensate for the horrible latency of memory relative to the CPU (hundreds of clock cycles). But much of it is contingent and accidental, a by-product of the incremental evolution of devices and architectures. So what do we have on a normal desktop PC in the way of memory? We have the RAM and we have the CPU cache; this is obvious and perhaps not that interesting, since caches are transparent to the programmer. But we also usually have a massively parallel GPU with a substantial video memory of its own. The GPU has its own local on-chip memory which, unlike the CPU cache, needs to be explicitly programmed. The transfer of data between CPU RAM and GPU RAM also needs to be explicitly programmed. So there is no single “global” memory.
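
To make “explicitly programmed” concrete, here is a minimal sketch in CUDA C; the kernel, array size and launch configuration are invented for illustration, but the allocation and copy calls are the standard CUDA runtime API:

    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Kernel: doubles every element of a vector held in GPU memory. */
    __global__ void scale(float *v, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) v[i] *= 2.0f;
    }

    int main(void)
    {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        float *host = (float *)malloc(bytes);                   /* lives in CPU RAM */
        for (int i = 0; i < n; ++i) host[i] = 1.0f;

        float *dev = NULL;
        cudaMalloc((void **)&dev, bytes);                       /* lives in GPU RAM */
        cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);   /* explicit transfer in */

        scale<<<(n + 255) / 256, 256>>>(dev, n);                /* computes on GPU memory only */

        cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);   /* explicit transfer out */
        printf("%f\n", host[0]);                                /* prints 2.000000 */

        cudaFree(dev);
        free(host);
        return 0;
    }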

But this is perhaps not compelling. We still have a coarse-grained division of the global state into a small number of disjoint global-ish states. This doesn’t seem to require the radical rethinking of state advocated by Reddy. But it is worth mentioning as a preliminary.

The absence of global state, in any meaningful form, becomes strikingly obvious in the context of hardware compilation: synthesizing circuits from conventional programming languages. When you create a circuit the last thing you want is to move all your memory off-chip into a global memory! On the contrary, you want to avoid the latency and the resource contention that a global memory, and the bus to it, entail. You want to distribute your memory throughout your design so that it can be accessed wherever it is needed. You want to realize precisely the vision of global state as unnecessary.

But there is another, deeper issue regarding global memory. A program uses memory (state) not only explicitly, via assignments, but also implicitly, as closures and activation records, to deal with function definition and call. In hardware compilation this state cannot be global either. In a heterogeneous environment, such as the CPU+GPU architecture mentioned earlier, it is also unclear where in memory these bookkeeping records for functions should be kept. This is why GPU programming languages (e.g. CUDA) are so restrictive in the way they allow function definition and call.
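
To illustrate that restrictiveness (a hedged sketch with made-up names, not a complete program): in CUDA a function callable from GPU code must be marked __device__, it cannot be called from the CPU, and it is typically just inlined into its caller (on early hardware this was mandatory, with recursion and function pointers unsupported), so no activation record is ever kept in a global store.

    /* A __device__ function is compiled for the GPU and callable only from GPU
       code; there is no call stack shared with the CPU, and the call is usually
       inlined, so no activation record lives in any global memory. */
    __device__ float square(float x)
    {
        return x * x;
    }

    __global__ void squares(float *v, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) v[i] = square(v[i]);   /* the "call" becomes thread-local code */
    }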

The localization of the state used for function calls was explained in my first paper on the Geometry of Synthesis [Ghica '07]. The idea is that linear function call can, on its own, be described in a purely static way, by managing the bureaucracy of the interconnect appropriately. Diagrams of boxes and wires obey a certain mathematical structure called a “closed monoidal category”; without getting into technicalities, the word closed here means just that: this mathematical structure can model function definition and application. The reasons are deep and fascinating. To understand better how higher-order computation has a mathematical structure which can be recovered from communicating processes represented as boxes and wires (or just boxes and wires!), I recommend Samson Abramsky’s papers explaining Girard’s Geometry of Interaction. To get a grasp of monoidal categories I recommend Peter Selinger’s wonderful tutorial.
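
For the categorically minded, the standard definition behind the word “closed” (a general fact, not something specific to the paper): for any objects B and C there is an object B ⊸ C of “functions”, characterized by a natural bijection

    \mathcal{C}(A \otimes B,\; C) \;\cong\; \mathcal{C}(A,\; B \multimap C)

(currying), together with an evaluation map \mathrm{ev} : (B \multimap C) \otimes B \to C (application). Function definition and function call in a programming language are interpreted by exactly these two pieces of structure.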

So linear function call is, to some extent, trivial. The key word here is linear, i.e. the function uses its argument precisely once. Moving from linear to affine, i.e. functions that use their arguments at most once, is not a big deal. The point that I make in the GoS I paper is that sequentially reentrant functions, i.e. functions that don’t call themselves (as in f(f(x))) and don’t get called concurrently, can be managed using devices with local (and finite) state, which turn out to correspond precisely to the notion of a bus in hardware.
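
As an illustrative aside, in λ-calculus notation (standard terminology, not examples taken from the paper):

    \lambda x.\; x + 1          (linear: the argument is used exactly once)
    \lambda x.\; 1              (affine but not linear: the argument is discarded)
    \lambda x.\; x \times x     (neither: the argument is used twice)

Sequential reentrance relaxes this further: the same function may be used many times, provided the uses are neither nested within each other nor concurrent.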

I think that if we look into the future and extrapolate the evolution of computing devices and architectures, we can see global memory becoming an abstraction farther and farther removed from the reality of computing hardware, which increasingly consists of heterogeneous and distributed aggregates of devices. At some point in the very near future global state is bound to stop being an abstraction and become a fiction.
