Types: computation vs. interaction

Type Theory is at the moment the workhorse of programming language theory. It is an elegant, clever and incredibly useful framework for thinking about programming (in full disclosure, I have written several papers with the word “type” in the title). In this post I will not elaborate on the strengths of types; instead I will try to find the point at which they become less useful. The argument below should be accessible to any programmer. To the theorist it will say nothing new, but it will say things that are not said often enough or clearly enough, I think. I hope it will not be too impolite.

For me, this is an important exercise, because Type Theory is a collection of beautiful and seductive ideas, and as such it has the potential to get one carried away to extremes — like Marxism or Libertarianism. The beauty of Type Theory is not a pragmatic beauty. One is not (yet) in awe of what has been achieved with Type Theory. C and Javascript have this kind of pragmatic beauty, akin to Democratic Capitalism or Social Democracy. They succeed despite being messy and riddled with internal inconsistencies. They emerged rather than being designed — or at least they were not designed in a principled way. Type Theory, on the other hand, has a beauty arising out of internal coherence, elegance and simplicity. The messy, illogical stuff is frowned upon. But can it be avoided?

What does a computer do when it does not compute?

Let’s ask a silly question first: what does a computer do? Like all simple questions, this one is steeped in ambiguity, so a simple answer cannot follow without clarification. What is a computer? There are two kinds of answer, depending on what we mean. If we mean a conceptual computer, the answer could be A Turing machine! or A von Neumann architecture! These are fine answers. But if we mean a physical computer we could say A MacBook! or An Android phone! These are also fine answers. But the language is already playing tricks on us. A conceptual computer computes, but a physical computer computes and interacts with its physical environment.

A physical computer that merely computes will still interact with the environment, but in an unsatisfactory way: by consuming energy and warming up. For us, as users of physical computers, the computation is the boring part. Computation is what happens when a spinning wheel appears on the screen and the device becomes unresponsive. We don’t like computers when they compute, we like them when they interact. We consider them to do useful stuff when they drive physical devices. When they draw. When they sing and dance. When they forward that packet to the right node on the network. When they retrieve that interesting nugget of information from the TB hard-drive. 

Computation v. interaction provides some creative tension in the world of programming languages. Programming languages come from two directions. Some of them were birthed by logicians who wanted to get a grip on the foundations of mathematics, and who understood that computation played an important role there. This was the genesis of the lambda calculus. Modern programming languages that hail from this tradition include Haskell and the ML family (OCaml, SML, F#). Other programming languages were birthed by engineers who built physical computers and needed to write programs for them. Fortran and C came about this way. And, of course, nowadays the two families seem to converge. Computational languages (aka “functional”) reluctantly allow interaction-oriented features (“imperative”) and all languages seem to boast of “lambdas” and “closures”, hallmarks of computational/functional languages.

Much energy has been spent proposing and defending precise definitions of “functional” (or “declarative”) versus “imperative” languages, so let’s not go there. In a futile attempt to avoid this controversy I will say “computational” languages for those languages and language features that focus on computation, and “interaction” languages for those languages and features that focus on interacting with the environment.

What are Types for?

The concept of types, broadly speaking, is essentially computational. It was introduced by Bertrand Russell to avoid certain paradoxes in the foundations of mathematics. In programming languages, types are syntactic guarantees that certain errors will not happen. For example:

  • F is code defining a function that takes an integer as an argument and produces an integer as a result.
  • M is code computing an integer.
  • Therefore F(M) is code in which the function F is guaranteed to run correctly, as it receives an integer as expected, and to produce an integer as a result.
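
As a concrete illustration, here is a minimal OCaml session; the names f and m are mine, standing in for F and M above:

# let f x = x + 1;;   (* F: code defining an int -> int function *)
val f : int -> int = <fun>
# let m = 6 * 7;;     (* M: code computing an integer *)
val m : int = 42
# f m;;               (* F(M): guaranteed to yield an integer *)
- : int = 43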

So just like in Principia Mathematica, types can be used to prevent certain kinds of errors. This is very reasonable, although there are some strings attached. Because type systems are defined compositionally on the syntax, they will rule out some programs that would not cause run-time problems. They are sound (no faulty programs are accepted) but not complete (some non-faulty programs are rejected). For example, here is a failing OCaml program:

# if true then 0 else 'a';;
Error: This expression has type char but an expression was expected of type int

The error arises because, in the process of calculating the type of the program, the two branches of the if expression must have compatible types, even though the ‘else’ branch is irrelevant dead code. Here is how Python handles this problem:

>>> 0 if 1 else 'a'
0

The type is not checked and the program runs alright. Everyone happy?

People who think they understand something about the theory of programming languages, including me, tend to agree that what Python does is wrong. Of course, millions of happy Python users disagree and get things done with it on a daily basis. Such is life — if only they listened to the experts.

Is Curry-Howard an isomorphism?

But is this all that there is to be said about types, that they inject a bit of sanity and discipline into programming languages? No, there is more to the story. Enter Curry-Howard. These two logicians noticed a quite fascinating analogy between the type systems of programming languages and logics. You can formulate the types of a programming language using logical connectives: A→B can be read either as “A implies B” or as “function with argument A and result B”. And then the calculations required to determine the type of a program, where the type is written as some formula P, are closely related to the calculations required to prove that the logical formula P is true. This is an amazing, fascinating analogy and it holds remarkably well for a broad range of logics. Via the correspondence we can derive some truly elegant programming languages, for example Agda, a Shangri-La — or maybe a land of the Lotophagi, depending on one’s taste — of types.
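
For a small OCaml illustration of the correspondence (my example, not part of the original argument), consider function composition; its inferred type, read logically, is the transitivity of implication:

# let compose f g = fun x -> g (f x);;
val compose : ('a -> 'b) -> ('b -> 'c) -> 'a -> 'c = <fun>

Read computationally, the type says: given a function from A to B and a function from B to C, we obtain a function from A to C. Read logically: if A implies B, and B implies C, then A implies C.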

But is it really all that surprising that a rich formal system, such as a logic’s proof system, can compute? Any complex enough formal system can compute; see, e.g., rewrite systems. Without taking anything away from the elegance of the approach, the fact that it works should not be all that shocking. But the Curry-Howard method teaches us how to awaken the computation that is lurking within any logical system. Only a fool wouldn’t see the beauty and the usefulness of this.
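
To see a rewrite system computing, consider the two rules for Peano addition, add(0, y) → y and add(s(x), y) → s(add(x, y)); here is a direct OCaml transcription, a toy sketch of the idea:

type nat = Z | S of nat

(* Each match clause is one rewrite rule. *)
let rec add m n = match m with
  | Z -> n                (* add(0, y) → y *)
  | S m' -> S (add m' n)  (* add(s(x), y) → s(add(x, y)) *)

Repeatedly applying the rules, which is what evaluating add does, computes addition.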

But what about the opposite direction: can we always associate a type system with a “computer program”? That is a harder question, because all sorts of things can be used to compute, some of them quite extravagant. Neural networks are an old example, as old as conventional computers. If neural networks compute, what is a “program”? Can we associate a type system with one? Not that I know of. The very question seems strange. How about chemical or DNA-based computing? Types are intrinsically associated with composition, and if a computing device is not made out of components, hoping for a meaningful type system to emerge seems to me unjustified.

To many programmers the notion of program seems to be more basic than that of type, and the two are not intrinsically connected. If a program is a proof, then what does type inference do? We already have the proof, but we don’t yet know what it is a proof of. This is a curious state of affairs, isn’t it?
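
OCaml’s type inference makes this concrete: we write the “proof” first, and the system discovers which “proposition” it proves.

# fun f x -> f (f x);;
- : ('a -> 'a) -> 'a -> 'a = <fun>

The term came first; the formula was only found after the fact.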

Effects

To start questioning the utility, or the possibility, of a type system we don’t need to go as far as computers made of slime. Whenever we need to employ programming languages which are computation-oriented (“functional”) to interact with the real world, there is a general feeling of unease. What is the right way to do it? At one end of the spectrum we have ML languages, which simply ignore, at the level of the type system, the interaction aspects. A program of type int->int will take an int, produce an int, and may or may not interact with the world. This is reasonable, but to many it seems like an opportunity lost. Why not use the type system to say something about interaction as well?
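
A quick OCaml illustration of this (my own example): the following function has the same type as a pure successor function, even though it also writes to the terminal.

# let f x = print_endline "interacting!"; x + 1;;
val f : int -> int = <fun>

Nothing in the type int -> int records the interaction.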

Haskell does that, by using a type-theoretic device called a monad. Each monad indicates one particular kind of interaction, such as using memory or performing input-output. I am not a fan of monads, I confess. Monads, introduced by Eugenio Moggi, can be used as a language mechanism, the way Haskell uses them, or as a meta-language construct to formulate semantic models of languages that combine computation and interaction, à la ML. The latter is unquestionably useful, a matter of mathematical necessity. The former, I am not sure about. Some monads (I/O) package real interaction, some have nothing to do with interaction (the “maybe” monad) and others are mock interactions (the “state” monad). The first one is reasonable, the second one is also reasonable but for completely different reasons (which I suppose is already becoming confusing), and the third one is unreasonable on the face of it. It is not the same thing to program with state as to program as if state were available, for the same reasons that driving a car simulator and driving a real car are not quite the same thing.
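
To make the “mock interaction” point concrete, here is a minimal sketch of a state monad in OCaml (my own sketch, not Haskell’s actual library code). The “state” is just an extra argument threaded through the computation; no real memory is ever touched.

(* A stateful computation is an ordinary function from a state
   to a result paired with a new state. *)
type ('a, 's) state = 's -> 'a * 's

let return x = fun s -> (x, s)
let bind m k = fun s -> let (x, s') = m s in k x s'

let get = fun s -> (s, s)
let put s' = fun _ -> ((), s')

(* "Increment a counter" -- pure state-passing, no real mutation. *)
let tick = bind get (fun n -> put (n + 1))
let result = tick 0   (* evaluates to ((), 1) *)

Everything here is ordinary computation; the interaction is simulated, which is exactly the sense in which programming against the state monad is driving a simulator.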

In the research literature there are many other effect systems. They are clever pieces of mathematical thinking, and very elegant within the fairly restricted bounds of what they can do. But to me, at heart, they seem like languages of mock interaction rather than real interaction. They are reasonable to use as meta-languages, to describe certain interactions, but not as programming languages, where the interaction should be real.

Compositionality, but at a cost

Compositionality, another word for plug-and-play, is a very pleasant property of systems. It guarantees that if the parts have some good property, the whole will have a good property. It is nice, but it comes at a cost. For example, global optimisations are in general more effective than compositions of local optimisations, and global analyses are more precise than compositions of local analyses.

If we want to compose, we need to push all the properties to the interface and ignore the innards. This requires abstraction, and abstraction has a cost. For example, a language like OCaml won’t let you write a heterogeneously branched if expression because a type cannot be assigned to it:

if x then 0 else 'a'

For the same reason it won’t let you create a heterogeneous list:

# [0; 'a'; fun x -> x + 1];;
Error: This expression has type char but an expression was expected of type int

However, unlike heterogeneous if statements, which are silly, heterogeneous lists could be useful. Fairly exhaustive surveys exist of how various languages handle heterogeneous lists. In fact you can program heterogeneous lists in dependently typed languages, but it’s unreasonably complicated. Python makes no complaints:

>>> g = lambda x: x**2
>>> [1, 'a', "hello", g]
[1, 'a', 'hello', <function <lambda> at 0x103e4aed8>]

To me this is one methodological weakness of type theory: the commitment to having types for all terms in the language. Why is that? Types are designed to facilitate composition, but the natural unit for program composition is not the term but the function, or even the module. It makes sense to assign types to modules — it is methodologically consistent. But how we calculate those types could conceivably be more flexible, allowing a global perspective within the unit of composition. Types, as properties at the interface, could be calculated using the entire arsenal of (decidable!) logics and static analyses. I understand this would make things like language specification and error reporting more difficult, but it should make programming easier, if the success of languages such as Python teaches us anything.
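
OCaml modules already hint at this: the type lives at the module boundary and the innards are hidden behind it. A standard example, not a new proposal:

module Counter : sig
  val tick : unit -> int   (* the interface: all a client needs to know *)
end = struct
  let n = ref 0            (* the innards: mutable state, hidden *)
  let tick () = incr n; !n
end

The suggestion is only that the types at such boundaries could be computed by more powerful, whole-unit analyses than today’s term-by-term typing rules.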

Compositionality v. interaction

To me it seems that the abstraction penalty induced by compositionality plays an important role in the questionable suitability of type theories for capturing interaction. Interaction with real-life systems means interacting with messy systems. In order to interact with them we need an interface language that is suitable both for the programming language and for the real-life physical system. This is achievable, but only if we don’t require the interface language to model the physical system itself.

Hardware design is an interesting example, as it works on two levels of abstraction. At the high level we have hardware description languages (HDLs), which deal with the logical description of the circuit — the language of the Booleans, sometimes enhanced with extra values such as “undefined”. In this language the order of events can be described, but not precise timings. Combinatorial feedback loops are almost always banned, because their precise behaviour depends on extra-logical factors, such as precise timings. Yet without combinatorial feedback loops flip-flops and other memory elements cannot be defined.

[Figure: SR flip-flop]

The solution is to provide memory elements as black-box library components, defined and tested in a different, lower-level language. But at the interface the two levels of abstraction are reconciled and composition can occur.
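
To see why the feedback loop escapes the logical level, here is a toy OCaml simulation of the cross-coupled NOR latch (my sketch, not an HDL). The only way to express the loop as a function is to cut it and feed the previous outputs back in explicitly, which amounts to smuggling time into the model:

(* Cross-coupled NOR gates: q = nor r q'  and  q' = nor s q.
   As a function we must break the cycle: take the previous
   (q, q') and produce the next pair, an explicitly clocked step. *)
let nor a b = not (a || b)

let sr_step ~s ~r (q, q') = (nor r q', nor s q)

(* Setting the latch takes two steps to stabilise:
   sr_step ~s:true ~r:false (false, true)  evaluates to (false, false)
   sr_step ~s:true ~r:false (false, false) evaluates to (true, false)  *)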

In a follow-on post I will explain how a similar approach to types can be taken in the case of programming languages.
