# What things compute?

I suppose we can agree that computers compute. We know that they do that because we make them do that. We have a program, on paper or in our head, and we have certain facilities to load that program into the computer memory. This is called implementing the program. Then, when the program runs we can compare the actual behaviour of the physical computer with the expected behaviour defined by the abstract program. Note that this expected behaviour we can determine “by hand” via manual calculations, at least in principle. Programs in the abstract are mathematical objects, programs in the concrete are behaviours of physical devices we call computers. I suspect trained computer scientists will find very little controversial (or exciting) in the clarification above. This is the so-called “implementation theory” of computing, advocated among others by the influential cog-sci philosopher David Chalmers.
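The implementation relation described above can be sketched in a few lines of Python. Everything here is my own illustration (the functions and names are not from any particular source): an "abstract program" given as a mathematical definition, a "concrete program" a machine actually runs, and a check that the actual behaviour matches the expected one.

```python
# Toy illustration of implementation theory: an abstract program
# (a mathematical definition) versus a concrete implementation whose
# observable behaviour we compare against it.

def abstract_factorial(n: int) -> int:
    """The 'program on paper': a direct mathematical definition."""
    return 1 if n == 0 else n * abstract_factorial(n - 1)

def concrete_factorial(n: int) -> int:
    """The 'implementation': an iterative procedure a device executes."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

# The implementation relation: the concrete behaviour agrees with the
# expected behaviour defined by the abstract program (checkable "by hand").
assert all(abstract_factorial(n) == concrete_factorial(n) for n in range(10))
```

The point of the sketch is the direction of the comparison: the abstract definition comes first, and the physical behaviour is judged against it.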

This definition is, I would say, sound in the sense that things and behaviours described by it are always computation. But is it complete? Are there notions of computation that fall outside of it? If we say it is not complete, then we should expand the definition somehow — let’s come back to that. But if we say it is complete, then what are the consequences? Implementation theory assumes that the abstract program defines the concrete program, which is an instance of it. A computer is then a device that accelerates computation by some physical means. The crucial assumption is that the abstract program, as a mathematical object, requires design and therefore intent. Accordingly, only artificial things can compute. We cannot say, for example, that a brain computes. There seems to be something that resembles “information processing” or “computation” in the way a brain behaves, but we don’t actually have the anchoring reference that is the abstract program to compare it with and say that it is an implementation of it. This may seem like nit-picking, but it’s a genuine problem.

Outside artificial things designed to compute, i.e. computers, what else computes? What is computation, if anything, as a natural phenomenon? I haven’t seen a good answer so far. I have, however, seen two classes of bad answers.

The first bad answer is something that has to do with a “system” having “inputs” and “outputs”. This is bad on two counts — without even going into the deeper philosophical problems of causation as a scientific concept. The first problem is that any natural system has many things that can be considered as “inputs” and “outputs”. For example, if we throw a rock and take the initial velocity vector as an input and its space-time coordinates as an output, we can say that it is a system that computes a parabola tangent to that vector. But there is necessarily intent in the choice of which measurements count as inputs and outputs. Any object has a trajectory in a highly dimensional space of properties, and any projection of that trajectory onto a lower-dimensional space of measurements defines a different computation. There is no one special computation the system performs. If we take intent out of the equation we have an incomprehensibly large state space and an incomprehensibly complicated trajectory through it, tracking everything down to atomic-level configurations. The second problem is that such broad definitions quickly degenerate into pancomputationalism: everything computes. And if everything computes, computation is an uninteresting concept. The brain computes, but so does a piece of wood. Computation is simply another word for behaviour subject to (natural) laws. The Universe is a big computer, computing its own future — grand but somewhat vacuous.
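The thrown-rock example can be made concrete. The sketch below is my own illustration with assumed values (the gravitational constant, the function names): *we* choose the initial velocity as the input and the position as the output, and under that choice the rock "computes" a parabola tangent to the velocity vector at the launch point.

```python
# The rock as a "computer": the choice of input (vx, vy) and output
# (position at time t) is ours, not the rock's. G and all names are
# illustrative assumptions.

G = 9.81  # gravitational acceleration, m/s^2

def position(vx: float, vy: float, t: float) -> tuple[float, float]:
    """Position at time t for a launch from the origin with velocity (vx, vy)."""
    return vx * t, vy * t - 0.5 * G * t * t

# Eliminating t gives y = (vy/vx) * x - (G / (2 * vx**2)) * x**2:
# a parabola whose tangent at the origin is exactly the chosen input vector.
def parabola(vx: float, vy: float, x: float) -> float:
    return (vy / vx) * x - (G / (2 * vx ** 2)) * x ** 2
```

Choosing a different projection of the same physical behaviour (say, reading out only the height over time) defines a different "computation" performed by the very same rock, which is the point of the argument.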

The second bad answer is something that has to do with “processing symbolic information”. As with selecting particular measurements of a system as inputs and outputs, the notion of “symbol” requires not only intent but also, outside of a formal setting, very special powers of abstraction — think about how many concrete shapes a symbol such as the letter ‘a’ can take. We know the human brain can process symbolic information, but we are far from certain that any other kind of brain can, so this definition rules them out as computing things. Also, the process of defining and recognising symbols is itself a prerequisite of computation, not part of computation — which strikes me as odd. Whereas the attempted input–output definition is too broad, this one is both too narrow and too reliant on concepts that themselves require further clarification. It is tantamount to saying “computation is what the human brain can do”. It does not explain the brain using computation; it uses the brain as an explanation for computation.

In conclusion I want to shatter our presumed consensus: computers do not compute. Not even computers. We compute. Computers just help us along. Implementation theory gives a lucid definition of computation: a relation between a pre-existing mathematical concept and the behaviour of a physical thing. The pre-existence of the mathematical concept is essential. Trying to fit a mathematical model to a pre-existing physical behaviour has a different name: scientific discovery. Where does this leave computationalism — the theory that, broadly speaking, the brain is a computer and the mind a computation? I think it exposes its vacuity. A “computational theory of the mind” is along the same lines as a “computational theory of the weather”, in which the Earth is seen as a computer and the weather as what it computes. It’s a metaphor that gives the illusion of understanding, without telling us whether it will rain tomorrow or not — akin to using élan vital to “explain” life or scansoriality to “explain” a propensity for climbing.

But perhaps I am being harsh. “Computation” is vacuous only when used as an explanation, as a first-class scientific concept. But it is not vacuous as an abstract concept or as a characterisation of a style of mathematical model or as a paradigm. The parallel between energy/work as used in physics and information/computation as used in cognitive science is quite striking. In his Lectures on Physics, Feynman has this to say about energy and energy conservation principles. I think similar considerations apply to information and computation:

[Conservation of Energy] is a most abstract idea, because it is a mathematical principle; it says that there is a numerical quantity which does not change when something happens. It is not a description of a mechanism, or anything concrete; it is just a strange fact that we can calculate some number and when we finish watching nature go through her tricks and calculate the number again, it is the same. [...]

It is important to realize that in physics today, we have no knowledge of what energy is. We do not have a picture that energy comes in little blobs of a definite amount. It is not that way. However, there are formulas for calculating some numerical quantity, and when we add it all together it gives “28”—always the same number. It is an abstract thing in that it does not tell us the mechanism or the reasons for the various formulas.
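Feynman’s point (calculate a number, let nature go through her tricks, calculate it again, and it is the same) can be mimicked with a toy free-fall calculation. All values below are my own illustrative choices:

```python
# A toy version of "always the same number": total mechanical energy of
# a mass falling freely from rest, evaluated at several instants.
# Mass, height and G are illustrative values.

G = 9.81     # gravitational acceleration, m/s^2
MASS = 2.0   # kg
H0 = 10.0    # initial height, m

def energy_at(t: float) -> float:
    """Kinetic plus potential energy at time t for free fall from rest."""
    v = G * t                      # speed after time t
    h = H0 - 0.5 * G * t * t       # height after time t
    return 0.5 * MASS * v * v + MASS * G * h

# The "28" of this example is MASS * G * H0: the formulas shift value
# between the kinetic and potential terms, but the sum never changes.
for t in (0.0, 0.5, 1.0):
    assert abs(energy_at(t) - MASS * G * H0) < 1e-9
```

Nothing in the calculation says what energy *is*; it only exhibits a quantity that stays fixed, which is exactly the kind of abstractness the quote attributes to the concept.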

For a good survey of the problem of defining computation I recommend Piccinini’s chapter on Computationalism in the Oxford Handbook of Philosophy and Cognitive Science.