Why computers can’t (really) think (II)

This is a follow-up to my previous post, Why computers can’t think. The reactions and comments to it helped me to tighten and clarify the argument.

I am going to use the term mental event to denote a subjective, first-person experience which is delimited in time, so that we can talk of a beginning and an end for this experience. For example, when you stick your hand in ice-cold water you start feeling the cold, and when you withdraw it you stop feeling the cold, perhaps after a few seconds’ delay. It is not important that the temporal delimitation is precise, only that it is finite.

Suppose for the sake of argument that consciousness is a computational phenomenon, so mental events can occur in a computer. For simplicity, let us reduce this computer to a Universal Turing Machine (UTM) and a (very long) tape; the Church-Turing thesis says that no matter how complicated the computer or the program, we can always reduce them to a UTM. This means that when we run the UTM on that particular tape a mental event will occur. Let us consider the state of the UTM at the moment the mental event occurs (the internal state, the tape contents, the head position), and let us also discard all the tape that is not used by the UTM during the occurrence of the mental event.
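To make the setup concrete, here is a minimal sketch of a Turing machine simulator in Python. The machine, its transition table, and all the names here are hypothetical illustrations of my own, not part of the argument. A configuration is exactly the triple mentioned above: the internal state, the tape contents, and the head position.

```python
def run_tm(delta, state, tape, head=0, halt_state="halt", blank="_", max_steps=1000):
    """Run a Turing machine given a transition table delta:
    (state, symbol) -> (new_state, written_symbol, move)."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == halt_state:
            break
        symbol = tape.get(head, blank)                  # read the cell under the head
        state, tape[head], move = delta[(state, symbol)]  # write and change state
        head += {"L": -1, "R": +1}[move]                # move the head
    return state, head, "".join(tape[i] for i in sorted(tape))

# Made-up example machine: flip every bit left to right, halt on the first blank.
flip = {
    ("s", "0"): ("s", "1", "R"),
    ("s", "1"): ("s", "0", "R"),
    ("s", "_"): ("halt", "_", "R"),
}
state, head, tape = run_tm(flip, "s", "0110")  # tape ends up as "1001_"
```

The point of the sketch is only that the machine’s entire situation at any instant is captured by the triple returned at the end.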

The behaviour of a UTM is fully determined by its state so, necessarily, whenever we run the UTM from that internal state on that fragment of tape, the mental event will occur. But a UTM with a finite tape has only finitely many possible configurations, so it is computationally equivalent to a finite state machine, to an automaton. So for any mental event we can build an automaton that creates that mental event.
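The finiteness claim can be checked directly: with a fixed-length tape, a configuration (internal state, head position, tape contents) ranges over a finite set, so exhaustively enumerating the reachable configurations terminates, and the result is precisely the transition graph of a finite state automaton. A sketch in Python, using a made-up bit-flipping machine as the example:

```python
def reachable_configs(delta, start_state, tape, blank="_"):
    """Enumerate every configuration a TM can reach on a fixed-length tape.
    Each configuration bundles (internal state, head position, tape contents);
    on a tape of N cells there are at most |states| * N * |alphabet|**N of them."""
    start = (start_state, 0, tuple(tape))
    seen, frontier = {start}, [start]
    while frontier:
        state, head, cells = frontier.pop()
        if not (0 <= head < len(cells)):
            continue  # head ran off the finite tape: treat as halted
        key = (state, cells[head])
        if key not in delta:
            continue  # no transition defined: halted
        new_state, written, move = delta[key]
        new_cells = cells[:head] + (written,) + cells[head + 1:]
        nxt = (new_state, head + (1 if move == "R" else -1), new_cells)
        if nxt not in seen:
            seen.add(nxt)
            frontier.append(nxt)
    return seen

# Made-up example machine: flip bits left to right, halt on the first blank.
flip = {
    ("s", "0"): ("s", "1", "R"),
    ("s", "1"): ("s", "0", "R"),
    ("s", "_"): ("halt", "_", "R"),
}
configs = reachable_configs(flip, "s", "0110_")
# configs is a finite set; each member is one state of the equivalent automaton,
# and each TM step is one transition between them.
```

The enumeration always terminates because the configuration space is finite, which is exactly why the finite-tape UTM is an automaton.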

I hope this consequence is striking on at least two accounts. First, we have a finite state machine which can experience consciousness. Second, we have a stunted kind of consciousness which consists of one single mental event, all the time; it doesn’t even have the absence of that event. These two consequences are not quite a contradiction, logically, but I find them unacceptable. If we want to accept conscious UTMs we must accept conscious finite state automata, and we must accept the fact that consciousness can be dissected into atomic mental events which can be reproduced by automata.


  1. In the above we need to make an important distinction which sometimes gets lost: I am not talking about whether computers can pass the Turing test (I think they will eventually be able to do that), so I am not talking about whether computers can exhibit something like consciousness. I am also not talking about whether we know how to detect for sure whether a computer can have conscious thoughts. What I am talking about is whether a computer can have a subjective first-person experience — or, more precisely still, what are the other things that we need to accept if we believe computers to be able to have consciousness. The other topics are of course interesting and important; they are just not the subject of this argument.
  2. The mind-as-computation view is its most prevalent materialist explanation, so what are we left with if we reject it? God? Magic? Not necessarily. One unavoidable consequence is that our brains do more than computation. They do a lot of computation, that is established, but they do more than that. They ‘do’ consciousness in a way that a computer without a brain cannot do. We don’t know the details though. Materialism means that reality can be explained as particles moving in fields of force, so the necessary consequence of this argument is that there are as yet unknown particles or fields involved in the operation of the brain. Penrose, in The Emperor’s New Mind, tries, for example, to attribute consciousness to quantum effects, which is rather unconvincing, but it is not silly as an attempt to find physical distinctions between brains and computers.
  3. Much of the mind-as-computation view (Dennett is a prime exponent of it) rests on the premiss that there is a qualitative distinction between simple and complex systems. However, the Church-Turing thesis makes it quite clear that the only difference is in the length and contents of the tape of a UTM. Concepts such as ‘complex system’ or ‘virtual machine’ or ‘internal architecture’ sound mysterious and ungraspable, so it seems plausible to attribute other mysterious and ungraspable phenomena such as consciousness to them. But if you go through a book by Dennett and replace ‘complex system’ with ‘a simple controller with a very long tape’ it is quite apparent how silly those arguments are.

About Dan Ghica

Reader in Semantics of Programming Languages // University of Birmingham // https://twitter.com/danghica // https://www.facebook.com/dan.ghica
This entry was posted in anticomputationalism, armchair philosophy.
