Why computers can’t think

I have a weakness for philosophy of mind, but I find most of the writing on the topic either impenetrable or laughable. Recently, a good friend who is a philosopher recommended Searle’s book Mind: A Brief Introduction. It is indeed an excellent book. It makes lucid and careful arguments and, gratifyingly for a non-philosopher, it shows why most influential theories of the mind are wrong. The programme it sets out is very modest, meant to delegate matters from philosophy to science as quickly as possible, but more on this later.

Searle is famous, among other things, for the Chinese Room argument, which is meant to clarify that mimicking understanding and having the experience of understanding are not the same thing. I found the argument compelling. The argument is also meant to have a farther-reaching consequence: that the mind cannot be (just) a computational process.

The second part of the argument is less convincing, because you cannot in general prove impossibility by example; it is close to a straw-man fallacy. I think a different argument is needed here, and I will try to formulate one. It takes the Church-Turing thesis as its starting point: anything that can be computed can be computed by a Turing machine. If the mind is a computational phenomenon, it must be exhibited by an appropriately programmed Turing machine.

If you don’t know what a Turing machine is, have a look at one such device. It consists of a tape, a read-write head, and a controller (which can be as small as a two-state, three-symbol automaton). I take it as obvious that this device cannot experience thought (i.e. consciousness), but let us elaborate for the sake of argument. The subjective experience of thought should come, if it arises at all, from the contents of the tape, the software. But that is not possible, because TM computation is a local phenomenon: at any moment the machine is only “aware” of the controller state and the symbol under the head, never of the global contents of the tape. So it would have to be the tape that experiences thought. But the software on the tape can be a mind only insofar as it works for a particular encoding of the TM, e.g. its set of symbols. So the “seat” of artificial consciousness is neither in the hardware nor in the software, but in the relation between them. But a mind is a real concept, and cannot be contingent on an abstract one. A TM and its tape cannot be or have a mind.
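
To make the locality point concrete, here is a minimal Turing machine simulator (a sketch of my own, not anything from Searle; the rule encoding and the names are illustrative assumptions). Note that at each step the machine consults only the controller state and the single cell under the head, never the tape as a whole:

    # Minimal Turing machine simulator: each step sees only (state, symbol).
    def run_tm(rules, tape, state="start", head=0, blank="_", max_steps=1000):
        """rules maps (state, symbol) -> (new_state, written_symbol, move),
        where move is -1 (left) or +1 (right); the state "halt" stops."""
        cells = dict(enumerate(tape))        # sparse tape over the integers
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, blank)  # the ONLY tape cell visible now
            state, cells[head], move = rules[(state, symbol)]
            head += move
        return "".join(cells[i] for i in sorted(cells))

    # A (trivially) two-state, three-symbol machine: flip every bit of the
    # input, then halt on the first blank.
    flip = {
        ("start", "0"): ("start", "1", +1),
        ("start", "1"): ("start", "0", +1),
        ("start", "_"): ("halt", "_", 0),
    }
    print(run_tm(flip, "0110"))              # prints 1001_

Nothing in run_tm ever holds a global view of the computation: the “awareness” at each step is the pair (state, symbol) and nothing more.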

Searle doesn’t quite make this point, but he makes a similar one. A computation is defined by the intention to compute. A computer is a thing that is used by someone to compute — just like a tool is an object with an intended function; otherwise it is just stuff, or perhaps an artistic artefact. Any physical system, to the extent that it has a state that changes according to known laws, can be used to compute. But in the absence of intent it is not a computer, it is simply a physical system. So the reason the mind cannot be explained as a computational phenomenon is that computation presupposes intent, which presupposes a mind: the definition would be circular.

A computation is a supra-conscious behaviour: it expresses the (conscious) intent of another mind. For someone who practises and teaches mathematical subjects, the distinction between conscious and algorithmic thought is quite clear. A student (but also a mathematician) can carry out a computation (or a proof) by following an algorithm either deterministically (a simple calculation) or heuristically (manipulating an expression by symbol-pushing to achieve a proof objective). This activity is not experienced as conscious in the same way that mathematical activity involving genuine understanding is. There is no a-ha! moment; its quality is not intellectual excitement but a vague sense of boredom. Nor is the activity quite unconscious: one is aware that something is going on and can start or stop it at will. I think a new term, such as supra-conscious, is needed to capture this experience, and I think it also describes the experience of the Chinese Room operator.
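
As an aside, deterministic symbol-pushing is easy to mechanise, which is part of why it feels so different from understanding. Here is a toy illustration (my own sketch; the expression encoding is an arbitrary choice): symbolic differentiation by blind application of rewrite rules, producing correct derivatives while “understanding” nothing about what a derivative is.

    # Symbol-pushing without understanding: differentiation by rewrite rules.
    def d(expr, x="x"):
        """expr is a number, a variable name, or a tuple ('+'|'*', a, b)."""
        if isinstance(expr, (int, float)):
            return 0                          # constant rule: dc/dx = 0
        if isinstance(expr, str):
            return 1 if expr == x else 0      # variable rule: dx/dx = 1
        op, a, b = expr
        if op == "+":
            return ("+", d(a, x), d(b, x))    # sum rule
        if op == "*":                         # product rule
            return ("+", ("*", d(a, x), b), ("*", a, d(b, x)))
        raise ValueError("unknown operator: " + repr(op))

    # d/dx (x*x + 3) -> (1*x + x*1) + 0, i.e. 2x before simplification
    print(d(("+", ("*", "x", "x"), 3)))

Each step is a purely formal manipulation of symbols; whatever a-ha! moment there is belongs to whoever wrote, or reads, the rules.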

What is the mind, then? I find Searle’s modest proposal acceptable: the mind is a biological phenomenon, a function of the brain. We don’t understand the details of this function, but it is not of a (purely) computational nature. The brain is not (just) a computer, so no computational simulation of the brain will produce a mind.


6 Responses to Why computers can’t think

  1. Pingback: Why computers can’t think (III) | The Lab Lunch

  2. Pingback: Why computers can’t (really) think (II) | The Lab Lunch

  3. Dan Ghica says:

    This was from Jon Rowe via email:

    The Chinese Room argument.

    I don’t find this in the least compelling. It is “obvious” to me that it is the total operation of the room (of which the person is just a small part) that is performing the intelligent activity, and that is even conscious. The sleight of hand that Searle pulls is in the description of the rules that get followed, which seem rather simple. It seems to me that for the experiment to work, the rules would have to actually encode a full simulation of a native Chinese speaker.

    Can a Turing Machine be conscious?

    I take it as “obvious” that they can. I do not understand your argument in a couple of places. Firstly, that Turing machine computation is local – well, to a large extent so is mine! I am certainly not aware of all my internal states, and I strongly suspect that I am aware of very few, and even those I probably delude myself about. Secondly, that consciousness is in the relation between the hardware and software – I am not sure I got your argument for this, but it does not seem to me to be a problem if this were indeed the case. Why can’t my consciousness be in the relationship between my brain and the systems it implements? Your argument against it is that “mind is a real concept and cannot be contingent on an abstract one”. I don’t know what that means. All concepts are “abstract” in some sense.

    Computation is intentional

    Searle’s argument appears to be that, because people make computers, using computation as an explanation for the mind is somehow unacceptable. I don’t see why. The way a bacterium’s flagella work to propel it along has been described as a molecular rotary motor. Just because motors are built by humans with particular intentions in mind doesn’t mean that’s a bad explanation.

    A biological function?

    To say that the mind is a biological function seems true to me but doesn’t explain much. If we could say that this biology implements certain algorithms then we would have said a great deal more. So I don’t really get Searle’s point here.

    What is an explanation?

    I have a suspicion that some of our disagreement arises because of different views of what would constitute an “explanation” or “theory” of mind (or indeed anything). I would say that it is coming up with concepts and relationships between them that we map onto parts of the world, in the hope that it describes and predicts those parts of the world (at least approximately). One can say of certain gene expression reactions that they are “computing” some function of their environment, and that is a useful explanation in that it helps predict what it will do and how it fits in with what is going on around it. I don’t see why it is impossible in principle for a similar computational account of mind to be valid.

    What is consciousness?

    I think our biggest difference is our intuitions about the nature of consciousness. You see it as a single indivisible thing that something either has or doesn’t. I see it as a complex, varying thing that different systems have in different ways and to different degrees. My consciousness is a little different from yours. Both of ours are considerably different from a baby’s, and even more so from a dog’s. If a Turing machine had consciousness it would be even more different, not least because it is operating in a laborious sequential fashion with little or no input from the environment – completely unlike you and me.

    However, here, as in most philosophy, we are basing our ideas on what seems “obvious” to us. I have made too many errors in my research career by assuming things to be obvious to think that this is an appropriate basis for finding things out. Until we have much more experimental evidence concerning the nature of consciousness I doubt we can make much progress with understanding it!

    • Dan Ghica says:

      Except for the intentionality of computation, I think all the other points may indeed boil down to what we precisely mean by ‘consciousness’. What I mean is the qualitative aspect of experience: the fact that ‘redness’ is qualitatively different from ‘loudness’ or ‘sadness’ or ‘confusion’. In a machine these boil down to the value of some sensor, but that is not the same as having a qualitative subjective experience. This is the essence of the Chinese Room argument: the subjective experience of someone following an algorithm is different from the subjective experience of someone who understands. The setup mimics conscious behaviour, but do you really, truly think it has the subjective experience of understanding?

      This is difficult to pin down, perhaps because as scientists we are very committed to what is observable, and the only observable subjective experience is your own. This makes it hard to put the argument in clear and formal terms, but I think you can relate to the two distinct experiences of following an algorithm versus really understanding something. This appeal to subjective experience may be unacceptable to you, in which case I think the debate would be stuck on disagreeing premises — my argument would be totally unacceptable, borderline nonsense.

      Just keep in mind that when I say ‘consciousness’ I mean here ‘experiencing consciousness’ not ‘exhibiting consciousness’. If I consider the latter then your objections are perfectly sensible and I agree, but if I consider the former then I don’t. I could go blow-by-blow but if we are in disagreement on words it would be a waste of time. So: ‘experiencing consciousness’.


      Jon wrote: “Computation is intentional. Searle’s argument appears to be that, because people make computers, using computation as an explanation for the mind is somehow unacceptable. I don’t see why. The way a bacterium’s flagella work to propel it along has been described as a molecular rotary motor. Just because motors are built by humans with particular intentions in mind doesn’t mean that’s a bad explanation.”

      The analogy does not quite work because in the case of flagella/engines we don’t have a circular definition. It makes sense to say ‘a flagellum is a kind of engine’ because whether engines are intentional or accidental does not matter. But it is circular to say ‘intent (as in that of a mind) is a kind of intent (as in that of a computation)’. Flagella/engines are not about intent, whereas mind/computation are about intent, so intentionality matters only for the latter.

  4. Dan Ghica says:

    Facebook comments on this post.
