Why computers can’t think (III)

This is the final instalment of a series of posts on machine consciousness. It’s a fascinating topic, but now that the Easter holidays are over I will need to refocus my attention on the equally fascinating topics I am actually paid to research.

The motivation for this series of posts is the recent personal realisation that consciousness (having subjective first-person experiences) cannot be explained as a purely computational phenomenon. The consciousness-as-computation view is widely held, especially among computer scientists. It is a very convenient materialist and physicalist answer to some of the deepest questions about the mind. I used to subscribe to this view, but now I realise it is based on some misunderstandings of what computation is. The gist of the argument is that a computing device, if it’s complex enough, could somehow develop consciousness. Our own human consciousness is given as an instance of this, because our brains, as computing devices, have the required level of complexity. To emphasise: we are not talking about the ability to mimic a consciousness; we are talking about having a consciousness.

In the first post on this topic I argued that from the Church-Turing thesis it follows that any complex computing device can be reduced to a Universal Turing Machine, which is a simple device with a very, very long tape — so there is nothing qualitatively special about the complexity of the device. In the second post I argued that even the universality of a Turing Machine is not a special property, because given a conscious UTM we can construct a finite state machine which can have some of the subjective experiences of the UTM. Neither of these two arguments conclusively proves that a computer cannot think, but they show that the common view that computers can think is based on some bad arguments: a misunderstanding of what computers are.

No doubt, our brain does a lot of computation. As a computer it is an enormous finite state machine. The question is: does our consciousness arise out of the computation it does, or does it arise out of its being a brain? In other words, if we were to simulate all its computations in a computer, would that computer have the subjective experiences we have (joy, love, despair, confusion, etc.), or is consciousness a non-computational function of the brain’s biological processes? Considering the arguments in my previous two posts, I incline towards the latter. It’s not an idealist or dualist position; it’s still materialist and physicalist, but it entails that there is a lot about the brain that we don’t know.

I will now attempt a stronger argument that machines cannot think, by trying to derive a contradiction. Because there is nothing special about UTMs as far as consciousness goes, and because the brain itself is a FSM, let us focus on the (im)possibility of FSMs having a subjective experience. For simplicity let us further focus on deterministic FSMs, since they are simpler yet just as expressive as non-deterministic ones.

Before we proceed, let us set some things straight. A FSM is commonly understood as an abstract structure, whereas we will consider the physical realisation of a FSM as a physical system with inputs, outputs and changing internal state. The following is not obvious, but essential: if a FSM is to have a subjective experience (consciousness) and if consciousness is computational, then the nature of that subjective experience will only depend on the abstract FSM and not on the way the FSM is physically realised. So if a particular FSM is realised as a brain and it has certain subjective experiences, then if we realised it as a silicon computer it would have precisely the same experiences. This is required for the sake of argument: if we allow the subjective experience to depend on the physical realisation of the FSM then it is no longer a computational phenomenon and we are done. Another, even less obvious, assumption is that if a FSM can give rise to a subjective experience then that particular subjective experience is not associated with the FSM as a structure, but must be associated with a particular run of that FSM. In other words, if the FSM has a subjective experience it should arise only out of its interaction with the environment and the evolution of its internal state, i.e. the computation. As before, the argument is that if it arises otherwise then consciousness is not of a computational nature, and we are done. A final and essential clarification of what it means to implement an abstract FSM as a concrete physical system: it means to establish a FSM homomorphism between the latter and the former. This means that if we can observe the states, inputs and outputs of the concrete FSM we can “decode” the states, inputs and outputs of the abstract FSM (using the homomorphism) so that the transitions are preserved.
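To make this setup concrete, here is a minimal sketch in Haskell (the types and names are mine and purely illustrative, not part of the argument): a deterministic FSM with its transition function, a run as a trace of states and outputs, and the transition-preservation property that a decoding homomorphism from a concrete FSM to an abstract one has to satisfy.

    -- A deterministic (Mealy-style) FSM over states s, inputs i and outputs o.
    data FSM s i o = FSM
      { initial :: s
      , step    :: s -> i -> (s, o)  -- transition: next state and output
      }

    -- A run of the FSM on a list of inputs: the trace of visited states and outputs.
    run :: FSM s i o -> [i] -> [(s, o)]
    run m = go (initial m)
      where
        go _ []       = []
        go s (x : xs) = let (s', o) = step m s x in (s', o) : go s' xs

    -- Implementing an abstract FSM by a concrete one means giving decoding maps
    -- on states, inputs and outputs under which every concrete transition decodes
    -- to an abstract transition. This checks that property at one transition.
    implementsAt
      :: (Eq s, Eq o)
      => (cs -> s, ci -> i, co -> o)  -- decoding maps for states, inputs, outputs
      -> FSM cs ci co                 -- concrete FSM
      -> FSM s i o                    -- abstract FSM
      -> cs -> ci -> Bool
    implementsAt (decS, decI, decO) conc abstr s i =
      let (s', o) = step conc s i
      in  step abstr (decS s) (decI i) == (decS s', decO o)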

Let us now try to derive a contradiction out of the existence of consciousness-producing FSMs. More precisely, we will show that the consciousness of the brain cannot arise simply out of its computational activity. Computationally, we know that the brain must be a FSM. Let us take two runs of that FSM, of equal length, but corresponding to clearly distinct mental states (e.g. one taken on a happy day, one on a sad day). These runs correspond to traces of inputs, outputs and internal states. If consciousness is computation, then whenever we run a FSM isomorphic to the brain-as-a-FSM so that the two traces are realised, that isomorphic FSM will feel happy and sad, respectively.

But if, as we established, the subjective experience depends on the run rather than on the FSM, in each case we can discard all the states that are not used and end up with two non-isomorphic FSMs which can realise two distinct subjective experiences if fed with particular inputs, and some other behaviour we don’t care about if fed with other inputs. Now let us construct the product of these two FSMs, i.e. the FSM with the transition function

f (s_1, s_2) (i_1, i_2) = p (f_1 s_1 i_1, f_2 s_2 i_2)

where f_1 and f_2 are the two transition functions (each returning a next-state/output pair) and p is the regrouping isomorphism

p ((s_1, o_1), (s_2, o_2)) = ((s_1, s_2), (o_1, o_2))
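A minimal Haskell sketch of this product construction, continuing the illustrative FSM type from the earlier sketch:

    -- The product FSM: states, inputs and outputs are paired, the two components
    -- step independently, and the regrouping isomorphism p turns
    -- ((s1', o1), (s2', o2)) into ((s1', s2'), (o1, o2)).
    productFSM :: FSM s1 i1 o1 -> FSM s2 i2 o2 -> FSM (s1, s2) (i1, i2) (o1, o2)
    productFSM m1 m2 = FSM
      { initial = (initial m1, initial m2)
      , step    = \(s1, s2) (i1, i2) ->
          let (s1', o1) = step m1 s1 i1
              (s2', o2) = step m2 s2 i2
          in  ((s1', s2'), (o1, o2))
      }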

And this gives us the contradiction: a physical realisation of the product automaton is an implementation of both automata, by the composition of the implementation homomorphism with each projection homomorphism. But if subjective experience is purely computational then the same run of the product automaton must give rise to two distinct yet simultaneous subjective experiences. But having two distinct subjective experiences simultaneously — assuming that even makes sense — is not the same experience as having just one of the two alone. To me, this is a contradiction.
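In terms of the same illustrative sketch, the projection homomorphisms are just the pair projections, so a single run of the product decodes, component-wise, into a run of each factor:

    -- Decoding a run of the product into runs of each factor, by projection.
    projectRun1 :: [((s1, s2), (o1, o2))] -> [(s1, o1)]
    projectRun1 = map (\((s1, _), (o1, _)) -> (s1, o1))

    projectRun2 :: [((s1, s2), (o1, o2))] -> [(s2, o2)]
    projectRun2 = map (\((_, s2), (_, o2)) -> (s2, o2))

    -- Expected property, for input lists of equal length:
    --   projectRun1 (run (productFSM m1 m2) (zip is1 is2)) == run m1 is1
    --   projectRun2 (run (productFSM m1 m2) (zip is1 is2)) == run m2 is2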

A FSM cannot have a subjective experience just by virtue of its computational behaviour. I suppose this must be for the same reason that a computer doesn’t get wet if it simulates a liquid and it doesn’t become radioactive if it simulates a nuclear explosion — no matter how precise the simulation. Why is a brain different? Why or how does a brain “make” consciousness? We only know that it does, because we experience it first hand, but we know little else.

A final qualification: when I say that machines cannot think I mean that they cannot think qua machines. This argument doesn’t rule out panpsychism — the doctrine that everything is more-or-less conscious — or idealism — the doctrine that the mind has a non-physical nature. But this argument doesn’t require them either. For details, I refer the reader to the work of Searle, who proposes a modest and sensible programme starting from the doctrine that consciousness is a biological process specific to the brain.

Acknowledgements. Steve Zdancewic and Lennart Augustsson pointed out that the brain must be a FSM, thus simplifying this argument quite a lot.


6 Responses to Why computers can’t think (III)

  1. Ulrik Buchholtz says:

    Not so fast! I’m sure you’ll agree that when there are two people in a room you have two simultaneous conscious experiences going on in the room. Thus when we have any description of the room that allows us to split it into subsystems containing the individuals, then your supposed contradiction would follow.

    It’s interesting that quantum mechanics in fact disallows this kind of decomposition of systems into subsystems (not every state is a product state because of the possibility of entanglement). But note, I’m not saying that that is evidence that consciousness is “quantum” in nature, just that it’s interesting!

    • Dan Ghica says:

      And I hope you agree that two different people having two different experiences is not the same as a single person having two different experiences! In my example I am talking about a single concrete FSM!

      • Ulrik Buchholtz says:

        Of course! But the single machine is a composite of two other machines, and it seems to me that you are saying that any single machine must have exactly one stream of experience, and that is what I don’t see would follow. (Because a single physical system obviously can have that.)

        What do you take to be the relevant difference between a composite physical system instantiating multiple streams of consciousness and a composite computational system doing so?

        If consciousness were computational, shouldn’t we expect a single execution to potentially embody multiple streams of consciousness?

      • Dan Ghica says:

        Ulrik, I guess this is exactly what I’m saying: if consciousness is computational then a single execution must be able to embody “multiple streams” of consciousness. Unlike you, I find this problematic on two counts. The first one is, let’s say, ‘naturalistic’, in the sense that the only first-person subjective experiences I know are my own, and I have never experienced multiple streams of consciousness (whatever that means). But maybe this is just my personal prejudice. The second one is that the decoding homomorphisms are intentional, so the subjective experience of the FSM depends on an external entity assigning a certain abstract computational meaning to the concrete computation. This is deeply troubling: I don’t know about you, but my personal experiences are definitely not dependent on someone assigning meaning to them.

      • Ulrik Buchholtz says:

        I didn’t say I didn’t find it problematic; just that your argument didn’t seem to address this.

        To my knowledge I also have never experienced multiple streams of consciousness, but I’m not sure that tells us anything.

        To your second point, that the decomposition is intensional, I think the same issue arises in many physical models, so the problem is not unique to computationalism.
