Please don’t insist that the brain is a computer

September 25, 2018 – 11:29 am

Thinking aloud on Twitter, a colleague expressed his frustration thus: “It has bothered me for a while when people say that ‘the brain as a computer’ is a metaphor that has had its day. It’s not the same as saying the brain is a clockwork machine or whatever metaphors there were in the past. It’s not a metaphor.”

This led to an exchange involving several people, and, inevitably, the substance of the matter got lost in the abbreviated trading of polemical statements. It was obvious that I represented a minority opinion. I insisted that “the brain is a computer” is indeed a metaphor, and that it must be seen as such as long as alternative positions are available. It is this position that I wish to elaborate upon here.

I do not set out to convince anybody that their view of the brain as a computer is wrong. My concern is that alternative views, views that cannot be aligned with this position, are available and representable (in the sense that coherent arguments can be made for them), and that the core of the debate is a metaphysical disagreement, not an empirical one; it is therefore important that space be made for divergent interpretations. To do this in a brief post, I think it necessary that I do these things:

* demonstrate that I understand what is being claimed when it is asserted that “the brain is a computer” and that I recognize the power of that view
* demonstrate that that view has metaphysical commitments of its own that are not universally shared
* briefly say a few words about how this might appear if one had different metaphysical commitments, and
* outline why the original argument is politically important, and why I will continue to insist that the original statement is a metaphor and, in an ethical debate, must be seen as such.

1. What is asserted when we say the brain is a computer?

As my colleague noted, the brain has been interpreted using a variety of metaphors. It has been seen as a telephone switchboard, as a hologram, as an orchestra, and as mechanisms of various kinds. When he insists that the brain “is” a computer, and that this is no longer a metaphor, something important has happened. What justifies this preference?

I suspect that the conviction arises from the feeling that this metaphor, as opposed to the other ones, gets it “right.”  That conviction is hard to examine or tease apart.  We could discuss the metaphor (computation/computer), and I will do so below, but in a rather more trivial sense, I think we might all agree that some metaphors are better than others, and when we find a particularly satisfying one, one in which there seems to be no unexplained residual, then we might claim that we have found an accurate picture, a picture indeed that we can no longer do without, and at that point, we begin to speak of literal truth, rather than metaphorical explication.

My favourite example here is how we understand the heart. We no longer regard the heart – within somatic medicine at least – as the seat of the passions, but as a pump, whose function is to circulate blood within the body. Once this way of viewing the heart became available through the work of Harvey, other metaphors appear misguided, and there is a sense in which one might assert “the heart is a pump! This is no longer metaphorical!”

I wonder if my interlocutors here will agree that this is the sense in which the bold claim is being made: that this metaphor (computer) is better than all previous ones, and thus deserves to be taken “literally.” I ask because it seems trivially true that we are mapping from a theoretical model (computer) onto something embodied (the brain), and that we are thus dealing with a metaphor. For example, I don’t think anyone in this debate would insist that a computational model is of relevance to interpreting how the brain looks in the pale moonlight, or how it tastes when fried with garlic and a side of chianti. The computational theory of mind, and computer-based accounts of the brain, are metaphors in this very broad sense of being stories about particular facets of the brain, specifically its “functions”.

2. What metaphysical commitments does this view entail?

If the analogy with the way we view the heart is any use (analogies about metaphors, we are tumbling down the representational rabbit hole here!), then we might use it to illuminate the claims being made here about brains. The heart is a pump. Pumping is thus its “function”. Now, in any objective science, the introduction of “function” must be flagged as important, and in need of careful consideration.

I will distinguish here strictly between the mathematical notion of a function (y = x^2) and the way function works in explanatory accounts of the world. The mathematical function is an innocent beast. We start with some numbers, some structures, some mathematical objects, and we transform them into other mathematical objects. That is at the heart of what most people mean by computation: the rule-based transformation of numerical (or, more broadly, mathematical) objects. (This applies equally to input-output mappings, and to the unfolding of state in a cybernetic model.)
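To make the innocence of the mathematical notion concrete, here is a minimal sketch (in Python; the function names are mine, purely illustrative) of “function” as rule-based transformation, in both the input-output sense and the state-unfolding sense mentioned above. Nothing in either rule succeeds or fails at anything; there is no teleology in sight:

```python
# "Function" in the mathematical/computational sense: a rule-based
# transformation of one mathematical object into another.
def f(x):
    return x ** 2  # y = x^2

# The same innocence holds for the unfolding of state in a cybernetic
# model: each step merely applies a rule to the current state.
def step(state):
    return (state + 1) % 10

state = 0
trajectory = [state]
for _ in range(5):
    state = step(state)
    trajectory.append(state)

print(f(3))        # 9
print(trajectory)  # [0, 1, 2, 3, 4, 5]
```

Neither rule is “for” anything; calling such a transformation a function carries no commitment beyond the mapping itself.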

This is emphatically not the sense in which we use the word function when discussing the heart. There, the word has teleological commitments. A heart may be said to succeed or fail in its function. There is, we all agree, an important difference between a corpse and a living person. Some of that difference is to be attributed to the heart, which is important in keeping a whole integrated body alive. Teleology has snuck into a domain we normally like to pretend it has no business in. One might be forgiven for thinking that we had gotten rid of Aristotelian teleology with the construction of a mechanical cosmology. But here is goal-directedness lying at the heart of our consensus-based account of the world. (If this point is not agreed upon, then the conversation can go no further. The importance of teleological explanation within science surely requires no further argumentation on my side?)

We normally do not need to calibrate our cosmologies when discussing hearts, because we share the following common ground: we all are well aware of the significance of the healthy heart for a living person, and we do not feel the need to subject this to critical examination all the time. But the teleological nature of the attribution of function to a piece of meat (the heart) requires this shared ground. Other views of the heart are possible (it is a rather bad croquet ball), but our shared ground in delimiting the goal makes discussion of function unproblematic in almost all contexts.

If we assert that we have stumbled upon the correct metaphor with which to view the brain, so that we might employ it as literally as we employ the pump metaphor when discussing the heart, then we need to see if we have the same common ground.

Here the discussion might fracture, as I might pursue one computational story and my interlocutor might have a radically different story in mind. I might be fighting the spooks of input-output models that reduce the person to the form of a puppet, while you might be thinking of state-evolution in an autopoietic organisation. It is difficult, at this point, to set up the nature of the disagreement in a way that might command consensus as the heart/pump image does. This difficulty is itself important. Do we have a common view of what is to be explained?

3. What other views might exist?

Here, I intend to follow a dangerous course. Because I see the differences in interpretation as lying, not in empirical matters, but in the unspoken commitments that frame our inquiry, I consider the differences to be metaphysical, or, if you like, religious. If I state it that baldly, perhaps it will be easier to understand why I want to insist that other views must be entertained, or allowed to exist, even if they appear to some to be misguided, or more likely, mysterian.

We do not have a single computational account, but a host of them. Many of them, I think, illuminate the person, the brain and the body in ways that range from thought-provoking to extremely useful. I have developed such models myself, and I stand by them for what they are worth. But any such account assumes—must assume—some split between subject and world. That is, given something to be explained, some explanatory load is placed on the person and some on the world.

At this point, I will lose any unsympathetic readers, because I want to assert that there is no consensus-based account of how that explanatory load can be distributed that is not beholden to specific religious commitments. The approach taken in the computational theory of mind, and in cognitive psychology more broadly, is rooted in Protestant autonomy. It assumes an inert world on which the person acts, in a manner of interaction much like pushing a button and awaiting a result. Now most people who develop and work with such computational models will not think of their own tradition in this fashion, and this post is already too long, so let me just outline some reasons that might give rise to caution even in those for whom that claim is outrageous:

* Science has developed in a specific cultural and historical milieu
* Within that milieu, an emphasis on the autonomy of the individual has been developed that is not universally shared
* This emphasis on autonomy is caught up with accounts of personal (moral) responsibility and with political and legal developments
* It is not possible to extract any account of the person from such an intellectual history.

Other cultures, other religious traditions, have chosen to distribute the explanatory loads differently between subject (however conceived) and world. In Buddhist, and perhaps Vedantic, traditions, the interaction is better understood as a handshake than as a button push. In a handshake, both parties are agentive, and the resulting observable action is not reducible to the action of two autonomous individuals.

The division of responsibility between the psychological and social sciences is contested, must be contested, and the manner in which we handle such contestation will greatly affect our ability to be social scientists of any kind. That division is political in its import. The attribution of agentive function to the brain, or even to the individual, relies on one or another distinction between the individual and the many currents they swim among.

The colonial West has a shameful history of insisting that its view of the world was the one that trumps all others, because it is “correct”. The very notion of literal truth is itself a creation of the post-Reformation, highly literate, Western society we live in. Literal truth implies that the mapping from representation to world is unassailable. Once more, I feel that scientists are perhaps not taking stock of philosophy: there is no philosophically sound position that insists on a simple notion of truth as correspondence.

4. Why this matters

I have said that I value computational models, and have even developed some myself, and have found them to be useful for many purposes.

But I have also pointed out that the satisfying sense of having found the right metaphor for explaining brain function critically depends on us sharing an understanding of what brain function is.

Unlike with the heart, there is divergence here. I frequently find myself speaking to colleagues for whom such culturally-relative, theologically-oriented, metaphysical discussion is of no value for their everyday work. To them I simply ask that they allow space for other faiths, other beliefs.

To appeal to faith or belief may seem unduly weak in an argument that involves such hard-nosed notions as computation and computers. This is a problem. The domain and remit of scientific knowledge must be seen as work in progress, in which we are improved by dialogue rather than insistence.

In my scientific work I encounter phenomena that are hidden if one assumes the standard metaphysical position that underlies any computational account of brains. I refer to our collective nature(s), our enmeshment in lifeworlds that are generated by our activities, but not in any way that we are conscious of or have control over. This is the sedimentation of human activity that generates the human lifeworld(s). It would burst the banks of this post to argue much further. I don’t want to convince anyone that the things I look at are more important than the things they look at. I do want to insist that there are ways of viewing the world, its inhabitants, and their activities, that are radically different from those that start with a single metaphysical view of person and world.

So please don’t insist that the brain is a computer. Doing so resembles the insistence of the religious zealot that they and only they have the secure knowledge necessary to pronounce on matters that underpin the exercise of social authority, the freedoms accorded to persons, and the make-up of society. Be computational to your heart’s content (hah!) but don’t do so in a way that denies the reality other people are studying from differing perspectives. If you find yourself asserting that a description is a literal truth, be cautious.
