Your Brain is Reading This

What Bennett and Hacker call “the mereological fallacy” is the view that psychological predicates attributable to whole organisms may also be attributed to proper parts of organisms. It’s consistent with such a view that my cat may remember where the litter-box is in virtue of his brain’s remembering where the litter-box is. Bennett and Hacker’s hostility toward this view goes beyond merely thinking it false: they reject it as incoherent and nonsensical.

In contrast, I regard it as at worst mostly harmless, probably true, and thus far from incoherent. A lot of the difference between us likely hinges on differing views regarding the mutability of concepts and the scientific worth of conceptual analysis.

Let’s, however, indulge in a little analysis, especially of the concept of a fallacy. I regard fallacies as invalid arguments, and if there is an invalid argument form deserving of the title “mereological fallacy” it goes something like this:

1. a is F
2. b is a proper part of a
3. Therefore, b is F

You can’t plug just any old predicate in for “F” and expect 1, 2, and 3 to come out true. However, it’s fully consistent with this that there are some substitution instances whereby 1, 2, and 3 do come out true. Let a = Mandik, b = Mandik’s left foot, and F = in the Earth’s gravitational field.
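The point can be made concrete in a few lines of code. Below is a toy sketch (all names and predicates are hypothetical stand-ins, following the post’s own examples): one substitution instance where premises and conclusion all come out true, and a counterexample showing the schema is nonetheless invalid.

```python
# Toy model of the schema: "a is F; b is a proper part of a; therefore b is F".
# Predicates are modeled as Python functions from things to booleans.

def in_earths_gravitational_field(thing):
    # In this toy world, everything is in the Earth's gravitational field,
    # so the inference from whole to part happens to preserve truth here.
    return True

def weighs_as_much_as_mandik(thing):
    # Only the whole person satisfies this predicate, not his foot --
    # a counterexample showing the schema is not generally valid.
    return thing == "Mandik"

a = "Mandik"
b = "Mandik's left foot"  # b is a proper part of a

# Instance where 1, 2, and 3 all come out true:
assert in_earths_gravitational_field(a)
assert in_earths_gravitational_field(b)

# Instance where the premises are true but the conclusion is false:
assert weighs_as_much_as_mandik(a)
assert not weighs_as_much_as_mandik(b)
```

The lesson of the sketch is the one in the text: invalidity of the form doesn’t rule out instances, psychological predicates included, where premises and conclusion are all true.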

There are lots of occasions in which 1, 2, and 3 come out true. Why not, then, regard occasions in which F is a psychological predicate as such occasions?

Consider some relevant analogies. If my computer crashes and investigation reveals that all of its parts are in working order except for the hard-drive, then no confusion ensues in saying that the hard-drive crashed. If my cat digests a meal and investigation of all his parts reveals that his stomach did most of the work, then no confusion ensues in saying that his stomach digested the meal. Medieval philosophers, concerned with the doctrine of bodily resurrection, used to engage in a priori speculations about how digestion worked. It seems silly to engage in such practices now.

It should, at a minimum, be regarded as an open question, not something ruled out a priori, whether further investigation will uncover facts we may summarize by saying that the brain remembers, is conscious, has beliefs, etc.

Bennett, M.R. and P.M.S. Hacker (2003). Philosophical Foundations of Neuroscience. Oxford: Blackwell Publishing.

Bennett, M.R., D. Dennett, P.M.S. Hacker, and J. Searle (2007). Neuroscience and Philosophy: Brain, Mind, and Language. New York: Columbia University Press.

55 Responses to “Your Brain is Reading This”

  1. One fine point to note: you seem to be defending a view according to which any ordinary psychological predicate applied to the person may equally well be applied to the brain or parts thereof: as “John digests” => “John’s stomach digests”, so “John sees/thinks/reads…” => “John’s brain sees/thinks/reads…”. Whether there is any error here really depends on what you want to go on to do with these, but so far, indeed it seems harmless enough (though also pointless.)

    But what I gather Hacker is objecting to most is the use of intentionalistic descriptions of brain activities that would be *different* from those applied to the person. For example, saying that your visual system forms a hypothesis, interprets sensory data, makes an inference to the best explanation, fills in the representation of missing details, etc. These would not be occurrences of a person-level psychological description of the subject — it is (normally) no part of your mental life that you apply interpretations to things on the retina (you can’t, because you can’t see them!). If one goes on to draw philosophical conclusions from such descriptions (e.g. about our epistemological relation to the world), I think it is appropriate to take the view that they involve conceptual confusions.

    However, Hacker’s approach to clearing up confusions has always seemed disappointing to me. His hero Wittgenstein compared philosophy to a kind of therapy, by which someone in the grip of a picture might be brought to a clearer understanding of concepts. Such a therapy depends in part on the fact that, in other contexts, the person already tacitly understands and uses the relevant concepts correctly, and may require getting the sufferer to recognize and acknowledge the sources of the confusion. But Hacker writes like someone who, having either been cured himself or never having had the disease, just mocks and sneers at the sufferers. Unlike Wittgenstein, he leaves out the therapy! This is what makes his work come off as doctrinaire apriorism (language policing), and what makes it ineffective as therapy.

    But the fact that Hacker’s approach is not effective does not mean there is not plenty of conceptual confusion in this vicinity!

  2. Eric Thomson says:

    Anders wrote of the word-stealing:
    (though also pointless.)

    It helps people understand the results.

    Taken to one extreme, you could argue that we should create neologisms for every new phenomenon we find. Things would quickly become a mess.

  4. I can think of at least one case where the fallacy applies, and that is perception. James Gibson talks about how it isn’t merely that the “brain perceives”, but rather, the entire organism perceives. This is because perception can only be made sensible in terms of ecology. That is to say, perception evolved in order for entire organisms to pick up information about the environment. The brain doesn’t pick up information. Nor does the retina. It is the entire retina-brain-body system that picks up information, because the information picked up is relevant to the entire organism, not just the retina or the brain.

    Gibson goes into much more detail in his “Ecological Approach To Visual Perception”, which I highly recommend.

  4. Pete, your comments here remind me of a passage by Carnap:

    “The acceptance or rejection of abstract linguistic forms, just as the acceptance or rejection of any other linguistic forms in any branch of science, will finally be decided by their efficiency as instruments, the ratio of the results achieved to the amount and complexity of the efforts required. To decree dogmatic prohibitions of certain linguistic forms instead of testing them by their success or failure in practical use is worse than futile; it is positively harmful because it may obstruct scientific progress. The history of science shows examples of such prohibitions based on prejudices deriving from religious, mythological, metaphysical, or other irrational sources, which slowed up the developments for shorter or longer periods of time. Let us learn from the lessons of history. Let us grant to those who work in any special field of investigation the freedom to use any form of expression which seems useful to them; the work in the field will sooner or later lead to the elimination of those forms which have no useful function. Let us be cautious in making assertions and critical in examining them, but tolerant in permitting linguistic forms.”

    The problem with this passage, and I think with your comments as well, is that it neglects the difference between standards of utility and standards of truth. If the claim is that there is no standard of truth that isn’t first a standard of utility, it follows that there is no difference at all between metaphorical truth and literal truth. Surely this is highly implausible. It is better for a scientific discourse (indeed, for any discourse) that its speakers keep track of when they are speaking literally and when they are not. When I am speaking literally, there are no people in my head.

    The point isn’t that there can be no new uses for words. The point is that the speaker should be aware when the use is both literal and new, and should give (as best as she is able) the criteria of meaning that the new use should have. And when the use is metaphorical, clarity demands that the speaker explicitly acknowledge as much (as long as we’re talking about works of science rather than works of poetry).

    I’d like to put down my support for N. N. here and note that I would like neuroscientists to read Bennett & Hacker’s book as well. I enjoyed it.

  5. Eric Thomson says:

    If the claim is that there is no standard of truth that isn’t first a standard of utility, it follows that there is no difference at all between metaphorical truth and literal truth.

    It’s not clear this follows. Carnap’s quote is beautiful, thanks for posting it.

    That book you linked to is kooky, but it also isn’t neuroscience.