Mind Spill in Aisle Nine

I’m going to side with Fodor a bit in the following remarks about Andy Clark’s response to Fodor’s LRB review of Supersizing the Mind.
 
There’s a worry of Fodor’s, or kind of like a worry of Fodor’s, that seems to me insufficiently addressed by Clark. To put it in a very cute and short way, the worry is that the externalists pay too much attention to the “where” in “where is my mind?” and not enough attention to the “my” in “where is my mind?”.
 
To spell this out a bit more, let’s start with the role of functionalism/multiple realizability in the externalists’ arguments.
 
Clark runs a quick little version of that old functionalist gem, the silicon-chip replacement thought experiment. He writes:
 

Diva can now divide just as before, only some small part of the work is distributed across the brain and the silicon circuit: a genuinely mental process (division) is supported by a hybrid bio-technological system. That alone, if you accept it, establishes the key principle of Supersizing the Mind. It is that non-biological resources, if hooked appropriately into processes running in the human brain, can form parts of larger circuits that count as genuinely cognitive in their own right.

 
What Clark is here calling the key principle looks like functionalist multiple realizability to me. From there, Clark builds up to iPhones and the like playing the same functional roles that brain circuits do. That’s one way to start getting a mind to supervene on more than a brain. But there’s a much older way to do it, a way that predates 1990s-style mind extension.
 
Consider the functionalists’ “Systems Reply” to Searle’s Chinese Room: the Chinese-understanding mind supervenes on a larger system of which Searle is a proper part and of which the other parts include the remaining contents of the room. But on that story, presumably, Searle’s monolingual English-understanding mind just supervenes on Searle’s brain. A mind has leaked out into the room; it just happens not to be Searle’s.
 
Here I think worries can be raised about violations of physicalist supervenience, especially a version I call “fine-grained supervenience,” which I won’t spell out much here but have explored in my paper, “Supervenience and Neuroscience”: [link]. The Chinese-understanding mind has parts whose supervenience bases overlap with the supervenience bases of Searle’s mind. Things get even weirder when we add the extended mind thesis and let Searle’s mind leak out into the whole room. Now the room-system as a whole serves as a supervenience base for two distinct minds. That looks to violate a principle of “no mental differences without physical differences”. It also raises very worrying questions about how to tell whose mind is whose. Arguably, all we have to go on, being neither Searle nor the Chinese AI, is the physical stuff, right?
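
A rough gloss on that last worry may help (this is just a sketch, not the fine-grained version defended in the linked paper). The standard principle is something like:

\[
\forall x \, \forall y \; \big[ \, x \approx_{P} y \;\rightarrow\; x \approx_{M} y \, \big]
\]

where \(\approx_{P}\) is physical indiscernibility and \(\approx_{M}\) is mental indiscernibility. If one and the same room-system grounds both Searle’s mind and the Chinese-understanding mind, then the difference between those two minds is a mental difference for which no physical difference in the base can be cited.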
 
So part of what I take to be worrying Fodor, or should count among his worries, is the question of how to count minds if they start leaking out all over the place.
 
Fodor writes:
 

 [T]ry this vignette: Inga asks Otto where the museum is; Otto consults his notebook and tells her. The notebook is thus part of an ‘external circuit’ that is part of Otto’s mind; and Otto’s mind (including the notebook) is part of an external circuit that is part of Inga’s mind. Now ‘part of’ is transitive: if A is part of B, and B is part of C, then A is part of C. So it looks as though the notebook that’s part of Otto’s mind is also part of Inga’s. So it looks as though if Otto loses his notebook, Inga loses part of her mind. Could that be literally true? Somehow, I don’t think it sounds right.
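
Fodor’s little argument can be put somewhat formally; here is a rough gloss in my notation, not his. Let n be the notebook, O be Otto’s (extended) mind, I be Inga’s (extended) mind, and Part(x, y) mean that x is part of y:

\[
\big[\, \mathrm{Part}(n, O) \,\wedge\, \mathrm{Part}(O, I) \,\wedge\, \forall x\,\forall y\,\forall z\,\big(\mathrm{Part}(x,y) \wedge \mathrm{Part}(y,z) \rightarrow \mathrm{Part}(x,z)\big) \,\big] \;\rightarrow\; \mathrm{Part}(n, I)
\]

Given the transitivity of parthood, the notebook that is part of Otto’s mind comes out as part of Inga’s mind too, and losing the notebook would be losing part of Inga’s mind.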

 
I don’t think it sounds right either. Can a principled reason against it be given? I think something along the following lines needs to be sorted out. Part of what matters about mental states is whose states they are. Internalist brain-lubbers have a straightforward way of sorting that out: one per customer. I’m not sure how the externalists propose to cope with this concern.

6 Responses to “Mind Spill in Aisle Nine”

  1. Ben Young says:

    Not so sure about the internalist having such a straightforward way of sorting out the problem. Is there a principled reason for assuming that there can be but one mind per brain/head/person (choose your poison)?

    Perhaps you have been too influenced by the Pixies.

    Your head will collapse
    If there’s nothing in it
    And you’ll ask yourself

    Where is my mind?

  2. Pete Mandik says:

    Holy crap, Ben. Thanks for pointing out the argumentum ad pixium. I totally love it. Regarding principles, you’re just going to have to read my paper linked above.

  3. Josh Weisberg says:

    Had to get in on this Pixies Jam.

    Pete–

    I’m not sure I get your rejection of the systems reply. Dennett likes to make the point in terms of virtual machines “instantiated” in the running software of another machine. You can have several virtual machines all running on the same platform, all dependent on the same hardware, etc. Is this the bad kind of supervenience? Does it violate fine-grained supervenience? I’m not sure I get your counterargument.

    Also, what is it about brains, beyond their functionally-specifiable properties, that prevents the weirdness? If it’s just functional properties, then I don’t see why replacement and extension scenarios don’t follow. If it’s not functional properties, what properties are they?

    On uber and unter minds: if we discovered that your neural connections were being achieved by little sentient gremlins–that they were using little oil cans of neurotransmitter to allow your neurons to communicate–would that somehow challenge the idea that you have a mind? Or am I totally missing your point? (Perhaps my gremlins are slow today!)

  4. Eric Thomson says:

    I can be a vehicle internalist but content externalist, no? E.g., the only states that carry my thoughts are my brain states, but what fixes the contents of those brain states are external states. I assume this is aimed at those who think vehicle externalism is viable.

    On the other hand, Josh makes good points.

  5. Pete Mandik says:

    Heya Josh,

    My take on virtual machines is that “virtual” means “not really real”, so you can have a bazillion of ‘em without violating fine-grained supervenience. I think that, for Dennett, the existence of a virtual machine is just stance-dependent, so there’s nothing *really* there besides the bottom physical level. For others, though, virtual machines are just as much real properties of the system as its physical properties, and, according to me, that violates supervenience, since you get differences without physical differences.

    Re brains, I’m happy to say that all of their properties are functional properties, but my view of functionalism equates functional and physical properties. So this is kind of like Lewis-style functionalism that’s consistent with type identity theory. What I don’t like is heavy-duty multiple-realizability-style functionalism, which says that since functional properties are multiply realized by realizers that have nothing physical in common, functional properties are non-physical. *Functionally specifiable* does not entail *multiply realizable*.

    Re gremlins, if you discovered that my mind supervened on the mereological sum of a bunch of gremlin minds, then I would thereby be proven wrong about all this fine-grained supervenience stuff. Now you know why I refuse all your requests to do brain surgery on me!

  6. Pete Mandik says:

    Yo Eric,

    Right, the current post really is only targeting vehicle externalism. That’s not to say that I love content externalism (which I hate for Unicorn-ish type reasons), but nothing here really speaks against it.