I’m going to side with Fodor a bit in the following remarks about Andy Clark’s response to Fodor’s LRB review of Supersizing the Mind.
There’s a worry of Fodor’s, or kind of like a worry of Fodor’s, that seems to me insufficiently addressed by Clark. To put it in a very cute and short way, the worry is that too much attention is given by the externalists to the “where” in “where is my mind?” and insufficient attention is given to the “my” in “where is my mind?”.
To spell this out a bit more, let’s start with the role of functionalism/multiple realizability in the externalists’ arguments.
Clark runs a quick little version of that old functionalist gem, the silicon chip replacement thought experiment. Clark writes:
Diva can now divide just as before, only some small part of the work is distributed across the brain and the silicon circuit: a genuinely mental process (division) is supported by a hybrid bio-technological system. That alone, if you accept it, establishes the key principle of Supersizing the Mind. It is that non-biological resources, if hooked appropriately into processes running in the human brain, can form parts of larger circuits that count as genuinely cognitive in their own right.
What Clark is here calling the key principle looks like functionalist multiple realizability to me. From there, Clark builds up to the iPhone etc. playing the same functional roles that brain circuits do. That’s one way to start getting a mind to supervene on more than a brain. But there’s a much older way to do it, a way that predates 1990s-style mind extension.
Consider the functionalists’ “Systems Reply” to Searle’s Chinese Room: The Chinese-understanding mind supervenes on a larger system of which Searle is a proper part and of which other parts include the remaining contents of the room. But on that story, presumably, Searle’s monolingual English-understanding mind just supervenes on Searle’s brain. A mind has leaked out into the room; it just happens not to be Searle’s.
Here I think worries can be raised about violations of physicalist supervenience, especially a version I call “fine-grained supervenience,” which I won’t spell out much here but have explored in my paper, “Supervenience and Neuroscience”: [link]. The Chinese-understanding mind has parts whose supervenience bases overlap with supervenience bases of Searle’s mind. Things get even weirder when we add the extended mind thesis and let Searle’s mind leak out into the whole room. Now the room-system as a whole serves as a supervenience base for two distinct minds. That looks to violate a principle of “no mental differences without physical differences”. It also raises very worrying questions of how to tell whose mind is whose. Arguably, all we have to go on, being neither Searle nor the Chinese AI, is the physical stuff, right?
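The principle in play can be put schematically. Here’s a toy formalization in Lean (all names are mine, purely for illustration; nothing hangs on the particular types):

```lean
-- A toy statement of "no mental differences without physical differences":
-- physical twins must be mental twins. All names here are illustrative.
variable {System PhysState MentState : Type}

-- The mental supervenes on the physical when sameness of total physical
-- state guarantees sameness of total mental state.
def supervenes (phys : System → PhysState) (ment : System → MentState) : Prop :=
  ∀ x y : System, phys x = phys y → ment x = ment y
```

The worry, so framed, is that the room-system gives us a single physical base paired with two distinct minds, which no single assignment of mental states to physical states can deliver.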
So part of what I take to be worrying Fodor, or should count among his worries, is the question of how to count minds if they start leaking out all over the place.
Fodor writes:

[T]ry this vignette: Inga asks Otto where the museum is; Otto consults his notebook and tells her. The notebook is thus part of an ‘external circuit’ that is part of Otto’s mind; and Otto’s mind (including the notebook) is part of an external circuit that is part of Inga’s mind. Now ‘part of’ is transitive: if A is part of B, and B is part of C, then A is part of C. So it looks as though the notebook that’s part of Otto’s mind is also part of Inga’s. So it looks as though if Otto loses his notebook, Inga loses part of her mind. Could that be literally true? Somehow, I don’t think it sounds right.
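Fodor’s argument here is just an appeal to the transitivity of parthood. A toy Lean rendering (names mine, purely illustrative) makes the shape of the inference plain:

```lean
-- A toy rendering of Fodor's transitivity argument (names are illustrative).
variable {Thing : Type} (partOf : Thing → Thing → Prop)

-- Fodor's premise: 'part of' is transitive.
variable (partOf_trans : ∀ a b c : Thing, partOf a b → partOf b c → partOf a c)

variable (notebook otto_mind inga_mind : Thing)

-- If the notebook is part of Otto's mind, and Otto's mind (notebook included)
-- is part of Inga's mind, then the notebook is part of Inga's mind.
example (h1 : partOf notebook otto_mind) (h2 : partOf otto_mind inga_mind) :
    partOf notebook inga_mind :=
  partOf_trans _ _ _ h1 h2
```

So an externalist who wants to resist the conclusion has to deny one of the parthood premises, or deny that ‘part of’ is transitive in the relevant sense.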
I don’t think it sounds right either. Can a principled reason against it be given? I think something along the following lines needs to be sorted out. Part of what matters about mental states is whose mental states they are. Internalist brain-lubbers have a straightforward way of sorting that out: one per customer. I’m not sure how the externalists propose to cope with this concern.