Analytic Functionalism and Evolutionary Connectionism

(I leave in a few hours to go see Jerry Fodor give a talk at the CUNY Graduate Center. I thought it appropriate, then, to post something on how awesome evolution and neural networks are.)

What makes analytical functionalists functionalists is their belief that what makes something a mental state is the role it plays in a complex economy of causal interactions. What makes analytical functionalists analytical is their belief that which roles are essential is to be discovered by consulting common-sense knowledge about mental states.

There are three serious, related problems that arise for analytical functionalism. The first problem is that analytical functionalism appears to be committed to the existence of analytic truths, and various philosophers inspired by Quine have been skeptical of such truths. As Prinz (20**) succinctly sums up this Quinean skepticism, the objection is that “[r]oughly, definitions of analyticity are either circular because they invoke semantic concepts that presuppose analyticity, or they are epistemically implausible, because they presuppose a notion of unrevisability that cannot be reconciled with the holistic nature of confirmation” (p. 92). There are two main ways in which analytic functionalism seems saddled with belief in analytic truths. The first concerns the nature of psychological state types such as beliefs and desires: analytical functionalism is committed to there being analytic truths concerning the necessary and sufficient conditions for being a belief. The second concerns the meaning of mental representations. The most natural theory of meaning for the analytic functionalist to adopt is that what makes one’s beliefs about, say, cows have the meaning they do is the causal relations they bear to all other belief states. However, it is likely that no two people have all the same beliefs about cows. Thus, on pain of asserting that no one means the same thing when they think about cows, the functionalist cannot allow that every belief one has about cows affects the meaning of one’s cow thoughts. In order to allow that people with divergent beliefs about cows can both share the concept of a cow, that is, both think about the same things when they think about cows, the analytic functionalist seems forced to draw a distinction between analytic and synthetic beliefs, e.g., a distinction between beliefs about cows that are constitutive of cow concepts and beliefs that are not. But if Quinean skepticism about the analytic/synthetic distinction is correct, no such distinction is forthcoming.
The second problem arises from worries about how minds are implemented in brains. Many so-called connectionists may be seen to agree with analytical functionalists that mental states are defined in terms of networks. However, many connectionists may object that when one looks to neural network implementations of cognitive functions, it is not clear that the sets of nodes and relations postulated by common sense psychology will map onto the nodes and relations postulated by a connectionist architecture (see, e.g., Ramsey et al., 1991). The question arises of whether folk-psychological states will smoothly reduce to brain states or be eliminated in favor of them. (I will not discuss further the third option that folk-psychological states concern a domain autonomous from brain states.)
A third problem arises from worries about the evolution of cognition. If a mind just is whatever the collection of folk-psychological platitudes is true of, then there seem not to be any simple minds, for a so-called simple mind would be something of which the platitudes were only partially true, in the sense that only some proper subset of them were true of it. However, a very plausible proposal is that our minds evolved from simpler minds. It counts against a theory that it rules out a priori the existence of minds simpler than ours, for it leaves utterly mysterious what the evolutionary forebears of our minds were. This third problem is especially pertinent to artificial life researchers.
One promising solution to these three problems involves appreciating a certain view concerning how information-bearing or representational states are implemented in neural networks and how similarities between states in distinct networks may be measured. Models of neural networks frequently involve three kinds of interconnected neurons: input neurons, output neurons, and neurons intermediate between inputs and outputs, sometimes referred to as “interneurons” or “hidden neurons”. These three kinds of neurons comprise three “layers” of a network: the input layer, the hidden layer, and the output layer. Each neuron can be, at any given time, in one of several states of activation. The state of activation of a given neuron is determined in part by the states of activation of the neurons connected to it. Connections between neurons may have varying weights, which determine how much the activation of one neuron can influence another. Each neuron has a transition function that determines how its state depends on the states of its neighbors; for example, the transition function may be a linear function of the weighted sum of the activations of neighboring neurons. Learning in neural networks is typically modeled by procedures for changing the connection weights. States of a network may be modeled by state spaces wherein, for example, each dimension of the space corresponds to the possible values of a single hidden neuron. Each point in that space is specified by an ordered n-tuple or vector. A network’s activation vector in response to an input may be regarded as its representation of that input. A state of hidden-unit activation for a three-unit hidden layer is a point in a three-dimensional vector space. A cluster of points may be regarded as a concept (Churchland 19**).
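
To make the picture concrete, here is a minimal sketch of the kind of three-layer network just described, written in Python with NumPy. It is only an illustration, not code from any of the works cited: the layer sizes, the random weights, and the choice of a logistic transition function are all assumptions made for the example. The point is simply that the hidden-layer activation vector the network produces in response to an input is a point in a three-dimensional activation space.

import numpy as np

rng = np.random.default_rng(0)

# Three "layers" of neurons: input, hidden, and output.
n_input, n_hidden, n_output = 4, 3, 2
W_ih = rng.normal(size=(n_hidden, n_input))   # input-to-hidden connection weights
W_ho = rng.normal(size=(n_output, n_hidden))  # hidden-to-output connection weights

def transition(x):
    # Logistic transition function (an illustrative assumption): squashes a
    # weighted sum of neighboring activations into the interval (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def forward(stimulus):
    # Propagate an input pattern through the network. The hidden-layer
    # activation vector is a point in a three-dimensional activation space
    # and serves as the network's representation of the stimulus.
    hidden = transition(W_ih @ stimulus)
    output = transition(W_ho @ hidden)
    return hidden, output

stimulus = np.array([1.0, 0.0, 0.5, 0.2])
hidden_vector, _ = forward(stimulus)
print(hidden_vector)  # a single point in three-dimensional hidden-unit activation space

Running the network on many stimuli yields many such points, and clusters of those points are the sort of thing Churchland proposes to treat as concepts.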
Laakso and Cottrell (1998, 2000) propose a method whereby representations in distinct networks may be quantified with respect to their similarity. Such a similarity measure may apply even in cases where the networks in question differ with respect to their numbers of hidden units and thus the number of dimensions of their respective vector spaces. In brief, the technique involves first assessing the distances between the various vectors within a single network and second measuring the correlations between the relative distances between points in one network and the relative distances between points in another. Points in distinct networks are highly similar if their relative distances are highly correlated.
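
The gist of the measure can be sketched as follows. This is a rough Python/NumPy illustration of the idea rather than Laakso and Cottrell's own procedure, and the use of Euclidean distance and Pearson correlation is an assumption made for the example. For a common set of stimuli, one computes the distances between each network's hidden-state vectors and then correlates the two networks' distance profiles; because only within-network distances are compared across networks, the two hidden-unit spaces may have different numbers of dimensions.

import numpy as np
from itertools import combinations

def pairwise_distances(reps):
    # Euclidean distances between every pair of one network's hidden-state
    # vectors (its representations of a shared set of stimuli).
    return np.array([np.linalg.norm(reps[i] - reps[j])
                     for i, j in combinations(range(len(reps)), 2)])

def representational_similarity(reps_a, reps_b):
    # Correlation between the two networks' relative-distance profiles;
    # values near +1 indicate highly similar representations.
    return np.corrcoef(pairwise_distances(reps_a), pairwise_distances(reps_b))[0, 1]

# Hypothetical hidden-layer responses of two networks to the same five stimuli:
# network A has 3 hidden units, network B has 5, so their vector spaces differ
# in dimensionality, yet the similarity measure is still defined.
rng = np.random.default_rng(1)
reps_a = rng.random((5, 3))
reps_b = rng.random((5, 5))
print(representational_similarity(reps_a, reps_b))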
Regarding the worries related to the analytic/synthetic distinction, the Laakso and Cottrell technique allows one to bypass attributions of literally identical representations to distinct individuals and make do instead with objective measures of degrees of similarity between the representations of different individuals. Thus if I believe that a cow once ate my brother’s hat and you have no such belief, we may nonetheless have measurably similar cow concepts. This is no less true of our psychological concepts, such as our concepts of belief and desire. The so-called common-sense platitudes of folk psychology so important to analytic functionalism may very well diverge from folk to folk, and the best we can say is that each person’s divergent beliefs about beliefs may be similar. And similarity measures are not restricted to the concepts that constitute various folk theories; we may additionally make meaningful comparisons between various folk theories and various scientific theories. This last maneuver allows us to retain one of the key insights of analytic functionalism mentioned earlier: that we are in need of some kind of answer to the question “How do you know that your theory is a theory of belief?” The answer will be along the lines of “Because what I’m talking about is similar to beliefs.”
Regarding the question of simple minds, if there are no analytic truths, then there is no a priori basis for drawing a boundary between the systems that are genuine minds and those that are not. Similarity measurements between simple minds and human minds would form the basis for a (mind-body?) continuum along which to position various natural and artificial instances. How useful will the study of systems far away on the continuum be for understanding human minds? We cannot know the answer to such a question a priori.

Jombie On My Mind

Fig. 1. What are minds such that complex ones come from somewhere instead of nowhere? Photo by Pete Mandik.

References

Churchland, P. (19**)

Laakso, Aarre and Cottrell, Garrison W. (1998) How can I know what you think?: Assessing representational similarity in neural systems. In Proceedings of the Twentieth Annual Cognitive Science Conference. Mahwah, NJ: Lawrence Erlbaum.
Laakso, Aarre and Cottrell, Garrison W. (2000) Content and cluster analysis: Assessing representational similarity in neural systems. Philosophical Psychology, 13(1):47-76.
Prinz, J. (****) Empiricism and state-space semantics. In Keeley (Ed.), Paul Churchland.

Ramsey, W., Stich, S. & Garon, J. (1991) Connectionism, eliminativism, and the future of folk psychology. In W. Ramsey, S. Stich & D. Rumelhart (Eds.), Philosophy and Connectionist Theory. Hillsdale, NJ: Lawrence Erlbaum.

11 Responses to “Analytic Functionalism and Evolutionary Connectionism”

  1. Tad says:

    Hey Pete - I read a version of Fodor’s paper linked by one of the commentators on my wips. Wasn’t very impressed. Ask whether ‘digestive system’ is a natural kind about which we can amass projectible information, and whether the counterfactual ‘if ambient flying black dots weren’t nutritious, frog digestive systems wouldn’t be connected to a mechanism for catching and ingesting them’ is false.

    Also, the peacock tail is actually more interesting than Fodor points out. There actually is a mind that selects for longer tails qua longer tails: not God’s mind, but … the peahen mind! Sexual selection is selection by a mind for a property.

    As for your terrific post - here’s a question. This similarity metric you speak of - doesn’t it presuppose a way of assigning content to components? E.g., the old cluster analyses of hidden unit activation referred to by Churchland and others, like Sejnowski’s analysis of his Nettalk model, presupposed interpretations of the dimensions of the similarity space. Is this the case with these newer statistical methods? If it is, then how might it evade Fodor’s complaints in _Concepts_ that similarity of content presupposes identity of content? That is, for hidden unit activations to occupy similar regions of activation space, don’t they have to be locatable in the *same* space, or, in other words, a space with the *same* dimensions? But if the dimensions are defined semantically, and they have to be identical, then similarity of content presupposes some kind of content identity, as Fodor claims. No?

  2. Pete Mandik says:

    Hi Tad,

    I’m seeing your comment only after having talked to Jerry, but interestingly, the main question I asked him is very similar to what you suggest. Great minds, eh? Anyway, my question was something along the lines of:

    “In your analogies concerning historical explanations, it seems like what’s doing a lot of the work in keeping these from being covering law explanations is that the kinds mentioned, like being a battle, aren’t natural kinds. Selection aside, is perhaps what’s going on with the biological examples a failure of phenotypic traits to be natural kinds?”

    His response was something along the lines of saying that’s a very good question and that he often asks biologists just how much they would miss evolution if adaptationism turned out to be false.

    I pursued my line further with him at the bar afterwards and was pushing him on whether he thought there were any laws that related biological kinds as such. His responses were something along the lines of speculating that ultimately, only psychology would be ineliminably intensional.

    Regarding your peacock remark, a similar line was pursued at the Q&A by David Pereplyotchik (one of the NC/DC guitarists). Jerry granted the point that in lots of cases selection-for is mediated by intentional systems (David’s example involved ants seeing polar bears) but said that there weren’t enough of them to underwrite all of evolution.

    Thanks for your remarks about my post. I think that you are right that there’s a semantic view behind all of this. Re identity and similarity, here are two different responses: 1) Why can’t it be similarity all the way down? 2) Supposing it can’t be similarity all the way down and there needs to be identity of at least some, if not all, of the dimensions, why think those requisite identities can’t be found?

  3. Eric Steinhart says:

    Hah! I have found your vectors. Your vectors are not safe.

  4. Tad says:

    Pete -

    Thanks for the report on the talk. I wish I could have been there. ‘Twould have been nice to talk philosophy over a brewski with Fodor.

    As for the activation space semantics stuff: I worry that, if it’s similarity all the way down, then at some point content overlap will be so diluted that the worry about whether we’re thinking about the same things will re-emerge. Suppose content similarity is about occupying nearby regions of activation space, but each network’s space is defined by slightly different dimensions - so they’re not actually the same space. But then you’d have to define the similarity b/w dimensions/spaces in terms of some kind of meta-space, and so on, until it would get hard to say in what the original similarity consisted.

    I think #2 is a better way to go, but what makes some semantic dimensions immune to the kinds of worries that got you into similarity in the first place? Sounds like you’d have to buy into some neurocomputational version of the analytic/synthetic distinction: the space in which similarity is measured would have to be defined by semantic dimensions with respect to which all minds capable of thinking similar thoughts are identical, no matter what beliefs they token based on individual learning. What - other than analytic truths - could support such a semantic core?

  5. Pete Mandik says:

    Tad, I wish I had more to say on this, but for now I don’t have much more than the confession that option 1 strikes me as the more appealing choice. I’m not super familiar with all of the relevant moves here, so I’m not seeing exactly what the big problems are supposed to be for option 1.

    I wonder if the following is relevant. My understanding of what Churchland does with this stuff most recently is not to give a semantic interpretation to the dimensions initially, but just to describe various mappings between two spaces. The single best mapping (defined geometrically) simultaneously determines the interpretation of the dimensions and the points in the space.

  6. Tad says:

    Yeah -

    I think you’re onto something. I may be misremembering whether the dimensions that define the spaces need to be interpreted semantically, to begin with. I think, in the earlier models and analyses, they’re just the activity of different hidden units, uninterpreted. Then, as you say, when interesting geometric similarities emerge, semantic interpretations are proposed.

    So my worries are off base when it comes to computational models. It would be interesting to see whether we can start with similar semantic innocence when defining similarity spaces in actual brains. The brain doesn’t come neatly divided into anatomically distinct layers. What counts as a layer or a population often depends on what cells do, and that might require a basic semantic interpretation. A lot depends, I suspect, on the degree to which different brains are anatomically identical (if you want to use the sorts of methods that work on neural networks - which are ‘anatomically identical’).

  7. Pete Mandik says:

    Tad,

    I think what you suggest in your final paragraph is pretty darn suggestive. I wonder if my paper “Evolving Artificial Minds and Brains” is best interpreted as making that sort of point: that carving up the brain into states itself depends on a certain semantic interpretation.

    I wonder, too, if this can be tied into the Bechtel-Mundale line on the psychological independence of the individuation of neural kinds.

    Hmmmm…..

  8. Tad says:

    Yeah - I was thinking about Jen & Bill too, as I wrote that.

  9. Tad says:

    BTW, is there a link to your paper?

    Also - all this puts me in mind of something Dennett says in an early piece defending the intentional stance. In contrasting his view with the language of thought hypothesis he says something like: brain facts can’t settle interpretation b/c determining which mentalese sentence is tokened presupposes taking the intentional stance toward the whole organism’s actual and counterfactual behavior. I can look up the quotation if you want, but it occurs to me, for the first time right now, that this has some affinity with the Bechtel-Mundale line.

  10. Pete Mandik says:

    Hi Tad,

    Yeah, if you dig up the Dennett quote, that would be cool.

    Here’s my link:

    http://www.petemandik.com/philosophy/papers/emb.html

  11. Tad says:

    Pete - Here you go:

    “It is not that we attribute (or should attribute) beliefs and desires only to things in which we find internal representations, but rather that when we discover some object for which the intentional strategy works, we endeavor to interpret some of its internal states or processes as internal representations. What makes some internal feature of a thing a representation could only be its role in regulating the behavior of an intentional system.” (’True Believers’ in _Intentional Stance_, 32 top)

    “…whether we interpret a bit of neural structure to be endowed with a particular belief content hinges on our having granted that the neural system … has met the standards of rationality for being an intentional system.” (’Intentional Systems’ in _Brainstorms_, 105)

    The latter quotation talks about the difference b/w beliefs and mere “assent-inducers”.

    TZ.