PMS WIPS 001 - Tad Zawidzki - The Function of Folk Psychology: Mind Reading or Mind Shaping?

“The Function of Folk Psychology: Mind Reading or Mind Shaping?” by Tad Zawidzki, George Washington University.

ABSTRACT: I argue for two claims. First, I argue against the consensus view that accurate behavioral prediction based on accurate representation of cognitive states, i.e. mind reading, is the sustaining function of mental state ascription. This practice cannot have been selected in evolution, and cannot persist, in virtue of its predictive utility, because there are principled reasons why it is inadequate as a tool for behavioral prediction. Second, I give reasons that favor an alternative account of the sustaining function of mental state ascription. I argue that it serves a mind shaping function. Roughly, mental state ascription enables human beings to set up regulative ideals that function to mold behavior so as to make it easier to coordinate with.

[Link to full text of article]
[Link to further info on PMS WIPS]

41 Responses to “PMS WIPS 001 - Tad Zawidzki - The Function of Folk Psychology: Mind Reading or Mind Shaping?”


  2. Mason Cash says:

    Excellent paper, Tad. I have been thinking along similar lines myself, but had not thought to put it in terms of this contrast between different functions of mindreading practices.

    I’d like to suggest an important addition to the account here, by building on your analogy (p. 8) between the coordination problems involved in driving and in other human interactions: There are situations where people need to diverge from the typical driving norms, such as when the driver realizes rather late that they need to move from the left lane of a three lane road to turn right, cutting across two lanes of moving traffic.

    In such situations we have norms in addition to those you mention regulating how drivers maneuver their vehicles: norms regulating the signaling of intentions, which make it easy for other drivers to predict (and coordinate with) such otherwise unpredictable behavior. The driver should use their indicator signal to make their intention to move to the right clear to others into whose path they will move. (Of course, there are many places where such signaling seems to be avoided, perhaps as a sign of weakness, but that’s another matter. When I was taught to drive, I was instructed to be as redundantly predictable as possible, just in case.)

    We have a whole network of such signaling practices, whose norms regulate how we can and should advertise our intentions to others, and which regulate how one should interpret such advertisements. On the road, we have norms regulating the appropriate use of flashing headlights, the horn, waving someone on, etc.

    My point is that in a vast number of cases we don’t need to figure out what our fellow community members are likely to do. They tell us. And when they don’t explicitly tell us, they do and say things that make their commitments to certain goals and courses of action obvious to those who observe such actions and utterances, trusting that our shared familiarity with the norms of the practice of giving and asking for reasons for actions will make their intentions transparent.

    This is especially so when, as on the road, there is a need to have others respond in particular ways to one’s own actions (such as not crashing into you when you attempt to merge right). We have developed normative practices involving the production and interpretation of all kinds of signals, including linguistic ones, to solve such coordination problems.

    The role of language, as well as of other signaling actions that function to help coordinate interactions between members of a conforming community, shouldn’t be underestimated.

    The norms relating utterances, intentional states and actions are known to all socialized members of a linguistic community. Mutual familiarity with these norms helps people make their goals and beliefs apparent to others, and helps interpreters make sense of others’ actions.

    I’m sure you’re aware of all this. I say this as a recommendation, not a criticism. Adding a brief mention of this linguistic dimension of normative mindreading practices to the account you present would, I think, further reinforce the normative, rather than predictive, solution to the coordination problems you describe here.

  3. Tad says:

    Mason -

    Good to hear from you! Where are you now?

    Thanks for the very stimulating comments. I have read about the importance of signalling norms in solving coordination problems - I believe Morton talks about this, and Sterelny, and I agree wholeheartedly with the importance of this component of mindshaping.

    In fact, if you look at mental state ascription, it’s probably most reliable when interpreting communicative acts. Fodor’s favorite examples always involve promises to meet at certain dates and times, etc. My suspicion is that this is no accident. We’re shaped from a very early age to make ourselves easily interpretable to others, and, as you eloquently put it, “the norms relating utterances, intentional states and actions are known to all socialized members of a linguistic community.”

    I guess part of the reason I’m attracted to the mindshaping hypothesis is that mental state ascription only seems efficient and reliable in contexts where there’s been a lot of mindshaping. Linguistic communication is the paradigm here. (Have you ever tried mental state ascription watching a video of human beings with the volume muted? I might be near mindblind or something, but I have no idea how to go about ascribing mental states in such contexts, unless the agents are engaged in some highly stereotyped cultural activity, like a sport).

    So I’d say much if not most mindshaping is for the purposes of making ourselves easily interpretable in communicative interactions.

    Here’s a worry I have about this whole line though. Someone might argue that molding oneself and others to respect certain rational/intentional/communicative norms, in order to be more easily interpretable, presupposes a descriptive/predictive/explanatory use of mental state ascription. In other words, you can’t have mindshaping without sophisticated mindreading first. I don’t know if that’s a fatal flaw in my proposal or not. I guess my initial response is that we do start with sophisticated mindreading, but it doesn’t involve mental state ascription, at least not propositional attitude ascription. It involves a kind of sophisticated socio-cognitive tracking - the kind that Gallagher and Trevarthen call ‘primary intersubjectivity’. That’s enough to get mindshaping off the ground, I want to say.

    Another worry: perhaps the difficulties I identify with theory theory are overstated. Holism is a problem for any theory, but somehow scientists can select plausible hypotheses from the infinitely many compatible with the data. Whatever mechanism explains such non-demonstrative inference in science might explain how we’re able to make reliably accurate mental state ascriptions for predictive/explanatory purposes in everyday life.

    Finally: what are cognitive neuroscientists talking about when they experimentally identify false belief ascription in children as young as a year and a half? On my view, all belief ascription aims at shaping behavior in some way, not at detecting behaviorally relevant neural causes. But it’s implausible that such young children are doing this. So, assuming the recent evidence that prelinguistic infants ascribe false beliefs is good, is it a problem for my view, or do developmental cognitive neuroscientists mean something sufficiently ‘low-level’ by false belief ascription that it could count as part of primary intersubjectivity?

    Any thoughts?

    Thanks again for your comments.

  4. Robert Thompson says:

    Hi Tad,

    I’m a big fan of this venue and of your work, so I’d like to chime in early, after a quick reading of your paper. I’ll offer more detailed arguments later.

    First, as I’m sure you know, I dispute the timeline you offer as to the onset of mindreading capacities. These are up for debate.

    Second, and more importantly, I wonder about your focus on prediction of behavior as the major goal of mindreading. We have had debates about this since the late 90s, and I think that prediction is the least powerful aspect of mindreading. Explanation and understanding of behavior work much better, and I think that mindreading yields much richer and more plausible stories about the etiology of an action than it does predictions of that behavior.

    Third, related to my second point, I think the reason why the folk psychological/ToM/mindreading account of explanation and understanding of behavior works so much better than prediction has little to do with the focus of your paper, i.e. the ascription of propositional attitudes. If anything, as you mention, Gauker’s point - what are, exactly, these FP laws? - explains some of these shortcomings more directly. It will be much easier to make sense of some behavior than to predict it, and this need have no connection to problems with PA ascription. Now, if the real issue is with time and the cognitive resources involved in prediction, as opposed to explanation, then this is a debate worth having. There are all sorts of problems that one could find in prediction using FP that one might not face for other aspects of FP.

    Fourth, if you’re going to bring up evolutionary attacks on FP, one would need to rule out other uses for FP. Given my comments above, I think the benefits of FP in explaining behavior give us some reason to think that an attack on FP needs to take these other uses into account.

    Finally, I think the “shaping” aspect of this project could be clearer. If you’re coming up with the most plausible account of why PA ascription is around, I think it is true that ascribing PAs helps with coordination problems, but it might have many other benefits that could make its selection more likely. A bit more about why this helps solve coordination problems (and not other problems) would help.

  5. Tad says:

    Robert -

    Thanks for the great comments. I agree that a lot of the points you raise need to be addressed, so let me make a first stab.

    I don’t see how understanding or explanation of behavior without prediction is supposed to make a difference to reproductive success. So I’m not quite sure how understanding and explanation alone can explain why PA ascription (you’re quite right that this is my focus - which could be clearer, since in the paper I say ‘mental state ascription’, and surely the ascription of pain is more about mind reading) was selected for in evolution. Perhaps you can explain why it persists in certain cultural settings because of the explanations it provides. But if it has no effect on solving coordination problems, its status should be little more than that of a fairy tale. The only two ways I can think of that it could help with coordination problems are prediction and mind shaping, so given the problems with the former, I opt for the latter. But I’m open to other suggestions. I think however that the ball is in the other court.

    With regard to satisfying explanations and understanding, perhaps we find PA ascription so satisfying because of its rationalizing function, which ultimately, I think, can be explained in terms of mind shaping. The idea is that the practice of giving reasons for behavior, often in retrospect, maintains our status as reliable, well-ordered members of a community. Behavior that is weird or anomalous bestows on its producer an obligation to explain how it fits into the rational pattern encoded in FP; otherwise they are sanctioned. This practice has beneficial side-effects for coordination. Because we’re all constantly trying to make sure we don’t break too many FP rules - don’t act too weird - and coming up with rationalizations on the fly, in the backs of our minds, to make sure we can defend behaviors that might appear anomalous, we’re making our behavior more predictable to our fellows, and easier to coordinate with.

    I think a good point you make is that I’m very hand-wavy and vague about how PA ascription feeds into mechanisms of mind shaping. I guess my idea is that, through the sorts of mechanisms Mameli, Bruner, McGeer, and Dennett identify - social expectancies, intentional interpretation of random vocalizations, queries for reasons from a young age, and narratives - we socialize children into the practice of trying to rationalize their behavior - and behaving in easily rationalizable ways. Then, as adults, we’re constantly thinking about how our behaviors and those of others might fit into patterns of rationalization in terms of PAs. So a particular PA ascription need not itself have a mind shaping effect; rather it’s part of a practice of rationalization, that has mind shaping effects over the long run, making each more easily interpretable by others.

    I hope that at least gestures in a direction that might address your last point. I think by far the biggest problem I have is with your first point, namely, the onset of PA ascription in development. I mention this in my earlier reply to Mason. I want to accommodate the data discussed by Saxe at her SPP talk regarding evidence that infants little more than a year old appear to ascribe false beliefs. I just don’t think that the ability to distinguish cases where the interpretee is mistaken about something from those where she isn’t is sufficient for the ability to ascribe belief. But I need to have a better reason for this than just - it has troublesome implications for my theory!

    One thing to note is that I can’t see what 1.5 year olds do with this information! They don’t exactly use it to reason about other behaviors - to help predict them or solve coordination problems. Can they, for example, use it to help concoct clever strategies about how to get their caretakers to satisfy their desires?

    Second, the thing about the False Belief paradigm is that the ascriptions subjects make are almost always wrong! Dolls don’t have any beliefs, so ascribing a false one can’t possibly help in predicting or coordinating with their behavior. And when the interpretee is an experimenter, they usually know where the candy, or toy, or whatever is - they’re just pretending not to know. So again the subject makes an incorrect ascription, that is of no help in prediction or coordination. This just makes the point of my paper. Even if it’s true that we ascribe PAs really early, and automatically, this can’t explain why the practice was selected for and persists, if most of these ascriptions are false, or otherwise useless for prediction and coordination.

    So I suppose what I’d say about evidence of very early onset for ToM is that the brain’s gearing up for all the mindshaping rationalizing it’s going to have to do in the future. I guess what I’d really like to see is real time neural activity in a very young child while they’re solving some social coordination problem or when we know they’re trying to predict what someone will do. Do the same areas light up as in the ToM task? This might be evidence that they’re using ToM to predict behavior, or to successfully coordinate, though I’m not sure whether it would discriminate between mindshaping and mindreading.

    I guess I’m skeptical that this will happen because if one year olds can go from perceiving behavior to correct, reliably predictive PA ascription, then they essentially vindicate logical behaviorism, or at least solve the holism problem - there is a simple mapping between behavior and PA tokening after all. I find this unlikely. That was rather verbose. I hope you’re still reading!

  6. Brad C says:

    I really enjoyed your paper.

    A couple of thoughts that occurred to me:

    I wonder how you would fill out the account of the normative standards you take to govern the attitudes we ascribe. In particular, I wonder how your model would take into account the difference between norms that govern belief and intention ascription and those that govern desire ascriptions. The former two are held to pretty strict rational norms in a way that the latter are not; ambivalence is not as (clearly) irrational as having inconsistent beliefs or intentions.

    It might also be interesting to think about whether and when shaping could be bad for the shapee - even if it makes them more amenable to coordination, prediction, control, etc.

  7. Tad says:

    Brad -

    Thanks for your comments. You raise a good point about desires. I find my proposal least plausible when it comes to desires. It seems much easier to perceive what someone desires, and to use this information to predict their behavior, especially when it comes to immediate desires. And the evidence is that children ascribe desires in successful behavior prediction from a very early age, and that many species of non-human primate are likewise sensitive to conspecifics’ goals.

    Some make a distinction between such ‘low-level’ desires, which take things and actions as their objects (as in ‘I want a banana’ or ‘I want to dance’), and the more explicit, logically articulated desires that take propositions as their objects (as in ‘I desire that I eat the banana’). On some theories of ToM development, the former kind of desire concept is available early, and is non-representational, because words specifying the objects of desires in desire ascriptions are not referentially opaque: ascribers don’t make a distinction between wanting a banana and wanting the yellow fruit on the table, even if there is reason to believe the ascribee doesn’t use those two terms interchangeably.

    In any case, it’s the more sophisticated kinds of desires that are relevant to normative appraisal, since desire ascriptions need their contents to be propositionally articulated in order to construct practical syllogisms. So I suppose there must be rational norms that govern such articulated desires - the norms operative in the practical syllogism. But I’m not sure what these norms are. I guess my inclination is to say that the things philosophers of mind have standardly called laws of FP are actually norms. So here’s the classic law of FP, relating beliefs, desires, and actions, modeled on the practical syllogism: if you want that q, and you believe that if p then q, then, ceteris paribus, you want that p. I guess I’d change that to: ‘… then you ought to want that p.’ But I’ve heard anti-Humeans raise a stink about such views. They say that you have a choice - you can want that p, or you can abandon your want that q. They claim there must be other grounds for justifying desires, and think this shows that the rational evaluation of desires is more substantive than Hume thought: justification for desires depends on more than their role as means to still further desires. I really don’t know what to say about that.
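    The shift from law to norm here can be put schematically - a rough rendering in my own notation, not the paper’s, with D for desire, B for belief, and O for a practical ‘ought’:

```latex
% Classic "law" of FP, modeled on the practical syllogism (descriptive):
% if you want that q, and believe that if p then q, then, ceteris
% paribus, you want that p.
D(q) \,\wedge\, B(p \rightarrow q) \;\Rightarrow\; D(p)
    \quad \text{(ceteris paribus)}

% The normative reformulation suggested above: same antecedent, but the
% consequent is an ought, not a prediction.
D(q) \,\wedge\, B(p \rightarrow q) \;\Rightarrow\; O\bigl(D(p)\bigr)
```

    On the first reading a violation is a counterexample to the law; on the second it is an occasion for sanction or for a demand for reasons, which is what the mindshaping story would lead us to expect.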

    One thing I have noticed with my daughter: she often claims to desire something, and then doesn’t do what it takes to get it, or immediately claims to desire something incompatible with it. At least she did this a lot when she was two and three. And I’d always find myself calling her on such inconsistencies: how can you want both? Maybe that’s one normative constraint on desire.

    The distinction you draw between desire and belief reminds me of a discussion of this in Brandom’s Making It Explicit. He doesn’t draw the same distinction you do, but he does note that, whereas saying you believe that p has the implication that others in your epistemic situation ought to believe it as well, there is no corresponding ‘publicity’ (don’t know if that’s the right word) constraint on desires: you can desire that p, and there is no expectation that others situated similarly to you should desire the same thing. Is that related to the distinction you draw?

    I think shaping is often bad for the shapee, depending on what you mean by ‘bad’. I think part of Mameli’s point is that pernicious gender stereotypes might turn into self-fulfilling prophecies due to mechanisms of shaping, for example. An old friend of mine, who is now a psychiatrist and actually reconnected with me yesterday, after 20 years, because of reading this blog (!), is interested in psychosomatic-type disorders where patients suffer certain conditions primarily because of what they believe about themselves. Such conditions might also be effects of pernicious mindshaping.

    Thanks again for your comments!


  8. Brad C says:

    Thanks for the thoughtful response.

    I am myself drawn to an (anti-Humean) objectivist account of well-being and an agent’s good, and thus think that we can criticize desires for being bad for people in a way that is analogous to the way that we might criticize beliefs for being false. But that does not simply entail the conclusion that such desires or beliefs are irrational. To start to think about that we would need to clarify whether we are rationally criticizing the belief or the believer - but I will leave that and many other issues aside.

    I agree that some sort of instrumental or procedural normative principle applies to desires - roughly it is procedurally irrational to desire X, believe Y is a necessary means to X, and not desire Y at all. That formulation allows for the rejection of the initial desire, and it is pretty weak given the necessary condition stipulation.

    But notice that unfettered pursuit of this sort of rationality (or some slightly stronger version) can lead to what Aristotle would call unwise cleverness; consequently some would resist using the term “rational” in that way, but as long as we know what we are saying I suspect it is not a huge worry - we can simply distinguish procedural and substantive rationality. In any case, that distinction reminds us that shaping people to be more procedurally rational is not always for the best, because, for example, it is not better to be wholeheartedly imprudent than it is to be semi-continent while pursuing the prudent course. In a related vein, I suspect that procedural practical rationality is not as obviously worthy of pursuit as procedural theoretical rationality: unfettered pursuit of procedural rationality is not usually a reliable means to achieving substantive rationality, while unfettered pursuit of coherent or consistent beliefs is usually a reliable means to achieving substantive theoretical rationality (i.e. true beliefs).

    As the distinction between procedural and substantive rationality suggests, I think your cases involving a whim (a short-lived and quickly revised desire) and ambivalence (desiring X and desiring not-X, or something incompatible with X) appeal to normative standards that must be grounded in an account of well-being and/or an agent’s good. If that is right it seems misleading to lump them in with norms of instrumental rationality - even subjectivists about well-being think you need more than that notion of rationality to get an account of an agent’s good.

    This allows me to make my point a bit clearer (I hope). I have two doubts: first, about whether being ambivalent or having whims is irrational in a procedural sense, and, second, about whether desires like these are substantively irrational in the sense that is grounded in an account of well-being / an agent’s good (I am playing fast and loose with the “object of assessment” issue here).

    Wollheim’s discussion (in Chapter 7 of The Thread of Life) of norms of desire ascription has led me to doubt that desires of these sorts are normally irrational in the substantive sense (he has a subjective account of well-being and an awkward penchant for using psychoanalytic notions, but his argument is still worth reading). Doubts of a similar sort are also raised by Williams in Truth and Truthfulness.

    And even if it were better for people to have fewer whims, be instrumentally rational, and be unambivalent, that does not mean that shaping people in the light of these norms is the best, or even a tenable, means to getting them to have those features.

    Hope that is a little clearer. I had to play fast and loose with the theoretical rational analogues too.

  9. Mason Cash says:

    First of all, these italics are driving me crazy and they’re my fault. I tried to italicize a word in my initial comment, and left a malformed “end italics” command in there. Let me try to end it now. Ah… that’s better (I hope). Apologies, everyone.

    Second, I’m at the University of Central Florida now, Tad. (Shaun Gallagher is my colleague, BTW, but I am nowhere near equipped to explain his views for him. I have read some of his stuff on primary intersubjectivity, though, and am rather sympathetic.)

    Now to business.

    In reply to me Tad said:
    “Here’s a worry I have about this whole line though. Someone might argue that molding oneself and others to respect certain rational/intentional/communicative norms, in order to be more easily interpretable, presupposes a descriptive/predictive/explanatory use of mental state ascription. In other words, you can’t have mindshaping without sophisticated mindreading first. I don’t know if that’s a fatal flaw in my proposal or not. I guess my initial response is that we do start with sophisticated mindreading, but it doesn’t involve mental state ascription, at least not propositional attitude ascription. It involves a kind of sophisticated socio-cognitive tracking - the kind that Gallagher and Trevarthen call ‘primary intersubjectivity’. That’s enough to get mindshaping off the ground, I want to say.”

    I agree that this slow development of primary intersubjectivity is the place to focus as the foundation of mindshaping. (I argued a very similar thing at a conference on intersubjectivity and TOM last year.) Communicative practices do depend to some extent on something like explicit intentional state ascription. However, this skill of attributing intentional states to others develops slowly, as our communicative skills develop. And it depends on “sophisticated mindreading” as you say, which itself depends on very primitive mindreading (primary intersubjectivity).

    And as Trevarthen argues, primary intersubjectivity is supplemented at around one year old or so, as infants begin to develop secondary intersubjectivity. Children begin to engage in shared attention with others, where interactions with others can involve shared perception of an object: awareness that I see the ball, and that you see it too. We see the ball together. This shared intentionality gives us a foundation for language (e.g. in naming things, requesting objects, and so on) and thus is also a foundation for collaborative activities, for shared goals, etc. Both primary and secondary intersubjectivity, as I understand it, underlie human intersubjective social and normative practices.

    The point is that these develop slowly, through the process of “mindshaping” as you call it. So while communicative norms probably depend in large part on predictive/explanatory uses of intentional state ascription, these explicit ascriptions (a) are not the primary function of mindreading skills, and (b) they themselves depend on primary intersubjectivity and mindshaping practices.

    I fall back on Brandom here: The ability to explicitly follow a norm (for example, to follow linguistic norms in explicitly ascribing an intentional state as a reason for action) depends upon the ability to simply treat in practice certain performances as appropriate or not appropriate. Explicit norm following skills and practices depend on tacit norm abiding skills and practices.

    And as Gallagher notes, primary intersubjectivity is primary in two senses: (a) in the developmental sense, and (b) in the sense of being a necessary background for sophisticated mindreading skills.

    This also is relevant to your reply to Brad.

    In reply to Robert, Tad says:
    “I want to accomodate the data discussed by Saxe at her SPP talk regarding evidence that infants little more than a year old appear to ascribe false beliefs. I just don’t think that the ability to distinguish cases where the interpretee is mistaken about something, from those where she isn’t is sufficient for ability to ascribe belief. But I need to have a better reason for this than just - it has troublesome implications for my theory!”

    I wasn’t at SPP this last year, but I think I get the idea (does anyone have a reference for this??). There are good reasons for doubting that infants ascribe false beliefs, along the lines of your theory. I’d argue that there is a difference between explicitly ascribing someone a false belief as a means of attempting to explain or predict their actions, and treating someone in practice as making a mistake.

    Brandom’s point seems particularly applicable here. The development of mindreading abilities and the consequent ability to engage in intersubjective normative practices, depends on the ability to tacitly treat in practice certain performances as appropriate or inappropriate. That is what these infants must be doing. They have the bare beginnings of intersubjectivity, and so have the ability to exhibit surprise when someone does something they didn’t expect, but don’t have anything like the ability to explicitly ascribe an intentional state, let alone a false one.

    I’d accept that these infants might treat someone as making a mistake, as doing something wrong or at least something unexpected. But I can’t see that this is a reason for claiming that they explicitly ascribe a false belief. That requires the kind of linguistic and cognitive skills that infants that young haven’t developed. It seems that this comes from too much of a “theory theory” approach. The point of Gallagher’s argument seems to be that it makes far more sense to see these as developing mindreading skills, rather than an explicit theory of intentional states.

    I have a 13-month-old infant at home and (parental pride aside) I can’t see how he could attribute a belief to anyone. He can repeat a few labels and can pick up objects we call by name, but that’s about it. He has the very beginnings of secondary intersubjectivity, at best.

    The best my son could do if I for example, told him to get the ball, and then acted like I wanted his duck, would be to be confused or frustrated, and perhaps adjust his preparedness to respond to my requests. (I’m reluctant to try this experiment, though. The trust we have developed is an important foundation of our intersubjective activities. He depends on it to learn more linguistic practices.)

    I’d say that at best we have established a practice of trying to respond appropriately to one another’s actions (I try harder than he does, I guess). For instance, I cheer him for saying “baw” when asked to get the ball, and for actually crawling to the ball rather than other toys when so asked. He seems to try to follow instructions like that, and gets some pleasure from being cheered and applauded when he does follow them.

    It’s this kind of very basic intersubjective practice you’re highlighting when you describe our mindshaping activities and the sanctioning and rewarding that shapes an infant’s developing abilities and awareness of the normative practice in which they are learning to participate.

    And I’d argue (though I don’t know the details of Saxe’s claims here) that at best the infants have the kinds of tacit expectations of others’ responses such that they could exhibit surprise at unexpected actions. But I sincerely doubt there is evidence to support the claim that they are attempting to explain those actions to themselves by attributing false beliefs.

    I take this also to be a point of agreement with your comment to Robert regarding how we socialize young children into the normative practice of giving and asking for reasons. PA ascription is but one mechanism among many for socializing developing members of our community, and for ensuring that one’s own behavior accords with the practice. To the extent that understanding or explanation plays any role, it’s often –incorrectly, as you point out, Tad– held to be a means of predicting others’ actions. If explanations play any role, it’s probably as a means of prediction (better: of guiding my responses).

    However –and this is the important point– while predicting others’ actions may occasionally be a means of achieving our aims, it depends on a lot of more basic skills and practices, it often isn’t the focus of our attention, and it may occasionally be quite irrelevant, especially for very young infants and their developing mindreading skills.

    I like to use Michael Polanyi’s distinction here, between focal awareness and subsidiary awareness (from his 1958 Personal Knowledge: Towards a Post-Critical Philosophy). I am often aware of what the current interaction demands of me as my next move, because of my mastery of the practice of interacting with others. My focal awareness is on our shared goals, for instance, and the next “obvious” move I should make. If pressed, I can pull out of subsidiary awareness some ascriptions of intentional states to my co-participants (and to myself); ascriptions that underlie my judgments about my most appropriate next move –to justify my action if asked, for instance. But as a skilled practitioner of mindreading practices, explicit attributions of intentional states to explain or predict others’ actions are not part of my focal awareness. I simply find myself making the “obvious” response to my co-participants’ actions.

    And in small children, there are plenty of ways they learn to smoothly interact with others in practices that depend on primary and secondary intersubjectivity, ways that don’t involve either explaining or predicting others’ actions, but rather aim simply at smooth interactions –ones that elicit reinforcement rather than sanctions from our co-participants.

    I also completely agree with your comment to Brad that the ”laws” of our folk psychology are norms. Let me elaborate slightly. Any predictive successes we have depend upon that normative practice, in the same way that I can predict (I’d prefer “tacitly assume” here; it’s much less explicitly entertained) that the driver waving their hand towards me at a four-way stop is going to let me go first, such that it’s safe for me to proceed across their path. Similarly, when someone acts in ways consistent with certain goals and beliefs, they take themselves to have placed themselves under a commitment to courses of action consistent with those goals and beliefs –those that follow inferentially from the goals they know others are licensed to (ought to) ascribe to them. Talking explicitly for the moment: norms regulate what intentional states I should ascribe to others as reasons for their actions, and they know these norms too, and so understand that they should act in accord with the intentional states that one should infer from their reasons (either that, or they should recognize that my asking for an explanation (excuse) is appropriate).

    And on that point, in your reply to Brad you seem to be gesturing towards a distinction between very low-level actions directed at objects and higher-level ascriptions of explicit goals. For instance, when I point at the ball and say “get the ball” to my 13-month-old son, he seems to understand at least that my actions and attention are directed at the ball, and seems to get that I want him to direct his attention and actions at it too. I don’t think he understands “get” yet, but that’s hard to determine. In any case, his ability to respond appropriately does not depend on his ascribing any explicit goals to me. He’s certainly not able to do anything like attempt explanations or predictions of my behavior. I’d speculate that at best he has tacit expectations of my response to his action; expectations that might only be apparent to him on the occasions when I respond in ways he didn’t expect (these elicit either giggles or crying; usually the latter).

    You are right that the more sophisticated explicitly entertained goals are most obviously susceptible to normative appraisal (especially ones that are expressed linguistically). But even the low-level interactions I have with my son are normative. They shape his mind by scaffolding his developing mindreading skills and his slowly growing appreciation for the normative practices into which he is being socialized.

    Thanks again for a provocative paper and your thoughtful responses. Apologies for the verbosity of my responses, but take it as a sign that I think there is a lot of value in the approach you are articulating here—mostly because I believe I share much of it.

  10. Pete Mandik says:

    Testing, one two, three. Testing.

  11. Pete Mandik says:

    Mason, thanks for undoing your italics-html-mojo. Interestingly, that’s what I was just busy doing in my previous comment.

  12. Mason Cash says:

    I figured that, Pete. Again, apologies. (A little knowledge is a dangerous thing; but at least I had the “sorcerer’s apprentice” problem solved, and realized how to turn it off.)

  13. Tad says:

    This is in reply to Brad’s last post. I’ll get to Mason’s later (thanks for the further thought-provoking points).

    I have a much better sense of where you’re coming from, Brad. And I’m broadly sympathetic with your distinction between procedural and substantive rationality. I really have no strong view on where norms governing desires fall. But I should make clear that I’m more interested in the force or purpose of PA ascriptions, not in whether their use for this purpose is justified. You’re probably right that it is very hard to find justifiable normative constraints - whether procedural or substantive - that govern desires. However, I’m interested in the fact that desire ascriptions are often, and I want to say typically, used with normative force: we teach children how they ought to act if they desire something, what they ought to desire, what else they ought to desire given that they desire something, etc. We sanction adults who don’t fit our preconceived notions about which patterns of desire and behavior are kosher, etc. (By sanctioning I don’t mean anything as explicit as beating with sticks - we just pay those we consider irrational less attention, or otherwise subtly submit them to negative consequences). Whether these practices are justified or not is a separate issue, and the one to which, I think, you usefully draw attention. (Thanks for the Wollheim & Williams references, btw).

    With regard to your claim about whimsical or ambivalent desires, I think some distinctions need to be drawn. I suspect that a child’s rapid-fire expression of apparently incompatible desires is not necessarily an expression of whimsy or ambivalence. Children might not realize that two desires are incompatible, for a variety of reasons. They might not appreciate semantic distinctions among the words they use to express the contents of the desires. Or they might not have the practical knowledge necessary to see that two courses of behavior can’t be pursued at the same time, etc. So correcting them is not necessarily sanctioning whimsy or ambivalence. There may be more basic rational-communicative norms at issue here. This is not to deny that whimsy and ambivalence are sanctioned as well.

    Thanks for the great follow-up post!


  14. Tad says:

    In reply to Mason’s wide-ranging, energetic, and thought-provoking post, I am entirely in agreement. I’ve known Mason since grad school, and had no idea we had so much philosophically in common. Some more specific comments.

    The allusion to Brandom’s tacit deontic attitudes is very apropos - it was certainly in the back of my mind when writing this paper. I think a lot of the kind of developmental psychology on which Gallagher and others draw is a nice fit with Brandom’s system.

    I’m not sure about the precise reference to the paper Saxe was citing at the SPP (I don’t think it’s her own research). But Robert made me aware of these results before Saxe’s talk, so he probably knows.

    I really like the way you put the distinction I was trying to get at in my response to Robert: between explicit ascription of false beliefs as a means of prediction and explanation and treating someone in practice as making a mistake. I don’t recall whether such distinctions were controlled for in the experiments alluded to by Saxe. I think they involved brain scans, and controlled for reactions to non-cognitively inappropriate behaviors. So it’s not clear that the distinction to which these infants are sensitive is merely between correct and mistaken behavior. The distinction might be specifically cognitive. I think the infants observe a subject interact with an object, leave the object behind, then observe the object being moved to a new, hidden location, and then observe the original subject returning and knowing where the object is. They show tremendous surprise at this. So it’s not exactly that the subject is making a mistake. It’s more that he’s not making the mistake he should be making given what he should believe. Seems pretty sophisticated, but as you say, it takes more to show that they use such information in reasoning about future actions.

    I also really liked your point about how introspectively implausible it is to assume that explicit ascription of propositional attitudes underlies our interactive abilities. There is prediction going on, but interaction is so smooth, that it must be based on tacit knowledge of the significance of low level cues, and background information about shared goals and obvious moves. I get a lot of this from Gallagher and McGeer, but also Morton’s ‘Folk Psychology is not a Predictive Device’ from 10 years ago. This idea of knowing the obvious moves and shared goals, and assuming others do as well, is, I think, something on which Morton, in particular, focuses.

    Anyways, thanks again for the encouraging and interesting comments. We’ll have to have a drink and hash this out some more some time.

    Best, Tad.

  15. Thanks for an interesting and thought-provoking essay, Tad! I thought I’d pitch in my two cents.

    First, let me say I’m very sympathetic to the idea that mind-shaping is a crucial and much-neglected aspect of mental-state attribution practices (I’m a fan of McGeer, especially, on this), and I’m also very sympathetic to the idea that our predictions of people often proceed by simpler strategies than the philosophical fantasy of belief-desire psychology (Kristen Andrews won me over on this). However, I think you overstate the case somewhat.

    In particular, I don’t see any reason to suppose the “ceteris paribus” nature of folk psychological belief and desire predictions needs to be crippling. The philosophy of science literature on ceteris paribus clauses, of course, shows a consensus that many scientific claims are riddled with implicit ceteris paribus clauses whose complete articulation would be unwieldy or even impossible. Yet those sciences proceed nonetheless. Likewise, although the inference from “he thinks there’s beer in the fridge” to “he’ll offer beer to June if she asks for some” is riddled with implicit ceteris paribus conditions, we make it anyway (unless there’s a salient defeater), and we’re usually right in doing so.

    It seems both that we can attribute the belief in such cases — this is especially obvious in cases where we personally know the belief to be mistaken — and that we can predict behavior successfully on this basis. Now, whether we actually do this (as opposed simply to being able to) seems a matter of the intricacies of cognitive processing — something you don’t really address.

    So: Why not be a pluralist? Mind-shaping and mind-reading both, as current function and as evolutionary basis for selection: Good and hooray!

  16. Anibal says:

    With respect to the second claim: in my judgement, it seems a daunting task to assert, as yet, what the preponderant evolutionary function of the “mind reading” ability is. We don’t know the evolutionary order of appearance of “mind reading” functions, or which is evolutionarily primary: whether they serve understanding others, prediction-coordination, anticipation, or detection of covariances between behaviour and mental actions; or whether they are embedded in a cultural context, subserving socio-cognitive functions, to ascribe meaning to the actions of others, to manipulate… (we do not even have the desirable scientific knowledge yet to interpret these things). Every purported function mentioned above can be seen as the primary function of mind reading – how can that be so?
    We cannot cut out and separate the precise time window in our natural history in which the primate lineage, and especially humans, started to mentalise and left off merely reading behaviours, in a way that would license the claim that the primordial function of theory of mind is to coordinate rather than to understand others. Evolution is about past events, very difficult to recover and prone to “just so story” scenarios; and the current workings and operations of the human mind are so blurred by other systems (language) that the human mind is like a Swiss Army knife, or at least a complex whole that is more than its parts.

    We use a variety of processes when we are engaged in deciphering the mental states of others, which rules out a unique and purely explanatory process (Malle 2004, p. 230), and by no means is mentalising subserved by a single neural mechanism within or between primate species (Stone, Baron-Cohen and Knight 1998). I know of more than three brain areas involved in mentalising: right TPJ (Saxe), MPC (Frith), STS (Perrett), fronto-parietal pathways along with the ACC, the temporo-hippocampal circuit complex, VPA or F5 and, in humans, the mirror neuron system (Rizzolatti and colleagues), and so on.
    Each of these areas has its own intrinsic computational rules in relation to the several functions that conjointly encompass social cognition in general. Finding a unified model that describes how they interact, and secondly how they manage to resolve the issue of what the primordial function of the ability to attribute mental states to others is, in response to selective pressures, is for now ungraspable. If mind reading doesn’t give us an accurate description of the cognitive states of others, and we defend a more intuitionist line à la Trevarthen, with non-meta-representational capacities in which the affective states of people enable them to stay tuned to one another, then we have to ask the reverse question: what is the purpose of legislating, educating, or engineering things? Some might answer: to understand laws, to know things, and to understand systems.

    However, I believe, along with Eric, that we have to embrace a pluralistic vision of social cognition.

  17. Tad says:

    Eric -

    Thanks for your very insightful comments, many of which echo my own worries. Part of the reason I pitch my case in such stark terms is because of a personal preference for such rhetoric, but also because, as you point out, mindshaping has been much neglected as a function of PA ascription, which is almost exclusively taken to aim at mind reading.

    However, I think a case can be made that my point isn’t as overstated as you suggest. I had expressed a similar reservation about the alleged problem with ceteris paribus clauses in a response to Robert, above, I think, pointing to the example of successful scientific explanation. However, on further thought, it seems to me that there are important disanalogies between the case of institutionalized science and folk psychology. Foremost among them is that, in the contexts that provide(d) the most important selection pressures on the latter practice, it is arguable that practitioners do not have the leisure and time to experiment with and identify exceptions to generalizations. Much science, particularly applied experimental science, consists in identifying situations in which broad generalizations do not hold. This takes a lot of time, energy and effort – time, energy, and effort that we usually don’t have when predicting the behavior of, or ascribing PAs to, conspecifics. I can only think of highly sophisticated cultural contexts - legal investigations, professional psychology, fiction writing - where ascribers of PAs have the leisure to hone their ascriptions such that they are predictively useful. But in most circumstances, and surely in most of prehistory, social cognition was done on the fly, and required much quicker and more efficient tracking of conspecific behavior.

    Incidentally, this is an argument that simulation theorists have always run against theory theorists, and I’ve found it persuasive. Hopefully, I’ve given some reasons for this intuition here. I have no doubt that the problems with ceteris paribus clauses can be overcome in contrived, institutionalized contexts, for PA ascription as much as for science. But it’s not in such contexts that the practice of PA ascription initially earned, and indeed continues to earn its keep.

    As for the fact that it seems to work: I grant that near the end of the paper. Predicting that drivers stop at red lights is extremely reliable as well, especially in certain cultural contexts. But what is the reason for this? My hypothesis (which I think is only implicit in the paper) is that PA ascriptions are only reliable in contexts where a lot of mind shaping has already gone on. That’s why the best examples of predictively successful PA ascriptions involve the interpretation of linguistic acts (they’re Fodor’s favorite example). Once someone has been shaped to think in certain ways about linguistic communication, the ascription of thoughts expressed by acts of communication becomes a very reliable instrument of behavior prediction. But this is derivative of mind shaping: PA ascription is no more a descriptive theory than traffic laws are. (More on this in one of my responses to Mason, above).

    This should in principle be testable. I think, for example, substantial, robust deficits in the use of PA ascription to predict behavior in alien cultures would support my view over the mind reading view, although, of course, I assume re-interpretations in favor of the latter view will always be possible.

    Anyways, that’s the direction in which I’m inclined to go, in response to your worries. I want to be a pluralist, but I just don’t think PA ascription is that useful for prediction in most contexts, and especially in contexts where there has been no or very different mind shaping than what the ascriber is used to (non-human animals, infants, members of alien cultures).

    Thanks again for the comments!


  18. Tad says:

    Anibal -

    Thanks for the sobering reminder of the pitfalls of adaptationist speculation! In my defense, those I criticize are no less unabashed in concocting just so stories. It kind of irks me that this image of our precursors, and infants, as little theorists gains such uncritical acceptance. So I mainly address this. However, I think a more substantial response to your worries is possible.

    I don’t agree that you have to have the details of neural implementation all worked out prior to embarking on speculations about natural function. I think the lesson of recent cognitive science is that proposing hypotheses in all areas relevant to the study of a cognitive phenomenon should go on at the same time. You’re quite right that what we learn about correlations between neural areas and performance on certain tasks places important constraints on speculations regarding evolutionary origins. However, constraints in the reverse direction are also important. The design of experimental tasks, and the computational modeling of neural resources, often rely on tacit, substantive assumptions about what human brains are primarily for. One thinks of the different approaches to the study of visual processing suggested by Marr’s conception of vision as constructing a 3-d representation, on the one hand, and Ramachandran’s and animate vision’s reconceptualization of vision as an evolved bag of tricks aiming at successful navigation in typical environments, on the other.

    I think the conception of human beings as primarily psychologists, trying to figure out the cognitive properties of their fellows in order to better predict them has had too much of a monopoly on the functional assumptions guiding lower-level research into ToM. The paper can be read as offering a different background conceptualization of human beings - as interacting, shapers of each other.

    This reconceptualization, like the one for vision, is not just drawn out of my colon, so to speak. There are general considerations that can put substantial constraints on speculation about natural function, even in the absence of detailed knowledge of neural implementation. My argument is as follows: due to holism, PA ascription only has a chance at successful prediction if ascribers have time to devote to exploring inevitable exceptions to generalizations. It is unlikely that, in the contexts in which PA ascription was, and continues to be, selected for, there is the time to devote to such sophisticated honing of PA ascriptions. So it’s unlikely that PA ascription was selected, and continues to be sustained, for the purpose of prediction.

    But surely PA ascription plays some important role in coordination. What can this be, if not prediction? One possibility is mind shaping - just as traffic laws are sustained because of their role in facilitating coordination through behavior modification, so PA ascription and related rules might be sustained because of their roles in coordination through mind shaping. This is a hypothesis I offer given that there are reasons to doubt the hypothesis that is taken for granted by most.

    A supplemental argument I propose in the paper draws on the following consideration. The neural causes of primate behavior are extremely complex. It seems unlikely that natural selection could solve the problem of accurately representing such complex cognitive states in a way that makes quick and reliable behavioral prediction possible. If there are mechanisms, involving the public ascription of propositional attitudes, that could shape conspecific behavior in a way that makes it more easily predictable, and if such mechanisms are more likely to be discovered by natural selection than accurate representations of cognitive states, then mind shaping is evolutionarily more plausible than mind reading. I gesture at some reasons for affirming this antecedent. Because of the extreme sociality of higher primates, and the universality and importance of social learning in humans (imitation, explicit instruction, etc.), it is not unreasonable to assume that our precursors found it easier to shape each other’s behavior to make it more tractable than to figure out the causes of spontaneous, unshaped behavior and use these to predict it. The mechanisms suggested by Mameli (social expectancies), McGeer, Mameli, Gallagher, and Bruner (intentional interpretation of random vocalizations), Dennett (reason ascription from a young age, before reasons are even appreciated), and Bruner (ubiquitous exposure to narrative) are supposed to make this possibility even more palatable.

    Of course such arguments are not conclusive. But I think they constitute good grounds for exploring a hypothesis about the natural function of PA ascription that evades some of the problems with the hypothesis that many have treated uncritically as the default. (That reminds me of another reason offered in the paper in favor of mindshaping - the problem of ceteris paribus clauses doesn’t appear to arise for mindshaping. Since prediction isn’t the goal, it doesn’t matter if we don’t know exceptions to rules: ascribees are subject to rules even if they don’t always conform to them).

    Anyways, sorry for going on so long, and recapitulating much of the paper. In my defense, I’m quite sensitive to the problems you raise for speculation about natural function. Such proposals are often dismissed out of hand due to worries about just-so stories. I don’t think that’s fair. We don’t need to figure out everything about the brain before we engage in responsible (rationally constrained) speculation about natural function; in fact, investigating the brain often presupposes views about natural function, and the more explicit we are about these, and the reasons we have for holding them, the better, in my view.

    Thanks for the challenging remarks!


  19. Anibal says:

    Thanks for the reply – you are well armed.

    But I think we must not forget that defining computational goals (functions) a priori, before physiological studies (structures) – Marr’s approach – is always more constrained by evidence from physiological studies than vice versa. The alleged independence of the computational level is not completely true.

  20. Robert Thompson says:

    At some point in the discussion, someone asked about a citation about early ToM development. I can’t recall if Saxe mentioned this piece or not, but the following was the source for my discussion with Tad:

    Onishi and Baillargeon “Do 15-Month-Old Infants Understand False Beliefs?” SCIENCE, vol. 308, 8 April 2005.

    First wave discussion can be found in

    Perner and Ruffman, “Infants’ Insight into the Mind: How Deep?” SCIENCE vol. 308, 8 April 2005
    Ruffman and Perner, “Do Infants Really Understand False Belief?” TRENDS IN COG SCI, 9 (10), October 2005
    Csibra and Southgate, “Evidence for Infants’ Understanding of False Beliefs Should Not Be Dismissed,” TRENDS IN COG SCI, 10 (1), January 2006
    Leslie, “Developmental Parallels in Understanding Minds and Bodies,” TRENDS IN COG SCI, 9 (10), October 2005

    If anyone knows of other discussions of this stuff, please let me know. I hope to have a paper defending the mentalistic interpretation of these results soon.


  21. Anibal says:

    Henry M. Wellman, David Cross & Julanne Watson, (2001), Meta-Analysis of Theory-of-Mind Development: The Truth about False Belief. CHILD DEVELOPMENT 72, 3, 655-685.

    Luca Surian and Alan M. Leslie, (1999), Competence and performance in false belief understanding: A comparison of autistic and three year-old children. BRITISH JOURNAL OF DEVELOPMENTAL PSYCHOLOGY. 17, 141-155.

    Wendy A. Clements and Josef Perner, (1994), Implicit understanding of belief. COGNITIVE DEVELOPMENT, 9, 377-395.

  22. Robert Thompson says:

    Thanks Anibal. Those are good papers about early ToM development. I’m specifically looking for papers discussing really early ToM development, either in response to Onishi and Baillargeon, or discussing abilities before 24 months.

  23. Pete Mandik says:


    I’ve really enjoyed your paper and the ensuing discussion. Also, I find the mind-shaping hypothesis increasingly appealing. One thing I wonder, though, is how essential you really think all the stuff about teleofunction and Darwinian evolution is. Do you think you would have essentially the same case if you were forced to drop all mention of functions and natural selection? Could the hypothesis be sustained instead as a thesis about not what the function of folk psychologizing is, but simply what it is? Abandoning appeal to Darwin leaves you, then, with appeals only to, say, the cultural innovation of mindshaping or the sustaining action of current coordinative success to explain where mindshaping came from.

    Forgive me if this seems cloudy or ill-thought out (because it probably is) but I have a hunch I’d be curious what you thought of. The hunch is that the Darwinian function stuff is inessential.

  24. Tad says:

    Pete -

    Thanks for the helpful suggestions. I see your point about the virtue of dropping the Darwinian spin. I guess my inclination is to agree with you that the sustaining of full-blown PA ascription has some kind of cultural explanation, but I see mindshaping as a broader activity of hominids that has an adaptive explanation, and from which PA ascription derives.

    So, I’m very attracted to the kinds of hypotheses Dennett (2003) floats, following Frank, about the importance of signalling commitment to solving the kinds of coordination problems that likely faced our precursors. Dennett also draws on Boyd and Richerson to identify circumstances in which inclinations toward conformity and to punish cheaters would have been adaptive, and argues that such circumstances likely characterized the socio-ecology of our precursors.

    Finally, there’s some really interesting new research in anthropology (Sosis is the main researcher doing this) arguing that costly signalling is a very efficient means of distinguishing b/w cooperators and defectors. If someone is willing to participate in complex ritual preambles to cooperative endeavors, they’re likely to be trustworthy - not free riders. Sosis has theoretical reasons for this - free riders wouldn’t be likely to pay the opportunity costs involved in complex rituals - and empirical evidence drawn from communal living arrangements in current populations - e.g., comparing religious to secular kibbutzes. Anyways, there is some indication in this research that complex language/ritual, taught from a young age, may have been a useful code for distinguishing those you can trust from those you can’t.

    All of these, I’d argue, are evolved mind shaping phenomena aimed at solving coordination problems by ensuring cooperative behavior through complex communicative and pedagogical practices. I’m favorably disposed to your idea that the sophisticated PA ascription that’s so important to philosophers is a very late arrival, sustained mainly by cultural forces. However, I want to claim that it somehow derives from more rudimentary mind shaping practices that do have a Darwinian explanation.

    Also, having been immersed in Dennett for much of the past 2 years while writing my book, I find it harder to draw a distinction b/w natural selection, and meme selection in cultural evolution. It’s all part of one giant design space being explored by various selectionist algorithms, some more speedy and efficient than others! That’s at least my intuitive take these days (haven’t thought much about the problems with it, or how I’d defend it).

    Is that anywhere in the vicinity of your suggestion?

  25. Pete Mandik says:

    Thanks Tad, I think that’s pretty cool, and that’s due in part to the fact that I like Darwin and Dennett a whole bunch. What inspired my initial comment, though, was imagining what you would have to do if you wanted to convince, say, Jerry Fodor of the mind-shaping hypothesis. One thing you would have to do is pull out all the Darwin; I was wondering, then, what would be left. I don’t have any problems with your paper and was raising these points in more of a devil’s (Jerry’s) advocate sort of spirit.

  26. Tad says:

    Pete -

    I don’t expect any of this could convince Fodor, so I haven’t given it much thought, but thanks for the rhetorical suggestion…


  27. Anibal says:


    Well, if we give credit to classical findings of comparative psychology about parallelisms in psychological development between human infants and newborn monkeys [e.g. Darwin C. (1877), Biographical sketch of an infant. MIND, 285-294; or Kohts N. (1935), Infant Chimpanzee and Human Child. Oxford University Press], and we assume as well a route to theory of mind (mind reading) via mirror neurons (present in humans and in non-human primates, especially macaques), because they subserve imitation and intersubjective communication, and because it is possible that there is an imitation–mind reading connection (perhaps you also have to settle some intermediate issues, such as the long dispute about mind reading abilities in non-human primates – e.g. Tomasello and Call vs. Povinelli and colleagues – and decide which side you are on), then a doubtless interesting paper is:

    Ferrari P. F. et al. (2006), Neonatal Imitation in Rhesus Macaques. PLoS Biology, 4(9).

    If this is too complicated a route for establishing mentalistic understanding in human neonates, a more conventional source is:

    Carlson S. M., Mandell D. J. and Williams L. (2004), Executive function and theory of mind: Stability and prediction from ages 2 to 3. DEVELOPMENTAL PSYCHOLOGY, 40(6), 1105-1122.

  28. Robert Thompson says:

    Following up on Pete’s question about the evolutionary stuff, I have to repeat the Fodorian claim, “Please, spare me. No Darwin.” I’m not sure why we need to make evolutionary speculations here. Why expect Darwin to help here?

    Just as in the Fodor-Millikan debates we used to have in the ancient past (I still want to ask: where did those debates go? Is all this modal crap about consciousness really better than speculating about the counterfactuals involved in asymmetric dependence? But alas…), given the recent work on primates by Tomasello (and colleagues) and Laurie Santos (Yale psych), the results actually seem to suggest that it isn’t in coordination problems that ToM skills arise for the primates, but in competitive situations. If this is right, a different evolutionary story about the onset of ToM could be told, but I’m not sure how well it fits with Tad’s story. I’m not sure why we can’t rely (just) on what the ToM mechanism is currently used for… Moreover, there is a serious question coming up when primates with little social structure (like macaques) demonstrate an understanding of false beliefs… Is this shaping??? Do these results help or harm Tad’s case?

    What I find most interesting in the new primatology stuff and the new stuff with really young kids is that we need to find new ways to describe this early/odd competence. Even scholars who argue for early ToM skills (like Tomasello) still insist that the major move in going from understanding Intentional Agents to a full-blown ToM is grasping that (belief) representations can be false. If the new results are right, we need to say something new not only about the positions of Perner, Wellman, Gopnik, and others, but also revise Tomasello and his colleagues, who have misdiagnosed the reasons for failing the standard false belief task as resulting from an inability to realize that people can have false representations of reality that affect their behavior. If young kids, chimps, macaques, and neonate macaques can pass the Onishi and Baillargeon task, I think we need to rethink the supposed limitations of ToM.

    Bringing things back to Darwin, I’m not sure why we need to bring up Darwin in explaining why ToM does what it does, or if we do, why these basic level results don’t cast doubts on Tad’s claim that ToM is there for solving coordination problems. ToM is useful, sure. More fine-grained purposes are going to be more dubious, and even if they are true, I don’t see how speculating about Darwin gives us much more in these cases.

  29. Tad says:

    Robert -

    Thanks for the challenges. I do think that the stuff on infants and primates is the most serious problem for my view. However, I reiterate what I said earlier. Detecting false beliefs can only help in competitive situations if the detector makes accurate attributions of false belief, and these attributions help the detector predict the behavior of the detectee. But infants and primates only have overt behavior to work with. And primates must come to quick judgments if detection is to help in competitive situations. So do primates and infants somehow solve the holism problem? Is there a straightforward mapping from overt behavior to belief, and from belief to future behavior? If so, what is it? If not, then how do infants and primates learn and deploy, so quickly, all the relevant ceteris paribus clauses?

    I don’t think there is any mind shaping going on in primates. So, I’m committed to the view that, whatever they’re doing, they’re not attributing beliefs. Even if they’re sensitive to the distinction b/w behaviors that we take as symptomatic of true vs. false belief, I want to say that this does not amount to the full-blown capacity to attribute beliefs.

    Perhaps I can hedge my bets this way. To the extent that the holism problem arises for a certain variety of PA ascription, its sustaining function can’t be prediction, based on accurate representation of cognitive state, because in the real time scenarios where we want good predictions, there isn’t time to deploy all the relevant cp clauses. So for those varieties of PA ascription where holism is a problem, there must be some other explanation of why the practice persists. Coordination through mindshaping is my answer. (BTW, I mean ‘coordination’ in a much broader sense than you seem to suggest. Competition is a kind of coordination, as far as I can see - you’re coordinating your behavior with that of another, though the other isn’t aware of it).

    As for Darwin, I don’t understand why everybody is so down on speculating about natural function. Sure there’s risk of just-so stories. But there is a reason why any capacity was selected for and continues to persist, and there are substantive constraints on responsible hypothesizing about it. The ‘mindreading’ camp does it as well. You can’t do cognitive explanation unless you’re clear on the function of the capacity you’re trying to explain. How are we supposed to get clear on the function of natural, cognitive capacities, without some idea of what they were selected for? Are we just to take it as obvious that PA ascription, for example, aims at representing truths about the cognitive lives of our fellows for the purposes of prediction? This is surely a contingent claim, and evidence needs to be provided for it.

    What sorts of evidence count? Well, if you think of PA ascription as something that natural, evolved, cognitive systems do, surely we must appeal to evidence about what’s likely to evolve in certain environments. I have argued that mind shaping is more likely to evolve than mind reading because shaping conspecifics is easier than reading their minds, especially if their minds are as complex as those of primates. What’s wrong with that? It’s a claim that is easily defeated by empirical evidence - say, identifying a mechanism that makes conspecific minds especially easy to read w/o prior mindshaping. I claim that this is unlikely b/c of holism. But a mechanism that can quickly and efficiently overcome the cp clause problem would blow this argument out of the water. So my Darwinian speculating is entirely responsible, and the ball is in the court of those who think accurate, predictively fecund mind reading is something our precursors could easily evolve.

    I really don’t get the animosity toward Darwin in some circles in cognitive science. Fodor’s arguments in A Theory of Content are laughable. Sober long ago made the distinction b/w selection of and selection for a trait. Evolutionary biology appeals to the latter relation - the function of traits is given by what *explains* their persistence. Explanation is an intensional relation - a frog’s snaps at flies are explained by the presence of flies qua flies, or at least qua nutritional black dots: that’s the counterfactual-supporting explanation of why they persist. If psychology can help itself to intensional, nomic relations that support counterfactuals, then so can evolutionary biology, Fodor’s ideologically-driven hypocrisy on this matter notwithstanding. Darwin is dead; long live Darwin!!!

  30. Richard Brown says:

    I have enjoyed reading this paper and following the discussion, though I don’t really know a lot about this stuff, and so it is hard for me to know what to think about it.

    At any rate, if anyone is interested in what Fodor currently thinks about intensionality in evolutionary explanations, he will be speaking about that at the CUNY Cognitive Science Colloquium, Friday Sept. 29, 1-3, at the Graduate Center (365 5th Ave). A copy of the paper he will be reading from is available at

  31. Robert Thompson says:

    OK, Tad. My question was intended as a Blueberry Hill question, not a PSYCH Room 219 question. Of course I really like Darwin and evolutionary speculation. Of course I really like modal claims about consciousness. And of course I think that many claims Fodor makes about content are pretty suspect. But you have to love the “Spare me; no Darwin” lines. Still, if I were you, I’d cut Fodor some slack - you don’t want him to write THE MIND DOESN’T SHAPE THAT WAY!

    I wasn’t trying to start a debate about how effective just-so stories are, but rather to raise a couple of questions about the sort of story you offer. I like your story as a story about what nonhuman primates can’t do, what children are encouraged to do, and what adults tend to do. I just don’t see why you need to bring up evolution at all here. But part of my skepticism is that I don’t agree with your attack on mindreading as a clunker that would offer no adaptive value. Once mindreading looks more viable, I think the evolutionary points won’t help decide between the two views. I love Darwin, Tad.

    As some of the comments have expressed, I think the biggest problem with mindreading might be entanglement, but you focus on complexity, so I will discuss complexity.

    First, p. 5: I am sort of sympathetic to Gauker’s point about not having the laws, but your next claim seems to be in need of more support. Would this same argument apply to the field of syntax? Have they had enough success to make their laws real, or is there too much disagreement and theory change? Is FP any worse off than syntax? Have psychologists even sought to formulate the laws of FP or CogPsych (would they put it in those terms)? Does the Biases and Heuristics literature or the stereotypes literature count as formulating SOMETHING like the laws of FP?

    Second, and more importantly, I just don’t see mindreading as being a slow clunker. Most of the debates between the theory-theorists and simulationists took place at a level of abstraction that leaves us unable to articulate in any detail what would go on in mindreading employing the principles or ceteris paribus laws of FP. The major exception to this lack of detail is Nichols and Stich’s account. When provided with these resources, I don’t see why mundane mindreading wouldn’t happen quickly and effortlessly.

    For instance, in most mindreading situations, the discrepant belief and desire attribution mechanisms will remain largely silent. Just because the laws included in the mindreader are hedged does not mean that the system can’t usually follow them unreflectively, so to speak, messing up pretty frequently along the way (as it should, to match the data). Nichols and Stich have offered a device complicated enough both to have parts clamped and connections cut (in cases of trauma, development, and mundane mindreading) and sophisticated enough to sit and spin when some sleuthing is involved.

    Now, I still have some explaining to do - how the more complicated stuff comes online in non-mundane cases, and how one discovers that a given case is non-mundane (tasks, I assume, that are part of the under-described MINDREADING COORDINATOR). But I just don’t see that this more complicated mechanism couldn’t get the job done.

    Part of our disagreement may boil down to how often we need mindreading (or perhaps non-mundane mindreading). If you haven’t read Bermudez’s paper on this “The Domain of Folk Psychology”, you should (though I disagree with his thesis). I tend to think that a lot more mindreading goes on than other people do (for me, we have to mindread to grasp the semantic content of many utterances, and kids have to mindread to learn the meanings of words).

    After all of this, I just don’t see why Darwin needs to enter into the picture here.

    Finally, a question about mindshaping: I don’t see why kids fail the standard FBT up until age four if their predictions about people are based on stories and shaping that little kids are exposed to. Fairy tales and kids stories are full of examples of things being misplaced, going missing, deception, evil mysterious forces. Those of you with kids must be able to think of several stories that your kids understand and that involve situations with false belief, right? Does anyone have a good example of this?


  32. Tad says:

    Robert -

    I didn’t mean to come off as half-cocked - a Blueberry Hill criticism deserves a Blueberry Hill response! You raise some really potent challenges. I knew you were going to be my nemesis that day you stole my watch! ;-)

    With regard to the syntax analogy - I don’t think it’s helpful. Here’s why: it’s very hard to identify on-the-fly communicative situations where bad syntax has some communicative cost. Most actual, concrete, context-bound linguistic communication routinely flouts rules of syntax. There are plenty of examples of perfectly successful communication among today’s populations that resemble ‘proto-linguistic’ structure - not just pidgins, but ‘the basic variety’ used by immigrants, and merchant marines, and in general whenever people who don’t share a language, but have a few words in common, must communicate. What does this mean? That knowing the syntax of language has little effect on survival value in most communicative contexts. That’s why I favor an exaptationist account of the evolution of syntax. The situation, I want to claim, is different for laws of FP b/c there you really need to know all the complex laws in order to get anything useful out of PA ascription.

    I’m glad that you find entanglement a real and challenging phenomenon for mindreading theories to explain. However, I think it’s just a species of complexity/holism. The point about entanglement is that in the kinds of sophisticated social interactions that you see among adult humans, the desires of one party often refer to the mental state of the other. But this mental state changes as soon as the other tries to figure out the desires of the first. And this might change the desires, and so on. What this shows is that ‘other things are never equal’. Desires are ascribable except when they involve the ascribee’s best estimate of the ascriber’s mental state, which leads ascribee to change desires. Basically, in many situations involving adult human social cognition, the act of trying to ascribe a mental state violates ceteris paribus clauses governing the ascription b/c it involves altering an environmental variable with important effects on the ascribee’s desires.

    I don’t know much about the heuristics/biases or stereotyping traditions - I didn’t realize that they were primarily concerned with social cognition, as opposed to problem solving in general. Can anyone help with a reference here?

    I need to look over the Stich & Nichols model in more detail to see if it can address my worries. As I recall, default ascription that ignores cp clauses, for S&N, is essentially simulation - the ascriber uses their own decision-making system to come up with hypotheses about what the ascribee will decide. I don’t think such simulation is plausible b/c I don’t think our decision-making systems are that alike, at least independently of significant mind shaping. I think Dangerous D might have a paper out on this - I guess I’m inclined to the view that successful simulation presupposes a lot of theoretical knowledge, in order to adjust inputs so as to model differences b/w ascriber and ascribee. Insofar as such theory-based adjustment is not necessary among adult humans, I want to argue that it’s b/c of a common history of mind shaping.

    I need to look at the infant and primate stuff you cite. Here’s my worry about that - maybe you can actually put this worry to rest. Exactly what kind of evidence, in the primate case, rules out the hypothesis that primates are sophisticated behaviorists? What we call deception could just as easily arise, as far as I can tell, from a sophisticated sensitivity to subtle patterns of behavior. E.g., if no one’s around or looking at me when I find the food, then no one will take it. Now I realize that the kind of flexibility that some primates display in using information about what their conspecifics have and have not seen makes a cognitivist explanation more tempting. And perhaps there is something to that - perhaps they are attributing some kind of internal cognitive state. But I fail to see why you’d call the state they’re attributing a belief, or any other PA.

    Here is why. I take it as definitive of what most philosophers are talking about when they talk about belief and other PAs, that ascription of these states is inevitably holistic, in the sense that no behavior by itself suffices for a warranted belief ascription. Or, in other words, any behavior is compatible with an indefinite number of non-equivalent PA ascriptions. But I find it very implausible that, if primates attribute unobservable cognitive states, these attributions are subject to holism. Can a chimp somehow represent the possibility that a conspecific might engage in exactly the same behavior as that which triggered the first chimp’s attribution of a PA at one time, w/o tokening the PA? Suppose a chimp sees that a conspecific wasn’t looking when it found some food, and attributes what you would call the false belief that there is no food there. Does the chimp have the resources to envision circumstances where the very same behavior would not warrant such an attribution? Can the chimp, for example, wonder whether the other sneaked a peek while it wasn’t looking, and then test for this hypothesis? Or is the attribution of false belief automatically triggered by certain observed behaviors, like the attributee’s not looking in a certain direction?

    It seems to me that to be able to attribute the full-blown concept of belief, an attributor needs to be sensitive to the fact that any behavioral evidence is defeasible. This is a consequence of holism: the thesis that any belief is compatible with any behavior given appropriate adjustments to background PAs. I don’t think that chimps or infants deploy this concept of PAs. And this is precisely the concept of PAs that I’m interested in - b/c it’s only with such PAs that the connection to prediction is too tenuous to explain the role of their attribution in solving coordination problems.

    I don’t doubt that chimps and pre-linguistic infants are great mindreaders, nor even that they can attribute some kind of internal cognitive state (though I’m less sure of the latter). My only point is that their mindreading does not involve PA attribution. Rather, it involves tracking of behavioral tendencies based on certain non-obvious, low-level cues, assumptions about shared goals, etc. Now this admission is somewhat in tension with one of my evolutionary claims, namely the claim that all mindreading of brains as complex as those of primates is too ‘evolutionarily expensive’, and that mindshaping is a cheaper alternative. That claim is too strong. There are reliable enough links b/w overt behavior and motivational states to support efficient mindreading in primates and infants. In fact, it’s arguable that some such reliable links actually evolved to make primates more interpretable to each other (mindshaping on a phylogenetic scale?). But, in my judgment, such mindreading typically exploits either sophisticated connectionist-style behavioral pattern recognition, or the attribution of very low-level cognitive/motivational states that have an ‘encapsulated’ relation to overt behavior, such that automatic attributions triggered by overt behavior have a good chance of being correct and affording useful predictions.

    Such mindreading, though very powerful in some contexts, is still of limited value, given the complexities of the primate brain. From my cursory observation of primates in zoos, on films, etc., their social dynamics seem fairly chaotic, especially compared to the relatively orderly dynamics seen in most human settings (I’ve been riding the metro to work and back since taking my new job, and it never ceases to amaze me how you can stuff hundreds of primates in small metal cylinders underground and have them remain in complete silence without attempting any physical contact with each other). The reason for the difference is that their complex brains *are* near chaotic, especially given the constant provocative and confusing stimulation they get from their conspecifics, while our brains have been tamed by mindshaping.

    So, the gist of my argument remains, I think. Even granting that critters that haven’t been subject to mindshaping can be good mindreaders in some contexts, and attribute cognitive/motivational states, the cognitive/motivational states they attribute aren’t what most philosophers mean by PAs because their relation to behavior has to be more direct, and therefore not holistically-mediated. Furthermore, such mindreading that they have is of limited effectiveness in solving coordination problems and otherwise anticipating and controlling conspecific behavior, when compared with populations, such as human ones, where mindshaping has regulated the social realm to make it more predictable and controllable.

    That’s a long post - but you raise some really excellent points that deserved detailed consideration. Thanks again for the thought provocation, and all the interesting references (you don’t happen to know where the Bermudez paper appeared, do you?)

    Best, Tad.

  33. Tad says:

    Richard -

    Thanks for enjoying the fray! And thanks especially for the link to the Fodor paper. Very useful!

    Robert -

    I forgot to address your question about the time line of exposure to narrative and FBT performance. You’re right that most fairy tales, etc., involve mistaken identities, false beliefs and the rest of it. I find that very interesting, and congenial to my conclusion. But I’m not sure that failure at the classic FBT before age 4 is puzzling, despite all the exposure to narrative.

    First, it’s hard to control for it - different children are exposed to different amounts and types of narrative at different times. I think there’s a piece by Gopnik somewhere, suggesting evidence that appropriate input can actually accelerate the time-course of FBT competence. Anyways, exposure to narrative can play a causal role in acquisition of FBT competence, even if it takes several years of it.

    It could be a question of short-term memory or attention limitations, as nativists about ToM like Fodor believe. Perhaps kids just can’t keep all the info necessary to making an FB judgment in mind at once prior to certain maturational milestones. So though hearing narratives motivates and helps scaffold FB competence, it’s not sufficient or something.

    Another possibility is the one proposed by the de Villiers - syntactic competence with embedded clauses in verbalized thought and utterance ascriptions is a precondition on FBT competence. If so, the role of narrative might be exposing children to sentences, like ‘Goldilocks did not know that bears lived there’, which, coupled with appropriate maturation in syntactic processing areas, yields an appreciation for false belief.

  34. Kristin says:

    Hi Tad,

    Thanks for the interesting paper, and thanks to everyone for the interesting discussion. As you might expect, I am pretty sympathetic to your position (especially the critique). I do, however, have a couple of questions. One is whether you think we can account for all our predictive behaviors through the norms of mindshaping, or what kind of force it has to say that our predictions are normative. (In some sense, you might think that anything that is a prediction is a claim about how a thing ought to behave, but that seems too trivial to be your point here.) As Eric mentioned, humans (perhaps including people with autism who fail FB tasks) may make predictions and coordinate behavior using a variety of methods; we can use personality traits or mood attributions to make predictions of behaviors, and we can make generalizations from a person’s past behavior (no matter whether we think it’s how the person ought to behave according to social norms). We do, of course, make lots of predictions from social norms too. One thing I’ve been interested in is predicting immoral behavior. How do you account for our ability to make those predictions on the mind-shaping view?

    One other small point. I’m not sure if it’s true that “Human beings are distinguished from other mammals by their extreme sociality.” There are many other highly social species. Maybe you have something more specific in mind by “extreme sociality”? If humans and other species are both highly social, and animals are able to coordinate behavior and make predictions, what should we conclude about the mechanisms that allow for those abilities?

    And to Robert and others interested in work on early ToM, I have a colleague, Maria Legerstee, who claims to find such evidence. She has a lot of her work on her website:

  35. Tad says:

    Hi Kristin -

    Thanks for the comments. I thought you might be sympathetic. On to your questions:

    I think there is a lot of prediction, involving Trevarthenian intersubjectivity, sensitivity to low-level cues, assumption of shared goals, etc., that does not necessarily involve mindshaping. I think all non-human primate social cognition is like this. As I acknowledge in my reply to Robert, I think even some mental state ascription might be exclusively for predictive purposes, w/o relying on prior mindshaping for efficacy. My main claim is that specifically PA ascription, where PAs are the full-blooded attitudes that are subject to holism, and therefore have a very complex relation to behavior, must in the first instance involve mindshaping. Any predictive use of such PA ascription relies for its efficacy on prior mindshaping.

    In fact, part of my argument is that we are relatively good predictors, of unshaped behavior, without PA ascription. So if PA ascription’s main goal is prediction based on accurate mindreading, it must somehow improve on the predictive ability made available without it, e.g., using Trevarthen’s intersubjectivity. But given the problems with PA holism, it’s very unlikely that it can improve upon PA-ascription-independent prediction. So PA ascription must have some other use - mindshaping.

    I don’t think all predictions have normative force, though I know what you mean by the trivial reading. Perhaps this can be cleared up with Anscombe’s distinction b/w world tracking and world changing direction of fit (not her words). A linguistic act is world tracking if a disappointed expectation ought to lead to a revision of the linguistic act. A linguistic act is world changing if a disappointed expectation ought to lead to a revision of the world. So, paradigmatically, assertions are world tracking while commands are world changing. I think that PA ascriptions have a kind of dual use - they’re ‘pushmi-pullyus’ in Millikan’s terms. Just as one says to a child ‘You will eat your peas’ - meaning to predict as well as to prescribe - a PA ascription implicitly predicts and prescribes as well. This is what its normative force consists in. Normal predictions only have a world tracking direction of fit, so if they’re disappointed, they must be revised. My claim is that PA ascription’s world changing direction of fit is primary: if someone’s behavior is incompatible with an ascribed PA, it could be, and in the case of children often is, that their behavior is at fault.

    I completely agree with you and Eric that we predict behavior on the basis of generalization from the past, and on the basis of personality-trait or mood ascriptions. Sterelny has also suggested social roles as a non-mentalistic basis for prediction. My claim concerns only PA ascription, which I maintain aims in the first instance at shaping; any predictive use is derivative of this. I suspect that predictions based on behavioral generalizations are almost entirely predictive, with little shaping role, while mood and trait ascriptions are an interesting intermediate case. My intuition is that ascriptions of moods, emotions, etc., are primarily there for the purpose of prediction, though cultural variation in emotion ascription may suggest a shaping role as well. Sometimes, if you’re told you’re aggressive and angry enough, you become that, I suppose (that’s kind of like Mameli’s point about gender-specific mindshaping). Traits are even more normatively loaded. So it’s an interesting issue how much they function to shape, and how much to predict based on representation of shaping-independent facts.

    I guess the really interesting thing is that, when it comes to our fellow humans, mindreading is never entirely ‘innocent’ (for lack of a better word). Because of our intense, automatic, subtle, and powerful constant influence on each other, any attempt to describe or predict is also, potentially, an attempt to mold so as to make easier to predict. This is significantly disanalogous to our relation to other parts of the environment, where the act of describing/predicting does not have such immediate effects on the domain described/predicted, and thus, the direction of fit is, in the first instance, almost always world tracking.

    As far as predicting immoral behavior goes - moral norms are a subset of the norms we institute through mindshaping. My paper focuses more on the rational norms implicit in PA ascription, on which most other norms are arguably based. The only way to flout these norms is by being irrational, e.g., the insane. And here I go with Dennett - when the norms I’m interested in are flouted, the behavior can’t be made sense of in intentional terms. It can be predicted, but only by descending to a non-intentional stance. As for immoral behavior, that might not flout rational norms (at least if you’re not a certain variety of Kantian). So the mindshaping associated with PA ascription should support predictability of immoral acts, since only rational norms are at issue in such ascriptions, and immoral acts can be rational. Torturing the innocent is morally wrong, but one can predict that certain people will defend the practice, given the desires and beliefs that are ascribable to them, and the fact that they’ve been shaped to pursue their desires in light of their beliefs.

    I think, following Dunbar, that all primate species are distinguished from other mammals by their extreme sociality. Dunbar has a very nice chart relating certain measures of social complexity (group size, sexual relations, etc. - I don’t remember all of them) to relative size of prefrontal cortex - the correlation is impressive. I think most primate intelligence is the result of dealing with complex social environments - the Machiavellian Intelligence Hypothesis.

    However I think, following many evolutionary psychologists, that human socio-ecology is hyper-complex, and this led to the evolution of precocity at social cognition unmatched by other primate species. Where I depart from evolutionary psychologists is in how I characterize the socio-cognitive innovations that distinguish humans from their socially adept cousins. It’s not that humans evolved a more sophisticated ToM. It’s that they figured out how to shape each other in order to make each other’s behavior more tractable - easier to coordinate with. I think the higher primates are excellent natural psychologists - almost as good as us w/o mindshaping. Our innovation, of which true PA-ascription is a part, is to learn how to tame the intractable complexity of our social environment through mindshaping.

    I suppose the evolutionary story I want to tell is the following - at some point, for contingent reasons, perhaps increased group size, or pair bonding, or whatever, conspecific behavior gets too hard to predict using the mindreading capacities we share with other primates (which does not include, on my view, true PA ascription). Mindshaping is selected as a way of taming or rendering more tractable this increasingly complex socio-ecology.

    Hope this gives you some idea of what a thorough and proper response to your worries might look like…

    Thanks for the reference!

  36. Chase Wrenn says:

    This is all great stuff. Fascinating! I wish I knew more about it.

    Maybe I’ve missed something along the way here, Tad, but why isn’t it the case that holism is a problem for the mindshaping view no less than it is for the mindreading view?

    Here’s my worry: Just as holism causes problems for determining what a person WILL do, given some proper subset of her mental states, it causes problems for determining what she SHOULD do, given some proper subset of her mental states. Just as “laws” of folk psychology are massively hedged ceteris paribus laws, the “norms” that seem to take their place on the mindshaping view are massively hedged pro tanto norms.

    Suppose the mindreading view suffers because prediction on the basis of FP is intractable. Doesn’t a parallel argument show that the mindshaping view suffers because prescribing on the basis of FP is equally intractable?

    [I should also mention my misgivings about the view that propositional attitudes are constitutively governed by norms at all. That's a substantive view that is far from obvious, and there are some good reasons for rejecting it, I think. There's a nice paper on the issue in the current PQ on the case of belief (it's by Asbjorn Steglich-Petersen).]

  37. Tad says:

    Chase -

    Excellent point. I was waiting for someone to bring that up. You are indeed correct that so-called rational norms have to be hedged. Believing that it is raining and desiring that you stay dry only rationally obligates umbrella opening provided that one doesn’t more strongly desire to conceal one’s umbrella, one doesn’t believe that one’s umbrella is broken, etc. However, I don’t think this is a serious problem, for two reasons.

    First, I don’t think that the hedging problem is as bad for norms as for predictive/descriptive laws. The reason is that norms are the kinds of things we can fail to live up to. Sometimes when we fail it’s because we’re living up to another, conflicting normative requirement. E.g., we fail to act as we should on our belief that it is raining and our desire to stay dry because we are acting as we should on our desire to conceal our umbrella, or our belief that it is broken. But other times we just fail to live up to the norms for no rational reason. We may be akratic, or have a lapse of memory, or otherwise lapse from rationality. Such lapses do not excuse us from the norms. We are still subject to them, but we fail to live up to them, and as a result are often tacitly sanctioned, if just through embarrassment. So, in such cases, we’re bound by the norms without hedges.

    The whole point of norms is that we are bound by them even when, through our own fault, we flout them. It’s not like you ought to act according to the norms of rationality except when you forget, or your will is weak… These are not excuses, hence do not hedge rational norms. But the corresponding laws, if taken descriptively, with the goal of prediction, would have to be hedged in cases of memory failure, akrasia, etc. If the point of a PA ascription is prediction, then lapses of rationality constitute exceptions that must be encoded in cp clauses. If the point of it is prescription, then such lapses are not exceptions; they’re failures to live up to the norm.

    Ok, so the hedging problem is not as bad for PA ascriptions taken as shapers as it is when they are taken as describers. But there is still a serious problem - and it’s for precisely the same reason that Morton doubts the descriptive efficacy of PA ascriptions - holism. Any behavior is both causally *and rationally* compatible with any finite set of PAs, assuming appropriate modifications to background PAs.

    Here’s how I want to address this issue. It may be very hard to figure out what we’re actually obligated to do. But this isn’t as pressing an issue as the corresponding difficulty of figuring out what we will do. Here’s why. Incorrect assumptions about what we ought to do can contribute to solving coordination problems, while incorrect predictions cannot. The idea is that if someone accuses you of flouting some norm, given the PAs you’ve given ascribers reason to ascribe, you must accept sanction, or else explain why you were ascribed the wrong PAs, or claim you had other PAs that explain the deviant behavior. This puts us in the following situation. As I mentioned in another reply, we are constantly floating candidate rationalizations for the things we do, in order to be able to respond to potential challenges, etc. The fact that everyone does this, and tries to ensure that their behavior is easily rationalizable, has as a side-effect better regulated, more stereotyped, more tractable behavior, with which it is easier to coordinate.

    So there are coordination benefits even if we’re often wrong about what we ought to do. As long as we’re all obsessed with acting in a defensible manner, coordination is served. The point of the paper is not to identify the correct rational norms relating PAs to behavior, but rather to claim that PA ascriptions are typically used with normative force, and this helps solve coordination problems, whether or not correct norms are ever identified.

    Note that this possibility is precluded if PA ascription is only used for prediction. If coordination depends on correct prediction of behavior, and predictions are often wrong, then coordination suffers. The key difference in the mindshaping alternative is that even mistaken claims of normative commitment can have positive effects on coordination because it forces potential ascribees to have rationalizations in hand, and engage in behavior that is easily rationalizable. This narrows the range of behavior they’re likely to engage in, thus helping coordination. Incidentally, this nicely explains an oft-noted feature of explicit, linguistic PA ascription - it is more often used for retrodictive rationalization than for prediction.

    I don’t think the claim that PAs are constituted by norms is obvious either. I haven’t been convinced by any of the attempted refutations I’ve seen in the literature, but my claim is not that there is some kind of logical connection between PAs and norms of rationality (though there may be). My claim is purely empirical. Given that PA ascriptions can’t help us by facilitating prediction of behavior, maybe their role is to help shape behavior so as to make it easier to coordinate with. This is supposed to be a contingent, empirical claim about the role that most everyday PA ascriptions play in our everyday lives.

    Thanks for the stimulating objection, Chase! Your schooling is always welcome.

    Sorry I haven’t learned to match your concision!

  38. Chase Wrenn says:

    I think the important issue is Morton’s. No proper subset of a person’s PA’s is sufficient to determine how she ought to behave. So, it seems, to know on the basis of a person’s PAs what she ought to do, you must know what all of her PA’s are. According to the mindshaping hypothesis, we use PA ascriptions to figure out what people ought to do, and we use our assessments of what people ought to do to solve coordination problems. So, if Mindshaping is correct, the problem of figuring out what a person ought to do on the basis of her PA’s needs to be tractable. But it isn’t tractable, and that looks like a problem for Mindshaping.

    Your response is extremely interesting: We can solve coordination problems even if we are WRONG about what people ought to do. So, the problem of (correctly) figuring out what people ought to do on the basis of their PA’s doesn’t have to be tractable for the mindshaping hypothesis to work.

    I’m not sure your response is wrong, but I’m not sure it’s right either. I think it could be argued that many of the best solutions we have arrived at for coordination problems are designed to do two things. First, they exploit the pre-existing and independent tendency of people to understand the world and one another in terms of agents, beliefs, and desires. Second, they are designed to be robust against the occasional misapplication of norms to people who are not bound by them. If this is so, then it’s not the fact that we attribute PAs and normative commitments to people that explains our good solutions to coordination problems. Instead, one thing that helps a solution to succeed is that it exploits our tendency to attribute PAs and normative commitments, and it has mechanisms built in to counteract and correct the occasional mistaken attribution.

    Here’s another way of coming at the problem. If Mindshaping is correct, then people need to be able to decide whether or not to sanction one another’s behavior, and they need to make those decisions on the basis of PA ascriptions. Given that no proper subset of a person’s PAs determines whether or not she is subject to sanctions, how is another person to decide whether to sanction her behavior or not, without making assumptions concerning the totality of the first person’s PAs?

    You seem to invoke a sort of challenge-and-response model of sanctions, where I try to sanction you, and then you respond by demonstrating the rationality of what you did after all. But how did I decide to sanction you in the first place?

  39. Tad says:

    Chase -

    Thanks for the nice, clear statement of where I want to go. I’m definitely into the challenge-and-response model. Why do I need a complicated story about how I decide to sanction? If a behavior strikes me as weird, given certain background assumptions about what someone should do given what they believe, etc., I sanction. Sure it’s risky, but that’s why, as Mason points out, PA-ascription for the purposes of mindshaping is inherently social. The object of my sanctions can set me aright - I use others as a resource to tune my dispositions to sanction. In fact, I guess there’s a kind of meta-sanctioning involved - unjustified sanctioning is itself sanctioned. As Mason puts it in a paper draft of his I’ve been reading, not only are there norms of behavior, there are norms of ascribing PAs and corresponding behavioral proprieties.

    The hand-wavy idea I have is that this challenge-and-response practice, much of it implicit, has as a side-effect regulation of behavior such that it is easier to coordinate with. We become so attuned to potential challenges, and so adept at responding, that the result is fairly stereotyped behavior (easily rationalizable behavior relative to a given community) that can be easily predicted using lower-level, non-PA-ascription-involving Theory of Mind, e.g., sophisticated connectionist-style behavioral pattern recognition, and ascription of lower level mental states with a more direct connection to behavior.

    Another idea I had, incidentally, is that the kind of mindshaping that we see in development - the sort of social expectancies that Mameli identifies in the case of gender stereotyping of infants, and the interpretation of infant vocalizations as intentional communicative acts - restricts the kinds of constellations of PAs and associated behaviors that, as adults, we’re likely to produce. So mindshaping during infancy and childhood, involving both PA ascription and other mechanisms (imitation, gender and other trait stereotyping, etc.), leads to socialization that largely mitigates the hedging problem. This, of course, allows predictive PA-ascription, as well as prescriptive PA-ascription, to become tractable. But that’s something I allow for in the paper. Traffic laws are very efficient predictors of driver behavior, but they aren’t in the first instance a predictive theory. They can be used to predict because of a more fundamental shaping role. I want to say the same about PA ascriptions and associated norms. To the extent that the hedging problem can be mitigated, it is due to a prior mindshaping use of PA ascriptions.

  40. Chase Wrenn says:

    Tad sed:

    “If a behavior strikes me as weird, given certain background assumptions about what someone should do given what they believe, etc., I sanction. Sure it’s risky, but that’s why, as Mason points out, PA-ascription for the purposes of mindshaping is inherently social.”

    Chase sez:

    The trouble is that there seems not to be any way of making sense of those “background assumptions.” Any behavior is rationally compatible with any less than comprehensive set of such assumptions. The trouble here isn’t just a problem of fallible normative judgments. The trouble is that, on the mindshaping proposal outlined so far, there are no possible grounds for even a *prima facie* judgment that a person’s behavior is rationally appropriate or not. There is no way to get the challenge/response process going, because the problem of deciding whether or not to make the initial challenge is intractable.

    Tad also sed:

    A bunch of stuff about norms being such as to encourage people to behave in ways that make them easy to coordinate with.

    Chase wonderz:

    What makes behavior easy to coordinate with, especially if “easy to coordinate” does not entail “easy to predict”? Or is the point here that mindreading is parasitic on mindshaping?

    Chase also sez:

    The stuff in the last paragraph sounds really neat. Is it right to compare mindshaping’s role in making people predictable to Universal Grammar’s role in making language learnable?

  41. Tad says:

    More to follow, but in response to the first point - your question is about the possible grounds for prima facie judgments about rational appropriateness. But I’m not, in this paper, interested in justifying such judgments. I’m just saying that they happen, more often than predictive uses of PA ascription, and that they are more fundamental than predictive uses. How can ungrounded normative judgments give rise to a mindshaping practice that helps solve problems of coordination? That’s a good question. I have some thoughts on it, but nothing too worked out. Keep an eye out for the book! ;-)

    Thanks again for keeping me honest, Chase!