The Economy Problem


[Image: “The Perfect Cow,” originally uploaded by Pete Mandik]

While there are many theories of what representational content is and how representations come to have it, it is not entirely clear that these theories are compatible with basic assumptions about the diverse roles that representations play in the internal causal economies of organisms. Let us call the problem of showing the compatibility of a theory of content and these pre-theoretic assumptions about the roles of representations within a causal economy “the economy problem.”

The economy problem is due in large part to the emphasis that perception has received in theories of content. Consider the kind of stock example typical of this literature. Smith has a mental representation hereafter referred to as “/cow/.” As the story goes, /cow/ means cow, that is, Smith has /cow/ in his head and /cow/ represents a cow, or cow-ness, or cows in general. On the standard story, Smith will come to have a tokening of the representation type /cow/ when Smith is in perceptual causal contact with a cow and comes to believe that there is a cow, presumably by having, in his head, /there/ + /is/ + /a/ + /cow/ or some other concatenation of /cow/ with various other mental representations. The main question addressed in this literature is how /cow/, a physical sort of thing in Smith’s head, comes to represent a cow, a physical sort of thing outside of Smith’s head.

This focus on the perceptual case has made causal informational proposals seem rather attractive to quite a few people, so let us focus on the following sort of suggestion, namely, that /cow/ represents cow because in typical scenarios, or ideal scenarios, or in the relevant evolutionary scenarios, /cow/s are caused by cows, that is, /cow/s carry information about cows. Thus, tokenings of /cow/s in the heads of Smith and his relatives are part of the operation of a cow-detector. A widespread presumption of this kind of view, and not necessarily a bad one, is that the /cow/s you find in the perceptual case are the same things that will be deployed in the memory, planning, and counterfactual reasoning cases too. The presumption, inherited from a long empiricist tradition, is that whatever happens in perception to wed representations to their contents can simply be passed along and retained for use in non-perceptual mental tasks. In its most literal form, this is the view that whatever happens to items in the perception “box” is sufficient to mark those items (picture them as punch cards, if you like) as bearing representational contents. Those items can thus be passed to other boxes in the cognitive economy, and retain their marks of representational content even after they go on to play quite different causal roles.
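
To make this presumption concrete, here is a minimal sketch (in Python; the names are purely illustrative and not drawn from the paper): a content mark gets fixed in the perception “box” and the marked item is then handed, unchanged, to other boxes in the cognitive economy.

    from dataclasses import dataclass

    # Sketch only: illustrative names, not anyone's actual proposal.

    @dataclass(frozen=True)
    class Token:
        vehicle: str  # the physical item in Smith's head, e.g. "/cow/"
        content: str  # the content stamped onto it in perception

    def perceive(stimulus: str) -> Token:
        # The perception "box": whatever happens here fixes the content mark.
        return Token(vehicle="/" + stimulus + "/", content=stimulus)

    def remember(token: Token) -> str:
        # Downstream boxes reuse the very same marked item.
        return "memory of a " + token.content

    def plan(token: Token) -> str:
        return "plan for a future encounter with a " + token.content

    cow = perceive("cow")
    print(remember(cow))  # memory of a cow
    print(plan(cow))      # plan for a future encounter with a cow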

This is an interesting suggestion, but certainly open to question. That is, what might seem like a good idea about the nature of representations in connection with perception may not generalize to all the other sorts of things mental representations are supposed to do. Presumably, /cow/s, that is, mental representations of cows, have a lot more work to do than take part in perceptions. Consider that /cow/s are used to remember cows, to make plans concerning future encounters with cows, and to reason about counterfactual conditions concerning cows (e.g., what if a cow burst into this room right now?). Perhaps, then, the sorts of conditions that bestow representational contents onto perceptual states are very different from the conditions on representation in memory, which are in turn different from the conditions for representation in planning, counterfactual reasoning, and so on.

A second concern, not unrelated to the first, is how you tell what and where the /cow/s are in the first place. Focusing on the case of perceptual belief brings with it certain natural suggestions: point Smith at some cows and look for the brain bits that seem to “light up” the most. Much talk of representation in neuroscience is accompanied by precisely this sort of methodology. But are the bits that light up during the retrieval of memories of cows or counterfactual reasoning about cows the same bits that light up in perceptions of cows? And more to the point, how will various theories of representational content cope with the different possible answers to this question?

The economy problem might best be seen as decomposing into a pair of problems, the first concerning a question of representational content and the second concerning a question of representational vehicles. The economy problem for content is the question of whether the conditions that establish representational content for perceptual representations are the (qualitatively or numerically) same conditions that establish the representational contents of memories and intentions or whether distinct conditions are necessary. The economy problem for vehicles is the question of whether the vehicles of perceptual representations will be the (qualitatively or numerically) same vehicles as in memories and intentions or whether distinct vehicles are necessary.

(excerpt from Mandik, P. 2003. Varieties of Representation in Evolved and Embodied Neural Networks. Biology and Philosophy 18 (1): 95-130.)


18 Responses to “The Economy Problem”

  1. Dan Ryder says:

    Re: the economy problem for content - For a large class of perceptual representations, memory representations, intentions, supposings (in counterfactual reasoning), other thoughts, and desires, it seems a certain sort of systematicity obtains. If you can have a perceptual state with that content, then you can also have a memory representation with that content, and a supposition, etc. etc. If all these representation types (and tokens) had their content assigned in different ways, this systematicity would be surprising, no? I think this is a pretty strong reason to suppose that their contents are all determined in the same way, even by numerically the same conditions. (Is it an *informational* way, or some other way closely linked with perception? No comment.)

    Re: the economy problem for vehicles - Here’s an inconclusive argument for why the vehicles ought to be (at least partially) numerically the same across perception, memory, intention etc., at least where these representations can sometimes play a cognitive role. When I see a cat, and then later remember a cat, what makes it the case that my mental states purport to concern the same cat? Something must make it the case, and that something must be accessible to the system (since it’s psychologically relevant). In short, the representations in question need some sort of sameness marker (see Millikan, On Clear & Confused Ideas). Same vehicle is an awfully convenient and easy sameness marker.

    Another example: when I make some inferences about gold, the things I learn in making my inferences had better attach to the same representation that I wield when I see some gold later, and then develop some intentions with respect to gold… or else what was the point of making the inferences in the first place? Again, some sort of sameness marker is required across token representations of gold, and same vehicle is the obvious one.

  2. Can I escape such problems by suggesting that “representation”-talk is just a fictional calculus — good for some things, but breaking down when taken too literally? Or does that simply turn the mind into an intractably amorphous mass?

  3. Pete Mandik says:

    Eric,

    I buy a Cartesian line of argument the conclusion of which is something like this: all sorts of things might turn out to be fictions, but at least one thing won’t, and it will be a representation.

  4. Pete Mandik says:

    Dan,

    I now have a 4-comment Dan Ryder backlog to work through. Give me some time and I’ll have interesting responses to your interesting comments.

  5. I agree it’s hard to deny that in some sense I am “representing” when I think to myself (in inner speech, say), “there’s a cow”. But it’s about seven leaps from that to the “Representational Theory of Mind”. Why think that there is some economy of things, “representations” that persist and are manipulated in our thinking, and have a distinct “content” that must remain the same between uses?

    Neurons fire. We have thoughts. There is some pattern to those thoughts. But whether this pattern is underwritten by genuine representations being manipulated according to syntactic, grammatical, semantic, or logical rules – that’s where the temptation arises in me to think in terms of “useful fiction” or sometimes “not-so-useful” or “misleading” fiction.

  6. Tad says:

    Eric-

    I’m sympathetic with a lot of what you say, but what do you mean by ‘thoughts’? Representationalism is not just a theory of how the mind/brain works; it’s a scheme for individuating/identifying thoughts - for saying what thoughts are and how we distinguish them from each other and from non-thoughts. What other way is there of doing this? In particular, when you say “Neurons fire. We have thoughts”, what is it, exactly, that we have?

    Tad.

  7. I don’t have a good theory of “thoughts” — but I guess what I mean is that we have a certain sort of experience that people often label with the word “thought”. I’m not sure exactly what, if anything, all such experiences have in common.

    But I bet you won’t deny that there’s some sense of “thought” in which it seems that you sometimes have thoughts, and that “there’s a cow” is the kind of thing that one might “think” in the relevant sense….

  8. Tad says:

    Eric -

    Sure I’d admit that, but you seem to be falling back on phenomenal criteria for identifying/individuating thoughts. Such criteria seem more problematic to me than seeing them as representations in an internal cognitive economy.

  9. Well, I don’t know. It’s all problematic!

  10. Pete Mandik says:

    Dan,

    I agree with your remarks on content, but not so much on vehicles. Using numerically one and the same vehicle on distinct occasions of representation is an engineering solution that I doubt is often employed. Consider a detector model for sensory representation. At distinct times t1 and t2 when I represent redness by having my red-detector fire, the representational vehicles in question are detector firings, and there is a numerically distinct firing at t1 and at t2.
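
    As a rough illustration of the point (a sketch in Python; all names here are hypothetical, not a claim about real neural machinery): the detector persists, but each occasion of representing is a numerically distinct firing.

        import time

        # Sketch only: illustrative names, not actual neural machinery.

        class Detector:
            def __init__(self, content: str):
                self.content = content  # e.g. "redness"

            def fire(self) -> "Firing":
                # Every call yields a new, numerically distinct vehicle.
                return Firing(self, time.time())

        class Firing:
            def __init__(self, detector: "Detector", when: float):
                self.detector = detector
                self.when = when

        red_detector = Detector("redness")
        firing_t1 = red_detector.fire()
        firing_t2 = red_detector.fire()

        print(firing_t1 is firing_t2)                    # False: distinct vehicles
        print(firing_t1.detector is firing_t2.detector)  # True: one and the same detector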

  11. Pete Mandik says:

    Eric and Tad,

    I’ve enjoyed your exchange, and found myself wanting to bring up points that Tad kept on beating me to. To address one thing Eric brings up early on:

    Eric writes: “Why think that there is some economy of things, “representations” that persist and are manipulated in our thinking, and have a distinct “content” that must remain the same between uses?”

    I’d like to highlight that the last part of the question, the one concerning why we should think that contents remain constant between uses, is the sort of question I meant to raise in the post, especially in the fourth paragraph.

  12. Right, I see that now, on re-reading. But I’m puzzled why you don’t move from challenging the constancy of contents to denying an economy of representations in general. If the “contents” shift depending on use, that seems a pretty fundamental challenge to the idea that there are real representations that persist across uses and are manipulated in thinking — unless maybe you think that representations are not essentially individuated by their contents?

  13. Fodor and Lepore (both jointly and separately) see better than others, I think, the threat of holism and contextualism to the “Representational Theory of Mind”. But I’ll ride the conditional the other direction. “One man’s modus ponens is another’s modus tollens”.

  14. Pete Mandik says:

    Eric,

    The conclusion I would draw from contents shifting on use is that contents are determined by use.

  15. What justification is there for thinking that there are real items that are manipulated in cognition that somehow retain identity across time despite variations in “content” due to different patterns of use? Is there neurological evidence for this? Behavioral evidence?

  16. Pete Mandik says:

    Hi Eric,

    Thanks for pressing me on these issues.

    Does this address the sorts of concerns you are trying to raise?

    When I checked my blog, I saw that there was another comment preceded by the heading “Eric Schwitzgebel says”. I recognized this as belonging to the author of The Splintered Mind. While wondering what motivated your comment, I thought back to a conversation we had once at a party in Arizona last year. I thought too about the first time we met, when I commented on a paper of yours at an SPP meeting several years ago. Reflecting on this sequence of memories, it occurs to me that not only am I thinking of Eric Schwitzgebel now, I’ve thought of him on several occasions in the past. There have been multiple occasions upon which I’ve thought of that person. There have been several occasions on which I’ve had thoughts about Eric Schwitzgebel.

    What am I thereby committed to by committing to the existence of such thoughts?

    Does the above account entail that there was some thing inside of me the whole time which is some constant and unchanging “Eric Schwitzgebel concept” or “Eric Schwitzgebel thought”? I seriously doubt it. But I’m uncomfortable concluding that all this thought talk is just fiction-spinning. I’m convinced by Cartesian cogito-type considerations that I can’t be wrong about whether I have thoughts.

    What is the nature of these thoughts? For reasons I’ve spelled out elsewhere, their nature had better be possible, in principle, to spell out in reductive neurophysiological terms (see “Supervenience and Neuroscience”). And I do think that there is both behavioral and neurological evidence that such a project can be pulled off (see “Evolving Artificial Minds and Brains”).

  17. Well, I guess I’ll just have to read your essay! (I look forward to it.)

  18. Dan Ryder says:

    More from my spring break posting opportunity….

    Re: vehicle identity as a sameness marker - right you are, I should have said things more carefully. You can think of a detector as a vehicle for a stored representation, and detector firings as (related) vehicles for occurrent representations - in fact, the firings can be usefully thought of as tokenings of the stored representation (the concept [giraffe] gets tokened in the complex occurrent representation [giraffe in front of me now]). I was betting that the nervous system makes use of vehicle identity (i.e. identity of neural population) as a sameness marker for stored representations, and therefore only indirectly as a sameness marker for occurrent representations, via the stored representations of which they are tokenings. This suggests that the class of representations that exhibits economy in conditions of content determination (according to the argument you accept) also exhibits economy of (stored representation) vehicle. Or so it seems to me at the moment, at least!
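
    A minimal sketch of that division of labour (Python; hypothetical names, not a model of actual neural populations): the stored representation is the shared vehicle, and occurrent tokenings are distinct events that inherit their sameness marker from it.

        # Sketch only: illustrative names, not a model of neural populations.

        class StoredRepresentation:
            """One persisting vehicle, e.g. the concept [giraffe]."""
            def __init__(self, label: str):
                self.label = label

        class Tokening:
            """A distinct occurrent vehicle that points back to a stored one."""
            def __init__(self, stored: StoredRepresentation, context: str):
                self.stored = stored
                self.context = context

        giraffe = StoredRepresentation("giraffe")
        percept = Tokening(giraffe, "giraffe in front of me now")
        memory = Tokening(giraffe, "the giraffe I saw yesterday")

        print(percept is memory)                # False: occurrent vehicles differ
        print(percept.stored is memory.stored)  # True: sameness secured via the stored vehicle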