While there are many theories of what representational content is and how representations come to have it, it is not entirely clear that these theories are compatible with basic assumptions about the diverse roles that representations play in the internal causal economies of organisms. Let us call the problem of showing the compatibility of a theory of content with these pre-theoretic assumptions about the roles of representations within a causal economy "the economy problem."
The economy problem is due in large part to the emphasis that perception has received in theories of content. Consider the kind of stock example typical of this literature. Smith has a mental representation heretofore referred to as "/cow/." As the story goes, /cow/ means cow, that is, Smith has /cow/ in his head and /cow/ represents a cow, or cow-ness, or cows in general. On the standard story, Smith will come to have a tokening of the representation type /cow/ when Smith is in perceptual causal contact with a cow and comes to believe that there is a cow, presumably by having, in his head, /there/ + /is/ + /a/ + /cow/ or some other concatenation of /cow/ with various other mental representations. The main question addressed in this literature is how /cow/, a physical sort of thing in Smith's head, comes to represent a cow, a physical sort of thing outside of Smith's head.
This focus on the perceptual case has made causal informational proposals seem rather attractive to quite a few people, so let us focus on the following sort of suggestion, namely, that /cow/ represents cow because in typical scenarios, or ideal scenarios, or in the relevant evolutionary scenarios, /cow/s are caused by cows, that is, /cow/s carry information about cows. Thus, tokenings of /cow/s in the heads of Smith and his relatives are part of the operation of a cow-detector. A widespread presumption of this kind of view, and not necessarily a bad one, is that the /cow/s you find in the perceptual case are the same things that will be deployed in the memory, planning, and counterfactual reasoning cases too. The presumption, inherited from a long empiricist tradition, is that whatever happens in perception to wed representations to their contents can simply be passed along and retained for use in non-perceptual mental tasks. In its most literal form, this is the view that whatever happens to items in the perception "box" is sufficient to mark those items (picture them as punch cards, if you like) as bearing representational contents. Those items can thus be passed to other boxes in the cognitive economy, and retain their marks of representational content even after they go on to play quite different causal roles.
This is an interesting suggestion, but certainly open to question. That is, what might seem like a good idea about the nature of representations in connection with perception may not generalize to all the other sorts of things mental representations are supposed to do. Presumably, /cow/s, that is, mental representations of cows, have a lot more work to do than take part in perceptions. Consider that /cow/s are used to remember cows, to make plans concerning future encounters with cows, and to reason about counterfactual conditions concerning cows (e.g., what if a cow burst into this room right now?). Perhaps, then, the sorts of conditions that bestow representational contents onto perceptual states are very different from the conditions on representation in memory, which are yet different from the conditions for representation in planning, counterfactual reasoning, and so on.
A second concern, not unrelated to the first, is how you tell what and where the /cow/s are in the first place. Focusing on the case of perceptual belief brings with it certain natural suggestions: point Smith at some cows and look for the brain bits that seem to "light up" the most. Much talk of representation in neuroscience is accompanied by precisely this sort of methodology. But are the bits that light up during the retrieval of memories of cows or counterfactual reasoning about cows the same bits that light up in perceptions of cows? And more to the point, how will various theories of representational content cope with the different possible answers to this question?
The economy problem might best be seen as decomposing into a pair of problems, the first concerning a question of representational content and the second concerning a question of representational vehicles. The economy problem for content is the question of whether the conditions that establish representational content for perceptual representations are the (qualitatively or numerically) same conditions that establish the representational contents of memories and intentions or whether distinct conditions are necessary. The economy problem for vehicles is the question of whether the vehicles of perceptual representations will be the (qualitatively or numerically) same vehicles as in memories and intentions or whether distinct vehicles are necessary.