Are there any non-question-begging arguments for externalism?


[Photo: “Residence,” originally uploaded by Pete Mandik]

The point of this post is not so much to answer the titular question (though hopefully commentators take a stab at it) as to sketch some remarks about how to avoid giving question-begging arguments for externalism.

Before diving into the remarks, first some term-defining preliminaries.

Let internalism be the view that whatever makes a person’s mental state the type of mental state that it is, it involves no relations other than relations to other mental states of that person. Let externalism be the view that whatever makes a person’s mental state the type of mental state that it is, it involves at least one relation to things other than mental states of that person. Various externalisms, then, include social externalisms (via essential relations to states of other persons), evolutionary externalisms (via essential relations to states of evolutionary ancestors), embodied cognition views (via essential relations to extra-mental bodily states), and what I shall call referential externalisms (via essential relations to extra-mental entities that constitute the referents of mental states). I’m particularly interested in referential externalism here, so let me say a bit more about it.

Referential externalism is primarily a theory of mental states that have content and explicates content in terms of a semantic relation—the reference relation—between a mental state and a (typically) extra-mental entity that the mental state designates. On such a view, for example, my belief that Socrates philosophized is the type of mental state that it is in virtue of there being, among other things, a semantic relation between my mental state and Socrates. For examples of referential externalists I offer philosophers who hold the widespread view that Twin-Earthlings have different beliefs from Earthlings in virtue of environmental chemical differences between XYZ and H2O.

For the rest of this post I shall refer exclusively to referential externalism by my use of “externalism”.

How to avoid giving a question-begging argument for externalism

One good thing to avoid if you want to avoid being a question-begging externalist is characterizing the explanandum—content—in terms of a reference relation. So, for example, if you characterize what needs to be explained as how your beliefs about Socrates are able to bear the reference relation to Socrates, then your very characterization of the explanandum is going to make everything but externalistic explanations seem totally hopeless. In short, you would be begging the question against the internalist from the outset. Similarly, if your characterization of the explanandum is done in terms of representation instead of reference, you may still be in danger of question-begging if your model of representation is a relation between representer and represented.

Consider that the internalist is very likely going to advocate a conceptual role semantics for mental content. On such a view, what makes beliefs about Socrates the beliefs that they are are relations only to other beliefs (beliefs about Athens, beliefs about Platonic dialogues, etc.). Characterizing the explanandum in terms of a reference relation to extra-mental entities begs the question against this kind of internalist.

What needs to be done, then, is to characterize content in a way that is neutral between internalism and externalism. For propositional attitudes, at least, the topic of content can be introduced as (1) that toward which attitudes are taken and (2) that in virtue of which distinct attitudes (e.g. the belief that grass is green, the fear that grass is green) may nonetheless psychologically resemble each other. This characterization leaves it open for externalists to argue that the belief that grass is green involves relations between beliefs and grass, and it leaves it open for internalists to argue that the belief that grass is green involves only relations to other beliefs. What this characterization does not do is make essential reference to extra-mental entities (e.g. grass). The use of “grass” in characterizing the belief as a belief about grass does not necessarily involve bearing a reference relation to grass, or so the internalist is free to maintain.

26 Responses to “Are there any non-question-begging arguments for externalism?”

  1. Tanasije says:

    Hi Pete,

    Congratulations on your marriage!

    A question… On an internalist account, can two people have the same belief, and if they can, in virtue of what will it be the same belief?

  2. Pete Mandik says:

    Tanasije,

    It probably would be better for the internalist to deny literal sameness of belief and instead talk of similarity of beliefs, where belief similarity is going to be determined by the similarities between the structures of the two different sets of beliefs had by two different people.

    See also: http://www.petemandik.com/blog/2006/09/29/analytic-functionalism-and-evolutionary-connectionism/

  3. Tanasije says:

    Thanks for the link, Pete. That was a very interesting post, and discussion in the comments.

    As for a move towards similarity by the internalists, I wonder if it might be a point on which both externalists and internalists will agree.

    The externalist will point out that if the internalist accepts such a view, there is not much sense in an a priori discussion of the externalist/internalist problem, as they are not discussing the same problem at all, but two more or less similar problems. The internalist, probably not finding a priori discussions convincing in the first place, will agree.

  4. Eric Thomson says:

    Where does truth come in? I think the obsession with reference stems from an obsession with accurate portrayal of the world (usually cashed out as truth, but you could also have a more relaxed notion of degree of fit between internal contents and the world). The radical internalist (for whom all content supervenes on internal states) doesn’t seem to have a good idea for how to get this important cognitive virtue.

    For instance, Churchland uses the map as an analogy (the metric structure of the ANN’s state space maps onto the metric structure of the world, just as in a garden-variety map, and it (arguably) isn’t the individual points that have content in either case). But there are serious problems with this (e.g., draw me three points on a piece of paper and they can map onto an infinite number of things: without getting into the causal/informational story about how those three points were produced, such as during the learning period for the ANN, it isn’t possible to specify the content).

    Without a good response to this problem, externalists will remain in charge I presume. If there are any good responses, I’d love to see ‘em (that would take away my main problem with identity theories of mind).
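A minimal sketch of the three-point worry, assuming “mapping onto the world” is cashed out as a pairwise-distance match up to a single overall scale (the point configurations, random “world”, and tolerance below are purely illustrative, not anything from the exchange): a three-point configuration can typically be matched onto a random scatter of points in many ways, while a richer configuration should admit far fewer matches, which is roughly the shape of the reply that follows.

```python
# Illustrative only: count how many ways a small point configuration can be
# matched against a random "world", where a match preserves all pairwise
# distances up to one overall scale factor (within a loose tolerance).
import random
from itertools import permutations

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def count_matches(config, world, tol=0.2):
    pairs = [(i, j) for i in range(len(config)) for j in range(i + 1, len(config))]
    base = [dist(config[i], config[j]) for (i, j) in pairs]
    hits = 0
    for cand in permutations(world, len(config)):
        d = [dist(cand[i], cand[j]) for (i, j) in pairs]
        scale = d[0] / base[0]  # scale factor fixed by the first pair
        if all(abs(x - scale * b) <= tol * scale * b for x, b in zip(d, base)):
            hits += 1
    return hits

random.seed(0)
world = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(9)]
three_points = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
seven_points = three_points + [(3.0, 1.0), (2.0, 4.0), (5.0, 2.0), (4.0, 5.0)]

print("matches for the 3-point configuration:", count_matches(three_points, world))
print("matches for the 7-point configuration:", count_matches(seven_points, world))
```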

  5. Pete Mandik says:

    Eric,

    I think that you are probably right that truth motivates a lot of externalism.

    It is worth keeping in mind that it is always an option to deny that truth is a relation to external entities (the so-called correspondence relation). I’m not going to pursue that here.

    The other option is to spell out how a correspondence view of truth is compatible with internalism about content. Something like the Churchland view strikes me as pretty promising, actually. Keep in mind how complex and non-symmetrical the relevant maps are going to be. Unlike in your three-point example, there are going to be many more constraints on the mappings.

    If it’s determinate specifications you are worried about, it’s worth keeping in mind that causal/informational stories haven’t been without their own problems. Regarding specificity, there are all sorts of problems concerning where in the causal chain to locate the content (e.g. proximal or distal causes). Also, there are serious problems about how to represent things that couldn’t enter into causal chains with our minds because, e.g., they are outside of our light-cone, or they don’t exist, or are too abstract.

  6. Eric Thomson says:

    Pete: yes, there will sometimes be many (many!) more constraints than in the three-point case. I don’t like this response (which Churchland also uses) for a few reasons.

    1. It seems less of an answer and more like “This is unlikely to happen in practice so we don’t need to worry about it.” Not very satisfying.

    2. I’d like to leave open the possibility that there are very simple nervous systems that have content but not a super-high dimensional state space.

    3. In practice, neuroscientists don’t study sensory coding by building two state spaces and comparing them. To study coding we study how the neuronal response and the stimulus are related in causal and informational terms. I’m not sure whether this is a good objection or merely a bias toward scientific practice.

    You wrote: “If it’s determinate specifications you are worried about, it’s worth keeping in mind that causal/informational stories haven’t been without their own problems. Regarding specificity, there are all sorts of problems concerning where in the causal chain to locate the content (e.g. proximal or distal causes). Also, there are serious problems about how to represent things that couldn’t enter into causal chains with our minds because, e.g., they are outside of our light-cone, or they don’t exist, or are too abstract.”

    I think these are all difficult problems, but I think Dretske has good first-pass answers. But it will take a while to write that up: perhaps I’ll write it up and post it at philosophyofbrains…

  7. Pete Mandik says:

    Eric,

    I deal with some of these issues at length elsewhere. Re: 2 see my “Cognitive Cellular Automata” and re: 3 see my “Evolving Artificial Minds and Brains”.

    Here are some super-brief additional thoughts:

    re: 1, your specificity intuitions are based on the complicated human case. Their failure to apply to the three-point case might not be worth much.

    re: 2, what reason is there to believe that the contents for such simple systems are going to be highly specific?

    re: 3, the idea I discuss in “Evolving…” is that in sensory coding cases, the causal relations don’t determine content, but instead determine that the representations in question are sensory as opposed to, say, mnemonic or volitional. This leaves open that the codes themselves have their content in virtue of their structure.

    Anyway, the above is too pathetically brief to be convincing. Hopefully the longer discussions I refer to do a better job.

    I look forward to your further remarks on this topic, here or at Brains.

  8. Tad says:

    Would the following specification of content beg the question in favor of the externalist?

    The content of my belief that snow is white is what both I and Helen Keller express using the words ‘Snow is white’.

    This seems intuitively right and non-question-begging, though I’m open to the possibility that it is on some deeper level.

    In any case, I think this constraint motivates, either overtly or covertly, many arguments about mental content.

    If it’s non-question-begging, it seems to lead rather straightforwardly to an argument for externalism, since Helen Keller and I are unlikely to be very similar neurocomputationally.

    You might try to make sense of communication in terms of similarity of thought, but I find it rather implausible. When I talk about Washington DC, and you understand what I say, and respond about Washington DC, surely we are talking about the *same* thing (city)! Also, it’s hard to make sense of scientific progress if we don’t take partisans of competing paradigms to be talking about the same things, and it’s hard to see how to do that without externalism.

    One way of splitting the difference, perhaps, is to claim that there are two kinds of content. The content of public language utterances is externalist, and there is a derivative thought content, understood as that which is expressed in public utterances. Such thought content plays more of a normative role and less of a causal role in behavior. You can fail to act in ways we’d expect you to act given the thoughts you express using public language. Thought-content in this sense is very thin: just whatever we have to assume is expressed in public language in order for public language to contribute to certain communicative projects that depend on the assumption that we can talk about the same things. (Dennett’s distinction between belief and opinion and Frankish’s distinction between Mind and Supermind are relevant here).

    The second kind of content - call it ‘psychological content’ - is whatever content must be ascribed to causally efficacious internal states to explain behavior. This would be internalist content. The relationship between these two kinds of content would be the topic of a very interesting, promising, and fecund philosophical/psychological research project… ;-)

  9. Pete Mandik says:

    Hi Tad,

    For the purposes of introducing content non-question-beggingly, I don’t mind doing it in terms of interpersonal sameness of what’s expressed by utterances. However, I think that the more I keep in mind how that needs to be read to keep from begging the question in favor of externalism, the less I think the arguments you gesture toward are, as you say, straightforward. Much more would need to be said to see how externalism would follow non-question-beggingly.

    A question-begging reading of the thing about you and Helen Keller would be something like “Tad and Helen express the same thing by ‘snow is white’ only if there is a thing x external to Tad that Tad expresses and a thing y external to Helen that Helen expresses and x is numerically identical to y”. But there are ways of reading the thing about you and Helen Keller expressing “the same thing” that don’t involve there being some third thing external to you and Helen that you are both related to. An internalist can grant that it is just as literally true that you and Helen express the same thing as that you and Helen have the same height, without either the expression context or the height context necessitating that there is some third, external entity that you both bear special relations to.

    Why think, then, that the thing about you and Helen Keller favors externalism over internalism? As the analogy to “same height as” shows, one can affirm that the thing about you and Helen Keller is literally true in virtue of similarities between you and Helen. While I personally don’t dismiss the possibility that these similarities are neurocomputational, the internalist need not commit to neurocomputationalism. As you yourself mention, these could be similarities in thought. But what’s implausible about that? It is a difficult project to spell out, to be sure. But so is the project of spelling out what the external similarities between you and HK are such that you both think about DC. Also, keep in mind the difficult project of spelling out how you and HK can both have beliefs about unicorns. So a quick assessment of relative plausibility isn’t going to cut much ice.

    Regarding the scientific progress argument, why is externalism about content necessary? Why wouldn’t a story about truth suffice?

    Sorry if this is all over the place. To focus this into a single question: How did you intend your way of introducing content to be read?

  10. Tad says:

    I just intended it as a commonsense constraint (one of several) on propositional thought content, where ‘propositional’ refers specifically to the kind of content expressed by uttered sentences. I guess the Helen Keller example is supposed to gesture at Schiffer’s arguments against functionalism (and hence, I think he thinks, internalism) in Remnants of Meaning.

    Why do I think this constraint might be more easily satisfied by an externalist theory? I guess I’m thinking of common internalist proposals, and I can’t think of one that supports the kind of transpersonal identity of content I take to be necessary for communication. Can you think of a candidate internal fact that can support it?

    I like your comparison to height. I take it you’re drawing on Churchland (and others’) comparison of propositional attitude ascription to measurement, which relates intrinsic properties of different objects to the same abstract entity - a number. But it still leaves open a question: what internal sets of facts can be mapped into propositions, in the way, say, height is into numbers, such that two individuals’ internal sets of facts are both mapped into the same proposition? Until I see a specific proposal, it’s hard for me to assess this.

    My own prejudice is to locate the set of facts in dispositions to semantic deference. To know whether we token thoughts with the same contents we need to know whether we assent to the same sentences in the same language, and this is, ultimately, a matter of who or what we defer to regarding the use of the words we utter. To be sure, there is an internal component to this, but nothing specific enough to yield determinate thought ascriptions. Maybe internalist facts can help fix which language we’re speaking, but what sentences of that language, and hence the thoughts we express with them, *mean* depends on socio-linguistic facts, in my view.

    As for scientific progress, I don’t see how you could have truth without content. But I don’t want to get into abstract philosophical disputes, since I only intended to express a simple-minded worry: how can Einstein and Newton be in disagreement, and how can Einstein be right and Newton wrong, if they’re talking about different things (e.g., when they speak of mass)? But internalist theories of content are usually conceptual role theories, and the conceptual role of Newtonian ‘mass’ is different from the conceptual role of Einsteinian ‘mass’. So internalism seems to have the consequence that they are talking about different things. So where is the disagreement?

    As for the unicorn problem - that might be a problem for referential externalists (which I believe were your main target), but I think it’s plausible that mythological terms have their uses fixed by socio-linguistic facts as well. Another thought - what exactly is the problem with contents of non-referring terms being compositional constructs of the external contents of referring terms? E.g., unicorn = horse with a horn.

  11. Pete Mandik says:

    Tad,

    Regarding promising internalist proposals, I’d count Churchland’s. For a previous discussion of mine on this, see the link I posted above in my response to Tanasije.

    Regarding the compositional solution to unicorns, one big problem is the massive concession to conceptual role semantics that it constitutes. The other is the problem I spell out here (in the last paragraph):

    http://www.petemandik.com/blog/2007/02/09/philosophy-porn-and-other-things-that-do-not-exist/

  12. Eric Thomson says:

    Pete: I just posted on one of your problems for info-semantics over at Philosophy of Brains.

  13. Pete Mandik says:

    Thanks for the heads-up.

  14. Dan Ryder says:

    Hi Pete,

    Following up on the discussion with Eric: I’m a fan of the idea that structural similarity/isomorphism/homomorphism plays a role in determining content too. But I think teleology is essential - it has to be a normative similarity. You seem to think not, that the structural similarity a la Churchland can work by itself, as long as it is complex enough to be sufficiently “constraining” on the mapping. My question: given that our mental representations/concepts can be equivocal or disjunctive (think: the concept of jade), and thus correspond to utterly non-natural, basically gerrymandered categories, how do these “constraints” get off the ground? The possibility for gerrymandered categories as referents means that the isomorphism is completely *un*constrained, no?

    Great blog, BTW!

    P.S. My SINBAD neurosemantics can handle unicorns no prob. :) I mention it since that seems to be a favourite puzzle of yours…

  15. Pete Mandik says:

    Hi Dan,

    It is good to hear from you. And thanks for the blog love. I have some confessions, though:

    1. I haven’t read SINBAD. I will though. If, in the meantime, you whip out a thumbnail sketch of your solution to unicorns and plop it in this comment-thread, that would be pretty cool.

    2. I don’t understand the problem you mean to pose re gerrymandering. How is it that you are thinking of isomorphism such that it would be unproblematic for un-gerrymandered categories?

    I should highlight here that for the purposes of the current post and discussion thread the part of Churchlandish state-space semantics that I am endorsing is the part that makes it a kind of internalism a la conceptual-role semantics. The isomorphism stuff strikes me as conceding too much to the referential externalist. I question the r-externalist’s view that content is a head-world mapping. Further, as I suggested briefly to Eric and Tad, I think that this view of content can be denied while affirming that truth is a head-world mapping. I’m not taking a stand on this particular view of truth, but I get the impression that it is being assumed without sufficient argument that a mapping view of truth entails a mapping view of content.

  16. Dan Ryder says:

    (I’m dividing the three topics into separate posts, for anyone who is only interested in one of them.)

    Topic 1: The problem re: gerrymandering. Well, it’s not that I think representation-as-pure-isomorphism is unproblematic if our concepts were only ever of un-gerrymandered categories. It’s just that the problem is much worse with gerrymandered categories. If brain structure is the domain, and world structure is the range, there are a lot more candidate mappings from domain to range if the range includes gerrymandered categories in addition to natural ones. If an arbitrary degree of gerrymandering is permitted, then adding complexity to the structure in the domain helps exactly not at all.
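A minimal sketch of this gerrymandering point, with purely illustrative names (brain_states, brain_relation, world): if any set of pairs of worldly items is allowed to serve as the relation being mirrored, then every injective mapping counts as structure-preserving, because the target relation can simply be defined as the image of the domain relation under that mapping, so adding complexity to the domain adds no constraint.

```python
# Illustrative only: once gerrymandered target "categories" are allowed, every
# injection from domain items to worldly items preserves the domain's relational
# structure, so the isomorphism requirement stops constraining reference.
from itertools import permutations

brain_states = ["b1", "b2", "b3", "b4"]
brain_relation = {("b1", "b2"), ("b2", "b3"), ("b3", "b4"), ("b1", "b4")}  # as rich as you like

world = ["dog", "tree", "quark", "giraffe-guts", "igneous-rock", "Socrates"]

total = 0
for image in permutations(world, len(brain_states)):
    f = dict(zip(brain_states, image))
    # Gerrymandered worldly relation: exactly the pairs this mapping happens to hit.
    gerrymandered = {(f[x], f[y]) for (x, y) in brain_relation}
    # By construction, f carries brain_relation onto the gerrymandered relation,
    # so the "isomorphism" constraint is satisfied no matter which injection f is.
    assert all((f[x], f[y]) in gerrymandered for (x, y) in brain_relation)
    total += 1

print(f"all {total} injections are structure-preserving onto some gerrymandered relation")
```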

  17. Dan Ryder says:

    Topic 2: On whether truth as a head-world mapping entails a mapping view of content. Well, content and truth are clearly closely related. Surely we can at least say this: an account of content had better explain why a particular mental state has the truth-(or satisfaction-)conditions that it does, even if something else is more fundamental to content (and even if the truth-or-satisfaction-conditions are only contingently linked to the contentful state). Call this the challenge of explaining truth conditions. It’s here where most internal conceptual role theories seem to fall short; they don’t even seem to try to meet the challenge. (They only talk about content identity or similarity across individuals.) Introducing isomorphism into the picture, as Churchland does, at least tries to meet the challenge of explaining truth conditions. It also has pretty strong internalist credentials, since the relation to the external world that it makes use of is an internal relation, namely similarity, rather than an external relation like spatial or causal relations. (That is, it depends only on the intrinsic properties of the relata.)

    How about adding this to your characterization of content: 3) content is that which has truth-(or-satisfaction)-conditions. This doesn’t deny that truth-(or-satisfaction) conditions could be linked to a particular mental state on the basis of purely intrinsic features of that mental state. (It doesn’t even say that particular truth-(or-satisfaction)-conditions are essential to a particular content!) So I don’t think it begs any questions against internalism. It just makes it clear that the challenge of explaining truth conditions is a challenge to be met by any theory of content.

  18. Dan Ryder says:

    Topic 3: OK, you asked for it! Here’s a thumbnail sketch of my take on empty concepts (unicorn, Sherlock Holmes, kryptonite, God…). I warn you in advance that I have very large thumbnails.

    I argue that the cerebral cortex is a SINBAD network. “SINBAD” stands for “set of interacting backpropagating dendrites”, and refers to the mechanism by which the network’s activation dispositions come to be isomorphic to regularities in the environment. The key aspect of SINBAD networks is that individual cells tune to what I have called “sources of correlation” and now call “sources of mutual information” (SOMIs): things that are characterized by clusters of properties that tend to co-occur for an underlying reason. (Boyd’s homeostatic property clusters count as SOMIs, for example, but I would extend the notion to purely historical kinds - where there’s no homeostasis per se - and individuals. So SOMIs are more like Millikan’s “substances”.) As SINBAD networks get exposed to an environment via sensory receptors, they gradually come to mirror its deep structure, becoming isomorphic to regularities involving SOMIs in that environment. (The relations among SOMIs that the network can come to mirror can be multivariate and nonlinear - this ain’t no simple Hebbian associative network.) I also argue that the cortex is supposed to mirror regularities involving SOMIs (the teleological element). That’s what it was selected for. That is, the cortex comes to model (= be normatively isomorphic to) the deep structure of the environment it is exposed to. Think of it as mental wax, that represents what comes to be “impressed” upon it.

    A wax impression represents the thing that caused it to acquire its structure (or rather the aspects of its structure that are supposed to resemble something - its density structure doesn’t represent anything, for instance, only its spatial structure does). Similarly, to figure out what a particular SINBAD cell represents, you have to identify the SOMI that caused it to adopt its place in the big isomorphism. The details of the SINBAD mechanism, and the history of the cell’s interaction with other cells and (indirectly) the environment, allow you to figure out what SOMI this is. (Often it’s more than one, and the cell is equivocal or ambiguous - thus the typical need for a population code.)

    Now for empty representations: sometimes the system can be tricked into “tuning” to something that isn’t there. One set of clear examples involve clustering illusions. In cases like these (possibly the “hot hand” in basketball), there’s a clustering of properties, but there’s no underlying explanation for the clustering - it occurs purely by chance. Nevertheless, a cell will pick up on the clustered properties and “tune” to their nonexistent source. The result is an empty representation. Another way for empty representations to occur is via prior misrepresentation. Consider the scintillating grid illusion, for instance. If you looked at these enough, your cognitive system might introduce a concept for “that kind of spot” - except of course there isn’t any such thing! What’s happened is that early visual processing in the cortex has represented a bunch of properties that cluster, except a subset of those representations are consistently misrepresentations. This leads some SINBAD cells to “tune” to the nonexistent type of object. We can still identify this representation across individuals in terms of its content, though, because the misleading clustering has a similar cause in each case - not a type of object in the environment (though that’s part of it, i.e. the crossed lines etc.), but also the internal structure of the human visual system. I suggest that the concept of Sherlock Holmes is just the same, except that it is a product of cognitive illusion rather than visual illusion. Words on the page (or phosphors on a screen) set up a systematic misrepresentation of the environment in your head, causing you to acquire spurious concepts due to illusory clusterings. (The concept acquisition machinery goes on its merry way, whether or not you believe the fiction is real.) Again, we can still identify this concept across individuals by its content, since the misrepresented clustering has the same explanation for each individual, namely the fiction, understood as a text (or whatever), plus humans’ susceptibility to this particular type of cognitive illusion. The concept of a unicorn would get the same treatment, except the relevant fiction is a more diffuse historical kind than in the case of Sherlock Holmes.

    Whether it’s a SINBAD representation of cats or unicorns (or anything else), there is no particular conceptual role that a representation must play in order to realize the concept in question. Your concepts of cats and unicorns may play different conceptual roles from mine, if our SINBAD cells have picked up on different aspects of the rich clustering of features that are related to real cats and the unicorn fiction. This means that both the concept of cats and the concept of unicorns get the same, non-descriptionist treatment, which maintains the theoretical unity between the two types of concepts that we were after in the first place.

    I don’t know if that was long-winded or too sketchy. (Probably both!) If it doesn’t make sense, my Mind & Language paper might help. (But ignore the brief and misleading remarks I make about empty concepts and “fictional kinds” there!)

  19. Pete Mandik says:

    Dan, Re: Topic 1.

    This helps me understand what you have in mind. I remain unconvinced, though, that there is a problem. I can imagine lots of situations in which increased complexity of structure will actually reduce the number of possible mappings, so I don’t see how complexity per se is the problem (if any) that gerrymandering introduces.

    Here’s another way of spelling out how gerrymandering doesn’t strike me as introducing a special problem. I assume that an example of the gerrymandered/nongerrymandered distinction would be warm-blooded animal vs. warm-blooded-animal-or-igneous-rock. I’m not seeing why mapping representations onto representeds would be any more difficult in the one case than the other.

  20. Pete Mandik says:

    Dan, Re: Topic 2.

    I agree that truth/satisfaction conditions (TSCs) and content are related. I think that it wouldn’t be question-begging re internalism/externalism to characterize content in terms of truth in the way you suggest. However, I’m not particularly happy with the characterization offered, for a couple of reasons.

    First, I think contents and attitudes are to be distinguished, but TSCs attach to content-attitude pairings, e.g., it is my belief that grass is green that is made true by grass being green, and it is my desire that the ice cream have sprinkles that is satisfied by the ice cream having sprinkles.

    Second, unless certain combinatorial semantics are going to be ruled out by stipulation, we need to leave open the possibility that, e.g., the representation of Socrates in flight is achieved by combining separately contentful representations of Socrates and flight. But these latter representations will no more have TSCs than the name “Socrates” has a truth value.

  21. Pete Mandik says:

    Dan, Re: Topic 3, I’m left wondering whether you are what I call in the post “a referential externalist”.

  22. Dan Ryder says:

    Spring break, and some time to post… (sorry, I don’t operate very well in blogosphere time, feel free to ignore all this as the blog has obviously moved long past it!)

    Topic 1 - Here’s an analogy: take a model of a Spitfire, a Spitfire, and a giraffe, and suppose we’re interested in spatial isomorphism. Given natural categories, the model of a Spitfire is more nearly isomorphic to the Spitfire, not the giraffe. But given gerrymandered spatial categories, there’s a perfectly good mapping from the model of the Spitfire to various points internal to the giraffe - a Spitfire-shaped collection of giraffe guts, as it were. (That’s one type of gerrymandering, involving the referent of the representation. Another type is gerrymandering of the isomorphic relations themselves, the relations that are “preserved” across the mapping. If any degree of spatial distortion is permitted, there will be a perfectly good mapping from the model of the Spitfire to the *surface* of the giraffe too.) Analogous problems will infect mappings of inferential roles onto regularity structures (or whatever it is inferential roles are supposed to map onto in the world), if non-natural categories and/or mappings are permitted.

    Topic 2 - On your second point - sure! Then how about adding 3) content is that which *explains* TSCs (where applicable). That takes care of non-propositional representations. [On your first point - I disagree that TSCs depend on attitude context - that's the point of talking about "truth or satisfaction". My belief that grass is green and my desire that grass be green have the same propositional contents and the same TSC. No?]

    Topic 3 - Well, I do explicate content in terms of relations to extra-mental entities, and in the case of non-propositional representations, those extra-mental entities normally include the referents of those representations. But not in the case of empty representations. They’d better not, since empty representations don’t have referents! I think in the ordinary cases, someone would unhesitatingly classify me as a referential externalist, and the account of empty representations basically falls out of how I treat the ordinary cases. So you decide: am I a referential externalist? :)

  23. Pete Mandik says:

    Hi Dan,
    Re: topic 1, that helps clarify where you were coming from quite a bit. Thanks.

    Re: topic 2, I can see your points and don’t really have much further quibble on this.

    Re: topic 3, there’s a problem to which it seems (though I’m not sure) you don’t have a satisfactory answer: What is representation such that it can be the same thing in true, false, and empty contexts? I take it that there’s no difference in what representation is in the true representation of dogs as having hearts and the false representation of dogs as having beaks. I take it as well that there’s no difference in what representation is in the empty representation of unicorns as having hearts.

    As I understand referential externalism, it would have to hold that there is no representation in the so-called empty context, since there is no existing thing that is being represented as having hearts.

    If you want to hold that the empty context is a genuine case of representation in spite of there being no referent, then it looks like you’ll be forced to give a very different account of representation in the empty case than in the true and false cases. In the true and false cases, representation is a relation. And in these particular cases it is a relation to dogs. In the empty case representation is a relation to nothing at all.

  24. Dan Ryder says:

    Topic 3: Yeah, good - that’s where the teleology comes in. Representation is having the function or purpose of being related to external affairs in a particular way, or, more broadly, being supposed to be related to external affairs in a particular way. In the case of truth, the representation is so related to external affairs. Falsity and emptiness are two different ways of a representation failing to be so related to external affairs. But the “supposed to” applies in each case, so they’re all cases of representation. (This makes it sound like I’m not a referential externalist, I suppose? But keep in mind that I think that our concepts’ referents/contents are determined by their previous causal interactions with the environment. That said, I’ll take whatever label you give me.)

    Here’s an analogy, the Automatic Scale Modeller (ASM): It’s designed to take an object through its input door, make a mould of the object, shrink the mould, fill it with molten plastic, and then spew a model out. The model is a model of the input object (whatever it comes across, maybe this thing zooms around by itself on wheels). The models this thing spits out are supposed to be spatially isomorphic to the input objects that cause them.

    Normally things go well, and the ASM produces a nice accurate model (true belief). But sometimes things go wrong; for instance, there’s a little lump on its model of my telephone - the model inaccurately represents that my telephone has a lump on it (false belief). And sometimes, Trickster Pete really messes it up, as follows: he triggers its object-detecting laser, which makes the machine start on its model-making procedures. While the machine is chugging along, Pete shakes it vigorously, maybe blows some air through its output door, etc., and the machine spits out a misshapen blob. This blob isn’t a model of anything (empty representation).

    Now, suppose that Pete publishes his exact procedure for messing up the ASM on the internet, and lots of people begin to follow it with their own ASMs, producing exactly similar misshapen blobs. People start calling it “The Pete Blob”. We can identify this empty representation across individuals by what causes it, and each Pete Blob has the same underlying explanation for why it looks like the other ones (having to do with Pete’s instructions and the internal design of the ASM). Unlike in the normal case, however, this explanation does not involve the referent of the model, because there isn’t one. That’s like fiction, as a replicated cognitive illusion. Now suppose that Pete is forgotten, and the origins of the Pete Blobs get lost in the mists of time; small variations are introduced, and replicated within different cultures in somewhat different ways. That’s like unicorns.

    Our brains are model-making machines that aim at isomorphism not to spatial structure, but to regularity structure. And our brains can go wrong in just the same sorts of ways that the ASM can go wrong. (Plus some others!)

  25. Pete Mandik says:

    Dan,

    Sorry for the looooong delay, but I’m happy to take this at a snail’s pace if you are.

    Your remarks do help clarify things quite a bit, and I like the illustration of the model maker. I worry, though, that you might still not have solved the problem of representing the non-existent. Take, for instance, how you describe the blob that the ASM secretes after Trickster Pete has played his tricks: “This blob isn’t a model of anything” (emphasis yours). It sounds to me like you are taking “model of” as a relation here. If something is a model only if it is true in some sense or other that it is a model of something, then the blob isn’t a model.

    Is it supposed to be part of the story that at some point the Pete Blobs are models even though there is nothing that they do model? If they just aren’t models, and so-called unicorn representations are supposed to be analogous to the Pete Blobs, then are so-called unicorn representations not really representations?

  26. Dan Ryder says:

    No problem with the pace… my natural one, I think!

    There’s a sense of “model” according to which models have to be models of something, and a sense of “representation” according to which representations have to be representations of something. But if we rely just on that, there’s a pretty quick argument to the conclusion that no theory of representation can account for the representation of non-existent objects! (All you need to assume is a non-crazy metaphysics.)

    The right thing to say, I think, is that some models and representations *purport* to represent things that don’t exist, but they’re representations nonetheless. However, I don’t really care what one decides is the right thing to say. The main thing is to account for the phenomena, which I think the above story does. The main phenomenon is: there are some shareable concepts that lack referents, and which are not complex (and so cannot be explained in terms of concepts that do have referents).