Archive for the ‘Representational Content’ Category

Swamp Mary’s Revenge: Deviant Phenomenal Knowledge and Physicalism

Sunday, December 7th, 2008

I’m pretty happy to be able to announce that I’ve got a paper coming out in Philosophical Studies. It’s “Swamp Mary’s Revenge: Deviant Phenomenal Knowledge and Physicalism” (the linked file is a draft; the original publication will be available at www.springerlink.com).

This paper grew out of presentations I made this year at Toward a Science of Consciousness in Tucson and at the University of Cincinnati “Churchlandpalooza”. These talks grew out of discussions from this here blog. See especially Wanted: An Actual Argument for the Knowledge Intuition and Knowledge Intuition Fight Club.

Anyway, have some abstract.

Abstract: Deviant phenomenal knowledge is knowing what it’s like to have experiences of, e.g., red without actually having had experiences of red. Such a knower is a deviant. Some physicalists have argued and some anti-physicalists have denied that the possibility of deviants undermines both anti-physicalism and the Knowledge Argument. The current paper presents new arguments defending the deviant-based attacks on anti-physicalism. Central to my arguments are considerations concerning the psychosemantic underpinnings of deviant phenomenal knowledge. I argue that only physicalists are in a position to account for the conditions in virtue of which states of deviants constitute representations of phenomenal facts.

And here’s why Google Image Search is the best thing in the world (from the first page of returns on the search string “swamp mary”):

There's Swampthing About Mary

There’s Swampthing About Mary from Worth1000.com [link]

Scientists solve ‘gavagai’ problem. Not.

Monday, April 21st, 2008

From NewScientistSpace: “‘Babelfish’ to translate alien tongues could be built”

Such a “babelfish”, which gets its name from the translating fish in Douglas Adams’s book The Hitchhiker’s Guide to the Galaxy, would require a much more advanced understanding of language than we currently have. But a first step would be recognising that all languages must have a universal structure, according to Terrence Deacon of the University of California, Berkeley, US.

[...] Deacon argues that all languages arise from the common goal of describing the physical world. That limits the way a language could be constructed, he concludes.

[...]
Deacon argues that no matter how abstract a symbol becomes, it is still somehow grounded in physical reality, and that limits the number of relationships it can have with other symbol words. In turn, this defines the grammatical structure that emerges from stringing words together.

If that is true, then in the distant future it might be possible to invent a gadget that uses complex software to decode alien languages on the spot, Deacon said. He presented his ideas on Thursday 17 April at the 2008 Astrobiology Science Conference in Santa Clara, California, US.

Testing the theory might be tough because we would have to make contact with aliens advanced enough to engage in abstract thinking and the use of linguistic symbols.

The lack of aliens does indeed make that a tough nut to crack. Also problematic is the lack of a physical “grounding” relation that would serve to distinguish between reference to rabbits, undetached rabbit parts, and the cosmic complement of a rabbit. Good luck, exolinguists!


Fig. 1. Stick this in your ear hole.

My recollection of Douglas Adams’s description of the Babelfish was that it fed off of the brain-waves of the speaker and secreted telepathic translations into the brain of the listener. Regarding the ‘gavagai’ problem, this is just to kick the problem upstairs: specifying determinate contents for alien brain states is not obviously easier than specifying determinate contents for their utterances.

However, perhaps one can appeal to a strategy outlined recently by Paul Churchland (Churchland, P. (2001). Neurosemantics: On the Mapping of Minds and the Portrayal of Worlds. The Emergence of Mind. K. E. White. Milan, Fondazione Carlo Elba: 117-47). The gist of Churchland’s suggestion is that the neural activation spaces of distinct brains may be uniquely mapped to one another in spite of large differences between the brains’ fine-grained structure. This is alleged to provide an objective basis for measuring similarities of content in the respective neural representations.
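To give the flavor of how such a comparison might be scored, here is a minimal sketch in Python. It is my own toy illustration, closer in spirit to representational-similarity methods than to anything in Churchland’s paper, and every name and number in it is made up: two simulated “brains” encode the same stimuli into activation spaces of different sizes, and their similarity of content is estimated by correlating their internal patterns of pairwise distances rather than by matching units one-to-one.

import numpy as np

rng = np.random.default_rng(0)

n_stimuli = 20
stimuli = rng.normal(size=(n_stimuli, 5))        # shared distal stimuli, 5 features each

# Two "brains" with different fine-grained structure: different numbers of
# units and different (random) wiring from stimulus features to units.
brain_a = stimuli @ rng.normal(size=(5, 40))     # 40-unit activation space
brain_b = stimuli @ rng.normal(size=(5, 300))    # 300-unit activation space

def distance_matrix(acts):
    """Pairwise Euclidean distances between activation vectors."""
    diffs = acts[:, None, :] - acts[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1))

def content_similarity(acts_a, acts_b):
    """Correlate the two brains' distance patterns over the same stimuli."""
    iu = np.triu_indices(acts_a.shape[0], k=1)
    return np.corrcoef(distance_matrix(acts_a)[iu],
                       distance_matrix(acts_b)[iu])[0, 1]

print(content_similarity(brain_a, brain_b))      # typically high despite very different unit counts

The point of the toy example is just that two activation spaces with very different fine-grained structure (here, 40 versus 300 units) can share a similarity structure that admits of an objective score; it says nothing about the harder question of what fixes the contents in the first place.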

Even if this Churchlandish proposal is correct, huge hurdles remain in harnessing it in the service of a Babelfish-esque technology. Scanning an alien brain and then adjusting my own to resemble it, thereby tokening representations with similar contents, may suffice for me to think like an alien, but it wouldn’t suffice for me to have thereby translated the alien’s thoughts into my own. Consider: if someone zapped a monolingual English speaker with a ray that turned them into a monolingual Chinese speaker, the zapped speaker would be no closer than before to understanding how to translate Chinese into English.


Fig. 2. By the way, my Babelfish tells me that the cover of his book says “To Serve Man“. Nice!

Consciousness Without Subjectivity

Friday, April 18th, 2008

Consciousness Without Subjectivity, the PowerPoint from my Toward a Science of Consciousness 2008 talk, appears in my updated talks section. This represents the 20-25 minute version of the talk. The version I’ll be presenting at Churchlandpalooza in May is scheduled for a two-hour slot. A draft of the paper should materialize from the ether sometime June-ish.

Also: There’s Swampthing about Mary.

Also also: Dave Chalmers has his pics up here and here.

What part of “self-explanatory” don’t you understand?

Monday, April 7th, 2008

If someone felt compelled to provide illustrations of so-called “self-illustrating phenomena”, would that be a tacit admission that their labeling anything “self-illustrating” was self-defeating? Anyway, the following link is to my new favorite PowerPoint presentation: “Self-Illustrating Phenomena”. It is really cool.

Neurosemantics Bibliography Up and Running

Monday, March 31st, 2008

Sufficiently many people have written on neurosemantics in the past decade or so that it seems worthwhile to try to review the field as a whole. As preliminary work toward such an end, I’ve cooked up a bibliography containing abstracts and links to online works, linked here: [link]. It’s likely that I’ve accidentally left out relevant work, so recommendations for additions are highly appreciated.

More Contents, Vehicles, and Transitive Consciousness

Monday, July 2nd, 2007



Treehead Series - Inheritance

Originally uploaded by redhousepainter

All representations have contents. Even representations of things that don’t exist are meaningfully described as “representations of something”. But not all representations are representations of themselves. Compare, for instance, the sentences “The cat is on the mat” and “This sentence has seven words in it”. Compare also the thoughts “Cherries grow on trees” and “I like thinking about thinking”.

While there may be a sense in which a representation’s content is a property of the representation, representing the content doesn’t suffice for representing the representation. All representations represent their contents, since their contents just are what they represent. But not all representations are representations of themselves. So whatever does suffice for representing a representation, it cannot simply be representing its content. If all properties of representations that aren’t content properties are vehicular properties, then whatever does suffice for representing a representation must include representing its vehicular properties.

States in which we are conscious of something bear sufficient similarities to representation to warrant postulating that such states are implemented by mental representations. What’s postulated, then, is that being conscious of something is just a certain kind of mentally representing something: that the content of consciousness just is a kind of representational content. It won’t follow from this implementation story without further argument, though, that all states in which we are conscious of something are automatically states in virtue of which we are conscious of those states. Nor will it follow without further argument that simply by being conscious of a state’s content we are thereby conscious of the state itself.

Contents, Vehicles, and Transitive Consciousness

Saturday, June 30th, 2007



Tromp d’oeil painting

Originally uploaded by moocatmoocat

Robert Lurz’s challenge to the standard view of transitive consciousness is constituted by the following claim from his paper, “Neither HOT nor COLD: An Alternative Account of Consciousness”:

[A] creature can be conscious of its thoughts and experiences simply by being conscious of what it thinks or experiences in having those thoughts or experiences

Here is what the Lurz challenge is supposed to be a challenge to. What lots of people seem to agree on, and this would include Tye, Dretske, Rosenthal, Churchland, Prinz, and me, is that transitive consciousness - consciousness of - is implemented by (certain kinds of) representation in the following way: what one is conscious of is what (certain kinds of) representations are representations of. (What further criteria the representations need to meet is what separates the various authors listed, and thus the use of the parenthetical “certain kinds of”.) Call this the Standard View. On the Standard View, one is conscious of such-and-such only if one mentally represents such-and-such. And if one has a (certain kind of) mental representation of a leafy tree, one is thereby conscious of a leafy tree. Or, in other words, what one is conscious of is the content of a certain representation, in this case the content is a leafy tree.

Now, when one has a state of the sort described in the previous paragraph, what sense could it possibly make to say that one is conscious of the state itself? This I try to spell out on pp. 60-61 of The Subjective Brain in terms of the content/vehicle distinction. Being conscious of a leafy tree involves representing a leafy tree. Being conscious of a representation of a leafy tree must involve representing something more than just the leafy tree, that is, something more than just the content of the representation. And the only candidate for the something more is the vehicle of the representation. Thus one is conscious of the representation itself only if one represents vehicular properties of the representation.

So on what basis can adherents of the Standard View resist Lurz’s position that consciousness of what a state represents suffices for consciousness of the state itself? One kind of response would be to point out that it’s not particularly clear what Lurz’s position even means. Another kind of response would be to point out that it isn’t particularly clear that any arguments have been given for Lurz’s position (or even that he takes himself to have supplied any arguments).

Regarding the first kind of response, concerning the meaning of Lurz’s position: is he asserting that being conscious of what a state represents suffices for being conscious of vehicular properties of the representation? Or is he stipulating that being conscious of the content is another way, distinct from the vehicular way, of being conscious of the representation? I’ll come back to this in a moment.

Regarding whether an actual argument is supplied, Lurz does offer “some intuitive support for this claim” in terms of an analogy concerning paintings. He writes:

It seems plain that in order to see what a particular painting represents, one must see the painting itself. If one does not see the painting itself — say, if one is looking in the wrong direction, or is seeing a different painting, or is blind — then one cannot be said to see what that particular painting represents.

Note that while Lurz says this “seems plain,” it seems plain to me that it doesn’t seem plain at all. If what a particular painting represents is the flight of Icarus and I am looking at some other particular painting which also represents the flight of Icarus, then I can see what the first painting represents without seeing the first painting. I do it by seeing the second painting, which is a representation of the same thing as the first painting. So it looks like I can be aware of what a particular painting represents without being aware of that particular painting. (And if paintings and non-paintings can share contents, then I can be aware of what a particular painting represents without being aware of any particular painting at all.)

To make matters worse, it looks like Lurz agrees with this sort of point. He writes:

Three identical-looking paintings by different artists, for example, may each depict a woman seated before an open window…. [I]n one sense of the phrase “what the painting represents,” the intentional-content sense, what these three paintings represent is the same: a woman seated by an open window.

If what they represent is the same, then I can be aware of what the first represents without ever having had any exposure to the first; I just see the second or the third. Imagine further that, while looking at the second, when I blink it is, unbeknownst to me, replaced by the third. And when I blink again, it is once again replaced. I would continue to be aware of what the painting represents without being aware of which particular painting, the second or the third, I am looking at. Being aware of what a particular painting represented would suffice for being aware of that particular painting only if different particular paintings necessarily represented different particular things. But, as Lurz admits, paintings can share contents. So which particular painting does being aware of a content suffice to make you aware of?

To return to the question of what Lurz’s claim is supposed to mean, specifically whether it is a claim about awareness of vehicles, I note that thinking about typical cases of looking at paintings suggests awareness of vehicles. When I look at paintings I typically notice whether they’re oil or watercolor, how fat the paint strokes are, etc., and these are vehicular properties of the painting, the properties with which Icarus is represented, not properties that the painting represents.

However, there are atypical cases in which one notices none of the vehicular properties of a painting, and Lurz discusses such cases: trompe l’oeil cases in which, as Lurz points out, we see neither that a painting is present nor the painting as a painting. I might take myself to be looking out a window at a leafy tree when in actuality I’ve been fooled by an incredibly realistic painting. It seems natural to say in such cases that we would not be aware of any vehicular properties of the painting. However, it seems strained to say, as Lurz wants to, that in such cases we are aware of the painting or that we see the painting. Lurz presents his claims about paintings as “intuitive” and I don’t feel the intuitive pull.

So, in summary, the only support offered for Lurz’s challenge to the Standard View consists of some remarks about paintings that are themselves easily resisted.

Space, Time, and Twinearthability

Friday, June 8th, 2007



kant kan

Originally uploaded by Pete Mandik

This is one of those requests for references and reflections post. I’d be grateful for thoughts and recommendations concerning the following questions: Which spatial properties are Twinearthable and which are not? Which temporal properties are Twinearthable and which are not?

Regarding “Twinearthability,” I’m following the usage of John Hawthorne’s “Direct Reference and Dancing Qualia.” The way Hawthorne puts it, a concept like “water” is Twinearthable because we can easily imagine an epistemic counterpart that is epistemically just like us but locks onto a different property by “water” than we do (XYZ instead of H2O). A concept is not Twinearthable when beings epistemically just like us would lock onto the same properties that we do. Non-Twinearthable properties are so fully present to the mind that epistemic possibility is a guide to metaphysical possibility.

Properties broadly describable as spatial may differ with respect to their Twinearthability. I consider as relevant the following reflections by Roy Sorensen:

The volume of a two inch cube is eight times the volume of a one inch cube. But the surface of the two inch cube is only four times as large as the surface of the one inch cube. The ratio of surface to volume further decreases when the cube achieves a size of three inches. Now all six sides must be dedicated to maintaining the organism. Thus the geometry of the cubical organism imposes a limit on its growth. Since the volume of the organism is cubed while surface area is squared, the animal must eventually exhaust its ability to feed. The ratio of an organism’s surface area to its volume is an internal relation. Hence, the size of an organism is an intrinsic property.

Size is also an intrinsic property of environments. Doubling everything would not create a duplicate environment. Although the increase would not be detectable by linear measurements (for our rulers would have expanded), the increase would make a difference to planetary orbits and other phenomena governed by geometrical laws.

Any purely spatial property of an organism is an extrinsic property. Identical twins can be duplicates even though they stand a meter apart. Nor is their duplicate status threatened by rotation. If one spins clockwise while the other spins counter-clockwise, they remain duplicates. If one twin sleeps with his head to the east while the other sleeps with his head to the west, they still wake up as twins. Given this indifference to space, we see that the twins are duplicates even if they are mirror images of each other.

(From “Mirror Imagery and Biological Selection,” Biology and Philosophy 17/3 (June 2002): 409-422)
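Sorensen’s opening arithmetic is just the familiar square-cube scaling. As a quick check (my own restatement, not part of the quoted passage), for a cube of side length $s$ inches:

\[
V(s) = s^{3}, \qquad A(s) = 6s^{2}, \qquad \frac{A(s)}{V(s)} = \frac{6}{s}.
\]

Going from $s = 1$ to $s = 2$, volume goes from 1 to 8 (eight times) while surface area goes from 6 to 24 (four times), so the surface-to-volume ratio drops from 6 to 3 and keeps shrinking as $s$ grows, which is the limit on growth Sorensen describes.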

I wonder if temporal properties are similarly split with regard to Twinearthability. I wrestled with this a bit in the puzzle raised in “The Slow Switching Slowdown Showdown,” wherein I wondered out loud about how long slow switching would take on a demonically slowed Twinearth. I wonder now about which properties broadly describable as temporal would be Twinearthable and which would not.

Further pointers on space as well as time are welcome.

Intentionality and Formalizability

Wednesday, May 23rd, 2007



P1010581WB.JPG

Originally uploaded by elsewhereness.

Following philosophers like Tim Crane and Uriah Kriegel, let’s call the Problem of Intentionality the problem of motivating the rejection of one of the three propositions in the following inconsistent triad:

1. We think about non-existents
2. One can bear relations only to existents
3. Thinking about is a relation

Part of my interest in the Problem of Intentionality is that a big chunk of the Unicorn Argument involves an acceptance of 1 & 2 and a rejection of 3.

I’ve gotten grief from philosophers like Chase Wrenn and Eric Steinhart about whether the Unicorn can be stated in a formal calculus. Such grief can equally be directed at the Problem of Intentionality. We can motivate such grief by formulating what I’ll call the Steinhart Principle:

Steinhart Principle: A set of propositions exhibits logical properties (e.g., validity, inconsistency) only if there is at least one calculus in which the propositions are jointly formalizable.

I have a worry about the applicability of the Steinhart Principle to either the Unicorn or Intentionality that I would like to raise in terms of what I’ll call the Mandik Principle:

Mandik Principle: The adoption of a formalism is philosophically fruitful only if doing so doesn’t beg (pro or con) the question at hand.

Consider, then, the following challenge: State the Problem of Intentionality in a way that simultaneously respects both the Steinhart Principle and the Mandik Principle.

Can this challenge be met? I haven’t made up my mind one way or another, but here are some reasons for doubting that the challenge can be met.

Consider that meeting this challenge would involve formulating the three propositions in a way that doesn’t require one to assign a particular truth-value to any of them. Now consider proposition #1. It is very difficult to see how to proceed with its formalization without also taking a stand on the truth of 1, 2, or 3. For example…

Suppose that we formulate 1 as
($x)($y)(Px & ~Ey & Txy)
where “($x)” is the existential quantifier, “Px” is “x is a person”, “Ex” is “x exists”, and “Txy” is “x thinks about y”.

Lots of problems arise aside from the fact that one may be squeamish about an existence predicate. In particular, formulating 1 in terms of the two-place “Txy” presumes the truth of proposition 3.

On the other hand, we might try to formulate 1 as
($x)[Px & Tx & ~($z)(Uz)]
where “Ux” is “x is a unicorn” and “Tx” is a predicate we construct by presuming a language of thought and an apparatus of thought-quotation giving us “x is thinking ‘($z)(Uz)’”.

On this formulation lots of problems arise aside from the fact that we are quantifying into the opaque context of thought quotation. In particular, it looks like such a formulation in terms of a one-place thinks predicate presumes the falsity of 3.
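To put the two attempts side by side (my own restatement in standard notation, writing the existential quantifier for the “$” used above, and with the caveats already noted):

\[
\exists x\,\exists y\,\bigl(Px \wedge \neg Ey \wedge Txy\bigr) \quad \text{(relational } Txy\text{, presupposing the truth of 3)}
\]
\[
\exists x\,\bigl(Px \wedge Tx \wedge \neg\exists z\, Uz\bigr) \quad \text{(monadic } Tx\text{, presupposing the falsity of 3)}
\]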

Let’s suppose for the sake of conversation that there is no formalization of the Problem of Intentionality that satisfies the Mandik Principle. What, then, is the most appropriate response to the Problem of Intentionality? Rejecting it as a non-problem seems itself to beg genuine philosophical questions.

Third-Manning the Representation Relation

Monday, May 14th, 2007



OK

Originally uploaded by Pete Mandik.

I’ve been thinking about Chase Wrenn’s third-man argument against realization (here) and my own against non-reductive physicalism (here) and the less-than-fully-baked idea occurred to me to run a third-man against the so-called representation relation.

My recollection without looking anything up is that old-school third-man arguments go something like this:
Step 1. Start with some uncontested fact, like that Chase is tenured.
Step 2. Conjoin it with a Dumb Principle, like: nothing can be true of Chase (like being tenured) without there being some additional entity (like the property of being tenured) and some relation borne to that entity (like the instantiating-a-property relation).
Step 3. Generate an infinite regress by reusing the Dumb Principle to postulate, next, a third entity (like the property of bearing the instantiating-a-property relation) and so on.
Step 4. Block the regress by rejecting the Dumb Principle.

Adapting this to the present case, namely, to deride the representation relation, yields:
Step 1. Start with some uncontested fact, like that Chase thinks that Buffy the Vampire Slayer is blonde.
Step 2. Conjoin it with a Dumb Principle, like: nothing can be thought about by Chase (like that Buffy the Vampire Slayer is blonde) without there being some additional entity (like the non-actual possible state of affairs of Buffy the Vampire Slayer being blonde) and some relation borne to that entity (like the representation relation).
Step 3. Generate an infinite regress by reusing the Dumb Principle to blah blah blah…
Step 4. Block the regress by rejecting the Dumb Principle.