Archive for August, 2006

Lost in Space

Wednesday, August 23rd, 2006



Danger! Danger! Originally uploaded by Pete Mandik.

Subjects distinct from me must have non-experiential causal grounds distinct from me. Plausibly, these distinct causal grounds are physically distinct (or so I argued in “Evans, Experience, and Abiding Causal Grounds”). Subjects distinct from each other must be physically distinct from each other (or so I argued in “Dualism, Physicalism, and Spaceballs”). However, the previous remarks leave open the question of where subjects are. In other words, just because subjects are individuated spatially doesn’t mean that their spatial locations are identical to the locations of the physical processes that constitute their individuating properties. For example, a version of externalism may be correct whereby the supervenience base of Smith’s mind is larger than his brain, but that leaves open the possibility that Smith’s mind and brain are located in the same place.

One place where the question of the location of subjects is brought to a head (pun intended) is in Dan Dennett’s thought experiment in his famous “Where Am I?” article. In brief, the thought experiment involves having your brain separated from your body while your brain remains in (remote) control of the body via radio links. While viewing your disembodied brain with your remotely (yet self-) controlled eyes, you are invited to contemplate: where am I?

The thought experiment helps show (1) that it’s not obvious that we should identify the subject’s location with the brain’s, and (2) what a plausible alternative is, namely that the subject is where it seems to the subject to be. Another way to describe the two main options is in terms of vehicles and contents: the subject is located either (1) where the vehicles of the mental representations are, namely in the brain, or (2) where the contents of the mental representations say the subject is, namely where the (perhaps brainless) body is.

In his “Locating Subjects of Experience in the Natural Order” (available as a video podcast here), Rick Grush argues for the latter view. As he puts it in an abstract, he is

“arguing that the subject is primarily a content-level phenomenon, roughly, the subject is implicitly defined or determined by the contents – including perceptual and experiential – jointly grasped. And because many of the contents grasped have implicit and explicit location information built into them, the subject determined by these contents has its location determined by the location-relevant aspects of those contents.”

One thing I find problematic with such a view is that it involves identifying what something is (in this case, what its location is) with how it is represented. Or, more briefly, it identifies something’s being F with its being represented as F. I think this notion is deeply problematic, and one of the main problems I have with it is that I think something’s being F can never consist in its being represented as F. A very brief argument for this view goes as follows. We can represent things that don’t exist. But failing existence, those “things” instantiate no properties whatsoever. Thus, representing is not property-conferring. Nothing instantiates a property in virtue of being represented. (I say more about this line of thinking in my “Unicorns and Monitoring Theories of Consciousness”.)
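For readers who like their arguments regimented, here is one way to compress it (my own symbolization, not anything from the paper; the quantification over nonexistents would of course need a free-logic gloss):

```latex
% R(s,x): subject s represents x;  E!(x): x exists;  F ranges over properties.
\begin{align*}
&\text{(P1)}\quad \exists s\,\exists x\,\big(R(s,x) \wedge \neg E!(x)\big)
  && \text{we can represent things that don't exist}\\
&\text{(P2)}\quad \forall x\,\big(\neg E!(x) \rightarrow \neg\exists F\, F(x)\big)
  && \text{nonexistents instantiate no properties}\\
&\text{(C)}\quad\; \neg\,\forall s\,\forall x\,\big(R(s,x) \rightarrow \exists F\, F(x)\big)
  && \text{so being represented does not suffice for instantiating a property}
\end{align*}
```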

Now, this line of argument against representation being property-conferring is only denying that representation bestows properties onto the intentional objects of representations. I do not wish to deny that representations themselves have properties. Nor do I wish to deny that representations have contents. What remains, then, is the view that whatever representational content amounts to in terms of properties, it amounts to properties of the representations. It seems, then, that the preferable choice in answering “where am I?” is the vehicular choice: the subject is where the brain is.


Fig 1. Royale With Cheese. Photograph by Pete Mandik, who is where his brain is.


Snakes on a Brain

Monday, August 21st, 2006

Are there belief-dependent properties? More specifically, does having a belief that a is F suffice to confer any properties whatsoever on a? If you were a certain kind of idealist, you would think that such a belief sufficed to confer the property of being F on a’s. If you weren’t an idealist, but nonetheless bought into certain kinds of relational accounts of belief, then you would think that the belief in question at least conferred on a’s the property of being believed. As I’ve argued in “Unicorns and Monitoring Theories of Consciousness,” there are no such properties, and this spells bad news for various theories of consciousness. Here’s Dennett on belief-dependent properties. And snakes:

(1) Many people (wrongly) believe that snakes are slimy.

This is a fact about people, but also about snakes. That is to say,

(2) Snakes are believed by many to be slimy.

This is a property that snakes have, and it is about as important a property as their scaliness. For instance, it is an important ecological fact about snakes that many people believe them to be slimy; if it were not so, snakes would certainly be more numerous in certain ecological niches than they are, for many people try to get rid of things they think to be slimy. The ecological relevance of this fact about snakes does not “reduce” to a conjunction of cases of particular snakes being mistakenly believed to be slimy by particular people; many a snake has met an untimely end (thanks to snake traps or poison, say) as a result of someone’s general belief about snakes, without ever having slithered into rapport with its killer. So the relation snakes bear to anyone who believes in general that snakes are slimy is a relation we have reason to want to express in our theories. So too is the relation any particular snake (in virtue of its snakehood) bears to such a believer. (From “Beyond Belief” in The Intentional Stance pp. 176-177.)

I guess I’m not seeing what it is about snakes that an anti-relationalist about belief would be missing out on here. Suppose one were to affirm that

(3) Snakes get killed.

and

(4) People have snakes-are-slimy beliefs.

Couldn’t I cite causal relations between the entities described in (3) and (4) without also having to admit the truth of

(5) There exists such a property of snakes as being-believed-to-be-slimy

???
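Put schematically (this is my own regimentation, not anything Dennett offers), the question is whether one can affirm something like the first line below without thereby committing to the second:

```latex
% B(p): person p has a snakes-are-slimy belief;  S(x): x is a snake;
% K(x): x gets killed.
\begin{align*}
\text{(3)+(4) plus causation:}\quad & \exists p\,\exists x\,\big(B(p) \wedge S(x) \wedge K(x) \wedge \mathrm{Causes}(p\text{'s belief},\, x\text{'s getting killed})\big)\\[4pt]
\text{(5):}\quad & \text{there is a genuine property, being-believed-to-be-slimy, instantiated by every } x \text{ such that } S(x)
\end{align*}
```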

I don’t see why not. What hinges on this? Well, first and foremost, various theories of consciousness are going to be in trouble, like those that hold that being a conscious state is being a represented state and those that hold that being a phenomenal property is being a represented property. Also, I got to write something called “Snakes on a Brain”.


Fig 1. Samuel L. Jackson regards mutha-f$%&*#g snakes as existing independently of his mutha-f$%&*#g mind.

Phenomenal realism and empirical depth

Thursday, August 17th, 2006

I think the only interesting positions regarding the metaphysics of qualia are dualisms like Chalmers’, idealisms like Dennett’s, and identity theories like the Churchlands’. And only the latter two strike me as at all appealing. In this post I’d like to air the following beef with dualism: it constitutes an untenable combination of realism with a lack of empirical depth.

To spell this out, I’ll start by spelling out some terminology, especially what I take the relevant notions of idealism, realism, and empirical depth to be. Let realism about x be the view that x facts outstrip x beliefs and idealism about x be the view that there is nothing more to x facts than our x beliefs. (Note that in other discussions of realism and idealism what is at stake might be a much broader notion of mind-dependence/independence than the notion of belief-dependence/independence that I want to focus on here.) By “empirical depth” I mean that property of theories in virtue of which they have the kind of “surplus meaning” that guides scientific discovery and provides predictive and explanatory power.

I suppose that the best reasons for being a realist about anything have to do with the gains in empirical depth thereby achieved. In the physical sciences, being a realist about unobservable particles buys you predictive and explanatory power unavailable to a phenomenalist (a kind of idealist) who construes particle-talk as reducing to statements concerning sets of observations. This is not to say that idealism is always bad. If there were a domain that seemed to lack empirical depth, then we would have good grounds for being idealists about entities in that domain. Here’s a domain that lacks empirical depth: nice shirts. There’s probably not much we can predict or explain about nice shirts. Some shirts are nice, some are not, the end. The science of nice-shirt-ology is not forthcoming. A good position to take on nice shirts, then, is a kind of idealism: there are no facts about nice shirts (qua nice shirts) that outstrip what we think about nice shirts.

Enough about shirts. What about qualia? Does qualia-talk carry with it any empirical depth? It depends on who you ask. As best as I can tell from dualists, there’s not much empirical depth to be gained from qualia-talk. (Dualists famously maintain that attributing qualia buys you no explanatory or predictive power not had by attributions of zombie-hood.) Identity theorists, however, get you empirical depth (for more on this point see “Consciousness, Data, Electricity, and Rock”). What I wonder about dualists is this. If qualia-talk lacks empirical depth, then what justifies their phenomenal realism? Why not change teams to idealism and buy into something like Dennett’s “first-person operationalism”? Alternately, maybe they could change their mind about qualia-talk and trade shallows for depths.



Fig 1. Pete Mandik dived for empirical depth and Chris Eliasmith took this cool picture of him.

Fig 2. Nice shirt.

Hammering the Mirror: Against Self-reflection

Tuesday, August 15th, 2006



Grinch. Originally uploaded by Pete Mandik.

The old idea that consciousness is self-consciousness, that conscious states are states of which one is aware, is the target of yesterday’s post, “The Transitivity of Consciousness as a Contingent Reference Fixer.” (See also the query I posted (and ensuing discussion) over at the group-blog, Brains, “What Are You Conscious of When You Have Conscious Experiences.”)

Here are some further thoughts to help make clearer what my interest in this all is, excerpted from my paper “Phenomenal Consciousness and the Allocentric-Egocentric Interface“:

[T]here are philosophical reasons for being suspicious of the transitivity thesis.

First off, according to advocates of the transitivity thesis it is supposed to be intuitively obvious that it is a requirement on having a conscious state that one is conscious of that state. If the transitivity thesis is true, it should be obviously incorrect to say of a state that it was conscious before anyone was conscious of it. However, if we consider a particular example, it seems that the transitivity thesis is not obviously correct (which is not, of course, to say that it is obviously incorrect). Consider, for example, how one might describe what happens in motion-induced blindness experiments when the yellow dots pop into and out of consciousness. [See the demo at the end of "Motion-Induced Blindness and the Concepts of Consciousness"] It seems equally plausible to say either (1) that first the perception of the yellow dot becomes conscious and then you become conscious of your perception of the yellow dot or (2) that the perception of the yellow dot becomes conscious only if you also become conscious of your perception of the yellow dot. If the transitivity thesis were pre-theoretically obvious, then option (1) would be obviously incorrect and (2) would be obviously correct. However, since neither (1) nor (2) seems obviously correct (or obviously incorrect), the transitivity thesis is not pre-theoretically obvious.

A second consideration that casts suspicion on the transitivity thesis concerns how easily we can explain whatever plausibility it has without granting its truth. We can grant that the transitivity thesis may seem plausible to very many people but explain this as being due to the fact that counterexamples would not be accessible from the first-person point of view. If we ask a person to evaluate whether the transitivity thesis is true, they will call to mind all of the conscious states of which they have been conscious. But this cannot constitute conclusive proof that conscious states are necessarily states that their possessor is conscious of. Consider the following analogy. Every tree that we have ever been aware of is, by definition, a tree that we have been aware of. But this is not due to the definition of being a tree; it is only due to the definition of being aware of it. The fact that every tree that we are aware of is a tree of which we have been aware cannot constitute proof that trees are essentially the objects of awareness or that no tree can exist without our being aware of it. By analogy, we should not conclude, from our being conscious of all of the conscious states that we have been aware of from the first-person point of view, that all conscious states are necessarily states that we are conscious of. We should instead view our first-person access to conscious states as a way of picking out a kind of state that we can further investigate utilizing third-person methods. The description “states we are conscious of ourselves as having” thus may be more profitably viewed as a contingent reference fixer of “conscious state” that leaves open the possibility that it is not part of the essence of conscious states that we are conscious of them. Instead, the essence of conscious states is that they are hybrid representations that exist in the allocentric-egocentric interface.
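The tree point is, at bottom, a point about scope, and it can be put schematically (my gloss here, not anything in the paper):

```latex
% Trivially true:
\forall x\,\big((\mathrm{Tree}(x) \wedge \mathrm{AwareOf}(x)) \rightarrow \mathrm{AwareOf}(x)\big)
% But it does not follow that trees are essentially objects of awareness:
\Box\,\forall x\,\big(\mathrm{Tree}(x) \rightarrow \mathrm{AwareOf}(x)\big)
```

By parity, the trivial truth that every conscious state we are conscious of is a state we are conscious of does not entail that conscious states are necessarily states we are conscious of.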

Spelling with Zombies

Monday, August 14th, 2006

Zombie font: http://e-zombie.com



Fig. 1. Zombie spelling his own sound-effect.

The Transitivity of Consciousness as a Contingent Reference Fixer

Monday, August 14th, 2006



Neurophilosophy. Originally uploaded by Pete Mandik.

Let us distinguish strong and weak versions of the transitivity thesis as follows:

Strong Transitivity: Necessarily, when one has a conscious experience one is conscious of the experience itself.

Weak Transitivity: When one has a conscious experience one is conscious of the experience itself.

One can argue against strong transitivity by arguing that it is possible to have a conscious experience without being conscious of the experience itself. One way in which this would be possible is if “being conscious of an experience” didn’t give the meaning of “having a conscious experience” but instead was a contingent way of fixing the reference of “having a conscious experience”.
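In other words, using C(s, e) for “s has conscious experience e” and A(s, e) for “s is conscious of e” (my own shorthand), the argumentative target looks like this:

```latex
\begin{align*}
\text{Strong Transitivity:}\quad & \Box\,\forall s\,\forall e\,\big(C(s,e) \rightarrow A(s,e)\big)\\
\text{Weak Transitivity:}\quad & \forall s\,\forall e\,\big(C(s,e) \rightarrow A(s,e)\big)\\
\text{What the argument needs:}\quad & \Diamond\,\exists s\,\exists e\,\big(C(s,e) \wedge \neg A(s,e)\big)
\end{align*}
```

The third line is just the denial of the first, and it is compatible with the second.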

Kripke illustrates the difference between giving the meaning of a term and contingently fixing its reference with the example of “meter” and the standard metal bar in Paris. For the sake of simplicity, let us call that metal bar in Paris “Frenchy”. If “meter” simply meant “the length of Frenchy”, then the sentence “Frenchy might have been longer than a meter” would be necessarily false. However, it is relatively obvious that there is a way of thinking about “Frenchy might have been longer than a meter” whereby it is not necessarily false. On this way of thinking, we use “the length of Frenchy at time t” to identify a length that Frenchy has contingently, and we then use “meter” to refer to that length, whatever object in whatever possible world has it. There are possible worlds in which Frenchy does not have that length. Thus there are possible worlds in which Frenchy is not a meter long.
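Schematically, and in my own notation rather than Kripke’s, the two readings come to this (with length@ for length in the actual world at the time of the stipulation):

```latex
\begin{align*}
\text{meaning-giving:}\quad & \text{``one meter''} =_{df} \text{``the length of Frenchy''}
  \;\Rightarrow\; \Box\,\big(\mathrm{length}(\text{Frenchy}) = 1\ \text{meter}\big)\\[4pt]
\text{reference-fixing:}\quad & \ell_{0} = \mathrm{length}_{@}(\text{Frenchy}, t),\ \text{``meter'' rigidly designates } \ell_{0}
  \;\Rightarrow\; \Diamond\,\big(\mathrm{length}(\text{Frenchy}) > \ell_{0}\big)
\end{align*}
```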

By analogy, I suggest, we use “states we are conscious of” to fix the reference of “conscious state” onto certain kinds of brain states, but there are possible situations in which those brain states occur without their possessors being conscious of them.

Intelligent Design

Saturday, August 12th, 2006



Intelligent Design. Originally uploaded by Pete Mandik.

I snapped and photoshopped this (former) critter I saw at the American Museum of Natural History.

Here are three fun papers by Roy Sorensen that arguably have something or other to do with this photograph (though probably nothing to do with intelligent design):

The Aesthetics of Mirror Imagery, Philosophical Studies 100/2 (2000) 175-191

Mirror Imagery and Biological Selection, Biology and Philosophy 17/3 (June 2002) 409-422

Para-reflections, British Journal for the Philosophy of Science 54 (2003) 93-10

What is the point of experimental philosophy if philosophy isn’t conceptual analysis?

Tuesday, August 8th, 2006

Experimental philosophy is largely taken up by experimental methods to find out what people’s intuitions are concerning topics of philosophical interest. Why should philosophers care about experimental philosophy? As best I can tell, the answer to that question is bound up with an answer to the following question. Why do philosophers care about intuitions? As best as I can tell, the answer to that question has something to do with philosophers’ (perhaps tacit) acceptance of the following analogy between philosophy and natural science: intuitions are to philosophical theories what observations are to scientific theories. Scientific theories are supposed to offer simple and coherent explanations of past observations and are tested by their ability to predict future observations. Mutatis mutandis for philosophical theories and intuitions. Suppose that it is indeed intuitive that on Twin Earth “water” means XYZ, not H2O. Explanation: meanings are determined by causal-historical chains. Prediction: we would say of Swampman (a creature bearing no causal-historical relations to anything) that his utterances mean nothing. If that Swampman proposition strikes lots of people as highly un-intuitive, then externalism faces, if not a refutation, then at least a challenge. So the story goes. And if the story had a title it would be something like “Philosophy is Conceptual Analysis”.

The view that philosophy is conceptual analysis is a hypothesis that is supposed to explain how philosophers can come to know stuff by just thinking. Philosophers, I guess, are different from natural scientists who know stuff by looking. (This is, of course, a terrible distinction, but let’s run with it for the sake of argument.) It might turn out, though, that the hypothesis that philosophy is conceptual analysis is a bad hypothesis. One consideration against it is that maybe concepts don’t have analyses. Another consideration is that maybe there are no such things as intuitions (as a distinctive kind of mental state). I won’t pursue these sorts of considerations much here. More interesting to me is the following. There are lots of times in which knowledge is gotten by thinking. Lots of math answers to this description. And parts of physics, like the thought experiments of Galileo and Einstein, answer to this description as well. However, in neither case is the hypothesis that what’s going on is conceptual analysis very promising. And more to the point concerning experimental philosophy, in neither case would a scientific survey of people’s intuitions help us learn anything about math or physics. To be sure, such surveys could yield data of interest to cognitive scientists re folk-math and folk-physics. Similarly, I’ll grant, surveys of the folk concerning their intuitions about knowledge, meaning, and free-will might yield data of interest to cognitive scientists re folk-philosophy. But assuming that philosophy is as distinct from folk-philosophy as physics is from folk-physics, I still wonder why philosophers should care about experimental philosophy.

One intriguing answer to that question might be that the point of experimental philosophy for philosophy is to help us see that philosophy is not conceptual analysis. So if we didn’t know whether philosophy was conceptual analysis, then, for example, finding out that people’s intuitions vary widely about what names refer to might help convince us that the conceptual analysis hypothesis is a bad metaphilosophy for the philosophy of language. But suppose you are already convinced that philosophy isn’t conceptual analysis. What point, then, would there be for philosophers in the continued collection of data about people’s intuitions? Perhaps the best answer is “none,” and this can be brought out by a reassessment of the analogy between philosophy and natural science. Scientific observations in, e.g., chemistry aren’t collected by simply surveying the folk as to what they’ve observed about chemicals. The observations that scientific theories answer to are made by trained experts following procedures that are themselves highly informed by the bodies of theory the procedures are designed to test. An analogous view of philosophy casts it as the formulation of theories in light of judgments made by experts. In neither case should the snap judgments of non-experts count as much more than data for a science of the snap judgments of non-experts.




Fig. 1. Socrates. Just standin’ around. Thinkin’. Not surveying the intuitions of all the slave-boys in Athens.

Sick Horse

Monday, August 7th, 2006



Sick Horse. Originally uploaded by Pete Mandik.

I’ve been having a few technical issues with the host for Brain Hammer, but hopefully a re-install of WordPress puts it back in whack. The philosophizing will be back shortly. In the meantime, enjoy this very sick horse. Also enjoy the wonderful things Chase Wrenn has done with Supergoo.

Consciousness, Data, Electricity, and Rock

Thursday, August 3rd, 2006

There’s been some interesting discussion over at Dave Chalmers’ blog (Fragments of Consciousness) on the falsifiability or lack thereof of various theories of consciousness (in particular, Chalmers’ and Dennett’s). (See this, this, this, this, and this.) A paraphrase of the main question I’m interested in right now might go something like this:

What data, either first-person-accessible or third-person-accessible, are predicted by your theory that could conceivably/possibly fail to obtain?

Since that wasn’t exactly the question put to Chalmers, it wouldn’t be exactly correct to say that he answered that there are none. I think, however, that’s something like the spirit of his responses, but don’t take my word for it; follow the links above and judge for yourself.

Re: data and consciousness, Eric Schwitzgebel has no shortage of interesting things to say about introspection over at his blog (The Splintered Mind). See, for example, his recent post on afterimages and weigh in on the question of whether conscious experience always involves afterimages (and how you would know).

Re: afterimages and falsifiability again, one pretty sweet thing about various versions of psychoneural identity theory is that they do yield falsifiable predictions about consciousness. And not just predictions of third-person-accessible data. Paul Churchland makes an excellent case for one such account in his recent “Chimerical Colors: Some Novel Predictions from Cognitive Neuroscience,” in which very odd color experiences are predicted by a neural model of chromatic information processing. In brief, the differential fatiguing and recovery of opponent-processing cells gives rise to afterimages with subjective hues and saturations that would never be seen on the surfaces of reflective objects. Such “chimerical colors” include shades of yellow exactly as dark as pitch-black and “hyperbolic orange, an orange that is more ‘ostentatiously orange’ than any (non-self-luminous) orange you have ever seen, or ever will see, as the objective color of a physical object” (p. 328). Such odd experiences are predicted by a model that identifies color experiences with states of neural activation in a chromatic processing network. Of course, it’s always open to the dualist to make an ad hoc addition of such experiences to their theory, but no dualistic theory ever predicted them. Further, the sorts of considerations typically relied on to support dualism—appeals to intuitive plausibility and a priori possibility—would have, you’d expect, ruled them out. (Seriously, a yellow as dark as black? Whodathunkit?)
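To make the flavor of such a prediction concrete, here is a deliberately crude toy sketch (my own simplification in Python, not Churchland’s actual network model): adaptation fatigues opponent channels in proportion to how hard they were driven, and the predicted afterimage coordinates can land outside the region that any reflective surface could occupy.

```python
# Toy opponent-process sketch (my own simplification for illustration, not
# Churchland's model). Colors are coded as opponent-channel activations
# relative to neutral gray: (red-green, blue-yellow, white-black). For
# ordinary reflective surfaces each coordinate is assumed to lie in [-1, 1].

import numpy as np

def afterimage(adapting_color, test_surface, fatigue=0.5):
    """Predicted afterimage coordinates after adapting to `adapting_color`
    and then fixating `test_surface`.

    Channels driven during adaptation are fatigued in proportion to how hard
    they were driven, so their response to the test surface undershoots in
    the direction opposite the adapting stimulus.
    """
    adapting_color = np.asarray(adapting_color, dtype=float)
    test_surface = np.asarray(test_surface, dtype=float)
    return test_surface - fatigue * adapting_color

# Adapt to a saturated blue (blue pole of the blue-yellow channel), then look
# at a patch as dark as a reflective surface can get.
adapt_blue = [0.0, -0.9, 0.0]    # blue-ish hue, mid luminance
black_patch = [0.0, 0.0, -1.0]   # white-black channel pinned at its floor

print(afterimage(adapt_blue, black_patch))
# Prints roughly [0., 0.45, -1.]: a yellow-signed hue combined with maximal
# darkness, a combination no reflective surface can produce -- "chimerical".
```

The point of the toy is only that an identity-style model generates a definite, checkable prediction about what the afterimage should look like; whether any such prediction pans out is an empirical matter.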

A video of Churchland lecturing on the topic is available here.

In other news, unless there are massive blackouts (or chimerically dark yellow-outs) in New York, NC/DC (the Neural Correlates of David Chalmers) is playing tonight. And if there are massive blackouts in New York tonight, don’t blame us.


Fig 1. This is not Paul Churchland’s hyperbolic orange. A Churchlandish orange is more ostentatiously orange than that.


Fig 2. The cover art to Spinal Tap’s album, Brain-Hammer. Q: Why is Schwitzgebel’s mind splintered and Chalmers’ consciousness fragmented? A: The Brain Hammer, baby.

Reference:
Churchland, Paul. 2005. “Chimerical Colors: Some Novel Predictions from Cognitive Neuroscience.” In: Brook, Andrew and Akins, Kathleen (eds.) Cognition and the Brain: The Philosophy and Neuroscience Movement. Cambridge: Cambridge University Press.

See also: The Phenomenal Consciousness of Inexistent Colors; Paul Churchland book, Cognition and the Brain.