Archive for October, 2006

Wanted: An Actual Argument for the Knowledge Intuition

Friday, October 27th, 2006

The Gates of Central Park

Originally uploaded by Pete Mandik.

The Knowledge Intuition is what makes so much of the philosophy of mind go ’round. The Knowledge Intuition is this:

(KI): One cannot know what it is like to have an experience of a certain type unless one has had an experience of that type.

Consider, now, the following Bold Claim:

(BC): No one has ever given an argument for the Knowledge Intuition.

Before considering the truth of BC, let’s talk a bit about its alleged boldness. On the one hand, maybe it isn’t so bold. The Knowledge Intuition is an intuition after all, and nowadays calling something an intuition is a gentle way of admitting to yourself and others that you don’t have an argument for this thing you really want to believe anyway. On the other hand, however, BC is a non-analytic negative existential and those are notoriously difficult to prove short of an exhaustive search of the entirety of creation. And this leads us now to the question of the truth of BC…

Maybe someone somewhere has an argument for KI and thus a counterexample to BC. And maybe they could post it here on Brain Hammer as a comment to this post. I’m willing to be pretty liberal as to what counts as an argument for KI. Maybe it would be entailed by some theory of consciousness that has independent support. Or maybe someone has a weird argument for how, even though KI, strictly speaking, is supported by no argument, it merits acceptance due to the health-conducive effects of believing in it. Or something. Note, however, that mere assertions of the alleged intuitiveness or obviousness of KI are not welcome.

Anti-Mind, pt. 2 of 2

Thursday, October 26th, 2006

The world is everything that is the case

Originally uploaded by Pete Mandik.

Does it take a mind to detect a mind? If there could be a principled answer to this question the implications would be huge for the philosophy and science of mind.

Consider that so much of science depends on the unintelligent detection of unintelligents. Hydrogen samples are not particularly intelligent. Further, mechanisms capable of detecting the presence of hydrogen need not themselves be intelligent.

Maybe part of being a natural kind is that the unintelligent detection of instances of that kind is possible. Jerry Fodor has suggested that non-natural kinds like crumpled shirts or doorknobs can only be detected by minds. You have to be the sort of thing that knows a bunch of stuff in order to “light up” in the presence of a door knob.

In the Sterling short story “Swarm,” excerpted in my previous post, the Nest is an asteroid that is mostly just a big super-organism that wanders the universe and, whenever it is “invaded,” assimilates the invaders. Most of the diverse species in the asteroid were once representatives of vast space-faring technological cultures that, when they encountered the Nest, were taken over, reduced to unintelligent animals, and integrated into the Nest ecology inside the asteroid. Swarm is an intelligent organism activated in certain circumstances for the protection of the Nest. Swarm explains how ultimately useless intelligence and consciousness are and suggests that the Nest is entirely unintelligent, and that the Nest grows a new Swarm whenever an intelligent invader needs to be dealt with. Once the intelligent invader is dealt with (rendered into a dumb slave animal), Swarm self-destructs, being no longer needed.

It occurred to me that Swarm was to minds what antibodies are to germs, so I coined “anti-mind”. It also occurred to me that if Swarm was right that, prior to the activation of Swarm, the Nest group organism was truly non-cognitive, then whatever mechanism activates the growth of a new Swarm must itself be an unintelligent mechanism. So, the idea of an anti-mind is the idea of a thing that is not a mind but is capable of detecting minds. But this leads to what strikes me as some pretty interesting philosophical questions: Is there any way a dumb mechanism can detect the presence of intelligence? Can an unconscious mechanism detect the presence of consciousness?

If Dennett is right, intentional systems are detectable only from the intentional stance, which I take to entail that only minds can detect minds. If a lot of qualia-freaks are right, the only way to detect the presence of qualia is to have some yourself, and thus only consciousness can detect consciousness.

If these remarks are correct, the implications for science fiction are obvious: the “anti-mind” in the Sterling story is impossible. But enough about fiction: what about science? If the impossibility of unintelligent detection entails that the kinds that are intelligently detected are non-natural, then is a full-blown science of such kinds thereby doomed?

Anti-Mind, pt 1 of 2

Tuesday, October 24th, 2006


Originally uploaded by Pete Mandik.

Excerpts from Bruce Sterling’s Swarm

“You are a young race and lay great stock by your own cleverness,” Swarm said. “As usual, you fail to see that intelligence is not a survival trait.”

Afriel wiped sweat from his face. “We’ve done well,” he said. “We came to you, and peacefully. You didn’t come to us.”

“I refer to exactly that,” Swarm said urbanely. “This urge to expand, to explore, to develop, is just what will make you extinct. You naively suppose that you can continue to feed your curiosity indefinitely. It is an old story, pursued by countless races before you. Within a thousand years—perhaps a little longer…your species will vanish.”


“Intelligence is very much a two-edged sword, Captain-Doctor. It is useful only up to a point. It interferes with the business of living. Life, and intelligence, do not mix very well. They are not at all closely related, as you childishly assume.”

“But you, then—you are a rational being—”

“I am a tool, as I said.” […] “When you began your pheromonal experiments, the chemical imbalance became apparent to the Queen. It triggered certain genetic patterns within her body, and I was reborn. Chemical sabotage is a problem that can best be dealt with by intelligence. I am a brain replete, you see, specially designed to be far more intelligent than any young race. Within three days I was fully self-conscious. Within five days I had deciphered these markings on my body. They are the genetically encoded history of my race…within five days and two hours I recognized the problem at hand and knew what to do. I am now doing it. I am six days old.”


“Technology, though I am capable of it, is painful to me. I am a genetic artifact; there are fail-safes within me that prevent me from taking over the Nest for my own uses. That would mean falling into the same trap of progress as other intelligent races.”

Everyone has four eyes, or: why optometrists are stupid.

Saturday, October 21st, 2006

The Ones that are Held for Pleasure

Originally uploaded by Pete Mandik.

1. Assume, for simplicity, that everyone has two eyes that they see with (apologies, then, to pirates and cyclopes).

2. Assume, for simplicity, that everyone has two eyes that are seen (again, apologies).

3. Eyes that are seen are objective, that is, everyone else can see them.

4. Eyes that you see with are subjective, that is, only you can see with them.

5. Eyes that are seen are spatial (they have locations and shapes) and this is readily observable.

6. Eyes that you see with are nonspatial (they have neither locations nor shapes) since they occupy no position or amount of the visual field.

7. Eyes that are seen with cannot be identical to eyes that are seen since an objective spatial thing cannot be identical to a subjective nonspatial thing.

Therefore, everyone has four eyes.

Bursting at the Seems 2: Electric Boogaloo

Wednesday, October 18th, 2006


Originally uploaded by Pete Mandik.

Scenario 1:
Smith and Jones see a dog that is in fact white but, due to a trick of the electric lighting, seems blue. Smith is unaware of the facts about the lighting and so believes that the dog is blue. Jones is hip to the lighting situation and so believes the dog is white. Jones agrees, though, that in spite of his believing it to be white, the dog seems blue.

Question 1:
Is there something going on in the minds of Smith and Jones when they look at the dog that cannot be accounted for in terms of their various dispositions to make certain judgments?

Answer 1:
We’ll come back to this in a bit.

Scenario 2:
Smith and Jones are playing Let’s Make a Deal with Monty Hall. There are three doors for Smith and three for Jones. Behind one of Smith’s doors is a car; likewise for Jones. They each pick door number one. Before door number one is opened, Monty Hall opens door number three and reveals that there is a goat behind it. Monty asks if they’d like to keep door number one or switch to door number two. Smith figures there is a fifty-fifty chance that the car is behind door number one, so he believes door number two not to be a superior choice. Jones is hip to the explanation of the relevant probabilities and so believes correctly that there is an advantage in switching. Jones admits, though, that while he trusts the explanation, he doesn’t totally understand it, and sympathizes with Smith’s urge not to switch.
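(The probabilistic claim Jones trusts but doesn’t fully understand can be checked by brute force. Here’s a minimal simulation sketch of the standard Monty Hall setup the scenario describes — contestant picks door one, the host opens a goat door, and we tally how often staying versus switching wins.)

```python
import random

def monty_hall_trial(switch):
    doors = [0, 1, 2]
    car = random.choice(doors)       # car placed uniformly at random
    pick = 0                         # contestant always picks door number one
    # Monty opens a door that is neither the contestant's pick nor the car.
    opened = next(d for d in doors if d != pick and d != car)
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

random.seed(0)
trials = 100_000
stay = sum(monty_hall_trial(False) for _ in range(trials)) / trials
swap = sum(monty_hall_trial(True) for _ in range(trials)) / trials
print(f"stay: {stay:.3f}  switch: {swap:.3f}")  # stay comes out near 1/3, switch near 2/3
```

Staying wins only when the initial pick was right (probability 1/3); switching wins in the other 2/3 of cases, which is the advantage Jones correctly believes in.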

Question 2:
Is there something going on in the minds of Smith and Jones when they play Let’s Make a Deal that cannot be accounted for in terms of their various dispositions to make certain judgments?

Answer 2:
No. Here’s a straightforward and uncontroversial explanation of what is going on. Smith has a disposition to judge door number two not to be a superior choice and is aware of no overriding considerations in favor of resisting this disposition. Jones similarly has a disposition to judge door number two not to be a superior choice, but is aware of overriding considerations in favor of resisting this disposition, so he resists. He believes door two to be superior but agrees that it seems not to be superior. In what does this latter seeming consist? It consists in his overridden disposition to make a certain judgment.

Answer 1 revisited:
No. Here’s a straightforward yet controversial explanation of what is going on. Smith has a disposition to judge the dog to be blue and is aware of no overriding considerations in favor of resisting this disposition. Jones similarly has a disposition to judge the dog to be blue, but is aware of overriding considerations in favor of resisting this disposition, so he resists. He believes the dog to be white but agrees that it seems to be blue. In what does this latter seeming consist? It consists in his overridden disposition to make a certain judgment.

Objection 1:
This leaves out phenomenal consciousness! It is obvious that there is something else going on in the minds of Smith and Jones besides their various dispositions to make judgments: they have conscious experiences with blue qualia! It is obvious that there is more to seeming than epistemic seeming. It is obvious that there is, additionally, phenomenal seeming.

Reply 1:
Note that Objection 1, while containing many exclamation points, contains no arguments. What it does contain is an assertion that it seems like there are phenomenal seemings. Pending further argument, there’s no reason to not just assimilate this as more epistemic seeming. It epistemically seems to qualiophiles that there are phenomenal seemings. So what?

Objection 2.
But Mandik, you have claimed elsewhere to love qualia. You have also claimed elsewhere to have a theory of phenomenal consciousness. What is your major malfunction?

Reply 2.
So-called phenomenal seemings are reducible to a sub-class of epistemic seemings. There’s nothing going on in the mind in these scenarios that can’t be explained in terms of information-bearing states (let’s call them sensations) and our conceptual reactions to them (let’s call them thoughts). So, what are qualia? They are introspectible properties of conscious states. What are conscious states? Not every thought is conscious. Nor is every sensation. Conscious states are hybrid states of mutually causally interacting thoughts and sensations. Dog triggers sensation which triggers thought? If that’s all that happens, neither thought nor sensation is conscious. Dog triggers sensation which triggers thought which feeds back and reactivates sensation? If that happens, thought and sensation jointly comprise a conscious state. For more on this, see my “Phenomenal Consciousness and the Allocentric-Egocentric Interface“. What’s introspection? It’s the conceptual exploitation of the information that mental states carry about themselves. For more on this see “Churchlandik Introspection” and “The Introspectibility of Brain States as Such“.

Happy Friday the 13th

Friday, October 13th, 2006

Bull Dog

Originally uploaded by Pete Mandik.

Friday the 13th is cool. Friday the 13th in October is cooler. For some pre-Halloween spookiness, enjoy my scary skeleton pix I took at the American Museum of Natural History.

Bad Bird

Churchlandik Introspection

Friday, October 13th, 2006

Below is my development of Paul Churchland’s development of Wilfrid Sellars’ account of introspection. I used to call it “Churchlandish” until David Rosenthal suggested “Churchlandik”. For a longer version see my 2006 paper “The Introspectibility of Brain States as Such,” in Brian Keeley (ed.), Paul M. Churchland: Contemporary Philosophy in Focus. Cambridge: Cambridge University Press.

The Churchlandik account of introspection depends on a particular view of perception and an analogy between perception and introspection. The view of perception at play here is that “perception consists in the conceptual exploitation of the natural information contained in our sensations or sensory states.” (Churchland 1979, Scientific Realism and the Plasticity of Mind, p. 7). Analogously then, introspection is the conceptual exploitation of natural information that our sensations or sensory states contain about themselves. Fleshing out these views of perception and introspection requires us to flesh out what Churchland thinks the conceptual exploitation of natural information is. Crucial here is a distinction Churchland draws between two kinds of intentionality that sensations can have, that is, two ways in which a sensation can be a sensation of something. A sensation can have “objective intentionality” as well as “subjective intentionality” or, in other words, a sensation can be a sensation of X in an objective sense and in a subjective sense. Adapting Churchland’s formulations (from ibid, p. 14) yields:

A given (kind of) sensation one has is a sensation of X (in the objective sense of “of”) if and only if under normal conditions, sensations of that kind occur in one only if something in one’s perceptual environment is indeed X.

A given (kind of) sensation one has is a sensation of X (in the subjective sense of “of”) if and only if under normal conditions, one’s characteristic non-inferential response to any sensation of that kind is some judgment to the effect that something or other is X.

The objective intentionality of sensations is the information that sensations actually carry about the environment regardless of whether or not we exploit that information. The objective intentionality of sensations determines what it is that we are capable of perceiving. What we actually do perceive depends on subjective intentionality. That is, what we actually do perceive depends on what concepts we bring to bear in the judgments that our sensations non-inferentially elicit. So, for example, whether I am capable of seeing the tiny insect on the far side of the room depends on whether I have states of my visual system that reliably co-vary with the presence of that object, and if my eyesight is insufficiently acute, I will lack such states. Whether I actually do perceive that object depends on more than just good eyesight. It depends on whether I actually do employ my conceptual resources to interpret my visual sensations as indicating the presence of an insect.

The crucial aspects of this account of perception are those that allow for the reconstruction of the distinction between what is perceived without inference and what is inferred but not perceived.

Let us consider the following situation to illustrate this distinction. Two friends, George and John, are lunching in a well-lighted location when, as part of some publicity stunt, a man in a realistic gorilla suit runs through the area. Suppose that both gorilla suit and gorilla act are quite realistic and convincing to the untrained eye. George, being a special effects expert for the film industry, is not fooled and can see quite clearly that this is indeed a man in a costume. John, however, is a novice and cannot help but be fooled: he sees this as a genuine gorilla, perhaps escaped from the nearby zoo. In fact, John the novice continues to see this individual as a genuine gorilla even after George the expert assures him that it is in fact a suited man. John may even come to believe George’s testimony for he trusts George’s expertise, but John cannot shake the impression that it is a real gorilla that is causing a ruckus in the restaurant.

There is a sense in which both John and George see the same thing. But only George sees that thing as a man in a suit. They both know that it is a man in a suit. However, in spite of his knowledge, John is incapable of seeing it as a man in a suit. John and George both have visual sensations with the same objective intentionality. They both have states of their visual system that causally co-vary with, for example, the distinctive way that a man in a gorilla suit moves. But only George is able to automatically (without an intervening inference) apply the concept of a man in a gorilla suit to the thing causing his current visual sensation and thus only George’s sensations have the subjective intentionality indicating the presence of a man in a gorilla suit. Unlike George, John is incapable of automatically (without an intervening inference) applying the concept of a man in a gorilla suit to the thing causing his current visual sensation, and thus John’s sensations lack the subjective intentionality indicating the presence of a man in a gorilla suit.

The appropriate analogy, then, to introspection would be the following. If a person knew that their mental states were identical to brain states, but was incapable of automatically applying the concept a brain state to a mental state, then in spite of their knowledge they would be incapable of introspecting their brain states as brain states. In contrast, if they were able to automatically apply the concept of a brain state to their brain states then they would be introspecting their brain states as such: their brain states would seem like brain states to them.

Fig. 1. Realistic gorilla suit
Fig. 2. Real gorilla

The Big Brain Blow-Up

Friday, October 13th, 2006

The World’s Largest Brain - Brain Balloon Project

On Saturday, October 14, 2006* Atlanta-based neuroscientists will team up to raise a nine-story hot air balloon in the shape of a human brain! We will also raise awareness about the brain and brain disorders.


From Jared Blank, via the CUNY Cog Sci list.

Bursting Apart at the Seems

Tuesday, October 10th, 2006

I assume that no interesting controversy exists over whether there is an epistemic sense of “seems”. I question whether there additionally exists a phenomenal sense of “seems”. I question whether there are phenomenal appearances in addition to epistemic appearances.

To get a handle on what the alleged distinction is supposed to be, it helps to consider the following picture. Smith and Jones are two little men in two little opaque boxes. Mediating between the interiors and exteriors of their boxes is a camera that feeds into a computer capable of reliably detecting the presence of dogs near the boxes. Smith’s dog-detecting device has a video readout that displays to Smith the printed word “dog”. Jones’ dog-detecting device has a speaker that says “dog” aloud upon dog detection. Both Smith and Jones, via the use of their dog detectors, come to judge that a dog is just beyond the walls of their boxes. Smith and Jones are alike, then, with respect to the ways things epistemically seem to them: insofar as they judge that dogs are present, it epistemically seems to them that dogs are present. However, in spite of these similarities between Smith and Jones, there are notable differences. In particular, there are differences in the evidence they rely on as the bases of their dog-detecting judgments. Smith’s evidence concerns what’s happening on the video display whereas Jones’ evidence concerns what’s happening with the audio speaker.

Is the above picture a good model for distinct kinds of appearance? Suppose we do something to transform the differences between Smith’s and Jones’ box interiors into a difference between the insides of Smith’s and Jones’ heads. Suppose we kick them out of the boxes and have only their own senses mediate between them and dogs. Smith is deaf but sighted and Jones is blind but has his hearing intact. Smith and Jones both come to believe that there is a dog present, but one does so by seeing the dog and the other does so by hearing the distinctive bark. Are the differences that arise in spite of their similarity in judgment worth calling a distinct kind of appearance? Are the ways dogs appear to Smith and Jones epistemically identical but phenomenally distinct? I think not.

If there is indeed a distinction to be made sense of, we need to be able to make sense of two different kinds of cases: one in which phenomenal appearances remain constant while epistemic appearances change and one in which epistemic appearances remain constant while phenomenal appearances change. Neither of the versions of the story about Smith and Jones involves the requisite changes. Whatever changes would be required to change Smith into Jones would change various beliefs Jones had, beliefs like whether he was looking at a monitor versus listening to a speaker.

In the first version of the story, Smith and Jones differ with respect to their evidence, e.g. screen vs. speaker. In the second story, there is also a difference with respect to evidence. But it is a big mistake to think that in either story the evidence is something in the heads of Smith and Jones. In the second story, the different evidence is the difference between light reflected and sound emitted. These different kinds of events trigger in Smith and Jones certain beliefs which in turn give rise to inferences, the conclusions of which are the common belief that a dog is present.

Fig. 1. Smith and Jones.

Fig. 2. Dog, detected. (Photo by No Fixed Abode)

Cognitive Cellular Automata

Friday, October 6th, 2006

Abstract: In this paper I explore the question of how artificial life might be used to get a handle on philosophical issues concerning the mind-body problem. I focus on questions concerning what the physical precursors were to the earliest evolved versions of intelligent life. I discuss how cellular automata might constitute an experimental platform for the exploration of such issues, since cellular automata offer a unified framework for the modeling of physical, biological, and psychological processes. I discuss what it would take to implement in a cellular automaton the evolutionary emergence of cognition from non-cognitive artificial organisms. I review work on the artificial evolution of minimally cognitive organisms and discuss how such projects might be translated into cellular automata simulations.
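(For readers unfamiliar with the formalism the abstract leans on: a cellular automaton is just a grid of cells updated synchronously by a local rule. Here is a minimal illustrative sketch using Conway’s Game of Life as the example rule — this is an assumption for illustration only; the paper itself discusses Sayama’s evoloops, which use a different rule set.)

```python
def life_step(grid):
    """One synchronous update of Conway's Game of Life on a toroidal grid."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbors, wrapping around at the edges.
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            # A live cell survives with 2 or 3 neighbors; a dead cell
            # becomes live with exactly 3.
            new[r][c] = 1 if (n == 3 or (grid[r][c] and n == 2)) else 0
    return new

# A "blinker": three live cells in a row, a period-2 oscillator.
g = [[0] * 5 for _ in range(5)]
for c in (1, 2, 3):
    g[2][c] = 1
g2 = life_step(g)  # the row flips to a column; another step restores the row
```

The point of the sketch is only that everything — “physics,” “organism,” and eventually “behavior” — lives in one and the same state-update framework, which is what makes cellular automata attractive as a unified platform for the mind-body questions the paper raises.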

Forthcoming in Theoria et Historia Scientiarum special issue on Philosophy and Artificial Life.

Link to full text of article.

Link to animation of the spontaneous evolution of Sayama’s evoloops in finite cellular automaton space.