Archive for September, 2006

Unicorns and Rainbows

Saturday, September 30th, 2006

David Rosenthal and I had an interesting mini-debate at the bar last night. Things got kicked off when NC/DC drummer Richard Brown asked David if inexistents had properties. I don’t think we got to the bottom of what the right answer should be or how this sits with the Higher-Order Thought theory of consciousness. But here’s a pretty relevant passage from Rosenthal’s (2002) “How many kinds of consciousness?”:

The pretheoretic notion of a mental state’s being conscious, I’ve argued elsewhere, is that of one’s being conscious of being in that state. Common sense doesn’t count as conscious any state of which a subject is wholly unaware. One’s HOTs need not be accurate; one can seem to be in a state that one isn’t in. But since one is conscious of oneself as being in such states, that’s not a case of being conscious of something that doesn’t exist.10 There is no problem about how a nonexistent state can have the monadic property of being conscious. States do not in any case occur independently of something of which they are states. And the occurrence of a conscious state is the appearance one has that one is in that state; compare the way we speak about rainbows. This will seem problematic only if one regards the phenomenological appearances as automatically veridical.11

Since HOTs make one conscious of oneself as being in a particular state, what it’s like for one to be in a state is a function of how one’s HOT represents that state. Does this mean that phenomenality is, after all, a property only of HOTs, and not the qualitative states that HOTs are about?12 Here the distinction between thick and thin phenomenality is crucial. Thin phenomenality, which occurs independently of our being in any way conscious of it, is a property of qualitative states, not HOTs. By contrast, thick phenomenality, which simply consists in the subjective appearance of phenomenality, occurs solely in connection with HOTs. Only if one sees the two types of phenomenality as a single, indissoluble property will there be an appearance of a problem here.

10 Plainly one can be conscious of existent things in ways that are inaccurate, e.g., in respect of properties the thing doesn’t have, and the commonsense idea that being conscious of something is factive must bow to that.

11 For more on the HOT hypothesis, see my "Two Concepts of Consciousness," Philosophical Studies, 49, 3 (May 1986): 329–359; "Thinking That One Thinks," in Consciousness: Psychological and Philosophical Essays, ed. Martin Davies and Glyn W. Humphreys, Oxford: Basil Blackwell, 1993, 197–223; "A Theory of Consciousness," in The Nature of Consciousness: Philosophical Debates, eds. Ned Block, Owen Flanagan, and Güven Güzeldere, Cambridge, MA: MIT Press, 1997, 729–753; and "Explaining Consciousness," in Philosophy of Mind: Contemporary and Classical Readings, ed. David J. Chalmers, New York: Oxford Univ. Press, forthcoming 2002.

12 As Elizabeth Vlahos has argued in "Not So HOT: Higher Order Thought as an Explanation of Phenomenal Consciousness," delivered November 2000 at the New Jersey Regional Philosophical Association.

Comments:

One thing I'm especially curious about is whether appeal to notional states constitutes a satisfying response to my Unicorn argument. I see how appeal to notional states works against Vlahos's and others' empty-HOT objections. But unlike their objections, mine builds in the premise that things that don't exist don't instantiate any properties. As I've said, unicorns fail to instantiate the property of being fast not because they are slow, but because they don't exist. I gather that there's a sense in which Rosenthal would disagree with that premise, based on this remark in the 2002 passage: "There is no problem about how a nonexistent state can have the monadic property of being conscious." I guess I think there is a problem, and I wonder why I shouldn't. Immediately following that sentence Rosenthal writes:

States do not in any case occur independently of something of which they are states. And the occurrence of a conscious state is the appearance one has that one is in that state; compare the way we speak about rainbows. This will seem problematic only if one regards the phenomenological appearances as automatically veridical.

I suppose that if I knew more about what he had in mind regarding rainbows, I would know more about how the instantiation of properties by inexistents is supposed to be unproblematic. Is he supposing that rainbows don't exist? I'm inclined to think that while rainbows are creatures of appearance (they have no existence apart from a network of what we might call appearances), they are not notional, and thus it is true of rainbows, e.g., that they are visible after showers. The appearances that constitute rainbows strike me as more objective (having to do with angles of light rays, etc.) than whatever kind of appearance is constituted by potentially false beliefs. So I'm not seeing how the appeal to rainbows is supposed to show the instantiation of properties by notional things to be unproblematic, since I'm not seeing rainbows as notional. My failure to see rainbows as notional is due to my thinking that whatever appearances constitute them differ from whatever makes it the case that Santa Claus and Sherlock Holmes are notional. But perhaps this distinction I'm appealing to cannot be sustained. Or perhaps my problem is that I'm regarding a certain class of appearances as, in Rosenthal's words, "automatically veridical."

Whatever strengths the notional-state response has against empty-HOT objections, considered as a response to the Unicorn it begs the question against my premise that inexistents lack properties.

Fig 1. Nice shirt.

One Great Thing…

Saturday, September 30th, 2006

One great thing about every generalization I've ever made about inexistents is that there exist no counterexamples.

Analytic Functionalism and Evolutionary Connectionism

Friday, September 29th, 2006

(I leave in a few hours to go see Jerry Fodor give a talk at the CUNY Graduate Center. I thought it appropriate, then, to post something on how awesome evolution and neural networks are.)

What makes analytical functionalists functionalists is their belief that what makes something a mental state is the role it plays in a complex economy of causal interactions. What makes analytical functionalists analytical is their belief that which roles are essential is to be discovered by consulting common-sense knowledge about mental states.

There are three serious, related problems that arise for analytical functionalism. The first is that analytical functionalism appears to be committed to the existence of analytic truths, and various philosophers inspired by Quine have been skeptical of analytic truths. As Prinz (20**) succinctly sums up this Quinean skepticism, the objection is that "[r]oughly, definitions of analyticity are either circular because they invoke semantic concepts that presuppose analyticity, or they are epistemically implausible, because they presuppose a notion of unrevisability that cannot be reconciled with the holistic nature of confirmation" (p. 92).

There are two main ways in which analytic functionalism seems saddled with belief in analytic truths. The first concerns the nature of psychological state types such as beliefs and desires: analytical functionalism is committed to there being analytic truths concerning the necessary and sufficient conditions for being a belief. The second concerns the meaning of mental representations. The most natural theory of meaning for the analytic functionalist to adopt is that what makes one's belief about, say, cows have the meaning it does is the causal relations it bears to all other belief states. However, it is likely that no two people have all the same beliefs about cows. Thus, on pain of asserting that no one means the same thing when they think about cows, the functionalist cannot allow that every belief one has about cows affects the meaning of one's cow thoughts. In order to allow that people with divergent beliefs about cows can both share the concept of a cow, that is, both think about the same things when they think about cows, the analytic functionalist seems forced to draw a distinction between analytic and synthetic beliefs, e.g., a distinction between beliefs about cows that are constitutive of cow concepts and beliefs that are not. But if Quinean skepticism about the analytic/synthetic distinction is correct, no such distinction is forthcoming.
The second problem arises from worries about how minds are implemented in brains. Many so-called connectionists may be seen to agree with analytical functionalists that mental states are defined in terms of networks. However, many connectionists may object that when one looks at neural network implementations of cognitive functions, it is not clear that the sets of nodes and relations postulated by common-sense psychology will map onto the nodes and relations postulated by a connectionist architecture (see, e.g., Ramsey et al., 1991). The question arises whether folk-psychological states will smoothly reduce to brain states or be eliminated in favor of them. (I will not discuss further the third option, that folk-psychological states concern a domain autonomous from brain states.)
A third problem arises from worries about the evolution of cognition. If a mind just is whatever the collection of folk-psychological platitudes is true of, then there seem not to be any simple minds, for a so-called simple mind would be something of which the platitudes were only partially true, in the sense that only some proper subset of them were true of it. However, a very plausible proposal is that our minds evolved from simpler minds. It counts against a theory that it rules out a priori the existence of minds simpler than ours, for it leaves utterly mysterious what the evolutionary forebears of our minds were. This third problem is especially pertinent to artificial-life researchers.
One promising solution to these three problems involves appreciating a certain view concerning how information-bearing or representational states are implemented in neural networks and how similarities between states in distinct networks may be measured. Models of neural networks frequently involve three kinds of interconnected neurons: input neurons, output neurons, and neurons intermediate between inputs and outputs, sometimes referred to as "interneurons" or "hidden neurons". These three kinds of neurons comprise three "layers" of a network: the input layer, the hidden layer, and the output layer. Each neuron can be, at any given time, in one of several states of activation. The state of activation of a given neuron is determined in part by the states of activation of the neurons connected to it. Connections between neurons may have varying weights, which determine how much the activation of one neuron can influence another. Each neuron has a transition function that determines how its state depends on the states of its neighbors; for example, the transition function may be a linear function of the weighted sum of the activations of neighboring neurons. Learning in neural networks is typically modeled by procedures for changing the connection weights. States of a network may be modeled by state spaces wherein, for example, each dimension of the space corresponds to the possible values of a single hidden neuron. Each point in that space is specified by an ordered n-tuple or vector. A network's activation vector in response to an input may be regarded as its representation of that input. A state of hidden-unit activation for a three-unit hidden layer is a point in a three-dimensional vector space. A cluster of points may be regarded as a concept (Churchland 19**).
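To make the picture concrete, here is a minimal sketch of such a three-layer network in Python. The weights and layer sizes are made up purely for illustration (they come from no model discussed here); the point is just that the hidden-layer activation vector is a point in activation space that can be read as the network's representation of its input:

```python
import math

def forward(x, W_ih, W_ho):
    """One forward pass through a three-layer network.
    x: input activations; W_ih, W_ho: weight matrices (rows index
    receiving units). Each unit's transition function here is a
    logistic squashing of the weighted sum of its inputs."""
    sigmoid = lambda s: 1.0 / (1.0 + math.exp(-s))
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W_ih]
    output = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in W_ho]
    return hidden, output

# Hypothetical toy weights: 2 input units, 3 hidden units, 1 output unit.
W_ih = [[0.5, -0.3], [0.8, 0.1], [-0.6, 0.9]]
W_ho = [[1.0, -1.0, 0.5]]

# The hidden vector is a point in a three-dimensional activation space:
# the network's "representation" of the input (1.0, 0.0).
hidden, output = forward([1.0, 0.0], W_ih, W_ho)
```

Different inputs land at different points in the three-dimensional hidden space, and clusters of such points are the candidates for concepts on the Churchland-style picture.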
Laakso and Cottrell (1998, 2000) propose a method whereby representations in distinct networks may be quantified with respect to their similarity. Such a similarity measure may apply even in cases where the networks in question differ in their numbers of hidden units and thus in the number of dimensions of their respective vector spaces. In brief, the technique involves first assessing the distances between the various vectors within a single network and then measuring correlations between the relative distances among points in one network and those in another. Representations in distinct networks are highly similar if their relative distances are highly correlated.
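Here is a toy sketch of that measure (my own illustration, not Laakso and Cottrell's code): gather each network's hidden-layer representations of the same set of inputs, compute the pairwise distances within each network, and correlate the two distance profiles. Since both networks respond to the same inputs, the two distance lists have equal length even when the hidden layers differ in size:

```python
import math
from itertools import combinations

def pairwise_distances(points):
    """Euclidean distances between all pairs of activation vectors."""
    return [math.dist(a, b) for a, b in combinations(points, 2)]

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of distances."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def representational_similarity(reps_a, reps_b):
    """Similarity of two networks' representations of the same inputs,
    comparing relative distances rather than the vectors themselves."""
    return pearson(pairwise_distances(reps_a), pairwise_distances(reps_b))

# Hypothetical hidden-layer responses to four shared inputs:
# network A has 2 hidden units, network B has 3.
net_a = [(0.1, 0.2), (0.9, 0.8), (0.1, 0.9), (0.8, 0.1)]
net_b = [(0.0, 0.1, 0.2), (1.0, 0.9, 0.8), (0.1, 0.9, 0.9), (0.9, 0.0, 0.2)]
sim = representational_similarity(net_a, net_b)
```

Because the measure correlates relative distances rather than comparing vectors coordinate by coordinate, it is well defined across networks whose state spaces have different dimensionality.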
Regarding the worries about the analytic/synthetic distinction, the Laakso and Cottrell technique allows one to bypass attributions of literally identical representations to distinct individuals and make do instead with objective measures of degrees of similarity between the representations of different individuals. Thus if I believe that a cow once ate my brother's hat and you have no such belief, we may nonetheless have measurably similar cow concepts. This is no less true of our psychological concepts, such as our concepts of belief and desire. The so-called common-sense platitudes of folk psychology so important to analytic functionalism may very well diverge from folk to folk, and the best we can say is that each person's divergent beliefs about beliefs may be similar. Nor are similarity measures restricted to the concepts that constitute various folk theories; we may additionally make meaningful comparisons between various folk theories and various scientific theories. This last maneuver allows us to retain one of the key insights of analytic functionalism mentioned earlier: that we need some kind of answer to the question "how do you know that your theory is a theory of belief?" The answer will be along the lines of "because what I'm talking about is similar to beliefs."
Regarding the question of simple minds, if there are no analytic truths, then there is no a priori basis for drawing a boundary between the systems that are genuine minds and those that are not. Similarity measurements between simple minds and human minds would form the basis for a (mind-body?) continuum along which to position various natural and artificial instances. How useful will the study of systems far away on the continuum be for understanding human minds? We cannot know the answer to such a question a priori.

Jombie On My Mind

Fig. 1. What are minds such that complex ones come from somewhere instead of nowhere? Photo by Pete Mandik.

References

Churchland, P. (19**)

Laakso, Aarre and Cottrell, Garrison W. (1998) How can I know what you think?: Assessing representational similarity in neural systems. In Proceedings of the Twentieth Annual Cognitive Science Conference. Mahwah, NJ: Lawrence Erlbaum.
Laakso, Aarre and Cottrell, Garrison W. (2000) Content and cluster analysis: Assessing representational similarity in neural systems. Philosophical Psychology, 13(1):47-76.
Prinz, J. (****) “Empiricism and State-Space Semantics” in Keeley, ed. Paul Churchland.

Ramsey, W., Stich S. & Garon, J. (1991) Connectionism, eliminativism, and the future of folk psychology, in: W. Ramsey, S. Stich & D. Rumelhart (Eds.) Philosophy and Connectionist Theory. Hillsdale NJ: Lawrence Erlbaum.

Sugar Skulls and Zombie Walks Pt. 2

Wednesday, September 27th, 2006

Link to: My Deitch Art Parade Flickr photoset

Link to: Sugar Skulls and Zombie Walks Pt. 1

Sugar Skull

Fig 1. Sugar Skull. Photo by Pete Mandik

Girls Gone Zombie

Fig 2. Girls Gone Zombie. Photo by Pete Mandik

Mandik+Robots+Movie+Hoboken

Tuesday, September 26th, 2006

I was interviewed a few years back for a documentary on robots that will be screened Thursday, Sept. 28 in Hoboken, NJ. I haven’t seen it yet, but I’m thinkin’ “Oscar”.

From the press release:

HOBOKEN, NJ–The next film screened by the Hoboken Film Society will be the documentary “UnWound” on Thursday, Sept. 28, 2006 at 8 pm at Symposia Bookstore, 510 Washington Street, Hoboken, NJ 07030.

“UnWound,” from producer/director Jeff Cioletti, explores the world of toy robots, beginning with the earliest Japanese scrap-metal pieces from the 1940s. It tracks more than six decades of history, including the modern-day remote-controlled battling robot craze. Old favorites such as Machine Man, Mr. Machine and Rock Em Sock Em Robots make appearances, as do toys based on TV and movie characters from the 50s, 60s, 70s and beyond. (Yes, good friends such as Robby from “Forbidden Planet,” Robot from “Lost in Space” and Star Wars legends R2-D2 and C-3PO feature prominently). The film incorporates insight from writers, collectors and academics, as well as a visit to Pennsylvania’s Toy Robot Museum, home to more than 3,500 pieces.

But the centerpiece of the film is a suspenseful toy auction in which a man anxiously waits for his childhood toy to go on the block. Items from what’s known as the “Golden Age” of space toys tend to fetch eye-popping sums when they go under the gavel.

http://www.fadproductions.com/index.php?categoryid=16&p2_articleid=39

Everybody Hates My Unicorn

Monday, September 18th, 2006

I’ve been developing an argument against both higher-order and first-order representational theories of consciousness (hereafter HOR and FOR, respectively). I call the argument the Unicorn and so far everybody (but me) hates it. To constructively focus the hate on a single blog post (instead of a longish paper draft), I briefly summarize here.

First, some quick and dirty definitions of my targets:

HOR – The property of being a conscious state consists in being a represented state.

FOR – The property of being phenomenal consists in being a represented property.

And now…the Unicorn:

P1. Things that don’t exist don’t instantiate properties.

P2. We represent things that don’t exist.

P3. Representing something does not suffice to confer a property to that thing.

C1. Representing a state does not suffice to confer the property of being conscious to that state (so HOR is false).

C2. Representing a property does not suffice to confer the property of being phenomenal to that property (so FOR is false).

That’s the argument. Here are some quick notes on it.

N1. Re: P1, cheetahs, not unicorns, are the fastest animals. This is not because unicorns have the property of being slow. This is because they have no properties whatsoever.

N2. Re: P2, if contrary to P2, we don’t represent things that don’t exist, then P2 is meaningless since it would represent nothing at all (it wouldn’t represent so-called people-who-represent-things-that-don’t-exist). P2, however, is not meaningless. Therefore, it’s true.

N3. Re: P3, it follows from P1 and P2. I don’t think it takes a lot of fancy work to show that it does, so I won’t bother.

N4. Re: C1 and C2, mutatis mutandis for how they follow from P3.

N5. HOR and FOR can’t dodge the Unicorn by simply adding existence to the list of criteria for consciousness/phenomenality. The key question HOR and FOR address is “in what consists the property of being conscious/phenomenal?” The problems the Unicorn raises for the answer “it consists in being represented” cannot be solved by requiring the existence of the representational target, since existence is not a property. Not being a property, existence adds nothing in “being represented and existing” not already present in plain old “being represented”. I take it that something like this dodge is at work in so-called same-order representational theories of consciousness (SOR). It (and they) won’t work.

Fig 1. Don’t hate me, hate my unicorn. (Photo by Ray Gunn.)

Fig 2. This is not my unicorn. (Photo source: http://www.fortgreenepups.org/03/images/unicorn.jpg )

PMS WIPS

Friday, September 15th, 2006

PMS WIPS: Philosophy and/of Mind (and/or) Science Works In Progress Sessions

PMS WIPS is an online forum for the discussion of works in progress in the philosophy of mind, cognitive science, and related areas.

Submissions for PMS WIPS will be reviewed by co-editors Brian Keeley (Pitzer College), Pete Mandik (William Paterson University), and Dan Weiskopf (University of South Florida). Accepted contributions and discussion forums will be hosted on Pete Mandik’s blog, Brain Hammer. Accepted contributions will remain on the blog for only six months (but may be removed earlier at the contributor’s request) to ease any worries contributors might have regarding prior publication of works to be sent later to the journals.

Our current plan is to post accepted contributions twice a month, but we may increase the frequency later. Our schedule for the next several weeks is:

Please email contributions (with accompanying abstracts) to Pete Mandik (petemandik @ petemandik.com). Feel free to contact any of the co-editors with questions. We look forward to hearing from you.

Brian Keeley (brian_keeley @ pitzer.edu)
Dan Weiskopf (weiskopf @ shell.cas.usf.edu)
Pete Mandik (petemandik @ petemandik.com)

(remove spaces from above e-mail addresses)

PMS WIPS 001 - Tad Zawidzki - The Function of Folk Psychology: Mind Reading or Mind Shaping?

Friday, September 15th, 2006

“The Function of Folk Psychology: Mind Reading or Mind Shaping?” by Tad Zawidzki, George Washington University.

ABSTRACT: I argue for two claims. First I argue against the consensus view that accurate behavioral prediction based on accurate representation of cognitive states, i.e. mind reading, is the sustaining function of mental state ascription. This practice cannot have been selected in evolution and cannot persist, in virtue of its predictive utility, because there are principled reasons why it is inadequate as a tool for behavioral prediction. Second I give reasons that favor an alternative account of the sustaining function of mental state ascription. I argue that it serves a mind shaping function. Roughly, mental state ascription enables human beings to set up regulative ideals that function to mold behavior so as to make it easier to coordinate with.

[Link to full text of article]
[Link to further info on PMS WIPS]

Carl’s got a good point about the SSPP

Tuesday, September 12th, 2006

Carl Gillett, posting @ Brains, writes:

Just a quick post to encourage people to submit papers to the SSPP conference, April 5–7, 2007, in Atlanta (deadline Nov. 15th). Everyone knows how good the SPP conference is, but I think folks are less aware of the recent rejuvenation of the Southern Society for Philosophy and Psychology. In recent years, the contributed program at the SSPP has been attracting a lot of good papers in the philosophy of mind, philosophy of science, epistemology, metaphysics of science and 'naturalistic' philosophy generally. Since the SSPP has always been an especially open and friendly conference, this has made for a philosophically lively meeting. I strongly encourage people to submit a paper, especially since this year's Philosophy Program chair, Chase Wrenn, has put together a great invited program:

Keynote: John Searle

Invited Speaker: David Rosenthal

Invited Speaker: Colin McGinn

Symposium on Realization: Gene Witmer, Carl Gillett and Ken Aizawa, Chase Wrenn

Symposium on Normative Naturalistic Epistemology: David Henderson, Charles Wallis, Michael Bishop and J.D. Trout

Symposium on Intentionality: Chris Gauker, Pete Mandik, John Tienson.

Details on the conference are here (see the links for the CFP or the latest newsletter):

http://sun.soci.niu.edu/~sspp/index.html

Signs of Consciousness in Vegetative Patients?

Friday, September 8th, 2006

I was interviewed for a column appearing in today’s Wall Street Journal on an intriguing case of possible conscious states in a vegetative patient (“There May Be More To a Vegetative State Than Science Thought” by Sharon Begley).

In the case in question, scientists recorded brain activity in a vegetative patient in response to being asked to imagine playing tennis.

Remarkably, this made neurons fire in the premotor cortex, a region that hums with activity when you mentally practice sophisticated movement, from a jump shot to a backhand. Then they asked her to imagine walking through each room of her house. This time her parahippocampal gyrus, which generates spatial maps, became active, again just as in healthy volunteers.

I think that if the same activity also shows up in patients under general anesthesia, then that activity doesn’t suffice for consciousness. The proposal that people under general anesthetic are conscious after all is an intolerable skeptical hypothesis (do you really want to believe that people suffer their major surgeries?). Only a tiny bit of my point got into the article, though:

There also is the possibility that people in other mental states regarded as unconscious, such as patients under general anesthesia, may show similar brain activity, suggests philosopher Peter Mandik of William Paterson University, Wayne, N.J., who studies consciousness.

Lamme et al. (1998) suggest that the responses elicited by stimuli in anesthetized animals constitute merely feed-forward activation of representations in perceptual networks and lack feedback activations from representations higher in the processing hierarchy. I suggested (but it didn't make it into the article) that a good case for consciousness in the vegetative patient could be made if the following were found in the vegetative but not the anesthetized patients: reciprocal activity between higher-level representations (like abstract representations of tennis) and lower-level representations (like motor representations in a body-centered reference frame), as in Mandik (2005).

(Cross-posted at Brains)

Update Sept. 12, 2006: On this elsewhere: @Mind Hacks; @Rebecca Skloot; @Milinda’s Questions.

References:
Begley, S. There May Be More To a Vegetative State Than Science Thought. Wall Street Journal September 8, 2006.

Lamme, V. A. F., et al. (1998). Feedforward, horizontal, and feedback processing in the visual cortex. Current Opinion in Neurobiology, 8, 529–535.

Mandik, P. (2005) Phenomenal Consciousness and the Allocentric-Egocentric Interface. In R. Buccheri et al. (eds.), Endophysics, Time, Quantum and the Subjective. World Scientific Publishing Co.