The Egocentricity of Phenomenal Knowledge

Hey, good lookin’!

I here continue my elaboration of the first premise (P1) of the Transcending Zombies argument by spelling out another way in which K* falls short of capturing P1.

Recall that P1 and K* are as follows:

P1: If I know that I am not a zombie, then phenomenal character is (a certain kind of) conceptualized egocentric content.

K*: Smith knows that Smith has qualia → (Smith believes that Smith has qualia & Smith has qualia)

In the previous post I discussed the first of three ways in which K* falls short of P1. I turn now to the second: K* leaves it insufficiently spelled out how Smith’s belief comes to be a belief about Smith. Not just any representation of Smith by Smith will do. The representation needs to be egocentric.

Egocentric representations are distinctive not only in that they represent the creature that has them but also in how they do so. The most frequently discussed kind of egocentric representation is egocentric spatial representation: representation of the spatial locations of objects and features in a frame of reference that defines locations relative to the representing subject (the coffee cup represented as two feet ahead of me and a bit to my left, say, rather than as occupying such-and-such map coordinates). Egocentric representations do not merely represent the representing subject; they always represent things in relation to the representing subject.

There are non-spatial examples of egocentric representations as well, including egocentric representations of time (Grush, 2009) and temperature (Mandik, 2001). Egocentric representations are often found at low levels of sensory processing hierarchies, and Prinz (2005, pp. 384-385) urges that such representations may be found in audition, touch, taste, olfaction, and interoceptive perception, including the interoceptive perception of the bodily states that form the basis for emotional experience.

Several authors, the current one included, suggest that egocentric representations are action-oriented representations (Grush, 2001, 2009; Hurley, 1998; Mandik, 1999; Noë, 2004). In contrast to abstract conceptual representations, which are detached from action, the “here” and “now” aspects of egocentric representation have immediate connections to the motor abilities of the representing subject. Unlike a conceptual representation of the representing subject, as when Pete Mandik thinks “Pete Mandik’s pants are on fire,” my egocentric representation of my pants being on fire is connected to a host of pants-extinguishing action dispositions that do not require for their triggering a mediating step of identification, along the lines of the middle step, “I am Pete Mandik,” in the inference “Pete Mandik’s pants are on fire. I am Pete Mandik. Therefore, I ought to go jump in a lake” (Kaplan, 1989).
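
To dramatize the contrast, here is a toy sketch (the function names and the crude string matching are merely illustrative; nothing in the argument hangs on them). Egocentric content feeds action directly, whereas detached conceptual content guides action only once the mediating identification premise has been supplied:

```python
def act_on_egocentric(belief):
    """Egocentric content is action-guiding as it stands: no identification premise is needed."""
    if belief == "my pants are on fire":
        return "jump in the lake"
    return None

def act_on_detached(belief, my_name):
    """Detached conceptual content ("Pete Mandik's pants are on fire") guides action only
    via the extra identification step ("I am Pete Mandik") that Kaplan's example highlights."""
    if belief == my_name + "'s pants are on fire":
        return act_on_egocentric("my pants are on fire")
    return None

act_on_egocentric("my pants are on fire")                                   # -> "jump in the lake"
act_on_detached("Pete Mandik's pants are on fire", my_name="Pete Mandik")   # -> "jump in the lake"
```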

We are now in a position to appreciate what is being demanded of phenomenal knowledge in requiring that it have egocentric content. Without egocentric content, zombie-related skeptical hypotheses become live options for me. If my experience has some aspect that is not egocentric, then even if I have an experience with that aspect, and even if I know that someone or other has an experience with that aspect, I still wouldn’t know whether I am the one who has it. I might know that someone or other is a non-zombie, but I wouldn’t know whether that non-zombie is me. Relatedly, I might, while gazing at a red rose and enjoying a red quale, know that someone or other is having a red quale, yet be in the dark about whether that someone is me.

Previous Posts:
1. Introducing Transcending Zombies
2. Anti-Skeptical Maneuvers
3. I Know I’m Not a Zombie
4. Some Remarks on Phenomenal Knowledge

22 Responses to “The Egocentricity of Phenomenal Knowledge”

  1. The fascinating thing about egocentric representation is that the human organism has no sensory transducers by which to detect and represent the 3D volumetric property of the real world. This means that the brain’s machinery for giving us a phenomenal egocentric spatial perspective must be an innate endowment.

  2. Pete Mandik says:

    Hi Arnold,

    I agree with your conclusion but don’t follow your reasoning. No one has automobile transducers but I doubt knowledge of cars is innate.

  3. Eric Thomson says:

    Pete–are you assuming that the only way to know someone is not a zombie is to be that person?

  4. Pete Mandik says:

    Eric, No, not at all. I may have worded something in a sloppy way if that’s not coming across.

  5. Eric Thomson says:

    I think I may be the one being sloppy, in my reading.

    You said: “In contrast to abstract conceptual representations, which are detached from action, the “here” and “now” aspects of egocentric representation have immediate connections to the motor abilities of the representing subject.”

    My misinterpretation was I thought you were saying that to have a concept of an egocentric representation, you must be hooked up to those motor abilities. But clearly you aren’t saying that. I can have a concept of a motor ability without having that motor ability myself.

    However I will probably need to read your post a few times to really get it, and post anything half intelligent.

  6. Pete,

    I didn’t say that *knowledge* about the volumetric property of the world is innate. What I thought I was saying is that our phenomenal sense of being at the origin (the egocenter) of an expansive volumetric surround (the world) must be an innate property of the cognitive brain because we have no sensory apparatus to *detect* the coherent volumetric property of the physical world. I think this is a critical factor for understanding the biological basis of conscious content. It leads us to ask questions about the structure and dynamics of a brain system that can provide a biological analog of our egocentric perspective. This is what the retinoid model is aimed at explaining.

  7. Kelly says:

    Arnold makes a good point I think. Knowledge is one thing. Phenomenal experience is something else. I’m uneasy about the way you try to connect the two.

    Would you say that simpler animals, mice for instance, feel subjective pain? Have pain qualia? If so, how are knowledge, belief, and concepts involved in this experience for them?

    Still thinking about the demon thing from your Type Q Materialism paper. It’s hard to pin down exactly what conclusions to draw from that.

    If your brain is being tampered with in such a way as to alter your beliefs, then nothing is safe, right? It seems to me that you could be made to believe that 1+1=3. And when asked to explain why you believe this, despite being shown counter examples, you could then be programmed to spout a bunch of nonsense and made to believe that the nonsense was a coherent air-tight proof that 1+1=3.

    It seems to me that what the demon thought experiment shows is that an “illegitimate” belief (brought about by demons or brain damage or swamp transitions) renders everything it touches nonsensical.

  9. Pete Mandik says:

    Arnold,

    My point about cars doesn’t need to be made in terms of knowledge. I am aware of cars. I detect cars. But I don’t have car transducers. I have transducers for brightness, etc. My ability to detect luminance increments is something I acquired by being born (or being conceived), but my car detection ability is something acquired by learning. So it doesn’t follow, in general, from my lacking transducers for something that my ability to be aware of it is due to an innate property of my brain.

  10. Pete Mandik says:

    Kelly,

    Mice have, in addition to egocentric representations, allocentric representations. And I take allocentric representations to be the basis for concepts, beliefs, and knowledge. If you want to see more about what I think about allocentric representations in non-human animals, check out my paper “The Neural Accomplishment of Objectivity”.

    http://www.petemandik.com/philosophy/papers/accompobj.html

    Re demons and whether “nothing is safe”, I’d agree with Descartes that demons can’t fool you about whether you’re thinking. Along similar lines, while you can be fooled about the way things are, you can’t be fooled about the way things seem. If the demon makes you think that 1+1=3, then that’s how things will seem to you.

  11. Pete,

    You write: “… but my car detection ability is something acquired by learning. So it doesn’t follow, in general, from my lacking transducers for something that my ability to be aware of it is due to an innate property of my brain.”

    I don’t think you have properly addressed the crucial distinction between what we experience and what we learn. Anything we learn about the world depends on our prior phenomenal experience of the world. Having a phenomenal experience of the world, as I see it, means having a transparent brain representation of the world (our volumetric surround) and some of its contents from a privileged egocentric perspective. We do have sensory transducers for all perceivable contents of the world, but we do not have sensory transducers for the coherent volumetric property of the world we live in. In other words, while we can perceive, through our sensory afferents, cars, sunsets, and the computer screen in front of us, remarkably, we have no such sensory afferents for the space within which they and we exist. So this basic pervasive phenomenal experience of a coherent, expansive spatial surround must be innately given in the human brain.

    I think you are mistaken when you say that “.. my car detection ability is something acquired by learning.” It seems to me that a person who has never seen a car before could detect the “novel” object that we call a “car”. What this primitive person would have to learn is that the strange thing he sees is *called* a “car”. For a computer simulation of the putative brain mechanisms that can do this kind of thing, see Trehub *The Cognitive Brain*, Ch. 12 “Self-Directed Learning in a Complex Environment”, MIT Press 1991.

    Responding to Kelly, you said: “… I take allocentric representations to be the basis for concepts, beliefs, and knowledge.”

    I agree, and I make the same point in *The Cognitive Brain*. Transforming retinocentric representations into egocentric representations, and then transforming egocentric representations into allocentric representations for deeper cognitive processing, is one of the trickiest things that the human brain must do. This is what my theoretical model of the retinoid system and the synaptic matrices is aimed at accomplishing.
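
    To make the chain of transformations concrete, here is a minimal two-dimensional sketch (not the retinoid model itself; the planar simplification and the function names are merely illustrative) of what transforming a retinocentric location into an egocentric one, and an egocentric one into an allocentric one, amounts to geometrically:

```python
import numpy as np

def rotate(xy, angle_deg):
    """Rotate a 2D point counterclockwise by the given angle (degrees)."""
    t = np.deg2rad(angle_deg)
    r = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return r @ np.asarray(xy)

def retinocentric_to_egocentric(retinal_xy, gaze_angle_deg):
    """Compensate for where the eyes are pointing, yielding a body-centered (egocentric) location."""
    return rotate(retinal_xy, gaze_angle_deg)

def egocentric_to_allocentric(ego_xy, self_position, self_heading_deg):
    """Compensate for the subject's own heading and position, yielding a world-centered (allocentric) location."""
    return np.asarray(self_position) + rotate(ego_xy, self_heading_deg)

# A stimulus seen straight along the line of gaze, with the eyes turned 10 degrees
# and the body at (3, 4) heading 90 degrees, gets a definite world-centered location.
ego = retinocentric_to_egocentric([1.0, 0.0], gaze_angle_deg=10.0)
allo = egocentric_to_allocentric(ego, self_position=[3.0, 4.0], self_heading_deg=90.0)
```

    Each step discharges a self-relative parameter (gaze, then heading and position); the retinoid proposal concerns what neuronal machinery might accomplish transformations of roughly this kind.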

  12. Kelly says:

    >> I’d agree with Descartes that demons can’t fool you about whether you’re thinking

    Some schizophrenics claim that some of their thoughts are not their own, that these thoughts are “inserted”. So it seems to me that you could be fooled about whether you’re thinking. You might be convinced that someone else is using your brain to think. Which is nonsensical, BUT, that’s the point.

    Belief doesn’t have to be rational (as in schizophrenia and Anton-Babinski syndrome), and it would seem that irrational belief is capable of trumping everything.

    Again, if you had a demon who made you believe that you were not experiencing wine qualia, even though you were (the opposite case of the thought experiment in Type Q Materialism), I think all that this would mean is that you couldn’t then reason rationally about your wine-qualia, because the demon-created irrational belief would corrupt the process.

  13. Kelly says:

    Is there a belief that’s so irrational that it’s impossible to hold? Seems like the answer would be no.

    It seems that there are beliefs so irrational that they are impossible to arrive at by rational thought, BUT if you get some help from demons, or brain damage, or swamp-transitions, then it would always seem possible to “jump” to a state of having these beliefs.

  14. Josh Weisberg says:

    Hmmm… very interesting.

    So, to know I am not a zombie, I need egocentric knowledge. This is like the alleged absurdity of “there’s a pain in the room, but I’m not sure if it’s mine,” I take it. Or the weirdness of wondering about the cogito, as Kelly points out.

    Something is a bit odd to me: so, I might be aware “allocentrically” of a phenomenal character–pain, say–and still wonder whether I am a zombie because, hey, it might not be my pain. But a person in this case would still know what it’s like to experience pain, right? Mary, for instance, would lack this allocentric knowledge. So allocentric phenomenal knowledge would equal knowledge of what it’s like for one to experience a certain phenomenal character.

    But that still would leave open the sort of first-person skepticism you mention–I might still wonder if I am a zombie. So the knowledge argument and the zombie argument come apart, or so it seems.

    But let’s say I have allocentric knowledge of some occurrent pain, but I’m not sure whose pain it is. Then I find out a bunch more allocentric stuff–that the pain is happening to someone who’s a philosophy professor in Houston who likes higher-order thoughts and single-malt Scotch, who is six feet tall and bald, who went to CUNY, etc., etc. And let’s say this allocentric knowledge guides my action–perhaps without my awareness, or with my awareness but not under the description of “I”. So, I’ve got all the allocentric knowledge (which solves the Mary problem) and my action is guided appropriately. What more do I need to know I am not a zombie? Is this last little indexical bit really so essential? Or perhaps the scenario I’ve described is impossible, or obviously lacking something we all take to be crucial to first-person knowledge.

    Anyway, I wonder about the role of egocentric knowledge as it relates to phenomenal knowledge. I’m not sure it’s the be-all and end-all, nor am I sure it’s where the essential action is.

  15. Josh,

    Where do you think the “essential action” is for phenomenal knowledge/experience, if it is not in the egocentric brain representation?

  16. Josh Weisberg says:

    Arnold–

    In allocentric representations of sensory qualities. This is not to say I don’t think there’s something important about egocentric knowledge.

  17. Josh,

    The interesting thing is that if you examine the relationship between egocentric representations and allocentric representations within the framework of competent neuronal mechanisms, you find that a cognitive analysis of the properties of any allocentric representation (the essential action?) occurs by means of a selective decomposition within an egocentric representation. In other words, there is an overlay of the centroid of the allocentric image on the egocentric axis.

  18. Pete Mandik says:

    Hi Josh,
    This is very interesting stuff. If I follow the gist of what looks like a potential argument against me that you’re sketching, I think it needs to go something like the following.

    The Allocentric Josh Argument
    If Mandik is right that egocentric representation is essential for first-personal certainty of non-zombiehood, then it ought to be impossible for there to be a being of whom all of the following is true: (1) the being is an allocentric and behavioral doppelganger of Normal Josh, call him “Allocentric Josh”; (2) Allocentric Josh, unlike Normal Josh, is completely devoid of egocentric contents; and (3) Allocentric Josh and Normal Josh are equally certain of their own non-zombiehood. (1)-(3) are compossible. Therefore, Mandik is wrong.

    Mandik’s Responses:
    Response A: If (2) is true, then (3) must be false. If Allocentric Josh only has beliefs with existentially quantified contents like “there is an x who likes single-malt scotch, teaches Texans, etc.,” then Allocentric Josh cannot rule out that the description is actually true of someone else, and thus cannot be certain that it is true of himself.

    Response B: If (1) is true, then (2) is false. (This may be along the lines of something Arnold said, but I’m not sure). A being can’t have allocentric reps just like you that are wired to its behavior so that it behaves just like you without there being mediating egocentric reps. There’s just no way for light going into its eyes to eventuate in the judgment “Josh better duck” when a beer bottle is thrown at his head without the causal connections thereby satisfying the criteria for action-oriented, egocentric representations.

    Whaddya think?

  19. Pete,

    Your response B is consistent with my claim. An allocentric representation cannot have a systematic causal influence on cognitive processing or behavior unless it is processed within a mediating egocentric representation. This is the clear implication of the retinoid model.

  20. Josh Weisberg says:

    Pete–

    Yup, that’s a nice way to put what I was thinking.

    As for your responses, I think B is probably the stronger one. With A, it seems to me that at some point a hyper-detailed allocentric story would get close enough to the egocentric story to close off any reasonable doubt. And if that allocentric knowledge is automatically applied (i.e. “direct”), it would seem as obvious as egocentric knowledge, I would think. To fend this kind of claim off, you would need to jack up the certainty requirements, and that might start looking weird, given the epistemology of TQM.

    B has a John Perry-type feel, so it has some standing independently of zombies. And from the third-person interpretive perspective, we’d most likely posit that, for any critter X, a representation playing such a vital role in action just is an egocentric representation.

    Which makes me wonder: imagine we come across some alien critter and we’re trying to tell whether it’s employing egocentric and allocentric representations, or only allocentric ones. What evidence would decide between these two possibilities? Is a pure allocentric critter not (nomologically?) possible? (I’m assuming the critter in question is active and thriving in some environment.) I take it you’ve got some line on all this with C. elegans and Framsticks and other denizens of the Mandikian menagerie.

  21. Pete Mandik says:

    Josh,

    I think the alien critter question is a terrific one. I’ve written a whole paper (to be published in French) aimed at addressing that sort of thing. Here it is in English:

    http://www.petemandik.com/philosophy/papers/accompobj.html

    The gist of my current thought on this goes something like this. You can have a pure egocentric critter (such as all of my Framstick critters), but not a pure allocentric critter. Having any reps at all while also being a critter depends on egocentric reps. A critter with any reps at all needs to map sensory inputs (which in the first instance are egocentric) to motor commands (which are also egocentric). The hard problem, and the one the above paper is largely about, is how you tell whether there are any allocentric reps. And the answer that I push is that you look for evidence of certain kinds of memory, kinds marked by their flexibility across multiple contexts.
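
    To give a feel for what a pure egocentric critter amounts to, here is a minimal sketch (not an actual Framstick controller; the sensor names and the simple taxis-style rule are illustrative assumptions). The mapping runs from where the stimulus is relative to the critter straight to a motor command, with no world-centered map and no identification step anywhere in the loop:

```python
def egocentric_controller(bearing_deg, distance):
    """Map an egocentric percept -- the bearing and distance of a stimulus relative to the
    critter's own body -- directly onto a motor command for a two-wheeled critter."""
    if distance < 0.5:
        return {"left_motor": 0.0, "right_motor": 0.0}  # close enough: stop
    turn = max(-1.0, min(1.0, bearing_deg / 45.0))      # positive bearing = stimulus to the right
    return {"left_motor": 1.0 + turn, "right_motor": 1.0 - turn}  # veer toward the stimulus

# Example: a stimulus 30 degrees to the right and 2 units away -> speed up the left wheel, slow the right.
command = egocentric_controller(bearing_deg=30.0, distance=2.0)
```

    Everything the rule consults is already given in the critter’s own frame of reference, which is the sense in which such a critter can have representations without having any allocentric ones.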
