Phenomenal Content is Conceptual Content


Short version of the argument: Necessary knowability entails exhaustive conceptual constitution. Phenomenal experience is necessarily knowable. Therefore phenomenal experience is exhaustively constituted by conceptual content.
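The short version has the form of a simple quantified syllogism. Using my own predicate letters (K for "is necessarily knowable," C for "is exhaustively constituted by conceptual content," P for "is a phenomenal experience"), it can be schematized as:

```latex
\begin{align*}
\text{P1: } & \forall x\,(K(x) \rightarrow C(x)) && \text{necessary knowability entails conceptual constitution}\\
\text{P2: } & \forall x\,(P(x) \rightarrow K(x)) && \text{phenomenal experience is necessarily knowable}\\
\text{C: }  & \forall x\,(P(x) \rightarrow C(x)) && \text{by hypothetical syllogism}
\end{align*}
```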

Long version of the argument: Suppose that there is a rock that is heavy, lumpy, and igneous. Suppose that George has the concepts of lumpiness and heaviness, but no concept of being igneous. Suppose further that at no point does George acquire the concept of being igneous. What, then, can George know about the rock? He may know that it is lumpy and heavy, but lacking the concept of being igneous bars George from knowing that the rock is igneous. That the rock is igneous is, relative to George, un-conceptualized residue. Since idealism about rocks is false, rocks are the sorts of things that can have lots and lots of un-conceptualized residue. In worlds with rocks but no knowers, rocks are 100% un-conceptualized residue.

Let’s turn from rocks to phenomenal experiences and ask whether they can be, in whole or part, unconceptualized residue. One important thing to note about phenomenal experience is its first-person necessary knowability. This means that if a person has phenomenal experience then that person is necessarily able to know that they have phenomenal experience. I take it that not only am I not a zombie, but I know that I am not a zombie. I may not be able to know whether or not you are a zombie, but that would simply be a failure of third-person knowability. If phenomenal experiences are the sorts of things that might even be beyond the knowability of the persons that have them, then for all that person knows, they are a zombie, which I take to be absurd. If a phenomenal experience has any phenomenal quality, q, that is beyond the knowability of the person having the experience, then for all that person knows, they lack experiences with q. They would be a q-zombie for all they know. Again, I take that to be absurd. Since non-zombies can know of themselves that they are non-zombies, phenomenal experiences can have no phenomenal qualities that are necessarily unknowable from the first-person point of view. If something is necessarily knowable by me in that every aspect of it is necessarily knowable by me, then it can have no aspect that outstrips my concepts. If there were such an aspect it would be inaccessible from the first-person point of view.

So far this seems to show only that there must be a correlation between phenomenal characteristics and phenomenal concepts. Why make the further step of identifying phenomenal character with the contents of phenomenal concepts? Here’s why: If, with regard to the phenomenal, character is distinct from conceptual content, then it would be possible for me to be in two different phenomenal states even though I had the same doxastic state. That is, I could believe that I had an experience with quality q on both occasions, but in one case the belief would be true and in the other case it would be false. However, if this is possible, then there would be states that are phenomenally distinct but subjectively indiscernible. I would be unable to know, from the first-person point of view, whether I was in a state with quality q or not. I could, for all I know, be a q-zombie. I take this to be absurd. It follows, then, that with respect to the phenomenal, character is not distinct from conceptual content.
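Schematically (in my own notation, not anything the reader is obliged to adopt), this step from correlation to identity is a reductio:

```latex
\begin{align*}
&\text{Assume: phenomenal character} \neq \text{conceptual content}\\
&\Rightarrow \Diamond\,\exists S_1, S_2:\ S_1 \neq_{\mathrm{phen}} S_2 \ \wedge\ \text{same doxastic state}\\
&\Rightarrow S_1, S_2 \text{ are phenomenally distinct yet subjectively indiscernible}\\
&\Rightarrow \text{for all I know, I am a } q\text{-zombie} \qquad (\text{absurd})\\
&\therefore\ \text{phenomenal character} = \text{conceptual content}
\end{align*}
```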

So, what about the widely attributed non-conceptual contents of the perceptual states of babies and nonhuman animals? They can allegedly phenomenally experience colors, etc., that outstrip their conceptual repertoire. If their experience has content, then the content must be non-conceptual content. But guess what? I am neither a baby nor a non-human animal (I swear!). While I know that I am not a zombie, for all I know, those cute critters are zombies. Further, they are zombies for all they know too. Not only am I not in a position to know whether they are zombies, neither are they. Whatever it is that is going on in babies and nonhuman animals, it can’t be the sort of thing that’s going on in me, because the sort of thing that’s going on in me is such that I know that I am not a zombie.

18 Responses to “Phenomenal Content is Conceptual Content”

  1. [...] Some further thoughts on condition K: My conscious states are necessarily knowable by me as such: knowable by me as my conscious states. In order for this to be true, conscious states must be conceptualized and egocentric. Knowability entails conceptualization (as I’ve argued here). And “as my own”-ness entails egocentricity (I’ll flesh this out further in a future post). [...]

  2. Hi Pete,
    I have two separate comments.

    At the start of the text you mention “knowability” and “able to know”, which I take to imply possible knowledge, not necessary knowledge.
    However later you say:

    If something is necessarily knowable by me in that every aspect of it is necessarily knowable by me, then it can have no aspect that outstrips my concepts.

    It seems to me that the valid conclusion would be that it can have no aspect that outstrips my possible concepts, since necessary knowability implies a potential for knowing, which might not be actualized.

    You later say:

    If, with regards to the phenomenal, character is distinct from conceptual content, then it would be possible for me to be in two different phenomenal states even though I had the same doxastic state. That is, I could believe that I had an experience with quality q on both occasions but in one case the belief would be true and the other case it would be false.

    But the conclusion (i.e. that in the one case the belief has to be true, and in the other false) would hold only if we assume that the same concept can’t be applied to different phenomenal characters. But this doesn’t seem true to me. Let me explain what I have in mind through an example…
    For a person who isn’t a color professional, two (in fact many) different greens will be covered by the general concept of green. And I don’t think one can deny the accessibility of the phenomenal difference between those greens even though they fall under the same concept, since it is hard to see how the person could learn the colors if he can’t spot the difference (i.e. if it isn’t available to him in the phenomenal).

  3. petemandik says:

    Hi Tanasije,

    Thanks for the comments.

    I think your point #1 is correct. However, I think that there is nonetheless a way of understanding what someone is able to know such that it is relativized to a set of concepts. Thus, relative to George and the set of concepts that does not include the concept of being igneous, George cannot know that the rock is igneous. Of course, there is the point you raise: it still makes sense to say of George that he can come to know that the rock is igneous. This is not relativizing the claim, then, to a fixed set of concepts. George can come to learn that the rock is igneous by coming to acquire the concept of being igneous.

    Regarding point #2, a person may be able to visually distinguish different shades of green without having the concepts used by professionals who make similar discriminations. Nonetheless, I don’t see how a person could distinguish the shades without having some concepts with which to do so. That is, a person could not distinguish that this experience is different from that one while having no concepts sufficient to draw some distinction in their beliefs about the experiences.



  4. Eric says:

    Your position seems to commit you to the claim that monkeys are not conscious, or that they have concepts about their own consciousness.

    As for the former, the fact that in phenomena like binocular rivalry the dynamics seem similar in humans and monkeys is at least suggestive. As is our ability to make them drunk, make them hallucinate, and observe their REM. Again, not conclusive, but suggestive. Also, is there anything in your allocentric-egocentric model that would justify saying that monkeys are not conscious, given that they have both types of representations (in fact, the hippocampus of the lowly rat is your example for allocentric representations)?

    If monkeys are conscious (and don’t know it), as you mention at the end of the post, then phenomenal content isn’t conceptual content (at least not conceptual content that involves knowledge of one’s consciousness).

    If the above is right, then you need to claim that monkeys/rats know they are conscious, which would be unorthodox but interesting. The implausibility of the first horn of the dilemma seems to push in this direction, anyway.

  5. Pete,
    Thanks for the answers.

    I guess #2 might be approached in different ways. What I had in mind is a scenario of ostensively teaching the specific greens to someone by showing him e.g. two circles one beside the other, one scarab green and the other spring green. If we allow that this kind of learning is possible, the difference will have to be accessible before the concepts are acquired, precisely in order for the concepts to be acquired.
    So to connect to #1, I agree that any phenomenal characteristic has to be knowable, but only potentially so.

  6. petemandik says:

    Hi Eric,

    The last paragraph of my post was perhaps too hasty to adequately convey my view. I don’t mean to suggest that only non-infantile humans have consciousness, but that if allegations that only non-infantile humans have concepts are true, then only they have consciousness.

    I think, however, that criteria for concept possession are satisfied by the non-humans you mention: hippocampus-intact rats and monkeys in binocular rivalry experiments. There are likely others besides. Whether they fail to meet other criteria for consciousness, I’m not super sure, but I don’t think they lack concepts.



  7. petemandik says:

    Hi Tanasije,

    I think you raise an important point. However, I think that the objection suggested can be met. Perhaps it will be useful to think of things in the following way.

    I am prepared to grant both that (i) a person can acquire the concepts of scarab green and spring green and that (ii) in order to do so they must first have, in some sense, pre-conceptual access to scarab green and spring green. However, the situation envisioned in (ii) would constitute a problem for my view only if the access involved is conscious access. It remains open, though, that if this access involves no conceptualization of scarab green (lacking either the concept of scarab green or some other concept useful in tracking that shade), then the access is not conscious access.



  8. > While I know that I am not a zombie, for all I know, those cute critters are zombies.

    This is way off and seems to me to involve unnecessary overkill. You can perfectly well know that they are not zombies (have conscious experiences) from the manifestations of consciousness they display in naturally expressive behavior. It is true their species of conscious experience may be importantly different from the conceptually informed species of experience had by discursive creatures. But it is still manifestly a species of conscious experience.

    > Further, they are zombies for all they know too.

    Fair enough, on the idea that they don’t know anything at all in the sense of being able to state and justify a knowledge claim discursively. But this should not obscure the fact that they can still be said to feel things, pain say.

    > Not only am I not in a position to know whether they are zombies, neither are they. Whatever it is that is going on in babies and nonhuman animals it can’t be the sort of thing that’s going on in me, because the sort of thing that’s going on in me is such that I know that I am not a zombie.

    OK. Though it can be a different species of a common genus.

    Compare: it is sometimes said that while other animals feed, only human beings dine. Yet one could take the view that dining is a special form of feeding engaged in by cultural creatures. Using this analogy, having conscious experience would be like feeding, not like dining: though lines may be hard to draw, we can still see that lots of non-human animals clearly have it.
    It might help to invent a different term, say “discursive experience” to parallel “dine” and demarcate the special variety of experience that only possession of concepts makes possible. Then we wouldn’t have to land in the unhappy position of appearing to deny that non-human animals and pre-linguistic human beings have conscious experiences like feeling pain.

  9. Pete Mandik says:

    Hi Anders,

    I’m not particularly happy with my last paragraph in the post and so am not moved to go to extraordinary lengths to defend it. Mostly what bothers me about it is the way it seems like I’m granting that babies and nonhumans lack concepts. I think it is dubious that babies lack them, and you’d have to go pretty far down the phylogenetic scale to find clear cases of nonhumans lacking concepts.

    However, once we got down that far, I don’t see why we are supposed to be compelled to attribute any kind of experience. I certainly can’t stop someone from stipulating that “experience” just means “input” and attributing kinds of experiences to earthworms, anesthetized people, retinal ganglia, and thermometers. But I would wonder what the point was.

  10. OK. Most adherents of Sellars take babies to lack concepts; you are working with a different concept of “concept”.

    I think it’s still an important point that creatures can be said to have experiences like pain *whether or not* one takes them to have concepts, as long as one distinguishes non-discursive from discursive varieties of pain experience. So taking the Sellarsian line doesn’t land one in the absurdity of denying that other creatures can feel pain etc.

  11. Pete Mandik says:

    Hi Anders,

    I think creatures can lack concepts and have mechanisms that detect damage. I also think such damage detection is unconscious damage detection. One thing I’d be very interested in seeing is an argument that there can be conscious damage detection by creatures that possess no concepts.

  12. I was just relying on an idea like one I find in this quote of Wittgenstein’s:

    “Look at a stone and imagine it having sensations. - One says to oneself: How could one so much as get the idea of ascribing a sensation to a thing? One might as well ascribe it to a number! - And now look at a wriggling fly and at once these difficulties vanish and pain seems to be able to get a foothold here, where before everything was, so to speak, too smooth for it.”

    Whether one deigns to include flies or not, concepts like that of feeling pain can still get a grip on the doings of non-verbal creatures because they behave in ways that are naturally expressive of conscious sensations.

  13. Pete Mandik says:

    Hi Anders,

    The Wittgenstein example doesn’t cut much ice unless one makes the dubious assumption that all sensations are conscious sensations.

  14. Well I take it the conception of sensations like feeling pain at issue takes them to be states of consciousness.

    I’m not sure if you want to say that non-human animals can be seen to feel pain but all their pain is unconscious. That seems a weird usage to me. But possibly this is purely verbal, if you think that unconscious pain has all the properties others would apply to feeling pain simpliciter. (E.g. that it is wrong to inflict unconscious pain.)

    Perhaps one could say they are not cognizant of their pain insofar as they lack a concept of pain. Still one can say they hurt. I just don’t know what it means to say they hurt unconsciously.

  15. Pete Mandik says:


    I take it that attributing unconscious pains would be similar to attributing unconscious perceptions of visual form to visual form agnosiacs. Visual form agnosiacs claim to be unable to see the shapes and orientations of objects; they are at chance when saying what the forms and orientations of objects are, but with eyes open in well-lighted rooms, they behave appropriately (in some ways) with respect to those objects in ways that someone could not if genuinely blinded.

  16. OK, but I can’t see how this would work for attributing unconscious pains to non-verbal animals in the sort of case I am thinking of. An organism with merely unconscious registration of bodily damage in this sense would not by default show any signs they were in pain, just as visual agnosics don’t show signs that they are not blind. We would have to get them in some special setup to show that they are better than chance at some discrimination. But that is nothing at all like what we are talking about when we say that animals feel pain.

    I am simply relying on the presence of molar, organism-level behavior that can be expressive of the organism’s being in pain. True, this behavior can simultaneously constitute a form of detection (at the organism level, not the sub-organismic level) of information about damage. But it’s not a merely dispassionate discrimination of that information by the organism.

    If it is found at this organism-level, it is a state of consciousness of the organism that is being expressed, whatever the implementation details inside them.

    In that sense we can say that non-verbal animals can feel pain (because it can be expressed in their non-verbal behavior) while linguistic creatures can have a different, discursive form of pain (that expresses itself also in acquired forms of verbal behavior, e.g. saying “I have a headache”.)

  17. Pete Mandik says:


    I don’t think your characterization of agnosia is quite right. The famous visual form agnosiac, DF, reported on by Milner & Goodale, can snatch a pencil out of your hand, put a letter in a mail slot, and walk on a mountain path. It’s not like the only way to probe for her unconscious vision is with elaborate forced-choice tasks.

  18. Sorry, you are correct that I misunderstood the case (mistakenly taking it to be blindsight coming up again.)

    If I have it now, what we have in the visual agnosics is, in effect, non-verbal behavior characteristic of seeing, but no verbal expressions of seeing (together, also, with failure to make use of the seeing in reasoning.) That is a schism that one can find in creatures who have both verbal and non-verbal forms of expression.

    My idea is that for the non-discursive experience undergone by non-human animals there are only the non-verbal expressions. Since they lack the capacity for language and the rational cognition it makes possible, there can be no such schism in that case. There is no way within their capacities that they fail to behave as if in pain.

    Perhaps this is purely verbal. I really am being very simple-minded here, insisting that our ordinary concept of feeling pain and consciousness generally applies equally to creatures who lack language and concepts. It seems you are making conceptualization criterial for application of the term “conscious”. So what I call the non-discursive variety of pain, you call “unconscious pain”.

    That seems to me to be a terminological mistake. It seems to suggest they don’t really hurt. Yes, it marks a distinction worth marking somehow, but not using terms which already have a sense in which they can apply to creatures in virtue of expressive behavior alone, I would say.