Hammering the Mirror: Against Self-reflection



The old idea that consciousness is self-consciousness, that conscious states are states of which one is aware, is the target of yesterday’s post, “The Transitivity of Consciousness as a Contingent Reference Fixer.” (See also the query I posted (and ensuing discussion) over at the group-blog, Brains, “What Are You Conscious of When You Have Conscious Experiences.”)

Here are some further thoughts to help make clearer what my interest in all this is, excerpted from my paper “Phenomenal Consciousness and the Allocentric-Egocentric Interface”:

[T]here are philosophical reasons for being suspicious of the transitivity thesis.

First off, according to advocates of the transitivity thesis it is supposed to be intuitively obvious that it is a requirement on having a conscious state that one is conscious of that state. If the transitivity thesis is true, it should be obviously incorrect to say of a state that it was conscious before anyone was conscious of it. However, if we consider a particular example, it seems that the transitivity thesis is not obviously correct (which is not, of course, to say that it is obviously incorrect). Consider, for example, how one might describe what happens in motion-induced blindness experiments when the yellow dots pop into and out of consciousness. [See the demo at the end of "Motion-Induced Blindness and the Concepts of Consciousness"] It seems equally plausible to say either (1) that first the perception of the yellow dot becomes conscious and then you become conscious of your perception of the yellow dot or (2) that the perception of the yellow dot becomes conscious only if you also become conscious of your perception of the yellow dot. If the transitivity thesis were pre-theoretically obvious, then option (1) would be obviously incorrect and (2) would be obviously correct. However, since neither (1) nor (2) seems obviously correct (or obviously incorrect), the transitivity thesis is not pre-theoretically obvious.

A second consideration that casts suspicion on the transitivity thesis concerns how easily we can explain whatever plausibility it has without granting its truth. We can grant that the transitivity thesis may seem plausible to very many people but explain this as being due to the fact that counterexamples would not be accessible from the first-person point of view. If we ask a person to evaluate whether the transitivity thesis is true, they will call to mind all of the conscious states of which they have been conscious. But this cannot constitute conclusive proof that conscious states are necessarily states that their possessor is conscious of. Consider the following analogy. Every tree that we have ever been aware of is, by definition, a tree that we have been aware of. But this is not due to the definition of being a tree, only to the definition of being aware of it. The fact that every tree we are aware of is a tree of which we have been aware cannot constitute proof that trees are essentially objects of awareness or that no tree can exist without our being aware of it. By analogy, we should not conclude, from our being conscious of all the conscious states we have encountered from the first-person point of view, that all conscious states are necessarily states that we are conscious of. We should instead view our first-person access to conscious states as a way of picking out a kind of state that we can further investigate using third-person methods. The description “states we are conscious of ourselves as having” may thus be more profitably viewed as a contingent reference fixer of “conscious state” that leaves open the possibility that it is not part of the essence of conscious states that we are conscious of them. Instead, the essence of conscious states is that they are hybrid representations that exist in the allocentric-egocentric interface.

29 Responses to “Hammering the Mirror: Against Self-reflection”

  1. Berkeley should have read your paragraph about the tree!

    I wonder if some of the confusion people have about these issues comes from an ambiguity (noted by Chisholm and Jackson, among others) between epistemic and phenomenal senses of terms like “aware”, “appears”, etc. — and I think “conscious”, too.

  2. Richard Brown says:

    Hey Pete, this is an interesting discussion…

    Let me take another run at it…

    I can agree with a lot of what you say. So for instance I think that it is important to see that the transitivity principle (TP) is an important way to pick out certain states that we may then investigate using third person techniques.

    The problem is that you say that we should leave open the possibility that the state in question is essentially a ‘conscious state’. Surely we may find out that the state in question was already conscious independently of our being conscious of it, but we might also find out that it was our being conscious of it that made it a conscious state. It is going to depend on what a conscious state is! That is to say, TP is a hypothesis about the nature of consciousness. It is a theory, and yes it does have predictions (more on that in a sec), but its truth is only a possibility; however, if it turns out to be right, then it is necessary that every conscious state is a state of which we are conscious. So the question is: why think it is right?

    It claims to have several things in its favor. You mention one of them but do so in an unfavorable way (understandable though it is). Here is a friendlier way of putting it. That there are unconscious mental states is evident from common sense as well as theoretical reasons (e.g. Freud, subliminal perception, priming, etc.). What is an unconscious state? An intuitive answer is that an unconscious (say) desire is a desire that you were not aware of having. Similar remarks can be made about perceptual states. This is the sense in which TP is supposed to be intuitively obvious. It immediately suggests itself as a way of explaining how it is possible to have conscious states at all. It is not at all plausible to say that an equivalent to your (1) above is equally plausible! That would amount to saying that first the desire becomes conscious and then I become aware of the desire, which would in effect be to deny that there are unconscious desires!

    I guess the intuitive picture that people on your side of the fence have (and I admit that I am attracted to it as well) is that we have what might be called an experiential field, which I can be conscious of to varying degrees depending on where I am focusing (or what I am thinking about, or whatever). The intuition is that my experience is there and I am able to pick out various aspects/parts of it. The problem with this view is that unless you appeal to TP you will not be able to give any explanation at all of consciousness. It will simply be a big mystery and that will be as far as we can get. The problem being, of course, that we need to explain consciousness in terms of something that does not itself involve consciousness but yet is still mental. So if the state is already itself conscious then we lose our ability to say anything meaningful about consciousness at all.

    I do not think on your account you want it to turn out that consciousness is a mystery. You are trying to explain what it is to be a conscious state in terms of a hybrid allocentric-egocentric state. So I guess what I would like to see is an argument that this is not a way of implementing TP. It seems to me that what your theory says is that we become ‘suitably aware’ of our brain states when they interact at the egocentric-allocentric interface. To have two states mutually interacting in the way you describe is the way in which the organism becomes conscious of its experience. Can you tell me why you think that you are not implementing TP?

    Another way to make the point: can you tell me whether you think that one of the hybrid states can occur unconsciously? Could there be a mutually interacting pair of states at an intermediate level of processing that I was in no way aware of? A kind of dilemma awaits you here. If you say that it can occur unconsciously, then its being that kind of state does not explain consciousness in the way that you intended. If you say that they cannot occur unconsciously (which I thought your discussion of the motion-induced blindness stuff suggested would be your answer), then you satisfy strong TP. Point to the counterexample and you will point to a counterexample to AEI; hence AEI is an implementation of TP.

    The Kripke stuff does not get you out of this problem. You want to be able to say that the conscious state might not have been a conscious state. What you mean by that is that the state that we picked out via the contingent property of our being aware of it might have occurred without anyone being aware of it. This is something that all parties to the debate can and do agree on! However, if the state is itself already a conscious state, it will be necessarily false that it is an unconscious state!

  3. Pete Mandik says:

    Eric, I have to confess that I’m pretty suspicious of that distinction. But maybe we can agree that much of interest about consciousness concerns whether there is such a distinction and if so, how best to handle it. And perhaps we can further agree that one should neither assume that there is such a distinction nor that there isn’t and that whatever one thinks of it, it merits argument. I’m sure I’ll have more to say about this in the future, and have kicked it around a little bit at this post:

    Phenomenal Content is Conceptual Content

  4. Pete Mandik says:

    Thanks Richard. I’m not sure I entirely follow, but I do think you raise a lot of good points.

    I think it is important that I be able to show that AEI is not just an implementation of TP. I don’t think I make that sufficiently clear in my paper. The gist, though, is in spelling out what the representational contents of the hybrid representations are: neither the allocentric nor the egocentric representations are meta-representational, therefore no implementation of TP.

    I don’t quite see the dilemma you mention in your second-to-last paragraph. Are you assuming, there, that “occurs unconsciously” = “occurs without one’s being aware of it”? I certainly don’t. That is what I am at pains to deny. So, I don’t see how I get stuck with either horn of the dilemma.

  5. Richard Brown says:

    RE meta-representational: It is true that neither of them taken separately is meta-representational, but taken together, as a ‘dynamic individual’, they do represent the organism as being in some state or other…so your view turns out to be in the same category as SOR theories; the dynamic individual is a (novel?) kind of self-representing state…

    RE the dilemma: I think I rushed through it, let me go around again…What I want to know is whether or not one of the mid-level processing states can occur when I am in no way aware of it? (Yes, I said ‘unconsciously’ to mean that because I wonder what you could mean besides that; I just can’t help but think that an unconscious state simply is a state that I am not aware of, but the dilemma was not supposed to rely on it.)

    First Horn: If these states do occur when I am in no way aware of them, then I would want to know why they still count as conscious states. I suppose that you will tell me that they are conscious states because they make you conscious of some state of affairs (or something). Now if you say that then I think you have a problem, because if I am in no way aware of the state then I am in no way aware of what the state represents. If I am in no way aware of what the state represents, in what sense does it make me conscious of it (what it represents)? You have nothing to reach out to here! If you simply assert that having the state makes you conscious of what it represents, then you simply assert that having the state guarantees that you become aware of it in the suitable way!! And so you have failed to explain the difference between conscious and unconscious states, which is what you were trying to do, and so you have not explained consciousness in the way that you intended.

    Second Horn: if you say that these states cannot occur without our being aware of them, then you satisfy TP. Which is good, since I argued in the other post that you did implement TP, and I think I just argued that even the Dretske-style approach has to implement TP….

    Which of these paths to embracing TP must you take? Last time I thought that what you say about motion-induced blindness suggested that you thought that these states could not occur except when we are aware of them, but it is really that they cannot occur without being conscious-in-whatever-the-hell-way-that-means, and so you see your view as compatible with weak TP. No, you want to take the first horn and argue that the hybrid states can occur when I am not aware of them (that is the gist of the Kripke stuff, right?) and still count as conscious.

  6. Thanks for the link, Pete! I find I disagree with what you say in the older post. I think our knowledge of our own phenomenology is very poor, and often we can’t tell the difference between even radically different phenomenological possibilities (such as that experience is rich with detail vs. that we pretty much only experience what we attend to, where this is *not* to be construed — as I think it needn’t necessarily be — as merely a debate about terms like “experience” or “attention”). So we’re starting from different places, regarding the relationship between knowledge and phenomenology!

  7. Pete Mandik says:

    Heya Richard,

    Re: Metarepresentation. I’m still not getting it. Egocentric representations, I’ll grant, represent, among other things, the organism who has them. But they need not represent the organism as having representations. They need not represent representations at all. So, they need not be metarepresentational. Gluing them to allocentric representations would give rise to metarepresentations only on some semantic theory I have yet to be exposed to, so I don’t see how this is supposed to work.

    Re: dilemma. I guess I’d take the first horn, then. But I’d argue that the horn is not quite so pointy. The crucial part of the question you suggest is the one along the lines of “what makes allocentric-egocentric interface states count as conscious?” My answer is something like this: we introduce these states into the conversation as the states of which we are conscious. THESE states are what we are talking about. The allocentric-egocentric interface hypothesis is that what makes these states hang together as a scientific kind is that they involve reciprocally interacting egocentric and allocentric representations of perceptible properties. It is open that such states could occur without our being conscious of them. I think of this as analogous to the following. We introduced whales into our scientific conversation long ago as “those big fish over there”. It was open, of course, to discover that whales weren’t fish. This is because the concept of being a fish is not analytically contained in the concept of being a whale. And this latter point may very well hold because no concept is analytically contained in any other (with the exception of stipulated technical definitions). So unless the Transitivity Principle is supposed to be an analytic truth, I don’t see that I’m in any trouble here.

  8. Pete Mandik says:

    Eric, I can agree with a lot of your examples of introspective poverty. But let me ask you this: do you know that you are not a zombie? Or is introspection sufficiently impoverished that we can be wrong about even that?

  9. It seems to me I know I’m not a zombie. But I’m ready to hear arguments to the contrary!

    Why might someone be tempted to think we can’t know whether we’re zombies? Maybe epiphenomenalism about consciousness. But epiphenomenalism about consciousness is one way of insulating claims about consciousness from empirical test — so we’re right back on your issue.

    I wonder if identity theory, just as much as contemporary property dualism, entangles itself in empirically meaningless commitments, to the extent it’s committed to metaphysical positions about nomologically impossible possible worlds.

  10. Richard Brown says:

    Pete, I gotta go shoe shopping with my gf so I have to make this quick

    You are missing the point! I will agree with you that we pick certain states out as the ones which we are conscious of. I even agree that THOSE states, the ones that I picked out as being the ones that I am aware of, could occur when I am not conscious of them. I then also agree that the allocentric-egocentric stuff is a hypothesis about the nature of those states; it may even be correct, I don’t know. But where in there was the argument that THOSE states are conscious states independently of my being conscious of them? While it may be true that THOSE states are AEI states, why think that those states are necessarily conscious states? See, what I was asking was: what is it that allows you to say whether or not the states in question are conscious independently of our being conscious of them? I then argued that the only answer that I know of to this question is the Dretske-style answer, but that that kind of answer must make a tacit appeal to the transitivity principle in order to work: If the state is one which I am in no way aware of, how can we plausibly say that I am aware of what the state represents? If we cannot say that I am aware of what the state represents, then why would anyone say that the state makes me conscious of something? The transitivity principle is not analytic; rather, it suggests itself as a way of explaining how it is that the state makes us conscious of what it represents. We become aware of what the state represents by representing ourselves as being in that state. I still don’t see how you are not really saying the same thing in a different way (in precisely the way that does not involve metarepresentation)…

    One other quick question. Is this something that you think holds for qualitative aspects of consciousness only? Or do you think that my desires, intentions and beliefs are like this as well? I think I see that you feel that it makes sense to say that I could have the experience of red, and that there is something that it is like to have this experience, and that I could be unconscious of it. But does this make sense when talking about desires and such?

  11. Pete Mandik says:

    Hi Richard,

    Maybe I’m still missing the point. Perhaps I’m having a hard time interpreting your question. Perhaps, further, I’m not sure whether it is best interpreted as “Why is this a good THEORY of consciousness?” or “Why is this a good theory of CONSCIOUSNESS?”.

    Regarding the first interpretation, there’s a lot of detail spelled out in the paper concerning third-person and first-person data about conscious states that the AEI theory explains. Re third-person data, there’s stuff like why inferotemporal neurons are the ones most highly associated with the conscious state in binocular rivalry experiments, or why feedforward activation of midlevel representations without feedback is insufficient for conscious states. (Note, this stuff is pertinent to the question of how I can do a better job than, e.g., Dretske in accounting for the difference between conscious states with representational content and unconscious states that nonetheless have representational content.) Re first-person data, there’s stuff about how only mental states with both egocentric and allocentric content are conscious.

    Regarding the second interpretation, I’m a bit mystified as to why what I’ve said so far doesn’t get at the question. But maybe I’m still not understanding the question. Or maybe the following will help make some progress toward an answer. Why wouldn’t this question be analogous to someone asking of the contemporary electromagnetic theory of light “Why is this a good theory of light?” Why isn’t the person in question (call him “Richard Brown”) who is wedded to the transitivity principle just like someone wedded to the thesis that only visible light can really be light? How can there be light that isn’t visible? If we wanted, we could define the word “light” so that it only applied to visible regions of the EM spectrum. Similarly, we could define “consciousness” so that it only applied to states of which we are conscious. However, neither decision really locks onto anything with any real empirical depth. Whether some bit of EM radiation is visible or not doesn’t really carve the spectrum at any joints and instead is more a statement about us and how we came to know that there was any EM radiation in the first place. A much more powerful theory regards our mode of introducing the topic (“that visible stuff”) as merely a mode of introducing the topic, and not a description that gets at the “essence” of the thing thereby introduced.

    Regarding your quick question: my theory is intended to generalize to all conscious states. Part of how that is supposed to work depends on what I say about the qualitative v. intentional distinction, which is that all and only conscious states have qualitative properties. There are beliefs and desires that lack qualitative properties, but they also lack consciousness. The beliefs and desires that are conscious also have qualitative properties.

  12. Richard Brown says:

    Pete, I agree that it is wrong to DEFINE state consciousness via the transitivity principle, and I don’t think that anyone has ever done so! That is why I agree with you that initially we use the fact that we are aware of certain states as a means of picking out certain states; we are aware of THOSE states. We can then, noticing that some states are conscious while others are not, wonder what it is that makes an unconscious state conscious. It is at that point that the transitivity principle, which is initially only used as a means of picking out certain states, suggests itself as an EXPLANATION of the phenomenon that does not render it mysterious. For there to be something that it is like for me to have that state is for me to represent myself as being in it…

    You, on the other hand, start from the same point as I do, by picking out some states that we are interested in as the ones that we are aware of. You then investigate them and find out that they may be AEI states, and then you say that AEI states are conscious states. But what you do not give is an independent specification of what a conscious state is that does not make appeal to the transitivity principle. You argue that the difference between conscious states and unconscious ones is that conscious ones are AEI states. I then ask ‘by what criteria do you classify them as CONSCIOUS states? For I would say that they occur unconsciously.’ You cannot respond that they are AEI states, because I am asking why it is that those states count as CONSCIOUS! The only answer that I have seen you give is that they are states that make you conscious of something (though, yes, yes, I see that you want to deny the transparency thesis because of your love for brain state introspection). But you have yet to answer the argument that really that kind of answer makes a tacit appeal to the transitivity principle.

    Notice that this shows why the analogy that you draw to light fails to be a good analogy. I agree that we pick out light as being that visible stuff or whatever. Then we investigate what that stuff is, and we find a way to INDEPENDENTLY characterize it that allows us to explain it, and it is in light of that characterization (pun intended) that we call the nonvisible stuff light as well (in fact we end up calling radio waves a form of light!). That being an AEI state is not an independent characterization can be made evident when one notices that it is reasonable to ask ‘yes, but can an AEI state occur and not be a conscious state?’ in a way that it is not reasonable to ask ‘yes, but can a state of which I am aware occur and not be a conscious state?’

    Here is another way to make the point that just occurred to me. One of the things that I find interesting is how both you and Rosenthal use some of the same evidence, one for the transitivity principle, the other against it. So, on the one hand, Rosenthal argues that the reason so many people are reluctant to embrace his view that there are unconscious pains (a pain that is in no way painful) is because every pain that they are aware of is a painful pain, and so they think that all pains must be painful. In fact I would almost suspect that one could make this charge against you, except that you offer the same kind of evidence as counting against the transitivity principle. You say that we can explain why people so readily assent to the transitivity principle because any counterexample will be one which you are not aware of. Now how can the existence of states that we are not conscious of be both evidence for and against the claim that those states are conscious states in their own right? I have suggested that the answer to this question is that depending on how you independently characterize conscious states this will seem to go one way or the other, and that’s why the issue is pressing…

    Now I can imagine that at this point you are thinking to yourself, ‘ah yes, but how do YOU independently characterize conscious states?’ The answer may be longer than you expect. We use ‘states of which I am conscious’ as a way of picking out all kinds of states, and then we want to know more about them. At this point it is an open question whether those states occur consciously without our being aware of them. In the course of our third-person experimenting (by which I include psychological experimentation as well as folk psychological behavioral explanation) we notice some very interesting things about these states. In particular, we notice that they seem to occur when the person in whom they occur is in no way conscious that they do so. In fact we notice that these states that the subject is in no way aware of seem to play all the same causal roles (yuck! The words choke in my throat!) that they do when the subject is aware of them. Now what is the difference between these two states? It is not that they behave differently; in fact the person pretty much will act the same whether they are aware of the state or not. By far the most natural thing to say is that the difference between the two states is that one of them is a state that I am aware of having while the other is not. In fact, upon reflection one might be led to think that that is the only way in which they differ; surely the onus is on the other side to illustrate a way in which they differ besides my being aware of one as opposed to the other…Oh wait, here is another plausible way in which they differ. For one there is something that it is like to have the state, while for the other there is nothing that it is like to have the state. If it seemed reasonable to think that they differed in only one respect before, it seems to me still reasonable to think that they do so now, and so we end up with the transitivity principle as our independent characterization. 
It turns out that the thing that we used to pick out the states is what makes those states conscious.

    Ok, one last final thing. You say that beliefs have qualitative properties, which as I think you know I agree with, but how is the allocentric-egocentric hierarchy going to work here? Is it that there are states which represent me as believing something, and then there are states that represent someone or other as believing something, and a conscious belief is one that is intermediate between these? That seems weird!!!!

  13. Pete Mandik says:

    Hi Richard,

    The independent specifiability stuff is an interesting way of framing the debate, but I guess I don’t see how it counts against me, and I guess that I don’t see how the natural independent specifications (consciousness of, what it is like, first-person and third-person data, etc.) all tacitly appeal to the transitivity principle. You claim to have argued for this, but maybe it went by way too fast for me to be convinced by it. I’m still convinced that you are like some dude asking “Yes, but why call that invisible stuff ‘light’?”

    Re beliefs. My belief that dogs are mammals has an allocentric content, the proposition that dogs are mammals, and whatever qualitative properties attach to it when I believe it consciously are perceptual properties that would involve, e.g., the visual image of seeing some dog from some point of view and/or the motor/tactual imagery of mouthing the words “dogs are mammals”.

  14. Richard Brown says:

    And you’re some dude who says ‘unconscious states are conscious’

    The arg. in brief is: imagine a state that I am in no way aware of; then I would not in any way be aware of what the state represented. If I am in no way aware of what the state represents, then in what sense does the state make me conscious of what it represents? The transitivity principle lets us explain how it is that a state makes us conscious of things in the world…

  15. Pete Mandik says:

    Richard, thanks for the clarification. I think I see a bit better what’s supposed to be going on with this argument. It seems, though, that a big problem arises for the first premise. I take the first premise to be equivalent to

    (1) If I am not conscious of state S, then I am not conscious of what state S represents

    Which, by contraposition, is equivalent to

    (2) If I am conscious of what state S represents, then I am conscious of state S.

    Most fans of the Transitivity Principle that I am aware of also hold that

    (3) You are conscious of S if and only if you mentally represent S.

    Propositions (2) and (3) entail

    (4) If I mentally represent what state S represents, then I am conscious of state S,

    Which, if “I mentally represent what state S represents” is rewritten to accommodate the natural assumption that mentally representing what state S represents just is having state S, leads to

    (5) If I have a mental representation, S, then I am conscious of state S.

    And, here’s the punchline, if you combine (5) with the Transitivity Principle, then you end up with the proposition that all mental representations are conscious mental states. I think that you and I agree that this last proposition is false. So what went wrong? I think things went wrong with step (1).

  16. Richard Brown says:

    Not so fast homie,

    let s=state S and r=what state S represents and C=conscious of and R=mentally represent

    Then your (4) is symbolized as Rr → Cs; however, your (5) comes out Rs → Cs, and the fallacy is evident. If I mentally represent what state S represents then yes, I am simply in state S; however, to represent the state itself is for me to represent myself as being in the state, so your (5) is NOT the result of an innocent translation of (4)! So (1) is fine; it is the step from (3) to (4) that is suspect.

    By the by, there are some (like Uriah) who think that the punch line is correct…oops! I am going to be late for the final I am giving right now!!!

  17. Pete Mandik says:

    Richard, I’m not following. How is “to represent the state itself is for me to represent myself as being in the state” consistent with 1,2, and 3?

  18. Richard Brown says:

    The point is that as written, 1, 2, and 3 do not capture my argument; rather they should be rewritten as

    1* If I am not conscious of myself as being in state S, then I am not conscious of what state S represents

    2* If I am conscious of what state S represents, then I am conscious of myself as being in state S

    3* You are conscious of yourself as being in S if and only if you mentally represent yourself as being in S

    Now, once rewritten this way, your 4 does not follow:
    4 If I mentally represent what state S represents, then I am conscious of state S

    rather what follows is

    4* If I mentally represent myself as being in S, then I am conscious of S

    which is just the transitivity principle, and this is just the argument I gave to begin with! So again, the problem with the way you put my argument is that you do not take into account that, according to the transitivity principle, I have to be conscious of the state in a particular way: I need to be conscious of myself as being in it. It is NOT premise one, which seems fine…

  19. Richard Brown says:

    Oh, and I forgot to mention that if this argument is right, then of course your view turns into a novel implementation of TP. If you argue that the hybrid states are the states in virtue of which we become conscious of various objects in the world, or in essence that we become conscious of what they represent, it is because I become suitably aware that I am in that state. On your view this amounts to saying that the two representations are mutually interacting/interfacing/whatever. So when an allocentric state interacts with an egocentric state, a new hybrid state is formed, and that is the mechanism via which I become conscious of myself as being in that state. You are right that this is not done by my representing myself as being in that state; not all theories that respect TP are representational! One plausible candidate is mental representation, but I don’t see why yours isn’t another way…

  20. Pete Mandik says:

    Richard, there are still problems here.

    4* is not the Transitivity Principle. The Transitivity Principle states, at a minimum, that a state is conscious only if one is conscious of it, or, more strongly, that a state is conscious if and only if one is conscious of it.

    3* is entailed by 3 and I know of no reason that the TP-heads would want to reject 3. So, I see no reason for blocking 4. Same for 5. And 5, plus the real Transitivity Principle, winds you up with all mental reps being conscious.

  21. Richard Brown says:

    OK, detail, detail; 4* is the mechanism via which the transitivity principle gets enacted, whatever… the point is the same.

    And uh, I don’t see why 3 entails 3*! And every TP head should reject 3, because if it is true then it already follows that every mental representation is conscious, which NO ONE wants!!! Can’t I mentally represent something without representing myself as being in the state that is doing the representing? Sure; can’t I do that even in the case of my own first-order states? I do not see why not! I might represent it as the state where a certain memory can be found, or whatever. What makes the state conscious is that I represent that I am in the state, not just that it is mentally represented. Still problems?

  22. Pete Mandik says:

    3* is a substitution instance of 3, or something very much like 3, if you read the S in 3 as a variable. What I have in mind here is something like “For all x, you are conscious of x only if you mentally represent x”. Something like that is pretty explicitly endorsed by TP-heads such as Lycan and Rosenthal. It, by itself, doesn’t entail that every mental representation is conscious. It only does so in conjunction with other stuff, like your first premise. So I still see no grounds for the TP-head to deny 3. Instead, what I see is grounds for the denial of your first premise.

  23. Richard Brown says:

    Dude, I still don’t get it! What you wrote is symbolized as (x)(Rx ↔ Cx), and this says that, whatever x is, if I mentally represent it I am conscious of it, and if I am conscious of it I mentally represent it. It is the first part of the biconditional that is in question, because I have been arguing that I can mentally represent something and fail to be conscious of that thing; this much is obvious. But then I also argued that we can mentally represent our own first-order states in a similar way. If I represent a first-order state as the product of a certain calculation, or whatever, then I am not committed to saying that I am now conscious of the state. It is not just mentally representing the state that makes it conscious. It is that I mentally represent myself as being in that first-order state. With 3* the biconditional is fine: I cannot mentally represent myself as being in a state and yet fail to be conscious of myself as being in that state (and so thereby be conscious of what the state represents). So 3* is not a substitution instance of 3. Again, no TP head claims that just any higher-order representing is going to do the trick; it is a particular kind of representation. So my premise is good and yours is bad ;^) I have a feeling we are going to be at this forever!! At any rate, this is helpful to me, and now I think I have to write a paper on it!

  24. Pete Mandik says:

    Yes, you should write a paper on this stuff. By the way, the deadline for the Fall New Jersey Regional Philosophical Association meeting is coming up, and I’ll be submitting some of my related stuff for a talk.

  25. Richard Brown says:

    cool, if you need a commentator I would be glad to do it!

  26. Richard Brown says:

    Hey Pete, I have been doing a lot of thinking about this stuff (and a little writing), and now I think you are right that Rosenthal does not accept 1. His argument against Dretske seems to be that, since perception always makes us conscious of what we are perceiving, and since perception is not always conscious, it follows that being a conscious state is not simply being a state that makes us conscious of something. I now see that I am arguing against the claim that an unconscious state makes me conscious of anything, so thanks, this discussion was helpful.

    Oh and let me see if this works…

  27. Pete Mandik says:

    Something seemed not to work. What was it?

    Re: the Rosenthal thing: that’s cool. Keep me up to date with what you come up with. By the way, I disagree with Rosenthal (and I guess you do too?) that one can have an unconscious state and nonetheless be conscious of something.

  28. Richard Brown says:

    I just found this surprising paragraph in Rosenthal’s “How Many Kinds of Consciousness?”:

    “If we agree, however, not to worry about which mental phenomena deserve the honorific title ‘consciousness,’ it may seem that there is nothing left about which Block and I disagree. We might even agree to apply the term ‘conscious’, in a special sense, to states that exhibit only thin phenomenality. Though we are in no way aware of those states, being in them does result in our being conscious of various things. So those states do have an essential connection with consciousness. Still, this construal does have the disadvantage of counting as conscious all thinly phenomenal states, thereby disallowing the contrast between such states being conscious and not being conscious, on which the commonsense notion of consciousness depends.”

    Thin phenomenality, for him, is the kind of phenomenality that a mental state has when we are in no way aware of it… très interesting, no?

  29. Pete Mandik says:

    Très! I, however, prefer fat phenomenality.