Archive for the ‘Control Consciousness’ Category

Control Consciousness Draft

Friday, March 6th, 2009

I’ve posted a draft of my paper, “Control Consciousness,” (link) which had previously been serialized as Brain Hammer posts. I’m very grateful to those of you who left comments; they will be reflected in a later draft. I’m pretty happy with how the serialization experiment went, and will be doing it again soon. Stay tuned!

Previous Posts
1. Control Consciousness
2. The Pure Perceptual Model
3. The Motor Theory
4. The Imagery Theory
5. Introducing AEI
6. Control Consciousness Explained
7. Libet’s Puzzle of Will
8. Control Phenomenology
9. On the Apparent Lack of Control Phenomenology

On the Apparent Lack of Control Phenomenology

Wednesday, February 25th, 2009

In the previous post on control consciousness, I presented a case in favor of non-sensory control phenomenology. Despite these considerations in favor of thinking that control infects large portions of our phenomenology, there is something tempting about the claim that there is no non-sensory control phenomenology. If a theory does not credit such a temptation as leading to the truth, then it would be a virtue of the theory if it could explain why such a temptation exists anyway. I will here briefly sketch two main ways in which the temptation may be accounted for. The first concerns the differential bandwidth between prototypical instances of sensory inputs and motor outputs. The second concerns the degree to which introspection is itself an act.

Differential bandwidth.
Sensory inputs may be compared with each other and with motor outputs in terms of bandwidth. Estimates of the bandwidth of the human eye for color vision range from 4.32 x 10^7 bits/sec (Jacobson, 1951) to a more recent estimate of 10^6 bits/sec (Koch et al., 2006), i.e., about a megabit per second. It is perhaps not surprising that hearing has a significantly lower bandwidth than vision (a picture being worth a thousand words, and all). Jacobson (1950) gives an estimate of 9,900 bits/sec for the bandwidth of the human ear. He also gives a bandwidth estimate of 4.32 x 10^6 bits/sec for the eye for black and white vision (Jacobson, 1951). These differences in bandwidth perhaps account for widespread intuitions such as the intuition that visual “qualia” are ineffable, the intuition that a person blind from birth can never be told what it’s like to see (Hume, Locke), and the intuition that a person reared in a black and white environment wouldn’t know what it’s like to see red (Jackson, 1982). The auditory channel is relatively impoverished compared to the visual channel, and the black and white visual channel relatively impoverished compared to the color channel.

So what happens when we turn our attention to motor systems? Bandwidth estimates for motor output systems are far lower than those for either vision or hearing. Fitts (1992) estimates motor output bandwidth at 10 to 12 bits/sec. I offer that bandwidth differences between the various sensory systems and output systems can serve as a basis for explaining why many may have the intuition that there is no distinctive phenomenology for control consciousness.
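To make the size of the disparity vivid, here is a back-of-envelope comparison in Python. The figures are simply the estimates cited above; the 11 bits/sec value for motor output is my midpoint of Fitts’s 10 to 12 bits/sec range.

```python
# Rough comparison of the channel-capacity estimates cited above,
# all in bits per second.
bandwidth_estimates = {
    "vision, color (Jacobson, 1951)": 4.32e7,
    "vision, color (Koch et al., 2006)": 1e6,
    "vision, black and white (Jacobson, 1951)": 4.32e6,
    "hearing (Jacobson, 1950)": 9.9e3,
    "motor output (Fitts, 1992)": 11.0,  # midpoint of the 10-12 bits/sec range
}

motor = bandwidth_estimates["motor output (Fitts, 1992)"]
for channel, bps in bandwidth_estimates.items():
    print(f"{channel}: {bps:,.0f} bits/sec (~{bps / motor:,.0f}x motor output)")
```

Even on the most conservative visual estimate (Koch et al.), the visual channel outstrips the motor channel by roughly five orders of magnitude, and hearing outstrips it by roughly three.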

Introspection as Mental Action.
Another explanation of why some may have supposed that there is no control phenomenology, an explanation that may work together with the bandwidth-based explanation, hinges on the fact that introspecting is itself an act. As such, it is reasonable to suppose that a greater load is presented in introspecting control consciousness than in introspecting sensory consciousness. To spell this out further, let us assume, for purposes of illustration, that motor systems and sensory systems have the same bandwidth. If so, bandwidth alone would not serve to account for an apparent difference in phenomenological richness. If, however, there were some additional factor present that inhibited the ability to introspectively attend to motor systems but not sensory systems, then that factor would serve to explain a difference in apparent richness.
What could such a factor be? It is relatively well known that attempting multiple control tasks simultaneously diminishes the capacity one would otherwise have to perform them singly. If introspection is itself an act, then introspecting motor control is a doubling of tasks in a way that introspecting otherwise passive sensory input is not. The doubling introduced in introspecting control consciousness thus serves as the sought-after factor that can explain a comparative lack of richness between control and sensory systems.

Previous Posts
1. Control Consciousness
2. The Pure Perceptual Model
3. The Motor Theory
4. The Imagery Theory
5. Introducing AEI
6. Control Consciousness Explained
7. Libet’s Puzzle of Will
8. Control Phenomenology

Control Phenomenology

Monday, February 23rd, 2009

Some have claimed that introspection of control consciousness reveals no distinctively non-sensory component. Others make a contrary claim about the relevant phenomenology. Phenomenological disputes are notoriously difficult to adjudicate, leading some researchers to be quite skeptical of the phenomenological enterprise and the reliability of introspection. Nonetheless, claims about consciousness should be made to square with the self-reports of subjects, if not to explain them then to explain them away. If there’s controversy regarding some point of phenomenology, it can be quite satisfying to discover an explanation of why such a controversy arises.

While I think careful reflection reveals a distinctively non-sensory component to control consciousness, I do think there is something worth taking seriously in various claims against non-sensory control phenomenology. In this post I sketch the case for non-sensory control phenomenology. In the next post, I’ll offer some possible explanations of why it may have seemed obvious to some that there would be no such thing.

One point worth stressing is that control plays a relatively direct role in sensory consciousness itself. Control is involved in sensory consciousness in two main ways: (1) the vividness or intensity of sensory experience is correlated with what is contrary to control, and (2) differential degrees of direct control are one of the main ways in which a person is able to distinguish, from the first-person point of view, between perception and imagery.

As has been noted by other authors (Chalmers, 1996, p. 224; Rosenthal, 1986, p. 412), the vividness or intensity of a mental state is correlated with or constituted by the strength of its connection to action. As I shall put the point, especially as it applies to sensory consciousness: vividness is correlated with or constituted by what is contrary to control. The more vivid a pain, the more it compels our attention toward it and our acting to alleviate it. The less vivid, the more control we have over whether we are going to do anything about it.

The point generalizes beyond pain. As pointed out by Weiskrantz et al. (1971), self-administered tactile stimulations are perceived as less intense than the same stimulations administered by another. The vividness of a visual experience of red needs to be characterized in terms of its inverse relation to control and cannot instead be defined in terms of sensory properties of the stimulus. The point is best drawn out by attending to differences between a sensory experience of a particular shade of red and a mental image of the same shade of red. Another useful comparison might be between an experience and a thought of one and the same shade of red. The sensible properties of the shade represented in experience—the shade’s hue, saturation, and brightness—underdetermine the differential vividness of experience and imagery (as well as experience and thought), since one can have a thought or image that captures the properties of hue, saturation, and brightness definitive of a given experienced shade, but the experience may still be more vivid than either the corresponding thought or image.

One point that combines both the point about vividness and the point about imagery is that imagined pain is less vivid than experienced (non-imagined) pain. It is worth noting as a general point that, besides the various commonalities between sensory perception and sensory imagery, the main way in which we are able to distinguish an image from a percept with similar content is by the differential degrees of direct control that we have over the image (Kosslyn, 1994, pp. 102-104). For example, in imaging an apple, I can rotate, enlarge, or distort the shape of the apple. But in perceiving an apple, I can do no such thing, especially if I cannot get my hands on the apple.

It is worth noting that, due to similarities between percepts and images, subjects do sometimes confuse the two (Perky, 1910). However, the degree to which subjects confuse a percept and an image can be manipulated experimentally by introducing factors that vary either how difficult the imagery task is (Finke, Johnson, & Shyi, 1988) or whether the images are created intentionally rather than incidentally (Durso & Johnson, 1980; Intraub & Hoffman, 1992). An intentionally formed and difficult-to-manipulate image (say, an image of a rotating, relatively complex three-dimensional figure) is less likely to be mistaken in memory for a percept than a comparatively less difficult image.

Previous Posts
1. Control Consciousness
2. The Pure Perceptual Model
3. The Motor Theory
4. The Imagery Theory
5. Introducing AEI
6. Control Consciousness Explained
7. Libet’s Puzzle of Will

Libet’s Puzzle of Will

Friday, February 20th, 2009

One advantage of the Allocentric-Egocentric Interface (AEI) account of control consciousness is that it delivers a satisfying way of absorbing the results of Benjamin Libet’s widely discussed work on readiness potentials and whether conscious will is an illusion (Libet, 1999).

Libet’s experiment involved having experimental subjects note, while looking at a clock, the time at which they made the conscious decision to flick their wrist. Using EEG recordings of a readiness potential (a marked increase in neural-electrical activity preceding the wrist-flick), Libet found that there was a delay of 300 to 500 milliseconds between the readiness potential and the reported time of the conscious decision (the subjective time, i.e., the time at which the decision seemed to the subject to be made).

One, perhaps troubling, implication of Libet’s result is that control consciousness is an illusion: we do not consciously will anything. Willing occurs prior to a conscious state that is a mere by-product of the act of willing, not the willing itself.

It is worth noting that this sort of result is to be expected according to the AEI account of control consciousness. The highest levels of activation in a motor processing hierarchy occur unconsciously and prior to the recurrent signaling in intermediate levels that constitutes the conscious state. Further, the parallel accounts that AEI gives of sensory consciousness and control consciousness allow for an interpretation of Libet’s result that is far less troubling than the will-as-illusion interpretation.

It is no more an illusion that we will consciously than that we perceive consciously. The distal objects of our perception, that is, the external-world events that we perceive, are perceived consciously even though they, the external events, are causal antecedents of our states of perceptual consciousness. If we find such a view non-paradoxical and non-puzzling, then we should be able to come to a similarly non-troubling view of the implications of Libet’s results for control consciousness. Just as external events are consciously perceived even though they are causal antecedents of states of consciousness, certain inner events are conscious willings or consciously willed even though they are causal antecedents of states of consciousness.

We could, if we wanted, apply overly stringent criteria to perception to generate a “puzzle of conscious perception” that parallels the puzzle of conscious will that many see raised by Libet’s results. One overly stringent criterion is a time-of-occurrence criterion according to which, in order to be distinct from a memory, a perception of an event has to occur at the same time as the event perceived. Another overly stringent criterion is a factivity criterion according to which, in order to be distinct from an illusion, a perception of the time of occurrence of an event as now has to be accurate (the perception that now is noon cannot occur a little after noon without counting as an illusion). If we applied such criteria, then we could derive that we never have accurate perceptions and, instead, have either accurate memories or illusions of what’s happening now. More natural, however, is to avoid such overly stringent criteria and to go on saying, just as common sense does, that we frequently perceive events at their time of occurrence.

Prinz writes, of the fact that the felt decision to move occurs 250 milliseconds after a readiness potential in motor areas of the brain, that “[i]f the conscious experience of intention supervened on motor representations, we might expect the felt intention to co-occur with the onset of the motor response” (J. Prinz, 2007, pp. 344-345).

Prinz’s use of the Libet point seems to assume that, on the motor theory, motor signals should suffice for consciousness. But this is no more a reasonable expectation to place on the motor theory than it would be to saddle Prinz with the assumption that mere retinal input suffices for consciousness.

Previous Posts
1. Control Consciousness
2. The Pure Perceptual Model
3. The Motor Theory
4. The Imagery Theory
5. Introducing AEI
6. Control Consciousness Explained

Control Consciousness Explained

Wednesday, February 18th, 2009

The applicability of AEI (the Allocentric-Egocentric Interface theory of consciousness) to motor systems looks to be a relatively straightforward affair. First, motor systems are arranged hierarchically. Focusing here just on cortex, the highest level is in prefrontal cortex, the lowest level is in primary motor cortex, and premotor cortex is relatively intermediate. Further, there exist both forward projections and back projections between successive levels of the motor hierarchy (Churchland, 2002, p. 72). We may further characterize levels in the motor hierarchy as differing along an allocentric-egocentric dimension.

The neuroanatomical features of the motor system make it quite natural to suppose that both intermediacy and recurrence can apply to motor processing. The basic suggestion here is twofold. First, unconscious action involves motor signals originating in relatively high levels and propagating down to lower levels without any recurrence from lower to higher. Second, the conscious aspect of conscious action is to be identified with states consisting in reciprocally interacting pairs of motor representations where one member of the pair is relatively more allocentric than the other.

While the application of AEI to control consciousness is not an instance of what I have called a pure motor theory, since not just any outgoing motor signaling counts as conscious, it is still clearly an instance of a motor theory, since it allows for conscious control to arise without any sensory input or imagery thereof.

It is perhaps worth briefly noting a mapping between the basic elements of pseudo-closed-loop control and the AEI account of control consciousness. Outgoing signals from the highest levels of the hierarchy may be identified with the specification of a goal state. The next level down receives the goal state and computes the inverse mapping. The result may be sent to the lowest levels, eventuating in command signals. But a copy of it may also be sent to intermediate areas, wherein activation is utilized as a forward model whose results may be propagated back up to higher levels.
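The mapping can be made explicit with a toy sketch, given here in Python. Everything in it (the function names, the placeholder goal, the three-command sequence) is invented for illustration; the sketch fixes only the signal flow just described, not any claim about neural implementation.

```python
# Toy sketch of pseudo-closed-loop control laid over a motor hierarchy.
# All names and values are illustrative placeholders.

def specify_goal():
    """Highest level (prefrontal): specify a goal state."""
    return "hand_at_target"

def inverse_model(goal):
    """Next level down: map the goal state onto a command sequence."""
    return ["reach", "extend", "grasp"]

def forward_model(command_copy):
    """Intermediate level: use a copy of the command to predict its
    outcome, for propagation back up to the higher levels."""
    return "predicted_" + "_".join(command_copy)

def issue_commands(commands):
    """Lowest level (primary motor cortex): send command signals on
    to the musculoskeletal plant."""
    for command in commands:
        pass  # stands in for outgoing motor signaling

goal = specify_goal()
commands = inverse_model(goal)
issue_commands(commands)            # command signals propagate down
efference_copy = list(commands)     # duplicate; not sent to the plant
prediction = forward_model(efference_copy)
# On the AEI account, the recurrent exchange between this
# intermediate-level activity and the higher levels is what
# constitutes the conscious aspect of the action.
```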

Non-parsimonious?

A sensory theory of control consciousness may seem to lead to an overall more parsimonious view of the mind than a motor theory. The thought here is something like the following. Since it seems difficult to deny that at least some consciousness is sensory consciousness, the sensory theory, in holding that all consciousness is sensory, leads to a simpler view than the motor account. Proponents of a motor account of control consciousness seem, on the face of it, to need to commit to two different accounts of consciousness: one for sensory consciousness and one for control consciousness. But with AEI in hand, it is easy to see that a motor theory of control consciousness need not lead to a less parsimonious view. A single coherent account of consciousness applies to both sensory consciousness and control consciousness: conscious states are constituted by patterns of recurrent activation in intermediate levels of processing hierarchies.

Previous Posts
1. Control Consciousness
2. The Pure Perceptual Model
3. The Motor Theory
4. The Imagery Theory
5. Introducing AEI

Control Consciousness: Introducing AEI

Monday, February 16th, 2009

As mentioned previously, the very basic form of the motor theory has the problem of being incapable of distinguishing conscious from unconscious action. Here a solution may be adapted from a similar problem that arises in distinguishing conscious from unconscious perception. Not just any input to sensory systems gives rise to a conscious percept: subliminal perception and blindsight are two kinds of example. The solution I advocate for distinguishing conscious from unconscious perception is twofold. I shall label the two parts of the solution “intermediacy” and “recurrence.” The first part, intermediacy, involves identifying conscious perceptual states with states at relatively intermediate levels of sensory processing hierarchies. The second part, recurrence, restricts consciousness to intermediate-level states involved in recurrent interaction between representations at relatively high and at relatively low levels of sensory processing hierarchies.

The “what” and “why” of intermediacy. Sensory processing, as in, for example, vision, is hierarchical, with the lowest levels constituted by neural activations close to the sensory periphery, which represent local and egocentric visible features, and the highest levels constituted by abstract and allocentric representations employed in categorization and recognition.

It is natural to ask where in a sensory processing hierarchy conscious states reside. It is crucial to any account of consciousness that it connect the reality accessible from the third-person point of view (e.g. states of activation in neural circuits) with the appearance of what it’s like from the first-person point of view. Further, both introspective and observational methods converge to indicate that conscious states are relatively intermediate between the highest and lowest levels of the hierarchy. My visual perception of a coffee cup represents the cup as having a specific orientation relative to my point of view and a relatively specific location in my visual field. However, the percept is not so high-level as to merely indicate the presence of a cup in a way abstracting from all observer-relative information. Nor is it so low-level as to register every change in irradiation of various regions of my two retinas (the lowest levels are prior even to the integration of information from the disparate retinas). The intermediacy criterion on sensory consciousness means that not just any neural response to a sensory input will count as a conscious percept. This Goldilocks criterion will exclude from consciousness those neural activations that are too high or too low.

The “what” and “why” of recurrence. While intermediacy is necessary, it seems not to suffice alone for consciousness. Strictly feedforward activation of representations in sensory processing hierarchies can occur without consciousness. Pascual-Leone and Walsh (2001) showed, with precisely timed pulses of transcranial magnetic stimulation, that visual consciousness was suppressed if recurrent activation was suppressed and only feedforward activation was allowed. Additionally, Lamme, Supèr, and Spekreijse (1998) suggest that responses to stimuli in animals under general anesthetic are feedforward activations without accompanying recurrence.

Elsewhere I defend what I call the Allocentric-Egocentric Interface theory of consciousness (AEI) (P. Mandik, 2005, 2009). AEI incorporates both intermediacy and recurrence in the following manner: conscious states are intermediate-level states in processing hierarchies, states constituted by pairs of recurrently interacting allocentric and egocentric representations. Previous discussion of AEI has focused on sensory processing hierarchies. I turn, in the next post, to consider a natural extension of AEI to motor processing hierarchies.
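As a toy formalization, here is the conjunction of the two criteria restated in Python. The State record and its fields are invented for illustration only; nothing here goes beyond the definition just given.

```python
# Toy restatement of AEI's two criteria. The State record and its
# fields are invented placeholders, not a model of neural data.
from dataclasses import dataclass

@dataclass
class State:
    level: str           # "low", "intermediate", or "high" in the hierarchy
    recurrent: bool      # part of recurrent, not merely feedforward, activity
    allo_ego_pair: bool  # pairs an allocentric with an egocentric representation

def conscious_by_aei(s: State) -> bool:
    """Conscious, on AEI, iff intermediate in level and constituted by
    recurrently interacting allocentric and egocentric representations."""
    return s.level == "intermediate" and s.recurrent and s.allo_ego_pair

# A strictly feedforward intermediate-level state (as in the TMS and
# anesthesia cases above) fails the recurrence criterion:
print(conscious_by_aei(State("intermediate", False, True)))  # False
```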

Previous Posts
1. Control Consciousness
2. The Pure Perceptual Model
3. The Motor Theory
4. The Imagery Theory
5. Introducing AEI

Control Consciousness: The Imagery Theory

Friday, February 13th, 2009

Prinz (2007) supplies a concise statement of his view, captured here in the following quotation:

The feeling of agency could be explained by a kind of prediction that the brain makes when we are about to act. If you elect to move your arm, you will be able to anticipate its movement. According to some leading neurobiological theories, when a plan is generated in the premotor cortex, a representation is sent to the somatosensory cortex corresponding to what the bodily senses should perceive when that action is executed. That representation is called a “forward model.” A forward model is an anticipatory somatosensory image. When our bodies carry out motor plans, the forward model is compared with the actual changes that take place in our body as we move. The feeling of agency may arise from this matching process. If a match occurs, we feel we are in control. If a match doesn’t occur, it’s because our bodies didn’t move as we predicted they would, and that results in an experience of being passively moved by an external source. (p. 342).

One way of appreciating a problem with Prinz’s view involves the way it combines a concept from control theory, that of a forward model, with the concept of a sensory image. That forward models are involved in the control of bodily movement is a highly plausible suggestion. That they be regarded as sensory images is somewhat less plausible. Before further fleshing out the problem, a bit more needs to be said about the distinct notions of a forward model and a sensory image.

Many philosophers are aware of control theory via the work of Rick Grush (e.g., Grush, 2001) and I here rely on his exposition of its basic ideas. In the simplest kind of control system, open-loop control, a desired goal signal is fed into a controller, which sends control signals to a target system or plant. Applying these concepts to motor control involves viewing parts of the musculoskeletal system as plants and the neural systems generating motor commands as controllers. The controller implements a mapping, the inverse mapping, of goal states onto command sequences. The plant implements a mapping, the forward mapping, of command sequences onto goal states (Grush, 2001, pp. 352-353).

A slightly more complex control system, closed-loop control, has all of the components of open-loop control plus the addition of feedback signals from the plant to the controller. While for many control purposes closed-loop control is superior to open-loop control, closed-loop control is not without certain problems. If, for example, there are significant delays in the receipt of the feedback signal due to slow signal speeds and/or a relatively distant plant, then the system can oscillate wildly through potentially destructive cycles of overshooting and overcompensation.

A slightly more complex control system that potentially overcomes such problems is pseudo-closed-loop control. One way of conceiving of pseudo-closed-loop control is to think of it as built by adding features to open-loop control. The first addition involves a second signal being sent by the controller, an efference copy, which is a duplicate of the signal sent to the plant. This duplicate signal, however, is not sent to the plant, but instead to an emulator or forward model, which, in turn, sends signals back to the controller.
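A minimal sketch may help fix the three architectures in mind. It is given in Python; the plant dynamics, the controller gain, and the emulator are toy stand-ins chosen only to make the signal flow explicit, and imply nothing about biological motor control.

```python
# Toy versions of the three control architectures described above.

def plant(command):
    """Forward mapping: command signal -> resulting state."""
    return 0.8 * command  # toy dynamics

def controller(goal, feedback=None):
    """Inverse mapping: goal (and optional feedback) -> command signal."""
    error = goal if feedback is None else goal - feedback
    return 1.25 * error  # toy inverse of the plant gain

# Open-loop control: goal -> controller -> plant, with no feedback.
state = plant(controller(goal=1.0))

# Closed-loop control: as above, plus a feedback signal from the plant
# back to the controller. (With slow or delayed feedback, such systems
# can overshoot and oscillate, as noted above.)
next_command = controller(goal=1.0, feedback=state)

# Pseudo-closed-loop control: an efference copy of the command goes to
# an emulator (forward model), whose fast mock feedback substitutes
# for slow feedback from the plant itself.
def emulator(command_copy):
    """Forward model: an internal stand-in for the plant."""
    return 0.8 * command_copy

command = controller(goal=1.0)
plant(command)                        # the command itself goes to the plant...
efference_copy = command              # ...while its duplicate does not
mock_feedback = emulator(efference_copy)  # returned to the controller
```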

Now, it is tempting to follow Grush in calling the signal from the forward model “mock sensory information about what the real target system would do under various conditions” (p. 356, emphasis added), but I will want to resist such temptation.

It is useful here to consider the following two questions. First, what is involved in something’s being sensory in the sense of the term relevant to the current discussion? Second, do we have adequate reason for thinking that a forward model is relevantly sensory?

Starting with the first question, it is useful to look at Prinz’s own account of what makes something sensory. Prinz writes:

I will define a perceptually conscious mental state as a mental state that is couched in a perceptual format. A perceptual format is a representational system that is proprietary to a sense modality. To say that phenomenal states are perceptual is to say that their representational vehicles always belong to one of the senses: touch, vision, audition, olfaction, and so on. (J. Prinz, 2007, p. 336)

Further elaboration comes from what Prinz takes the negative aspects of his key thesis to be: “We do not have conscious states couched in non-perceptual formats. If I am right, we never have conscious states in our motor systems, and no conscious experiences are constituted by amodal representations…” (J. Prinz, 2007, p. 337).

In an earlier work dedicated to elaborating Prinz’s brand of empiricism, he spells out his view that “the senses are dedicated input systems” (J. J. Prinz, 2002, p. 115). Crucial to Prinz’s characterization is that each sense has both a proprietary class of inputs (physical magnitudes) and a proprietary representational format (thus denying that separate senses share a ‘common code’) (J. J. Prinz, 2002, p. 117).

It is worth noting that in this earlier work Prinz endorses a view of imagery whereby “we can form mental images by willfully reactivating our input systems” (J. J. Prinz, 2002, p. 115). It seems natural to suppose that what is responsible for these reactivations counting as sensory imagery is that it is input systems that are reactivated.

With these remarks about what the “sensory” in “sensory imagery” consists in, let us return to the question of whether forward models need be conceived of as sensory imagery. In the basic outlines of pseudo-closed-loop control, there is nothing that makes a sensory-imagery interpretation of the forward model compulsory. The forward model is not receiving sensory inputs and thus cannot count as a sensory system as characterized by Prinz. A fortiori, it cannot count as sensory imagery, since it does not count as the reactivation of an input system.

Of course, it should be noted that there may be alternate architectures that incorporate forward models satisfying criteria for being sensory. However, the core idea of a forward model does not alone satisfy such criteria. It is also worth noting that the characterization of imagery as the willful reactivation of input systems threatens to make the imagery account collapse into a kind of non-sensory view. This is so if a crucial part of what makes a state imagery is its being activated by a control signal.

Previous Posts
1. Control Consciousness
2. The Pure Perceptual Model
3. The Motor Theory

Control Consciousness: The Motor Theory

Wednesday, February 11th, 2009

What is the motor theory of control consciousness? In its most basic form, the motor theory comprises a pair of theses, one negative and one positive. The negative thesis is that sensory inputs are alone insufficient for distinguishing between consciously controlled movements and mere movements (movements that aren’t actions since they aren’t controlled at all, consciously or otherwise). The positive thesis is that motor outputs make a relatively direct contribution to control consciousness.

The notion of relative directness employed in the positive thesis requires further clarification. “Relative directness” here means something like “not mediated by sensory inputs or imagery thereof.” To illustrate: even on a pure perceptual model, motor commands have an indirect influence: I turn my head and thus see something else. But here the changes exerted on the conscious state by the motor command are mediated by changes in sensory input. If motor commands themselves, or copies of motor commands (so-called “efference copies”), can make a difference to conscious states without the difference being mediated by changes caused to sensory inputs, then this would be an instance of the influence of motor commands being relatively direct.

The most basic form of the theory is inadequate for distinguishing conscious from unconscious action. Not just any motor command makes a contribution to consciousness. This is especially evident in the case of unconscious action. What is needed, then, is a means for distinguishing between (1) bodily motions that result unconsciously from motor commands and (2) genuine instances of acting consciously. I will temporarily postpone presenting a solution to this problem until §3.

I devote the remainder of the present post to another problem alleged to beset the motor theory, namely that the motor theory, especially in contrast with its competitors, is a relatively un-parsimonious treatment of the variety of conscious phenomena. Prinz (2007) has argued that a motor theory of control consciousness lacks both the simplicity and the unification aspired to by accounts that view control consciousness as just a species of perceptual consciousness. I think that Prinz is mistaken here, and as I’ll detail in subsequent posts, there is a simple and unified treatment of what a conscious state consists in that admits of both perceptual and motor varieties. On my view, unlike on the pure perceptual and imagery theories, control consciousness is not a species of perceptual consciousness. Nonetheless, there is a parsimonious account of what consciousness consists in that gives a unified treatment of both sensory consciousness and non-sensory control consciousness.

Previous Posts
1. Control Consciousness
2. The Pure Perceptual Model

Control Consciousness: The Pure Perceptual Model

Monday, February 9th, 2009

There are various reasons why it might be appealing to assimilate control consciousness to a form of sensory consciousness. Of our various mental states, the ones most vividly present to us are our states of sensory consciousness. Sensory consciousness, especially visual consciousness, is, from the point of view of science, perhaps our best-understood form of consciousness. Further, the institution of science itself is influenced by a long tradition of empiricism, an early motto of which is that there is nothing in the mind that is not first in the senses.

What, then, would it mean to assimilate control consciousness to sensory consciousness? It is natural among researchers to take deliberate bodily motion as a basic and paradigmatic case of an action. And the most natural way to assimilate consciously moving parts of one’s own body to a form of sensory consciousness is to do so in terms of sensory feedback from the muscles, tendons, and skin as the body parts in question are moved. Now, part of what makes the pure perceptual model pure is its denial of any direct contribution of a motor command signal to the subjective aspect of control consciousness. And it is precisely this denial that leads to the key weakness of the pure perceptual model of control consciousness.

We can perhaps best appreciate the shortcomings of the pure perceptual model by noting that the pure perceptual model entails the following two possibilities.

Possibility 1: Two arm movements that, though subjectively indistinguishable, differ objectively in that one was the consequence of the subject’s motor command and the other was unaccompanied by any motor command of the subject.

Possibility 2: An arm movement that results from the subject’s issuing a motor command but, due to the effects of anesthesia, is unaccompanied by sensory feedback; lacking sensory feedback, the subject is completely unaware of having moved, or even of having tried to move, his or her own arm.

In some sense of possibility, say, logical possibility, (1) and (2) represent genuine possibilities. However, in a sense of possibility more relevant to scientific interests, say natural or empirical possibility, there’s good reason to believe that neither (1) nor (2) is possible. For example, contra (2), as pointed out by Prinz (2007, p. 344) and Peacocke (2007, p. 359), a subject may be quite aware of moving a body part even while not perceiving the part, due to either local anesthetic or the severing of afferent nerves.

It is tempting to conclude from such cases that motor commands are not irrelevant to control consciousness in the way entailed by the pure perceptual model. However, to put the point in a way that is more neutral between the motor theory and the imagery theory: sensory input alone seems insufficient for control consciousness. Something more than sensory input is needed to account for such cases. The motor theory holds that the something more is the contribution of motor commands. The imagery theory holds that the something more is the contribution of sensory imagery. In the next post, I examine these two proposals in further detail.

Previous Posts
1. Control Consciousness

Control Consciousness

Friday, February 6th, 2009

I’m working on a new paper, “Control Consciousness,” and will be serializing a draft on Brain Hammer over the next three weeks (posting on a MWF schedule). Here begins chunk one of nine. Enjoy!

We act, sometimes consciously, sometimes not. What does our acting consciously consist in? One hypothesis that will be useful to consider, though I’ll ultimately reject it, is that the conscious part of acting consciously always resolves into a more basic form of consciousness, namely sensory consciousness. Now, I don’t deny that sensory consciousness is often part of the story. When I consciously flip a fried egg without breaking the yolk or consciously attach a delicate component to a scale model, much of my complex mental state integrates what I see arrayed before me as well as what I feel in my skin and muscles. Nonetheless, despite acknowledging the role that perceptual input often plays in contributing to consciousness during episodes of controlled action, I’ll argue for the possibility of instances of conscious control that involve no form of sensory input, either real or imagined. Also, I’ll argue that in all cases of conscious control, some aspect of the consciousness involves, in a relatively direct way, non-sensory signals.

The organization of the remainder of these posts is as follows. First, I’ll lay out the three main kinds of approaches to understanding control consciousness. The first is a pure perceptual model, which, to my knowledge, no one defends, but it is instructive to see why such a simplistic model is inadequate. Further, the pure perceptual model aspires to an ideal of parsimony in the way it gives a unified account of sensory consciousness and so-called control consciousness: control consciousness is just more sensory consciousness. It will be useful to see how competing theories rate with respect to parsimony. The second is a model defended by Prinz (2007), which shores up the shortcomings of the pure perceptual model by positing a crucial role for sensory imagery. The third is a motor theory of control consciousness, a version of which I will defend. The central idea of the motor theory is to posit a crucial role for motor commands, here construed as neural output signals that are neither sensory inputs nor instances of sensory imagery. Next, I’ll spell out how the motor theory account of control consciousness is an offshoot of an account that is also adequate for sensory consciousness, an account I’ve spelled out elsewhere, especially as pertains to visual consciousness (P. Mandik, 2005, 2008, 2009). Finally, I’ll offer some explanations of why some have supposed there to be no distinctively non-sensory control phenomenology.

Stay tuned!

References for the series
Bayne, T. (2008). The Phenomenology of Agency. Philosophy Compass, 3(1), 182-202.
Chalmers, D. (1996). The Conscious Mind: In Search of a Fundamental Theory. New York: Oxford University Press.
Churchland, P. S. (2002). Brain-Wise: Studies in Neurophilosophy: MIT Press.
Durso, F. T., & Johnson, M. K. (1980). The effects of orienting tasks on recognition, recall, and modality confusion of pictures and words. Journal of Verbal Learning and Verbal Behavior, 19(4), 416–429.
Finke, R. A., Johnson, M. K., & Shyi, G. C. W. (1988). Memory confusions for real and imagined completions of symmetrical visual patterns. Memory & Cognition, 16(2), 133-137.
Fitts, P. M. (1992). The Information Capacity of the Human Motor System in Controlling the Amplitude of Movement. Journal of Experimental Psychology: General, 121(3), 262-269.
Gallagher, S. (2007). The Natural Philosophy of Agency. Philosophy Compass, 2(2), 347-357.
Grush, R. (2001). The Architecture of Representation. In W. Bechtel, P. Mandik, J. Mundale & R. Stufflebeam (Eds.), Philosophy and the Neurosciences: A Reader (pp. 349-368). Oxford: Blackwell.
Grush, R. (2007). Skill Theory v2.0: dispositions, emulation, and spatial perception. Synthese, 159(3), 389-416.
Intraub, H., & Hoffman, J. E. (1992). Reading and visual memory: Remembering scenes that were never seen. American Journal of Psychology, 105(1), 101-114.
Jackson, F. (1982). Epiphenomenal Qualia. Philosophical Quarterly, 32, 127-136.
Jacobson, H. (1950). The Informational Capacity of the Human Ear. Science, 112(2901), 143-144.
Jacobson, H. (1951). The Informational Capacity of the Human Eye. Science, 113, 292-293.
Jeannerod, M. (2007). Consciousness of Action. In M. Velmans & S. Schneider (Eds.), The Blackwell Companion to Consciousness (pp. 540-550). Oxford: Blackwell.
Koch, K., McLean, J., Segev, R., Freed, M. A., Berry, M. J., Balasubramanian, V., et al. (2006). How Much the Eye Tells the Brain. Current Biology, 16(14), 1428-1434.
Kosslyn, S. M. (1994). Image and Brain: The Resolution of the Imagery Debate: MIT Press.
Lamme, V. A. F., Supèr, H., & Spekreijse, H. (1998). Feedforward, horizontal, and feedback processing in the visual cortex. Current Opinion in Neurobiology, 8(4), 529-535.
Libet, B. (1999). Do we have free will? Journal of Consciousness Studies, 6, 47-57.
Mandik, P. (2005). Phenomenal consciousness and the allocentric-egocentric interface. In R. Buccheri, A. C. Elitzur & M. Saniga (Eds.), Endophysics, Time, Quantum and the Subjective (pp. 463–485): World Scientific.
Mandik, P. (2008). An epistemological theory of consciousness? In A. Plebe & V. M. D. L. Cruz (Eds.), Philosophy in the Neuroscience Era (pp. 136-158). Rome: Squilibri.
Mandik, P. (2009). The Neurophilosophy of Subjectivity. In J. Bickle (Ed.), The Oxford Handbook of Philosophy and Neuroscience. Oxford: Oxford University Press.
Pascual-Leone, A., & Walsh, V. (2001). Fast backprojections from the motion to the primary visual area necessary for visual awareness. Science, 292, 510–512.
Peacocke, C. (2007). Mental Action and Self-Awareness (1). In B. McLaughlin & J. Cohen (Eds.), Contemporary Debates in Philosophy of Mind (pp. 358-376). Oxford: Blackwell.
Perky, C. W. (1910). An Experimental Study of Imagination. American Journal of Psychology, 21, 422-452.
Prinz, J. (2007). All consciousness is perceptual. In B. McLaughlin & J. Cohen (Eds.), Contemporary Debates in Philosophy of Mind (pp. 335-357). Oxford: Blackwell.
Prinz, J. J. (2002). Furnishing the Mind: Concepts and Their Perceptual Basis: MIT Press.
Rosenthal, D. M. (1986). Will and the theory of judgment. In A. O. Rorty (Ed.), Essays on Descartes’ Meditations (pp. 405-434). Berkeley: University of California Press.
Weiskrantz, L., Elliott, J., & Darlington, C. (1971). Preliminary observations on tickling oneself. Nature, 230, 598-599.