Control Consciousness Explained

The applicability of AEI (the Allocentric-Egocentric Interface theory of consciousness) to motor systems looks to be a relatively straightforward affair. First, motor systems are arranged hierarchically. Focusing here just on cortex: the highest level is prefrontal cortex, the lowest is primary motor cortex, and premotor cortex occupies an intermediate position. Further, there exist both forward projections and back projections between successive levels of the motor hierarchy (Churchland, 2002, p. 72). We may further characterize levels in the motor hierarchy as differing along an allocentric-egocentric dimension.

The neuroanatomical features of the motor system make it quite natural to suppose that both intermediacy and recurrence can apply to motor processing. The basic suggestion here is twofold. First, unconscious action involves motor signals originating in relatively high levels and propagating down to lower levels without any recurrence from lower to higher. Second, the conscious aspect of conscious action is to be identified with states consisting in reciprocally interacting pairs of motor representations where one member of the pair is relatively more allocentric than the other.
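
The twofold suggestion can be restated as a simple check over a record of motor-hierarchy activity. Here is a minimal sketch, under illustrative assumptions of my own (a three-level hierarchy ordered from allocentric to egocentric, and episodes recorded as sets of sender-receiver signal pairs); nothing about the representation is drawn from the text itself.

```python
# Toy check for the AEI conditions on motor processing: an episode counts as
# conscious only if some pair of adjacent levels interacted reciprocally,
# i.e. recurrence between representations where one member of the pair is
# relatively more allocentric than the other. The three-level hierarchy and
# the (sender, receiver) encoding are illustrative assumptions.

LEVELS = ["prefrontal", "premotor", "primary_motor"]  # allocentric -> egocentric


def is_conscious_episode(signals, levels=LEVELS):
    """signals: set of (sender, receiver) pairs recorded during the episode."""
    for upper, lower in zip(levels, levels[1:]):
        downward = (upper, lower) in signals
        upward = (lower, upper) in signals
        if downward and upward:  # reciprocal interaction: recurrence
            return True
    return False


# Unconscious action: motor signals only propagate down the hierarchy.
feedforward_only = {("prefrontal", "premotor"), ("premotor", "primary_motor")}
print(is_conscious_episode(feedforward_only))  # False

# Conscious action: one adjacent pair also interacts in the upward direction.
recurrent = feedforward_only | {("primary_motor", "premotor")}
print(is_conscious_episode(recurrent))  # True
```

The check deliberately ignores everything about the content of the signals; it encodes only the structural claim that consciousness requires recurrence, not mere feedforward propagation.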

While the application of AEI to control consciousness is not an instance of what I have called a pure motor theory, since not just any outgoing motor signaling counts as conscious, it is still clearly an instance of a motor theory, since it allows for conscious control to arise without any sensory input or imagery thereof.

It is perhaps worth briefly noting a mapping between the basic elements of pseudo-closed-loop control and the AEI account of control consciousness. Outgoing signals from the highest levels of the hierarchy may be identified with the specification of a goal state. The next level down receives the goal state and computes the inverse mapping. This inverse mapping may be sent to the lowest levels, eventuating in command signals. Additionally, a copy of it may be sent to intermediate areas, wherein activation is utilized as a forward model whose results may be propagated back up to higher levels.
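
The mapping can be made concrete with a toy simulation. This is only a sketch under assumptions of my own: a one-dimensional linear plant, a fixed gain, and function names (`inverse_model`, `forward_model`, `plant`) invented for illustration.

```python
# Toy pseudo-closed-loop controller illustrating the mapping above.
# All numerical details (linear plant, gain of 2.0) are illustrative
# assumptions, not part of the original account.

PLANT_GAIN = 2.0  # how the lowest level ("the body") responds to a command


def inverse_model(goal):
    """Second-highest level: map a goal state onto a motor command."""
    return goal / PLANT_GAIN


def forward_model(command_copy):
    """Intermediate level: predict the outcome of a command from a copy
    of it (an efference copy), without waiting for sensory feedback."""
    return command_copy * PLANT_GAIN


def plant(command):
    """Lowest level: the command eventuates in actual movement."""
    return command * PLANT_GAIN


def pseudo_closed_loop_step(goal):
    command = inverse_model(goal)       # sent down toward the lowest level
    predicted = forward_model(command)  # copy used at the intermediate level
    actual = plant(command)             # actual outcome of the command
    error = goal - predicted            # propagated back up to higher levels
    return predicted, actual, error


predicted, actual, error = pseudo_closed_loop_step(goal=10.0)
print(predicted, actual, error)  # 10.0 10.0 0.0
```

The loop is "pseudo" closed in that the error signal returned to higher levels is computed from the forward model's prediction rather than from actual sensory feedback, which is why the scheme can run without sensory input or imagery thereof.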

Non-parsimonious?

A sensory theory of control consciousness may seem to lead to an overall more parsimonious view of the mind than a motor theory. The thought here is something like the following. Since it seems difficult to deny that at least some consciousness is sensory consciousness, the sensory theory, in holding that all consciousness is sensory, leads to a simpler view than the motor account. Proponents of a motor account of control consciousness seem, on the face of it, to need to commit to two different accounts of consciousness: one for sensory consciousness and one for control consciousness. But with AEI on hand, it is easy to see that a motor theory of control consciousness need not lead to a less parsimonious view. A single coherent account of consciousness applies to both sensory consciousness and control consciousness: conscious states are constituted by patterns of recurrent activation in intermediate levels of processing hierarchies.

Previous Posts
1. Control Consciousness
2. The Pure Perceptual Model
3. The Motor Theory
4. The Imagery Theory
5. Introducing AEI

12 Responses to “Control Consciousness Explained”

  1. Josh Weisberg says:

    Hey Pete–

    I’m digging all of this–cool stuff.

    I’m still not clear on the phenomenon here: what is the content of a conscious control state–what’s the phenomenological data we’re trying to explain here? The feeling of “will”? The experience of guided, deliberate behavior? (I worry that the AEI view will make the wrong stuff conscious, but I’m not sure…)

    Also, just curious about long-distance driver type examples: they look kind of intermediate and (I would speculate) recurrent. What’s your view on that kind of thing?

  2. Pete Mandik says:

    Heya Josh!

    Thanks for the comment. Glad you’re diggin’ it.

    Thanks for putting pressure on me re the explanandum. Here’s some counter-pressure. You know how there’s a distinction between conscious and unconscious input? That is, conscious v. unconscious perception? There’s also a distinction between conscious and unconscious output. The folk (and quasi-folk, i.e. other philosophers) track conscious output with phrases like “conscious willing”, “phenomenology of agency”, and “deliberate (as opposed to ‘thoughtless’) behavior”.

    Do you think that I am here conflating distinctions that the folk give us good reason to respect? If I were to describe the explanandum in a way that more explicitly favors HOT-heads, I would put it in transitivity-principle-ese as “when you are conscious of yourself as doing something (as opposed to merely being conscious of something happening to you)”.

    Re truck drivers, I don’t know how much credence I put in that as a datum as opposed to just regarding it as a bit of lore among a subset of philosophers. If the alleged cases take place over long stretches of time, I’m more inclined to chalk up the reports to a failure to store events in declarative memory than to say that the drivers are unconscious for long stretches. My own experience with long distance “unconsciousness” makes me disinclined to say that during the whole stretch of “being on autopilot” I was as good as a super-blindsighter. I have an after-the-fact hunch of consciously perceiving and consciously acting during much of the episode. Of course, I’m not sure how much to hang on that. And I certainly don’t want to turn into Block here and insist on the reality of un-accessed phenomenology.

    How about you? Do you think that long-distance-truck-driver-ism is an important phenomenon for theories of consciousness to account for? And more to the point of the objection that you are developing, what reasons do you have for suspecting recurrence to pop up that aren’t also reasons for expecting consciousness to pop up?

  3. Josh Weisberg says:

    Right–I’m fine with the phenomenon. Just wanted to make sure I had it right (yes, that really was a clarificatory question!).

    As for the LDD case, I agree that it’s not so clear. But my thinking here is that there’s lots of motor stuff that’s well-controlled, goal-directed action plausibly involving intermediate-level recurrence–there is reciprocal feedback from the upper and lower levels adjusting and correcting the motor behavior–that we wouldn’t think of as conscious. Take the action of accelerating and braking while driving on a highway. My goal is to drive safely, get to school before my class starts, whatever, but the pressing, stamping, releasing isn’t usually conscious. Now, it sometimes becomes conscious in odd or dangerous situations–what is “added” here, assuming there is a phenomenological difference? I was thinking attention would be the thing to say, but then I wonder if you don’t collapse into Jesse’s AIR view. Of course, you still have a different view on content. Maybe this isn’t a problem–attention might be a particular form of recurrent connection, or an amped-up version of the usual connection.

    Another option, of course, is monitoring. But I have an odd feeling that’s not the way you’ll go!

    Another question–if the AEI representation is basically a bound version of the upper and lower states (this may be a contentious way of putting it…), why are we only conscious of some aspects of the bound complex and not others? If low-level perceptual representations are indeed in reciprocal connection with the upper levels, and this somehow *constitutes* the intermediate representation, why aren’t those lower-level features conscious? What screens them out? Why only some features (i.e., the right ones) and not others? The point is, recurrence might be too broad a brush to pin down the contents of consciousness.


  5. Pete Mandik says:

    Hey Josh,

    I like your new driver-based case. I think it’s clear that much of the braking and accelerating I do on my daily commute goes by unconsciously. My bet is that whatever recurrence is involved is at relatively low levels. What’s added when danger makes me conscious of e.g. whether I’m braking too hard on a slippery surface? Recurrent activation kicking in at intermediate levels. Of course it’s natural to describe this as attention thereby being grabbed. But I don’t think that AEI thereby turns into AIR. I think, for this particular case at least, “attention”- and “consciousness”-based descriptions of what’s going on are going to be pretty much interchangeable folk descriptions of the explanans. I don’t think, though Jesse does, that attention should figure in the explanandum. I don’t mean this to be a statement of what, in general, distinguishes AEI and AIR (there’s a relatively complicated list of distinguishers). Just that, in this particular case, whatever attention has to do with it doesn’t thereby collapse AEI into AIR.

    Regarding your content question, I hold that all of the contents in the reciprocating hybrid are contents of consciousness. I don’t see that anything internal to my account entails that only some aspects enter in. Is there something external to my story that I’ve missed but that needs to be taken into account?

    One thing that perhaps needs clarification: I figure that there are a whole bunch of levels beyond 3, and definitely beyond 2. The way you phrased it, though, makes it sound like the view is that there’s an upper level, a lower level, and the intermediate level emerges when there’s reciprocal interaction between the top and the bottom.

  6. Josh Weisberg says:

    I guess my worry is that recurrent loops are pretty ubiquitous in the brain, and that some rather low-level features might be involved in the loops. That would entail that those low-level features are conscious on your view. But (the intuition goes) we aren’t conscious of those low-level features. There’s too much stuff in the loop. How do we cordon off the right level(s) of loop to bind the proper features of experience?

  7. Arnold Trehub says:

    See Fig. 8 in my paper “Space, self, and the theater of consciousness” (*Consciousness and Cognition*, 2007). The features of experience are properly bound in egocentric retinoid space, which receives recurrent (loop) projections from synaptic matrices in the lower-level processes shown in Figure 8. The structural and dynamic details of these neuronal mechanisms are given in *The Cognitive Brain* (MIT Press, 1991).

  8. Josh Weisberg says:

    Arnold–

    Thanks! I’ll take a look.

  9. Pete Mandik says:

    Josh, I wonder if the following goes some way to addressing the “screening off” worry. One way of thinking of what lower-level stuff does and doesn’t get in might be illustrated with the following case concerning sensory consciousness. Suppose that we present a duck-rabbit visual stimulus to a perceiver who has both a duck concept and a rabbit concept, and that due to a prior stimulus, their duck concept is primed. The high-level duck representation will receive even further activation, then, on the presentation of the duck-rabbit. The high-level duck rep will send signals back down to lower levels that concern specific regions of the visual array, enhancing duckish ones and inhibiting rabbitish ones. This more duckish set of activations in the lower-level array propagates back up, strengthening the activation of the higher-level rep even further. The upshot of all of this is that what, in the initial feedforward sweep, contained lots of low-level stuff ends up, through the influence of the higher-level stuff, with lots of that low-level stuff getting suppressed.
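
    A toy version of this settling process, on illustrative assumptions of my own (scalar activations, a hand-picked feedback gain, a fixed number of sweeps): the primed high-level duck representation feeds back to enhance duckish low-level activations and inhibit rabbitish ones, and the re-biased low-level array in turn strengthens the high-level representation.

```python
# Toy recurrent settling on an ambiguous stimulus, in the spirit of the
# duck-rabbit example above. Activation values, the feedback gain, and the
# iteration count are all illustrative assumptions.

def settle(duck_prior, rabbit_prior, steps=5, feedback_gain=0.5):
    # Initial feedforward sweep: the ambiguous stimulus activates duckish
    # and rabbitish low-level features equally.
    low_duck, low_rabbit = 1.0, 1.0
    high_duck, high_rabbit = duck_prior, rabbit_prior  # priming from prior stimulus
    for _ in range(steps):
        # Top-down feedback: the stronger high-level rep enhances its own
        # low-level features and suppresses the competitor's.
        bias = feedback_gain * (high_duck - high_rabbit)
        low_duck = max(0.0, low_duck + bias)
        low_rabbit = max(0.0, low_rabbit - bias)
        # Bottom-up return sweep: the re-biased low-level array strengthens
        # the corresponding high-level representations.
        high_duck += feedback_gain * low_duck
        high_rabbit += feedback_gain * low_rabbit
    return low_duck, low_rabbit


low_duck, low_rabbit = settle(duck_prior=1.1, rabbit_prior=1.0)
print(low_duck > low_rabbit)  # True: rabbitish low-level stuff is suppressed
```

    Even a small initial advantage from priming compounds over the sweeps, so the rabbitish low-level activations end up suppressed rather than contributing to the settled state.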
