Archive for February, 2009

Zombie Rights!

Friday, February 27th, 2009



Zombies are were people too.

The Zombie Rights Argument
Premise 1. If zombies are possible, then I can’t know whether you have qualia.
Premise 2. Qualia are obligation-inducing.
Premise 3. If zombies are possible, then I can’t know whether I’m obligated to e.g. refrain from torturing you.
Premise 4. My obligations can’t be unknowable by me.
——————
Conclusion. Zombies are not possible.

Commentary:
P1. All the evidence I have about you is exactly the same as the evidence I would gain from your zombie twin.
P2. Qualia, said Sellars, are what make life worth living. It is a good to have the pleasingness of pleasure and a harm to have it taken away. The painfulness of pain is what makes it a harm to be tortured.
P3. Seems to follow pretty straightforwardly from 1&2.
P4. Jason Zarri has a very nice post on this sort of thing. See his “Does moral realism entail moral verificationism?”, where he discusses the following principle: “Necessarily, if someone has a duty to do something, it is possible for them to find out or discover that they have a duty to do it.”
C. AARG!
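
For what it’s worth, here is one way the load-bearing steps might be regimented. The shorthand, and the explicitly flagged tacit premise, are my own additions, not part of the original argument:

\[
\begin{array}{ll}
\multicolumn{2}{l}{\text{Let } Z = \text{``zombies are possible''}, \; O = \text{``I am obligated to refrain from torturing you''},}\\
\multicolumn{2}{l}{\text{and } \mathsf{K}\varphi = \text{``it is possible for me to know that } \varphi\text{''}.}\\[4pt]
\text{P3.} & Z \rightarrow (\neg\mathsf{K}O \wedge \neg\mathsf{K}\neg O)\\
\text{P4.} & O \rightarrow \mathsf{K}O\\
\text{Tacit.} & O \quad \text{(you do have qualia, and qualia are obligation-inducing, per P2)}\\
\hline
\text{1.} & \mathsf{K}O \quad \text{(from P4 and Tacit)}\\
\text{2.} & Z \rightarrow \neg\mathsf{K}O \quad \text{(from P3)}\\
\text{C.} & \neg Z \quad \text{(from 1 and 2 by modus tollens)}
\end{array}
\]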

This sort of argument has probably been made before. References welcome.

BTW, some relevant discussion can be found in the comment thread of this post by Eric Schwitzgebel: [link].

Swamp Mary to Rock New Jersey

Wednesday, February 25th, 2009

Any Brain Hammer-heads that anticipate spending today being in or near that very special part of New Jersey we know as William Paterson University should swing by and check out my talk “Swamp Mary’s Revenge: Deviant Phenomenal Knowledge and Physicalism” from 3:30 to 5:00 pm in 126 Atrium. Here’s a link to the paper (forthcoming in Philosophical Studies): link.

ABSTRACT: Deviant phenomenal knowledge is knowing what it’s like to have experiences of, e.g., red without actually having had experiences of red. Such a knower is a deviant. Some physicalists have argued and some anti-physicalists have denied that the possibility of deviants undermines both anti-physicalism and the Knowledge Argument. The current paper presents new arguments defending the deviant-based attacks on anti-physicalism. Central to my arguments are considerations concerning the psychosemantic underpinnings of deviant phenomenal knowledge. I argue that only physicalists are in a position to account for the conditions in virtue of which states of deviants constitute representations of phenomenal facts.

All is Dark Inside

Swampy?

On the Apparent Lack of Control Phenomenology

Wednesday, February 25th, 2009

In the previous post on control consciousness, I presented a case in favor of non-sensory control phenomenology. Despite these considerations in favor of thinking that control infects large portions of our phenomenology, there is something tempting about the claim that there is no non-sensory control phenomenology. If a theory does not credit such a temptation as leading to the truth, then it would be a virtue of the theory if it could explain why such a temptation exists anyway. I will here briefly sketch two main ways in which the temptation may be accounted for. The first concerns the differential bandwidth between prototypical instances of sensory inputs and motor outputs. The second concerns the degree to which introspection is itself an act.

Differential bandwidth.
Sensory inputs may be compared with each other and with motor outputs in terms of bandwidth. Estimates of the bandwidth of the human eye for color vision range from 4.32 x 10^7 bits/sec (Jacobson, 1950, 1951) to a more recent estimate of 10^6 bits/sec (Koch et al., 2006), roughly a megabit per second (1 Mbit/s). It is perhaps not surprising that hearing has a significantly lower bandwidth than vision (a picture being worth a thousand words, and all). Jacobson (1950, 1951) gives an estimate of 9,900 bits/sec for the bandwidth of the human ear. He also gives a bandwidth estimate of 4.32 x 10^6 bits/sec for the eye for black and white vision. These differences in bandwidth perhaps account for widespread intuitions such as the intuition that visual “qualia” are ineffable, the intuition that a person blind from birth can never be told what it’s like to see (Hume, Locke), and the intuition that a person reared in a black and white environment wouldn’t know what it’s like to see red (Jackson, 1982). The auditory channel is relatively impoverished compared to the visual channel, and the black and white visual channel relatively impoverished compared to the color channel.

So what happens when we turn our attention to motor systems? Bandwidth estimates for motor output systems are far lower than those for either vision or hearing. Fitts (1992) estimates motor output bandwidth at 10 to 12 bits/sec. I offer that bandwidth differences between various sensory systems and output systems can serve as a basis for explaining why many may have the intuition that there is no distinctive phenomenology for control consciousness.
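
To make the contrast vivid, here is a quick back-of-the-envelope comparison using the figures cited above; the script and its labels are just my own scratchpad, and the motor figure uses the midpoint of the 10 to 12 bits/sec estimate.

```python
# Rough comparison of the channel-capacity estimates cited in the post.
# Figures are as reported above; the labels and the choice of 11 bits/sec
# as a midpoint for the motor estimate are my own.
ESTIMATES_BITS_PER_SEC = {
    "vision, color (Jacobson 1951)": 4.32e7,
    "vision, color (Koch et al. 2006)": 1e6,
    "vision, black & white (Jacobson 1951)": 4.32e6,
    "hearing (Jacobson 1950)": 9.9e3,
    "motor output (Fitts)": 11.0,
}

motor = ESTIMATES_BITS_PER_SEC["motor output (Fitts)"]
for channel, bps in ESTIMATES_BITS_PER_SEC.items():
    print(f"{channel}: {bps:.3g} bits/sec (~{bps / motor:,.0f} times motor bandwidth)")
```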

Introspection as Mental Action.
Another explanation of why some may have supposed that there is no control phenomenology, an explanation that may work together with the bandwidth-based explanation, hinges on the fact that introspecting is itself an act. As such, it is reasonable to suppose that a greater load is presented in introspecting control consciousness than in introspecting sensory consciousness. To spell this out further, let us assume, for purposes of illustration, that motor systems and sensory systems have the same bandwidth. If so, bandwidth alone would not serve to account for an apparent difference in phenomenological richness. If, however, there were some additional factor present that inhibited the ability to introspectively attend to motor systems but not sensory systems, then that factor would serve to explain a difference in apparent richness.
What could such a factor be? It is relatively well known that attempting multiple control tasks simultaneously diminishes the capacity one would otherwise have to do them singly. If introspection is itself an act, then introspecting motor control is a doubling of tasks in a way that introspecting otherwise passive sensory input is not. The doubling introduced in introspecting control consciousness thus serves as the sought-after factor that can explain a comparative lack of richness between control and sensory systems.

Previous Posts
1. Control Consciousness
2. The Pure Perceptual Model
3. The Motor Theory
4. The Imagery Theory
5. Introducing AEI
6. Control Consciousness Explained
7. Libet’s Puzzle of Will
8. Control Phenomenology

Unicorn Cover Uncovered

Tuesday, February 24th, 2009

My paper, “Beware of the Unicorn: Consciousness as Being Represented and Other Things that Don’t Exist,” just came out in the latest issue of Journal of Consciousness Studies. [link to uncorrected proofs] [link to IngentaConnect] What I didn’t know until I got my hands on the issue is that my unicorn made the front cover. I can’t wait to tell my mom. Enjoy, but beware, this jpeg:

unicorncover

Control phenomenology

Monday, February 23rd, 2009

Some have claimed that introspection of control consciousness reveals no distinctively non-sensory component. Others make a contrary claim about the relevant phenomenology. Phenomenological disputes are notoriously difficult to adjudicate, leading some researchers to be quite skeptical of the phenomenological enterprise and the reliability of introspection. Nonetheless, claims about consciousness should be made to square with the self-reports of subjects, if not to explain them, then to explain them away. If there’s controversy regarding some point of phenomenology, it can be quite satisfying to discover an explanation of why such a controversy arises.

While I think careful reflection reveals a distinctively non-sensory component to control consciousness, I do think there is something worth taking seriously in various claims against non-sensory control phenomenology. In this post I sketch the case for non-sensory control phenomenology. In the next post, I’ll offer some possible explanations why it may have seemed obvious to some that there would be no such thing.

One point worth stressing is that control plays a relatively direct role in sensory consciousness. When we examine the phenomenology of sensory consciousness, we note a role that control plays in it. Two main ways in which control is involved in sensory consciousness are these: (1) the vividness or intensity of sensory experience is correlated with what is contrary to control, and (2) differential degrees of direct control are one of the main ways in which a person is able to distinguish, from the first-person point of view, between perception and imagery.

As has been noted by other authors (Chalmers, 1996, p. 224) (Rosenthal, 1986, p. 412), the vividness or intensity of a mental state is correlated with or constituted by the strength of its connection to action. As I would like to put the point, especially as it applies to sensory consciousness, vividness is correlated with or constituted by what is contrary to control. The more vivid a pain, the more it compels our attention toward it and our acting to alleviate it. The less vivid, the more control we have over whether we are going to do anything about it.

The point generalizes beyond pain. As pointed out by Weiskrantz et al. (1971), self-administered tactile stimulations are perceived as less intense than stimulations administered by another. The vividness of a visual experience of red needs to be characterized in terms of its inverse relation to control and cannot instead be defined in terms of sensory properties of the stimulus. The point is best drawn out by attending to differences between a sensory experience of a particular shade of red and a mental image of the same shade of red. Another useful comparison might be between an experience and a thought of one and the same shade of red. The sensible properties of the shade represented in experience—the shade’s hue, saturation, and brightness—underdetermine the differential vividness of experience and imagery (as well as experience and thought), since one can have a thought or image that captures the properties of hue, saturation, and brightness definitive of a given experienced shade, and yet the experience may still be more vivid than either the corresponding thought or image.

One point that combines both the point about vividness and the point about imagery is that imagined pain is less vivid than experienced (non-imagined) pain. It is worth noting as a general point that, besides the various commonalities between sensory perception and sensory imagery, the main way in which we are able to distinguish an image from a percept with similar content is by the differential degrees of direct control that we have over the image (Kosslyn, 1994, pp. 102-104). For example, in imaging an apple, I can rotate, enlarge, or distort the shape of the apple. But in perceiving an apple, I can do no such thing, especially if I cannot get my hands on the apple.

It is worth noting that, due to similarities between percepts and images, subjects do sometimes confuse the two (Perky, 1910). However, the degree to which subjects confuse a percept and an image can be manipulated experimentally by introducing factors that either vary how difficult the imagery task is (Finke, Johnson, & Shyi, 1988) or whether the images are created intentionally rather than incidentally (Durso & Johnson, 1980) (Intraub & Hoffman, 1992). An intentionally formed and difficult to manipulate image (say, an image of a rotating, relatively complex three-dimensional figure) is less likely to be mistaken in memory for a percept than a comparatively less difficult image.

Previous Posts
1. Control Consciousness
2. The Pure Perceptual Model
3. The Motor Theory
4. The Imagery Theory
5. Introducing AEI
6. Control Consciousness Explained
7. Libet’s Puzzle of Will

Libet’s Puzzle of Will

Friday, February 20th, 2009

One advantage of the Allocentric-Egocentric Interface (AEI) account of control consciousness is that it delivers a satisfying way of absorbing the results of Benjamin Libet’s widely discussed work on readiness potentials and the question of whether conscious will is an illusion (Libet, 1999).

Libet’s experiment involved having experimental subjects note, while looking at a clock, the time at which they made the conscious decision to flick their wrist. Libet found, by examining EEG recordings of a readiness potential (a marked increase of neural-electrical activity preceding the wrist-flick), that the readiness potential preceded the reported time of the conscious decision (the subjective time, the time at which the decision seemed to the subject to be made) by 300 to 500 milliseconds.

One, perhaps troubling, implication of Libet’s result is that control consciousness is an illusion. We do not consciously will anything. Willing occurs prior to a conscious state, and that conscious state is a by-product of the act of willing, not the willing itself.

It is worth noting that this sort of result is to be expected according to the AEI account of control consciousness. The highest levels of activation in a motor processing hierarchy occur unconsciously and prior to the recurrent signaling in intermediate levels that constitutes the conscious state. Further, the parallel accounts that AEI gives of sensory consciousness and control consciousness allow for an interpretation of Libet’s result that is far less troubling than the will-as-illusion interpretation.

It is no more an illusion that we will consciously than that we perceive consciously. The distal objects of our perception, that is, the external-world events that we perceive, are perceived consciously even though they, the external events, are causal antecedents of our states of perceptual consciousness. If we find such a view non-paradoxical and non-puzzling, then we should be able to come to a similarly non-troubling view of the implications of Libet’s results for control consciousness. Just as external events are consciously perceived even though they are causal antecedents of states of consciousness, certain inner events are conscious willings or consciously willed even though they are causal antecedents of states of consciousness.

We could, if we wanted, apply overly stringent criteria to perception to generate a “puzzle of conscious perception” that parallels the puzzle of conscious will that many see raised by Libet’s results. One overly stringent criterion is a time-of-occurrence criterion according to which, in order to be distinct from a memory, a perception of an event has to occur at the same time as the event perceived. Another overly stringent criterion is a factivity criterion according to which, in order to be distinct from an illusion, a perception of the time of occurrence of an event as now has to be accurate (the perception that now is noon cannot occur a little after noon without counting as an illusion). If we applied such criteria, then we could derive that we never have accurate perceptions and, instead, only have either accurate memories or illusions of what’s happening now. More natural, however, is to avoid such overly stringent criteria and thus go on saying, just as common sense does, that we frequently perceive events at their time of occurrence.

Prinz writes, of “the fact that the felt decision to move occurs 250 milliseconds after a readiness potential in motor areas of the brain,” that “[i]f the conscious experience of intention supervened on motor representations, we might expect the felt intention to co-occur with the onset of the motor response” (J. Prinz, 2007, pp. 344-345).

Prinz’s use of the Libet point seems to assume that motor signals should suffice for consciousness on the motor theory. But this is no more a reasonable expectation to place on the motor theory than it would be to saddle Prinz with the assumption that mere retinal input suffices for consciousness.

Previous Posts
1. Control Consciousness
2. The Pure Perceptual Model
3. The Motor Theory
4. The Imagery Theory
5. Introducing AEI
6. Control Consciousness Explained

Control Consciousness Explained

Wednesday, February 18th, 2009

The applicability of AEI (the Allocentric-Egocentric Interface theory of consciousness) to motor systems looks to be a relatively straightforward affair. First, motor systems are arranged hierarchically. Focusing here just on cortex, the highest level is in prefrontal cortex, the lowest level in primary motor cortex, and relatively intermediate is premotor cortex. Further, there exist both forward projections and back projections between successive levels of the motor hierarchy (Churchland, 2002, p. 72). We may further characterize levels in the motor hierarchy as differing along an allocentric-egocentric dimension.

The neuroanatomical features of the motor system make it quite natural to suppose that both intermediacy and recurrence can apply to motor processing. The basic suggestion here is twofold. First, unconscious action involves motor signals originating in relatively high levels and propagating down to lower levels without any recurrence from lower to higher. Second, the conscious aspect of conscious action is to be identified with states consisting in reciprocally interacting pairs of motor representations where one member of the pair is relatively more allocentric than the other.

While the application of AEI to control consciousness is not an instance of what I have called a pure motor theory, since not just any outgoing motor signaling counts as conscious, it is still clearly an instance of a motor theory, since it allows for conscious control to arise without any sensory input or imagery thereof.

It is perhaps worth briefly noting a mapping between the basic elements of pseudo-closed-loop control and the AEI account of control consciousness. Outgoing signals from the highest levels of the hierarchy may be identified with the specification of a goal state. The next level down receives the goal state and computes the inverse mapping. The output of this inverse mapping may be sent on to the lowest levels, eventuating in command signals. A copy of it may also be sent to intermediate areas, wherein activation is utilized as a forward model with results that may be propagated back up to higher levels.
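
To keep the mapping explicit, here is a small illustrative table in code form; the level labels and role descriptions are my own paraphrase of the sketch above, not anatomical claims.

```python
# Illustrative mapping (my own paraphrase, not an anatomical claim) of
# pseudo-closed-loop control roles onto the cortical motor hierarchy
# described in this post.
HIERARCHY_ROLES = {
    "prefrontal cortex (highest, most allocentric)":
        "specifies the goal state",
    "premotor cortex (next level down)":
        "computes the inverse mapping from goal state to command sequence; "
        "a copy of its output feeds a forward model at intermediate levels, "
        "whose results recur back up the hierarchy",
    "primary motor cortex (lowest, most egocentric)":
        "issues command signals to the musculoskeletal plant",
}

for level, role in HIERARCHY_ROLES.items():
    print(f"{level}: {role}")
```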

Non-parsimonious?

A sensory theory of control consciousness may seem to lead to an overall more parsimonious view of the mind than a motor theory. The thought here is something like the following. Since it seems difficult to deny that at least some consciousness is sensory consciousness, the sensory theory, in holding that all consciousness is sensory, leads to a simpler view than the motor account. Proponents of a motor account of control consciousness seem, on the face of it, to need to commit to two different accounts of consciousness: one for sensory consciousness and one for control consciousness. But with AEI on hand, it is easy to see that a motor theory of control consciousness need not lead to a less parsimonious view. A single coherent account of consciousness applies to both sensory consciousness and control consciousness: conscious states are constituted by patterns of recurrent activation in intermediate levels of processing hierarchies.

Previous Posts
1. Control Consciousness
2. The Pure Perceptual Model
3. The Motor Theory
4. The Imagery Theory
5. Introducing AEI

Control Consciousness: Introducing AEI

Monday, February 16th, 2009

As mentioned previously, the very basic form of the motor theory has the problem of being incapable of distinguishing conscious from unconscious action. Here a solution may be adapted from one given to a similar problem, that of distinguishing conscious from unconscious perception. Not just any input to sensory systems gives rise to a conscious percept. Instances of subliminal perception and blindsight are two kinds of example. The solution I advocate for distinguishing conscious from unconscious perception is twofold. I shall label the two parts of the solution “intermediacy” and “recurrence.” The first part, intermediacy, involves identifying conscious perceptual states with states at relatively intermediate levels of sensory processing hierarchies. The second part, recurrence, restricts consciousness to intermediate-level states involved in recurrent interaction between representations at relatively high and at relatively low levels of sensory processing hierarchies.

The “what” and “why” of intermediacy. Sensory processing, as in, for example, vision, is hierarchical: the lowest levels are constituted by neural activations close to the sensory periphery, which represent local and egocentric visible features, and the highest levels are constituted by abstract and allocentric representations employed in categorization and recognition.

It is natural to ask where in a sensory processing hierarchy conscious states reside. It is crucial to any account of consciousness that it connect the reality accessible from the third-person point of view (e.g. states of activation in neural circuits) with the appearance of what it’s like from the first-person point of view. Further, both introspective and observational methods converge to indicate that conscious states are relatively intermediate between the highest and lowest levels of the hierarchy. My visual perception of a coffee cup represents the cup as having a specific orientation relative to my point of view and a relatively specific location in my visual field. However, the percept is not so high-level as to merely indicate the presence of a cup in a way that abstracts from all observer-relative information. Nor is it so low-level as to register every change in irradiation of various regions of my two retinas (the lowest levels are prior even to the integration of information from the disparate retinas). The intermediacy criterion on sensory consciousness means that not just any neural response to a sensory input will count as a conscious percept. This Goldilocks criterion excludes from consciousness those neural activations that are too high or too low.

The “what” and “why” of recurrence. While intermediacy is necessary, it does not alone suffice for consciousness. Strictly feed-forward activation of representations in sensory processing hierarchies can occur without consciousness. Pascual-Leone and Walsh (Pascual-Leone & Walsh, 2001) showed, with precisely timed pulses of transcranial magnetic stimulation, that visual consciousness was suppressed if recurrent activation was suppressed and only feed-forward activation was allowed. Additionally, Lamme et al. (Lamme, Supèr, & Spekreijse, 1998) suggest that responses to stimuli in animals under general anesthetic are feed-forward activations without accompanying recurrence.

Elsewhere I defend what I call the Allocentric-Egocentric Interface theory of consciousness (AEI) (P. Mandik, 2005, 2009). AEI incorporates both intermediacy and recurrence in the following manner: conscious states are intermediate-level states in processing hierarchies, states constituted by pairs of recurrently interacting allocentric and egocentric representations. Previous discussion of AEI has focused on sensory processing hierarchies. I turn, in the next post, to consider a natural extension of AEI to motor processing hierarchies.
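
As a toy restatement of the two criteria (mine, not anything from the published formulations), one might write the conjunctive test like this:

```python
# A toy restatement of the AEI criteria sketched above: a state counts as
# conscious only if it is (i) at an intermediate level of a processing
# hierarchy and (ii) part of recurrent allocentric-egocentric interaction.
# The data structure and level indices are invented for illustration.
from dataclasses import dataclass

@dataclass
class HierarchyState:
    level: int        # 0 = periphery (most egocentric) ... top = most allocentric
    top_level: int    # index of the highest level in the hierarchy
    recurrent: bool   # reciprocally interacting with higher and lower levels?

def satisfies_aei(state: HierarchyState) -> bool:
    intermediate = 0 < state.level < state.top_level   # the Goldilocks band
    return intermediate and state.recurrent

# Feed-forward sweep only (e.g., under anesthesia or masked presentation):
print(satisfies_aei(HierarchyState(level=2, top_level=4, recurrent=False)))  # False
# Recurrently sustained intermediate-level state:
print(satisfies_aei(HierarchyState(level=2, top_level=4, recurrent=True)))   # True
```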

Previous Posts
1. Control Consciousness
2. The Pure Perceptual Model
3. The Motor Theory
4. The Imagery Theory
5. Introducing AEI

Control Consciousness: The Imagery Theory

Friday, February 13th, 2009

Prinz (2007) supplies a concise statement of his view, captured here in the following quotation:

The feeling of agency could be explained by a kind of prediction that the brain makes when we are about to act. If you elect to move your arm, you will be able to anticipate its movement. According to some leading neurobiological theories, when a plan is generated in the premotor cortex, a representation is sent to the somatosensory cortex corresponding to what the bodily senses should perceive when that action is executed. That representation is called a “forward model.” A forward model is an anticipatory somatosensory image. When our bodies carry out motor plans, the forward model is compared with the actual changes that take place in our body as we move. The feeling of agency may arise from this matching process. If a match occurs, we feel we are in control. If a match doesn’t occur, it’s because our bodies didn’t move as we predicted they would, and that results in an experience of being passively moved by an external source. (p. 342).

One way of appreciating a problem with Prinz’s view involves the way it combines a concept from control theory, that of a forward model, with the concept of a sensory image. That forward models are involved in the control of bodily movement is a highly plausible suggestion. That they be regarded as sensory images is somewhat less plausible. Before further fleshing out the problem, a bit more needs to be said about the distinct notions of a forward model and a sensory image.

Many philosophers are aware of control theory via the work of Rick Grush (e.g. (Grush, 2001)), and I here rely on his exposition of its basic ideas. In the simplest kind of control system, open-loop control, a desired goal signal is fed into a controller, which sends control signals to a target system or plant. Applying these concepts to motor control involves viewing parts of the musculoskeletal system as plants and neural systems generating motor commands as controllers. The controller implements a mapping, the inverse mapping, of goal states onto command sequences. The plant implements a mapping, the forward mapping, of command sequences onto goal states (Grush, 2001, pp. 352-353). A slightly more complex control system, closed-loop control, has all of the components of open-loop control plus the addition of feedback signals from the plant to the controller. While for many control purposes closed-loop control is superior to open-loop control, closed-loop control is not without certain problems. If, for example, there are significant delays in the receipt of the feedback signal due to slow signal speeds and/or a relatively distant plant, then the system can oscillate wildly through potentially destructive cycles of overshooting and overcompensation. A slightly more complex control system that potentially overcomes such problems is pseudo-closed-loop control. One way of conceiving of pseudo-closed-loop control is by thinking of it as built by adding features to open-loop control. The first addition involves a second signal being sent by the controller, an efferent copy, which is a duplicate of the signal sent to the plant. This duplicate signal, however, is not sent to the plant, but instead to an emulator or forward model, which, in turn, sends signals back to the controller.
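
For readers who like to see the moving parts, here is a minimal numerical sketch of pseudo-closed-loop control in the spirit of Grush’s exposition; the dynamics, gains, and function names are invented for illustration and are not meant to model real motor physiology.

```python
# Minimal sketch of pseudo-closed-loop control. The controller computes
# commands from the forward model's (emulator's) estimate rather than from
# slow peripheral feedback; an efferent copy of each command drives the
# forward model. All numbers and dynamics are invented for illustration.

def controller(goal, estimate, gain=0.5):
    """Inverse mapping: turn the gap between goal and estimated plant state
    into a command signal."""
    return gain * (goal - estimate)

def plant(state, command):
    """Forward mapping implemented by the real plant (e.g., the limb)."""
    return state + command

def forward_model(estimate, efferent_copy):
    """Emulator: predicts what the plant will do, given a copy of the command,
    without waiting for feedback from the plant itself."""
    return estimate + efferent_copy

goal, plant_state, estimate = 10.0, 0.0, 0.0
for step in range(6):
    command = controller(goal, estimate)         # uses the emulator's estimate
    plant_state = plant(plant_state, command)    # the real limb moves
    estimate = forward_model(estimate, command)  # emulator updates immediately
    print(f"step {step}: command={command:.2f}, plant={plant_state:.2f}, estimate={estimate:.2f}")
```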

Now, it is tempting to follow Grush in calling the signal from the forward model “mock sensory information about what the real target system would do under various conditions” (p. 356, emphasis added), but I will want to resist such temptation.

It is useful here to consider the following two questions. First, what is involved in something’s being sensory in the sense of the term relevant to the current discussion? Second, do we have adequate reason for thinking that a forward model is relevantly sensory?

Starting with the first question, it is useful to look at Prinz’s own account of what makes something sensory. Prinz writes:

I will define a perceptually conscious mental state as a mental state that is couched in a perceptual format. A perceptual format is a representational system that is proprietary to a sense modality. To say that phenomenal states are perceptual is to say that their representational vehicles always belong to one of the senses: touch, vision, audition, olfaction, and so on. (J. Prinz, 2007, p. 336)

Further elaboration comes from what Prinz takes the negative aspects of his key thesis to be: “We do not have conscious states couched in non-perceptual formats. If I am right, we never have conscious states in our motor systems, and no conscious experiences are constituted by amodal representations…” (J. Prinz, 2007, p. 337).

In an earlier work dedicated to elaborating Prinz’s brand of empiricism, he spells out his view that “the senses are dedicated input systems” (J. J. Prinz, 2002, p. 115). Crucial to Prinz’s characterization is that each sense has both a proprietary class of inputs (physical magnitudes) and a proprietary representational format, thus denying that separate senses share a ‘common code’ (J. J. Prinz, 2002, p. 117).

It is worth noting that in this earlier work Prinz endorses a view of imagery whereby “we can form mental images by willfully reactivating our input systems” (J. J. Prinz, 2002, p. 115). It seems natural to suppose that what is responsible for these reactivations counting as sensory imagery is that it is input systems that are reactivated.

With these remarks in hand about what the “sensory” in “sensory imagery” consists in, let us return to the question of whether forward models need be conceived of as sensory imagery. In the basic outlines of pseudo-closed-loop control, there is nothing that makes compulsory a sensory-imagery interpretation of the forward model. The forward model is not receiving sensory inputs and thus cannot count as a sensory system as characterized by Prinz. A fortiori, it cannot count as sensory imagery, since it does not count as the reactivation of an input system.

Of course, it should be noted that there may be alternate architectures that incorporate forward models satisfying criteria for being sensory. However, the core idea of a forward model does not alone satisfy such criteria. It is also worth noting that the characterization of imagery as the willful reactivation of input systems threatens to make the imagery account collapse into a kind of non-sensory view. This is so if a crucial part of a state’s being imagery is its being activated by a control signal.

Previous Posts
1. Control Consciousness
2. The Pure Perceptual Model
3. The Motor Theory

Happy Birthday, Darwin

Thursday, February 12th, 2009



Computers come from apes.

In celebration of Darwin’s 200th birthday, I’ll be participating in a panel discussion with members of my university’s departments of anthropology and biology. If you are both a Brain Hammer-head and a WillyPee-head, or whatever you call ‘em, come see “Evolution: Truth or Myth” at 6pm in the Student Center Multi-purpose Room (near the food court).

In preparation for the event I was thinking about some of my research on artificial life and evolving simple synthetic intelligences. A little auto-googling popped up this summary of a paper I co-authored with some former students, “Evolving Artificial Minds and Brains”. The following is an excerpt from an introductory essay for the volume in which the paper appears. The editors of the volume and authors of the essay are Andrea C. Schalley and Drew Khlentzos. They do a pretty good job except for missing the point that, since mere responsiveness to stimuli is insufficient for mindedness, we are looking at nematode chemotaxis precisely because, in involving a memory, it crosses a threshold marking a difference in kind between mere reactivity and intelligence.

In “Evolving artificial minds and brains” Pete Mandik, Mike Collins and Alex Vereschagin argue for the need to posit mental representations in order to explain intelligent behaviour in very simple creatures. The creature they choose is the nematode worm and the behaviour in question is chemotaxis. Many philosophers think that a creature’s brain state or neural state cannot count as genuinely mental if the creature lacks any awareness of it. Relatedly, they think that only behaviour the creature is conscious of can be genuinely intelligent behaviour. When the standards for mentality and intelligence are set so high, very few creatures turn out to be capable of enjoying mental states or exhibiting intelligent behaviour. Yet the more we learn about sophisticated cognitive behaviour in apparently simple organisms the more tenuous the connection between mentality and consciousness looks.

If there is a danger in setting the standards for mentality and intelligence too high, there is equally a danger in setting them too low, however. Many cognitive scientists would baulk at the suggestion that an organism as simple as a nematode worm could harbour mental representations or behave intelligently. Yet Mandik, Collins and Vereschagin argue that the worm’s directed movement in response to chemical stimuli does demand explanation in terms of certain mental representations. By “mental representations” they mean reliable forms of information about the creature’s (chemical) environment that are encoded and used by the organism in systematic ways to direct its behaviour.

To test the need for mental representations they construct neural networks that simulate positive chemotaxis in the nematode worm, comparing a variety of networks. Thus networks that incorporate both sensory input and a rudimentary form of memory in the form of recurrent connections between nodes are tested against networks without such memory and networks with no sensory input. The results are then compared with the observed behaviour of the nematode. Their finding is that the networks with both sensory input and the rudimentary form of memory have a distinct selectional advantage over those without both attributes.
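
The published networks were evolved rather than hand-written, so the following is only a toy illustration of why a recurrent “memory” of the previous sample matters for gradient climbing; everything in it (the gradient, the update rule, the step size) is invented.

```python
# Toy illustration (not the authors' evolved networks) of why a rudimentary
# memory helps chemotaxis: remembering the previous concentration sample lets
# the agent tell whether it is moving up or down the gradient.

def concentration(x):
    # Invented one-dimensional chemical gradient peaking at x = 5.
    return -abs(x - 5.0)

def run(steps=20, use_memory=True):
    x = 0.0
    previous = concentration(x)
    for _ in range(steps):
        current = concentration(x)
        if use_memory:
            # Recurrent "memory": keep going if things improved, reverse otherwise.
            direction = 1.0 if current >= previous else -1.0
        else:
            # Purely reactive agent: the current sample alone gives no direction.
            direction = 1.0
        previous = current
        x += 0.5 * direction
    return x

print("with memory:   ", run(use_memory=True))    # settles near the peak at x = 5
print("without memory:", run(use_memory=False))   # drifts past the peak to x = 10
```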

Even if it is too much to require mental states to be conscious, there is still the sense that there is more to mentality than tracking and responding to environmental states. One worry is that there is simply not enough plasticity in the nematode worm’s behaviour to justify the attribution of a mind. A more important worry is that the nematode does not plan - it is purely at the mercy of external forces pushing and pulling it in the direction of nutrients. In this regard, it is instructive to compare the behaviour of the nematode worm with the foresighted behaviour of the jumping spider, Portia labiata. Portia is able to perform some quite astonishing feats of tracking, deception and surprise attack in order to hunt and kill its (often larger) spider prey. Its ability to plot a path to its victim would tax the computational powers of a chimpanzee let alone a rat. It has the ability to plan a future attack even when the intended victim has long disappeared from its sight. Portia appears to experiment and recall information about the peculiar habits of different species of spiders, plucking their webs in ways designed to arouse their interest by simulating the movements of prey without provoking a full attack. Yet where the human brain has 100 billion brain cells and a honeybee’s one million, Portia is estimated to have no more than 600,000 neurons!