Archive for the ‘Animat Semantics’ Category

Happy Birthday, Darwin

Thursday, February 12th, 2009



Computers come from apes.

In celebration of Darwin’s 200th birthday, I’ll be participating in a panel discussion with members of my university’s departments of anthropology and biology. If you are both a Brain Hammer-head and a WillyPee-head, or whatever you call ’em, come see “Evolution: Truth or Myth” at 6pm in the Student Center Multi-purpose Room (near the food court).

In preparation for the event I was thinking about some of my research on artificial life and evolving simple synthetic intelligences. A little auto-googling popped up this summary of a paper I co-authored with some former students, “Evolving Artificial Minds and Brains”. The following is an excerpt from an introductory essay for the volume in which the paper appears. The editors of the volume and authors of the essay are Andrea C. Schalley and Drew Khlentzos. They do a pretty good job, except for missing the point that, since mere responsiveness to stimuli is insufficient for mindedness, we are looking at nematode chemotaxis precisely because, in involving a memory, it crosses a threshold marking a difference in kind between mere reactivity and intelligence.

In “Evolving artificial minds and brains” Pete Mandik, Mike Collins and Alex Vereschagin argue for the need to posit mental representations in order to explain intelligent behaviour in very simple creatures. The creature they choose is the nematode worm and the behaviour in question is chemotaxis. Many philosophers think that a creature’s brain state or neural state cannot count as genuinely mental if the creature lacks any awareness of it. Relatedly, they think that only behaviour the creature is conscious of can be genuinely intelligent behaviour. When the standards for mentality and intelligence are set so high, very few creatures turn out to be capable of enjoying mental states or exhibiting intelligent behaviour. Yet the more we learn about sophisticated cognitive behaviour in apparently simple organisms the more tenuous the connection between mentality and consciousness looks.

If there is a danger in setting the standards for mentality and intelligence too high, there is equally a danger in setting them too low, however. Many cognitive scientists would baulk at the suggestion that an organism as simple as a nematode worm could harbour mental representations or behave intelligently. Yet Mandik, Collins and Vereschagin argue that the worm’s directed movement in response to chemical stimuli does demand explanation in terms of certain mental representations. By “mental representations” they mean reliable forms of information about the creature’s (chemical) environment that are encoded and used by the organism in systematic ways to direct its behaviour.

To test the need for mental representations they construct neural networks that simulate positive chemotaxis in the nematode worm, comparing a variety of networks. Thus networks that incorporate both sensory input and a rudimentary form of memory in the form of recurrent connections between nodes are tested against networks without such memory and networks with no sensory input. The results are then compared with the observed behaviour of the nematode. Their finding is that the networks with both sensory input and the rudimentary form of memory have a distinct selectional advantage over those without both attributes.

Even if it is too much to require mental states to be conscious, there is still the sense that there is more to mentality than tracking and responding to environmental states. One worry is that there is simply not enough plasticity in the nematode worm’s behaviour to justify the attribution of a mind. A more important worry is that the nematode does not plan: it is purely at the mercy of external forces pushing and pulling it in the direction of nutrients. In this regard, it is instructive to compare the behaviour of the nematode worm with the foresighted behaviour of the jumping spider, Portia labiata. Portia is able to perform some quite astonishing feats of tracking, deception and surprise attack in order to hunt and kill its (often larger) spider prey. Its ability to plot a path to its victim would tax the computational powers of a chimpanzee let alone a rat. It has the ability to plan a future attack even when the intended victim has long disappeared from its sight. Portia appears to experiment and recall information about the peculiar habits of different species of spiders, plucking their webs in ways designed to arouse their interest by simulating the movements of prey without provoking a full attack. Yet where the human brain has 100 billion brain cells and a honeybee’s one million, Portia is estimated to have no more than 600,000 neurons!
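Stepping outside the excerpt for a moment: the memory-versus-mere-reactivity contrast can be caricatured in a few lines of code. The toy sketch below is not the evolved network from our paper; it is just an illustration, with made-up parameters, of why even a one-step memory of the previous chemical sample gives an agent an edge over a purely reactive one at climbing a gradient.

```python
# Toy sketch (NOT the model from "Evolving Artificial Minds and Brains"):
# compare a memoryless agent with one that remembers the previous chemical
# sample, on a simple 1-D chemotaxis task. All numbers are illustrative.

import random

def concentration(x):
    """Chemical concentration peaks at x = 0 and falls off with distance."""
    return 1.0 / (1.0 + abs(x))

def run_agent(use_memory, steps=200, seed=0):
    rng = random.Random(seed)
    x = 10.0                                   # start far from the peak
    heading = -1.0 if rng.random() < 0.5 else 1.0
    prev_c = concentration(x)
    for _ in range(steps):
        c = concentration(x)
        if use_memory:
            # One-step memory: if the last move made things worse,
            # reverse heading; otherwise keep going.
            if c < prev_c:
                heading = -heading
        else:
            # Purely reactive: no memory, so heading is chosen at random.
            heading = -1.0 if rng.random() < 0.5 else 1.0
        x += heading * rng.uniform(0.5, 1.0)
        prev_c = c
    return concentration(x)                    # higher means closer to the peak

if __name__ == "__main__":
    print("with memory   :", round(run_agent(True), 3))
    print("without memory:", round(run_agent(False), 3))
```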

Evolving Virtual Creatures: The Definitive Guide

Wednesday, March 26th, 2008

Alex J. Champandard has compiled “Evolving Virtual Creatures: The Definitive Guide”. Excerpt:

AI research ties into games and simulations in many ways, but one of the most fascinating is the evolution of artificial life. Here’s a compilation of the best videos and white papers about applying genetic algorithms to generating the morphology and behavior of virtual embodied creatures in 3D worlds.

One of the cool things about the following video is its inclusion of an animation of the creature’s neural network:

Also supercool, this one:

Cognitive Cellular Automata

Friday, January 11th, 2008

An updated version of my paper “Cognitive Cellular Automata” is now available on my website [link].

ABSTRACT: In this paper I explore the question of how artificial life might be used to get a handle on philosophical issues concerning the mind-body problem. I focus on questions concerning what the physical precursors were to the earliest evolved versions of intelligent life. I discuss how cellular automata might constitute an experimental platform for the exploration of such issues, since cellular automata offer a unified framework for the modeling of physical, biological, and psychological processes. I discuss what it would take to implement in a cellular automaton the evolutionary emergence of cognition from non-cognitive artificial organisms. I review work on the artificial evolution of minimally cognitive organisms and discuss how such projects might be translated into cellular automata simulations.
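For readers who haven’t played with cellular automata before, here is a minimal sketch of the formalism (a generic one-dimensional, two-state automaton, not any of the specific automata discussed in the paper): every cell looks at itself and its two neighbors and consults a fixed lookup table to decide its next state.

```python
# Minimal one-dimensional, two-state cellular automaton (illustrative only).
# Each cell's next state is read off an 8-entry lookup table indexed by the
# states of its left neighbor, itself, and its right neighbor. Rule 110 is
# used here just as an example.

def step(cells, rule=110):
    n = len(cells)
    nxt = []
    for i in range(n):
        left, me, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (me << 1) | right   # neighborhood as a 3-bit number
        nxt.append((rule >> index) & 1)           # look up the corresponding bit
    return nxt

if __name__ == "__main__":
    cells = [0] * 31
    cells[15] = 1                                 # a single "on" cell in the middle
    for _ in range(15):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)
```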

Evoloop
Above: Hiroki Sayama’s self-reproducing cellular automaton pattern, Evoloop. (source: http://necsi.org/postdocs/sayama/sdsr/movies/evol-rep.html). Does it have beliefs about itself and its neighboring loops?

Excerpt from my paper:

Two remarks are especially in order. The first concerns Sayama’s attribution of beliefs to the deflecting loops. The second concerns how all three strategies employ an attack detector state. Regarding the belief attribution, it is especially pertinent to the current paper whether it is in fact true, since if it is, then Sayama has thereby produced a cognitive cellular automaton. The belief in question is the belief that “self-replication has been completed”. This is allegedly a false belief had by an attacker as the result of being tricked by a loop employing the deflecting strategy of self-protection. If an organism is capable of having a belief that “self-replication has been completed” then it makes sense to ask what kind of belief it is. Is it a perceptual belief? An introspective belief? A volitional belief? I take it that the most plausible candidate is perceptual belief. If the loop has a belief at all, then it has a perceptual belief.

However, the having of a perceptual belief has certain necessary conditions that the loop fails to satisfy. In particular, a necessary condition on my having the perceptual belief that P–that is, a perceptual belief concerning some state of affairs, P–is that I have a state S that is at one end of an information channel which has at the other end P. Further, S must carry the information that P and be caused by P. Thus if I am to have the perception that there is a fly on the far side of the room, then I must have a state that carries the information that there is a fly. Lacking the ability to have such a state I might come to believe that there’s a fly, but that belief certainly cannot be a perceptual belief. In other words, perceivers of flies must be capable of detecting flies. Failing an ability to detect flies, one fails to perceive them and likewise fails to have perceptual beliefs about them.

Do Sayama’s loops have any capacity to detect the termination of their self-reproductive procedures? It seems not, since they have no detector states that carry the information that self-replication has terminated. They thus fail to satisfy a crucial condition for the having of perceptual belief. And on the assumption that perceptual beliefs were the only plausible candidates, we can conclude that insofar as Sayama’s attribution was literal, it is literally false. However, although Sayama’s loops do not have detector states for replication termination, they are not devoid of detector states altogether. As previously mentioned, they have attack detecting states. The question arises as to how far the attack detection schemes in Sayama’s loops go toward the evolution of cognition. One thing to note is that the self-defensive strategies triggered by the attack detection state, as well as the attack detection state itself, were designed by Sayama and are not products of evolution in the loops.

Apsychogenesis, Bacterial Cognition, and The Greatest Paper Ever Written

Monday, August 27th, 2007

1. Apsychogenesis
If “abiogenesis” is the hypothesized origin of life from non-living systems, then a good term for the hypothesized origin of mind from non-mental systems would be “apsychogenesis”. A question I find fascinating is: What were the relative times of occurrence of abiogenesis and apsychogenesis?

I’m aware of no non-religious defense of the view that apsychogenesis preceded abiogenesis (and I’m not totally sure there are any religious ones, either). My own money is on the theory that abiogenesis preceded apsychogenesis. If I understand their positions correctly, in defending the thesis of “strong continuity of life and mind”, theorists such as Francisco Varela and Evan Thompson are thereby committed to the co-occurrence of abiogenesis and apsychogenesis. (See Thompson’s article “Life and mind: From autopoiesis to neurophenomenology. A tribute to Francisco Varela” and his book Mind in Life: Biology, Phenomenology, and the Sciences of Mind.)

2. Bacterial Cognition
One front on which the “life-first, mind-later” folks and the “life and mind: same time” folks will need to duke it out concerns various competing and compelling claims about whether genuine cognition is instantiated in bacterial control systems.

Lots of defenders of smart bacteria gave talks in Australia this past July. (See here for various abstracts in the ASCS proceedings. See here for Kate Devitt’s detailed notes of Pamela Lyon’s talk.)

3. The Greatest Paper Ever Written

I have absolutely no idea what the greatest paper ever written is. I do know, however, that my “Varieties of Representation in Evolved and Embodied Neural Networks” gets more hits, month after month, than any of my other online papers. I know, additionally, that I much prefer that paper’s sequel, “Evolving Artificial Minds and Brains” (EAMB), wherein “apsychogenesis” was coined. Both papers defend the instantiation of genuine mentality in relatively simple control systems (such as those hypothesized to explain bacterial chemotaxis). (EAMB Links: pdf for the uncorrected proofs; html for the penultimate draft.)


Subjective Brain Ch. 7

Thursday, August 2nd, 2007

Chapter 7 of The Subjective Brain, “Animat Semantics” is up now. [link]

Excerpt:

An animat is an artificial animal, either computer simulated or robotic. Animat methodology involves three characteristic explanatory strategies: synthesis, holism, and incrementalism. The synthetic element involves explaining target phenomena by attempting to synthesize artificial versions of them, a characteristic inherited in large part from earlier versions of Artificial Intelligence (Good Old Fashioned Artificial Intelligence (GOFAI) as well as connectionist approaches). The holism referred to here is not necessarily restricted to the semantic holism familiar in other areas of philosophy of mind or cognitive science but is instead concerned with function more generally. The holistic take on function is that the function of an organ or a behavior is best understood in the context of the whole organism, or, more broadly still, in the context of the organism’s physical and/or social environment. It is thus both embodied and embedded (Clark 1997). However, this holistic impulse might seem to conflict with attempts to synthesize phenomena. Synthesis must simplify to be tractable, yet whole organisms are more complex than their subsystems, and social systems and ecosystems are even more complex. An older strategy of simplification involves focusing on subsystems of human cognitive processes, for example, as was done in GOFAI and connectionist models of word recognition. The comparatively newer strategy of simplification embraced by the Animat approach involves focusing on the entirety of organisms much simpler than the human case, thus heeding Dennett’s rallying cry/question, “Why not the whole iguana?” (1998: 309). In animat research, projects of synthesis involve modeling the simplest intelligent behaviors such as obstacle avoidance and food finding by chemotaxis. The incrementalism of the animat approach involves building up from these simplest cases to the more complex via a gradual addition of complicating factors, as in, for instance, roboticist Rodney Brooks’ (1999) ongoing project of building an incrementalist bridge from robotic insects like Attila through to the humanoid robot, Cog.
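To give a flavor of the “simplest intelligent behaviors” mentioned in the excerpt, here is a toy Braitenberg-style obstacle-avoidance controller. It is only an illustrative sketch with invented numbers, not code from any of the cited animat projects.

```python
# Toy Braitenberg-style controller (illustrative only). Each proximity sensor
# excites the wheel on its own side, so the agent speeds up the wheel nearest
# the obstacle and thereby veers away from it.

def avoid_obstacles(left_sensor, right_sensor, base_speed=1.0, gain=2.0):
    """Sensor readings are in [0, 1]; 1 means an obstacle is very close.
    Returns (left_wheel_speed, right_wheel_speed)."""
    return (base_speed + gain * left_sensor,
            base_speed + gain * right_sensor)

if __name__ == "__main__":
    print(avoid_obstacles(0.0, 0.8))   # obstacle on the right -> right wheel faster -> veer left
    print(avoid_obstacles(0.8, 0.0))   # obstacle on the left  -> left wheel faster  -> veer right
```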

The Economy Problem

Tuesday, February 20th, 2007


The Perfect Cow

Originally uploaded by Pete Mandik.

While there are many theories of what representational content is and how representations come to have it, it is not entirely clear that these theories are compatible with basic assumptions about the diverse roles that representations play in the internal causal economies of organisms. Let us call the problem of showing the compatibility of a theory of content and these pre-theoretic assumptions about the roles of representations within a causal economy “the economy problem.”

The economy problem is due in large part to the emphasis that perception has received in theories of content. Consider the kind of stock example typical of this literature. Smith has a mental representation heretofore referred to as “/cow/.” As the story goes, /cow/ means cow, that is, Smith has /cow/ in his head and /cow/ represents a cow, or cow-ness, or cows in general. On the standard story, Smith will come to have a tokening of the representation type /cow/ when Smith is in perceptual causal contact with a cow and comes to believe that there is a cow, presumably by having, in his head, /there/ + /is/ + /a/ + /cow/ or some other concatenation of /cow/ with various other mental representations. The main question addressed in this literature is how /cow/, a physical sort of thing in Smith’s head, comes to represent a cow, a physical sort of thing outside of Smith’s head.

This focus on the perceptual case has made causal informational proposals seem rather attractive to quite a few people, so let us focus on the following sort of suggestion, namely, that /cow/ represents cow because in typical scenarios, or ideal scenarios, or in the relevant evolutionary scenarios, /cow/s are caused by cows, that is, /cow/s carry information about cows. Thus, tokenings of /cow/s in the heads of Smith and his relatives are part of the operation of a cow-detector. A widespread presumption of this kind of view, and a not necessarily bad one, is that the /cow/s you find in the perceptual case are the same things that will be deployed in the memory, planning, and counterfactual reasoning cases too. The presumption, inherited from a long empiricist tradition, is that whatever happens in perception to wed representations to their contents can simply be passed along and retained for use in non-perceptual mental tasks. In its most literal form, this is the view that whatever happens to items in the perception “box” is sufficient to mark those items (picture them as punch cards, if you like) as bearing representational contents. Those items can thus be passed to other boxes in the cognitive economy, and retain their marks of representational content even after they may go on to play quite different causal roles.

This is an interesting suggestion, but certainly open for questioning. That is, what might seem like a good idea about the nature of representations in connection with perception may not generalize to all the other sorts of things mental representations are supposed to do. Presumably, /cow/s, that is, mental representations of cows, have a lot more work to do than take part in perceptions. Consider that /cow/s are used to remember cows, to make plans concerning future encounters with cows, and to reason about counterfactual conditions concerning cows (e.g., what if a cow burst into this room right now?). Perhaps, then, the sorts of conditions that bestow representational contents onto perceptual states are very different than the conditions on representation in memory, which are yet different from the conditions for representation in planning, counterfactual reasoning, and so on.

A second concern, not unrelated to the first, is how you tell what and where the /cow/s are in the first place. Focusing on the case of perceptual belief brings with it certain natural suggestions: point Smith at some cows and look for the brain bits that seem to “light up” the most. Much talk of representation in neuroscience is accompanied by precisely this sort of methodology. But are the bits that light up during the retrieval of memories of cows or counterfactual reasoning about cows the same bits that light up in perceptions of cows? And more to the point, how will various theories of representational content cope with the different possible answers to this question?

The economy problem might best be seen as decomposing into a pair of problems, the first concerning a question of representational content and the second concerning a question of representational vehicles. The economy problem for content is the question of whether the conditions that establish representational content for perceptual representations are the (qualitatively or numerically) same conditions that establish the representational contents of memories and intentions or whether distinct conditions are necessary. The economy problem for vehicles is the question of whether the vehicles of perceptual representations will be the (qualitatively or numerically) same vehicles as in memories and intentions or whether distinct vehicles are necessary.

(Excerpt from Mandik, P. 2003. “Varieties of Representation in Evolved and Embodied Neural Networks.” Biology and Philosophy 18(1): 95-130.)


Analytic Functionalism and Evolutionary Connectionism

Friday, September 29th, 2006

(I leave in a few hours to go see Jerry Fodor give a talk at the CUNY Graduate Center. I thought it appropriate, then, to post something on how awesome evolution and neural networks are.)

What makes analytical functionalists functionalists is their belief that what makes something a mental state is the role that it plays in a complex economy of causal interactions. What makes analytical functionalists analytical is their belief that which roles are essential is to be discovered by a consultation of common sense knowledge about mental states.

There are three serious related problems that arise for analytical functionalism. The first problem is that analytical functionalism appears to be committed to the existence of analytic truths, and various philosophers inspired by Quine have been skeptical of analytic truths. As Prinz (20**) succinctly sums up this Quinean skepticism, the objection is that “[r]oughly, definitions of analyticity are either circular because they invoke semantic concepts that presuppose analyticity, or they are epistemically implausible, because they presuppose a notion of unrevisability that cannot be reconciled with the holistic nature of confirmation” (p. 92). There are two main ways in which analytic functionalism seems saddled with belief in analytic truths. The first concerns the nature of psychological state types such as beliefs and desires. Analytical functionalism is committed to there being analytic truths concerning the necessary and sufficient conditions for being a belief. The second concerns the meaning of mental representations. The most natural theory of meaning for the analytic functionalist to adopt is that what makes one’s belief about, say, cows, have the meaning that it does is the causal relations it bears to all other belief states. However, it is likely that no two people have all the same beliefs about cows. Thus, on pain of asserting that no one means the same thing when they think about cows, the functionalist cannot allow that every belief one has about cows affects the meaning of one’s cow thoughts. In order to allow that people with divergent beliefs about cows can both share the concept of cows, that is, both think about the same things when they think about cows, the analytic functionalist seems forced to draw a distinction between analytic and synthetic beliefs, e.g., a distinction between beliefs about cows that are constitutive of cow concepts and beliefs that are not. But if Quinean skepticism about the analytic/synthetic distinction is correct, no such distinction is forthcoming.
The second problem arises from worries about how minds are implemented in brains. Many so-called connectionists may be seen to agree with analytical functionalists that mental states are defined in terms of networks. However, many connectionists may object that when one looks to neural network implementations of cognitive functions, it is not clear that the sets of nodes and relations postulated by common sense psychology will map onto the nodes and relations postulated by a connectionist architecture (see, e.g., Ramsey et al., 1991). The question arises of whether folk-psychological states will smoothly reduce to brain states or be eliminated in favor of them. (I will not discuss further the third option that folk-psychological states concern a domain autonomous from brain states.)
A third problem arises from worries about the evolution of cognition. If a mind just is whatever the collection of folk psychological platitudes is true of, then there seem not to be any simple minds, for a so-called simple mind would be something that the folk psychological platitudes were only partially true of, in the sense that only some proper subset of the platitudes were true of it. However, a very plausible proposal is that our minds evolved from simpler minds. It counts against a theory that it rules out a priori the existence of simpler minds than ours, for it leaves utterly mysterious what the evolutionary forebears of our minds were. This third problem is especially pertinent to artificial life researchers.
One promising solution to these three problems involves appreciating a certain view concerning how information-bearing or representational states are implemented in neural networks and how similarities between states in distinct networks may be measured. Models of neural networks frequently involve three kinds of interconnected neurons: input neurons, output neurons, and neurons intermediate between inputs and outputs, sometimes referred to as “interneurons” or “hidden neurons”. These three kinds of neurons comprise three “layers” of a network: the input layer, the hidden layer, and the output layer. Each neuron can be, at any given time, in one of several states of activation. The state of activation of a given neuron is determined in part by the states of activation of the neurons connected to it. Connections between neurons may have varying weights, which determine how much the activation of one neuron can influence the other. Each neuron has a transition function that determines how its state is to depend on the states of its neighbors; for example, the transition function may be a linear function of the weighted sums of the activations of neighboring neurons. Learning in neural networks is typically modeled by procedures for changing the connection weights. States of a network may be modeled by state spaces wherein, for example, each dimension of the space corresponds to the possible values of a single hidden neuron. Each point in that space is specified by an ordered n-tuple or vector. A network’s activation vector in response to an input may be regarded as its representation of that input. A state of hidden-unit activation for a three-unit hidden layer is a point in a three-dimensional vector space. A cluster of points may be regarded as a concept (Churchland 19**).
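To make the state-space vocabulary concrete, here is a deliberately tiny three-layer network with invented weights; the point is just that the hidden-layer activation vector produced by a stimulus is the sort of thing treated as the network’s representation of that stimulus.

```python
# A deliberately tiny three-layer network with invented weights, to make the
# state-space vocabulary concrete: the hidden-layer activation vector produced
# by an input is treated as the network's "representation" of that input.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    """Each unit's activation is a squashed, weighted sum of its inputs."""
    return [sigmoid(sum(w * a for w, a in zip(row, inputs))) for row in weights]

# 2 input units -> 3 hidden units -> 1 output unit (weights are arbitrary)
W_HIDDEN = [[0.8, -0.4], [-0.3, 0.9], [0.5, 0.5]]
W_OUTPUT = [[1.2, -0.7, 0.6]]

def hidden_representation(stimulus):
    return layer(stimulus, W_HIDDEN)        # a point in 3-D activation space

def respond(stimulus):
    return layer(hidden_representation(stimulus), W_OUTPUT)[0]

if __name__ == "__main__":
    for stimulus in ([1.0, 0.0], [0.0, 1.0], [1.0, 1.0]):
        hidden = [round(h, 2) for h in hidden_representation(stimulus)]
        print(stimulus, "->", hidden, "-> output", round(respond(stimulus), 2))
```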
Laakso and Cottrell (1998, 2000) propose a method whereby representations in distinct networks may be quantified with respect to their similarity. Such a similarity measure may apply even in cases where the networks in question differ with respect to their numbers of hidden units and thus the number of dimensions of their respective vector spaces. In brief, the technique involves first assessing the distances between various vectors within a single network and second measuring correlations between the relative distances between points in one network and points in another. Points in distinct networks are highly similar if their respective relative distances are highly correlated.
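In rough outline (and simplifying Laakso and Cottrell’s actual procedure considerably), the measure can be sketched as follows: compute all pairwise distances among one network’s hidden-state vectors for a shared set of stimuli, do the same for the other network, and then correlate the two lists of distances. The stimuli and activation values below are made up for illustration.

```python
# Rough sketch of the similarity measure described above: pairwise distances
# among each network's representations of the same stimuli are computed, and
# the two distance profiles are correlated. This is a simplification of
# Laakso and Cottrell's procedure, not a reimplementation of it.

import math

def distance(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def pairwise_distances(reps):
    return [distance(reps[i], reps[j])
            for i in range(len(reps)) for j in range(i + 1, len(reps))]

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def representational_similarity(reps_a, reps_b):
    """reps_a and reps_b are the two networks' hidden vectors for the SAME
    stimuli, in the same order; the vectors may differ in dimensionality."""
    return correlation(pairwise_distances(reps_a), pairwise_distances(reps_b))

if __name__ == "__main__":
    # Hypothetical hidden-state vectors for three shared stimuli:
    net1 = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1]]                   # 2 hidden units
    net2 = [[0.0, 0.8, 0.1], [0.1, 0.7, 0.2], [0.9, 0.1, 0.8]]    # 3 hidden units
    print(round(representational_similarity(net1, net2), 3))
```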
Regarding the worries related to the analytic/synthetic distinction, the Laakso and Cottrell technique allows one to bypass attributions of literally identical representations to distinct individuals and make do instead with objective measures of degrees of similarity between the representations of different individuals. Thus if I believe that a cow once ate my brother’s hat and you have no such belief, we may nonetheless have measurably similar cow concepts. This is no less true of our psychological concepts such as our concepts of belief and concepts of desire. The so-called common-sense platitudes of folk psychology so important to analytic functionalism may very well diverge from folk to folk, and the best we can say is that each person’s divergent beliefs about beliefs may be similar. And similarity measures are not restricted to the concepts that constitute various folk theories; we may additionally make meaningful comparisons between various folk theories and various scientific theories. This last maneuver allows us to retain one of the key insights of analytic functionalism mentioned earlier: that we are in need of some kind of answer to the question “How do you know that your theory is a theory of belief?” The answer will be along the lines of “Because what I’m talking about is similar to beliefs.”
Regarding the question of simple minds, if there are no analytic truths, then there is no a priori basis for drawing a boundary between the systems that are genuine minds and those that are not. Similarity measurements between simple minds and human minds would form the basis for a (mind-body?) continuum along which to position various natural and artificial instances. How useful will the study of systems far away on the continuum be for understanding human minds? We cannot know a priori the answer to such a question.

Jombie On My Mind

Fig. 1. What are minds such that complex ones come from somewhere instead of nowhere? Photo by Pete Mandik.

References

Churchland, P. (19**)

Laakso, Aarre and Cottrell, Garrison W. (1998) How can I know what you think?: Assessing representational similarity in neural systems. In Proceedings of the Twentieth Annual Cognitive Science Conference. Mahwah, NJ: Lawrence Erlbaum.
Laakso, Aarre and Cottrell, Garrison W. (2000) Content and cluster analysis: Assessing representational similarity in neural systems. Philosophical Psychology, 13(1):47-76.
Prinz, J. (****) “Empiricism and State-Space Semantics.” In Keeley (ed.), Paul Churchland.

Ramsey, W., Stich, S. & Garon, J. (1991) Connectionism, eliminativism, and the future of folk psychology. In W. Ramsey, S. Stich & D. Rumelhart (Eds.), Philosophy and Connectionist Theory. Hillsdale, NJ: Lawrence Erlbaum.