Neurosemantics Bibliography

Neurophilosophy concerning representational content, compiled by Pete Mandik. Last updated March 31, 2008.


Akins, K. (1996). "Of Sensory Systems and the 'Aboutness' of Mental States." The Journal of Philosophy 93(7): 337-372.

Churchland, P. (2001). Neurosemantics: On the Mapping of Minds and the Portrayal of Worlds. The Emergence of Mind. K. E. White. Milan, Fondazione Carlo Erba: 117-47.

Churchland, P. S. and P. M. Churchland (2002). "Neural worlds and real worlds." Nature Reviews Neuroscience 3(11): 903-907. [link]

States of the brain represent states of the world. A puzzle arises when one learns that at least some of the mind/brain’s internal representations, such as a sensation of heat or a sensation of red, do not genuinely resemble the external realities they allegedly represent: the mean kinetic energy of the molecules of the substance felt (temperature) and the mean electromagnetic reflectance profile of the seen object (color). The historical response has been to declare a distinction between objectively real properties, such as shape, motion, and mass, and merely subjective properties, such as heat, color, and smell. This hypothesis leads to trouble. A challenge for cognitive neurobiology is to characterize, in suitably general terms, the nature of the relationship between brain models and the world modeled. We favor the hypothesis that brains develop high-dimensional maps whose internal relations correspond in varying degrees of fidelity to the enduring causal structure of the world. From this perspective, the basic epistemological relation is not “single-percept to single-external-feature” but rather “background-brain-maps to causal-domain-portrayed.”

Eliasmith, C. (2000). How neurons mean: A neurocomputational theory of representational content. Philosophy-Neuroscience-Psychology Program. St. Louis, Washington University. Ph.D.[link]

Questions concerning the nature of representation and what representations are about have been a staple of Western philosophy since Aristotle. Recently, these same questions have begun to concern neuroscientists, who have developed new techniques and theories for understanding how the locus of neurobiological representation, the brain, operates. My dissertation draws on philosophy and neuroscience to develop a novel theory of representational content. I begin by identifying what I call the problem of "neurosemantics" (i.e., how neurobiological representations have meaning). This, I argue, is simply an updated version of a problem historically addressed by philosophers. I outline three kinds of contemporary theory of representational content (i.e., causal, conceptual role, and two-factor theories) and discuss difficulties with each. I suggest that discovering a single factor that provides a unified explanation of the traditionally independent aspects of meaning will provide a means of avoiding the difficulties faced by current theories. My central purpose is to articulate and defend such a factor. Before describing the factor itself, I summarize the necessary background for evaluating a solution to the problem of neurosemantics. This analysis yields thirteen questions about representation. I provide a methodological critique of the traditional approach to answering these questions and argue for an alternative approach. I discuss evidence that suggests that this alternative provides a better means of characterizing representation. Having established the nature of the problem and a preferred methodology, I briefly describe my theory of content. I then outline a neurobiologically motivated theory of neural computation that I and others have helped Charles H. Anderson develop. I use the computational theory to show how to mathematically define the relations relevant to understanding representational content at various levels of analysis. I then show how this theory can be made philosophically respectable and integrated with the theory outlined earlier. I then answer each of the thirteen questions about representation. In conclusion, I defend this theory from potential philosophical criticisms. This defense includes an explication of how concepts are to be accounted for on this theory, and a consideration of the problem of misrepresentation. I also show how this theory is immune to the standard critiques facing each of causal, conceptual role, and two-factor theories of content.
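Eliasmith and Anderson's framework treats neural representation as nonlinear encoding of a variable into firing rates, paired with linear decoding back out of those rates. As a minimal sketch of that generic encode/decode pattern (the tuning curves, decoder values, and function names below are invented for illustration, not taken from their actual model):

```python
# A toy version of "representation = nonlinear encoding + linear decoding".
# Two rectified-linear "neurons" with opposite preferred directions encode a
# scalar x; a fixed linear decoder recovers it from the firing rates.

def encode(x):
    """Nonlinear encoding of a scalar into two firing rates."""
    a_on = max(0.0, x)     # responds to positive values of x
    a_off = max(0.0, -x)   # responds to negative values of x
    return (a_on, a_off)

def decode(rates, decoders=(1.0, -1.0)):
    """Linear decoding: a weighted sum of firing rates."""
    return sum(r * d for r, d in zip(rates, decoders))

for x in (-0.7, 0.0, 0.5):
    assert abs(decode(encode(x)) - x) < 1e-9
```

On a view like this, what a population represents can be read off the decoders that downstream transformations actually use, rather than being stipulated from outside the system.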

Eliasmith, C. (2005). "A new perspective on representational problems." Journal of Cognitive Science 6: 97-123.[link]

I argue that current flaws in the methodology of contemporary cognitive science, especially neuroscience, have adversely affected philosophical theorizing about the nature of representation. To highlight these flaws, I introduce a distinction between adopting the animal’s perspective and the observer’s perspective when characterizing representation. I provide a discussion of each and show how the former has been unduly overlooked by cognitive scientists, including neuroscientists and philosophers. I also provide a specific neuroscientific example that demonstrates how adopting the animal’s perspective can simplify the characterization of the representation relation. Finally, I suggest that taking this perspective supports a specific thesis regarding content determination: the statistical dependence hypothesis.

Eliasmith, C. (2006). Neurosemantics and categories. Handbook of Categorization in Cognitive Science. C. Lefebvre and H. Cohen. Amsterdam, Elsevier. [link]

A theory of category meaning that starts with the shared resources of all animals (i.e., neurons) can, if suitably constructed, provide solutions to traditional problems in semantics. I argue that traditional approaches that rely too heavily on linguistics or folk psychological categories are likely to lead us astray. In light of these methodological considerations, I turn to the more theoretical question of how to construct a semantic theory for categories informed by neuroscience. The second part of the chapter is concerned with describing such a theory and discussing some of its consequences. I present a theory of neural representations that describes them as a kind of code, and show that such an understanding scales naturally to include complex representations such as concepts. I use this understanding of representational states to underwrite a theory of semantics. However, the theory must be supplemented by what I call the statistical dependence hypothesis. Content is then determined by a combination of the states picked out by this hypothesis and the neural decoders that define subsequent transformations of the neural representations. I briefly describe a solution to the traditional problem of misrepresentation that is consistent with this theory.

Grush, R. (1997). "The Architecture of Representation." Philosophical Psychology 10(1): 5-23.[link]

In this article I outline, apply, and defend a theory of natural representation. The consequences of this theory are: i) representational status is a matter of how physical entities are used, and specifically is not a matter of causation, nomic relations with the intentional object, or information; ii) there are genuine (brain-)internal representations; iii) such representations are really representations, and not just farcical pseudo-representations, such as attractors, principal components, state-space partitions, or what-have-you; and iv) the theory allows us to sharply distinguish those complex behaviors which are genuinely cognitive from those which are merely complex and adaptive.

Grush, R. (2001). The semantic challenge to computational neuroscience. Theory and Method in the Neurosciences. P. Machamer, P. McLaughlin and R. Grush. Pittsburgh, University of Pittsburgh Press.[link]

I examine one of the conceptual cornerstones of the field known as computational neuroscience, especially as articulated in Churchland et al. (1990), an article that is arguably the locus classicus of this term and its meaning. The authors of that article try, but I claim ultimately fail, to mark off the enterprise of computational neuroscience as an interdisciplinary approach to understanding the cognitive, information-processing functions of the brain. The failure is a result of the fact that the authors provide no principled means to distinguish the study of neural systems as genuinely computational/information-processing from the study of any complex causal process. I then argue for two things. First, that in order to appropriately mark off computational neuroscience, one must be able to assign a semantics to the states over which an attempt to provide a computational explanation is made. Second, I show that neither of the two most popular ways of trying to effect such content assignation — informational semantics and ‘biosemantics’ — can make the required distinction, at least not in a way that a computational neuroscientist should be happy about. The moral of the story as I take it is not a negative one to the effect that computational neuroscience is in principle incapable of doing what it wants to do. Rather, it is to point out some work that remains to be done.

Grush, R. (2004). "The emulation theory of representation: motor control, imagery, and perception." Behavioral and Brain Sciences 27: 377-442.[link]

The emulation theory of representation is developed and explored as a framework that can revealingly synthesize a wide variety of representational functions of the brain. The framework is based on constructs from control theory (forward models) and signal processing (Kalman filters). The idea is that in addition to simply engaging with the body and environment, the brain constructs neural circuits that act as models of the body and environment. During overt sensorimotor engagement, these models are driven by efference copies in parallel with the body and environment, in order to provide expectations of the sensory feedback, and to enhance and process sensory information. These models can also be run off-line in order to produce imagery, estimate outcomes of different actions, and evaluate and develop motor plans. The framework is initially developed within the context of motor control, where it has been shown that inner models running in parallel with the body can reduce the effects of feedback delay problems. The same mechanisms can account for motor imagery as the off-line driving of the emulator via efference copies. The framework is extended to account for visual imagery as the off-line driving of an emulator of the motor-visual loop. I also show how such systems can provide for amodal spatial imagery. Perception, including visual perception, results from such models being used to form expectations of, and to interpret, sensory input. I close by briefly outlining other cognitive functions that might also be synthesized within this framework, including reasoning, theory of mind phenomena, and language.
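Grush builds his emulators from forward models and Kalman filters. A one-dimensional Kalman step, with dynamics and noise parameters invented for this example, illustrates how an internal model driven by an efference copy can predict sensory feedback and then correct itself against the actual measurement:

```python
def kalman_step(x_est, p, u, z, a=1.0, b=1.0, q=0.01, r=0.1):
    """One predict/update cycle of a scalar Kalman filter.

    x_est, p : current state estimate and its variance
    u        : efference copy of the motor command (control input)
    z        : actual sensory measurement
    a, b     : state and control coefficients; q, r : process/sensor noise
    """
    # Predict: drive the internal forward model with the efference copy.
    x_pred = a * x_est + b * u
    p_pred = a * a * p + q
    # Update: correct the prediction against the sensory measurement.
    k = p_pred / (p_pred + r)           # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new
```

Running only the predict step, with the update switched off, corresponds to the off-line mode that the emulation framework uses to account for imagery.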

Mandik, P. (2003). "Varieties of Representation in Evolved and Embodied Neural Networks." Biology and Philosophy 18(1): 95-130.[link]

In this paper I discuss one of the key issues in the philosophy of neuroscience: neurosemantics. The project of neurosemantics involves explaining what it means for states of neurons and neural systems to have representational contents. Neurosemantics thus involves issues of common concern between the philosophy of neuroscience and philosophy of mind. I discuss a problem that arises for accounts of representational content that I call “the economy problem”: the problem of showing that a candidate theory of mental representation can bear the work required within the causal economy of a mind and an organism. My approach in the current paper is to explore this and other key themes in neurosemantics through the use of computer models of neural networks embodied and evolved in virtual organisms. The models allow for the laying bare of the causal economies of entire yet simple artificial organisms so that the relations between the neural bases of, for instance, representation in perception and memory can be regarded in the context of an entire organism. On the basis of these simulations, I argue for an account of neurosemantics adequate for the solution of the economy problem.

Mandik, P., M. Collins, et al. (2007). Evolving artificial minds and brains. Mental States Volume 1: Evolution, function, nature. A. C. Schalley and D. Khlentzos. Amsterdam, John Benjamins.[link]

We explicate representational content by addressing how representations that explain intelligent behavior might be acquired through processes of Darwinian evolution. We present the results of computer simulations of evolved neural network controllers and discuss the similarity of the simulations to real-world examples of neural network control of animal behavior. We argue that focusing on the simplest cases of evolved intelligent behavior, in both simulated and real organisms, reveals that evolved representations must carry information about the creature’s environments and further can do so only if their neural states are appropriately isomorphic to environmental states. Further, these informational and isomorphism relations are what are tracked by content attributions in folk-psychological and cognitive scientific explanations of these intelligent behaviors.

O’Brien, G. and J. Opie (2006). "How do Connectionist Networks Compute?" Cognitive Processing 7(1): 30-41.[link]

Although connectionism is advocated by its proponents as an alternative to the classical computational theory of mind, doubts persist about its computational credentials. Our aim is to dispel these doubts by explaining how connectionist networks compute. We first develop a generic account of computation—no easy task, because computation, like almost every other foundational concept in cognitive science, has resisted canonical definition. We opt for a characterisation that does justice to the explanatory role of computation in cognitive science. Next we examine what might be regarded as the “conventional” account of connectionist computation. We show why this account is inadequate and hence fosters the suspicion that connectionist networks aren’t genuinely computational. Lastly, we turn to the principal task of the paper: the development of a more robust portrait of connectionist computation. The basis of this portrait is an explanation of the representational capacities of connection weights, supported by an analysis of the weight configurations of a series of simulated neural networks.
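Both the "conventional" account and O'Brien and Opie's alternative turn on the fact that a network's input-output behavior is fixed entirely by its weight configuration. A toy feedforward network with hand-set weights (chosen here purely for illustration) makes that point concrete: the weights, not any stored symbols, carry the computation.

```python
def step(z):
    """Threshold activation: the unit fires (1) iff its net input is positive."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    """Two-layer threshold network computing XOR with fixed weights."""
    h1 = step(x1 + x2 - 0.5)    # fires if at least one input is on
    h2 = step(x1 + x2 - 1.5)    # fires only if both inputs are on
    return step(h1 - h2 - 0.5)  # "at least one, but not both"

assert (xor_net(0, 0), xor_net(0, 1), xor_net(1, 0), xor_net(1, 1)) == (0, 1, 1, 0)
```

Nothing in this network resembles a stored rule for XOR; the function computed is implicit in the particular configuration of connection weights, which is the kind of representational role the paper sets out to explain.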

Opie, J. and G. O’Brien (2004). Notes toward a structuralist theory of mental representation. Representation in Mind: New Approaches to Mental Representation. H. Clapin, P. Staines and P. Slezak. Amsterdam, Elsevier.[link]

Ryder, D. (2004). "SINBAD Neurosemantics: A Theory of Mental Representation." Mind & Language 19(2).

I present an account of mental representation based upon the ‘SINBAD’ theory of the cerebral cortex. If the SINBAD theory is correct, then networks of pyramidal cells in the cerebral cortex are appropriately described as representing, or more specifically, as modelling the world. I propose that SINBAD representation reveals the nature of the kind of mental representation found in human and animal minds, since the cortex is heavily implicated in these kinds of minds. Finally, I show how SINBAD neurosemantics can provide accounts of misrepresentation, equivocal representation, twin cases, and Frege cases.

Ryder, D. (2006). Neurosemantics: A Theory. [link]

Usher, M. (2001). "A Statistical Referential Theory of Content: Using Information Theory to Account for Misrepresentation." Mind & Language 16(3): 311-334.[link]

A naturalistic scheme of primitive conceptual representations is proposed using the statistical measure of mutual information. It is argued that a concept represents, not the class of objects that caused its tokening, but the class of objects that is most likely to have caused it (had it been tokened), as specified by the statistical measure of mutual information. This solves the problem of misrepresentation which plagues causal accounts, by taking the representation relation to be determined via ordinal relationships between conditional probabilities. The scheme can deal with statistical biases and does not rely on arbitrary criteria. Implications for the theory of meaning and semantic content are addressed.
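Usher's proposal replaces "whatever caused the token" with "the class most likely to have caused the token," read off ordinal relations between conditional probabilities. A toy sketch with invented counts (and with a plain conditional-probability comparison standing in for the full mutual-information measure):

```python
# counts[cause][token]: how often each worldly cause produced each token type
counts = {
    "horse": {"HORSE": 90, "COW": 5},
    "cow":   {"HORSE": 10, "COW": 80},  # cows occasionally cause HORSE tokens
}

def content(token):
    """The content of a token type is the cause with the highest
    P(cause | token): the class most likely to have caused it."""
    total = sum(by_token[token] for by_token in counts.values())
    return max(counts, key=lambda cause: counts[cause][token] / total)

# Cow-caused HORSE tokens count as misrepresentations precisely because
# cows are not the most likely cause of HORSE tokenings.
assert content("HORSE") == "horse"
assert content("COW") == "cow"
```

Because only the ordering of the conditional probabilities matters, occasional deviant causes do not shift a token's content, which is how the scheme accommodates misrepresentation without arbitrary thresholds.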

Usher, M. (2004). "Comment on Ryder’s SINBAD Neurosemantics: Is Teleofunction Isomorphism the Way to Understand Representations?" Mind & Language 19(2).[link]

The merit of the SINBAD model is to provide an explicit mechanism showing how the cortex may come to develop detectors that respond to correlated properties and therefore correspond to the sources of these correlations. Here I argue that, contrary to Ryder's article, SINBAD neurosemantics does not need to rely on teleofunctions to solve the problem of misrepresentation. A number of difficulties for teleofunction theories of content are reviewed, and an alternative theory based on categorization performance and statistical relations is argued to provide a better account, one that comes closer to the practice of neuroscience and to powerful intuitions about swampkinds and about broad/narrow content.