The Philosophy Department at Georgia State University in Atlanta, Georgia is accepting applications from qualified undergraduates for its two $15,000 Neurophilosophy Fellowships, to be awarded by the Brains & Behavior program. The Brains & Behavior program aims to take the neurosciences at Georgia State to a position of international prominence by promoting interdisciplinary collaboration between faculty and students from partnering departments. B&B Fellows in the Philosophy Department complete a Master's degree, receive a stipend of $15,000 plus tuition, and are not required to serve as graduate assistants or instructors. More information on the requirements for the fellowship can be found here:
Archive for January, 2007
“The Ontology of Experience” by Nick Treanor, Brown University
ABSTRACT: Most philosophers share a conception of consciousness according to which conscious experiences are had throughout periods of time. If this conception is correct, then conscious experiences are not instantiated in the same way that physical processes are. This is an ontic difference, and it rules out the possibility of psychophysical identities between experiences and processes. The upshot is that physicalists face a choice: we can either (i) preserve the common conception of conscious experience and take the arguments to provide constructive constraints on the ontological kinds to which the physical entities that stand in psychophysical identity relations belong, or (ii) preserve the fundamentality of physical processes to conscious experience by reworking our understanding of qualitative character along representationalist lines. I defend the second option: The paper amounts to a new argument for representationalist theories of consciousness.
I hope that not too many cockroaches get ahold of these. [link to video]
Originally uploaded by Pete Mandik.
I have the following idea: Functional Neuroscience Bowling. Each team wears specially designed TMS helmets. Knocking down pins activates magnets in the other team’s helmets: the more pins you knock down, the more f’ed up the other team gets. Look out for the motor strip!
Sort of like beer-pong, but for the CNS set.
What say you? My lab against yours!
My lab will need to borrow helmets, but we’re in.
It seems to be an open question whether distinctively neural properties are essential to the instantiation of mental properties. One can buy into reductive physicalism and reject neural reduction bases in favor of chemical or thermodynamic reduction bases, just to name a few. Perhaps, then, systems that have no distinctively neural properties (no brains and no neurons) nonetheless have certain chemical or thermodynamic profiles that suffice for mentality. Perhaps. But I doubt it. I hope that I may be forgiven for being so brief about this, but I think there are three reasons (at least) for thinking that the physical reduction of the mental should be a neural reduction.
The first reason for believing in neural reduction is that no non-controversial examples of entities that implement consciousness or cognition exist without brains, or at least, neural networks. It is uncontroversial that alert human adults have mental states. It is also uncontroversial that they have brains. Things are much more contested for the brainless. While it is arguable that my laptop has mental properties, it is also arguable that it does not. It is also arguable, by the way, whether or not my laptop has neural properties. I do, after all, run neural network simulations on it (Mandik 2003). How many properties are literally shared by the simulation and the simulated? We need not settle this now. What is clear is that there’s controversy about the mental prowess of the brainless. And while some brain-havers may lack mentality (who knows what to say about the vegetative patients? (Begley 2006)), there are no uncontroversial confirmations of mind-havers lacking brains. Let’s talk about the mind-having brain-havers a bit more under the heading of “reason #2 for thinking that the physical reduction bases of mentality will be neural.”
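To make the simulation question concrete, here is a toy sketch of the sort of feedforward network a laptop might run (the architecture and weights are entirely hypothetical, chosen for illustration). The simulation literally shares some abstract properties with a real network, such as having units whose activation is a squashed weighted sum of inputs, while sharing none of its electrochemical properties:

```python
import math

def sigmoid(x):
    """Squashing function: maps any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, w_out):
    # Each hidden "neuron" is just a weighted sum of inputs
    # passed through the squashing function.
    hidden = [sigmoid(sum(w * i for w, i in zip(ws, inputs)))
              for ws in w_hidden]
    # The output "neuron" does the same over the hidden layer.
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

# Hypothetical weights: two inputs, two hidden units, one output.
activation = forward([1.0, 0.0],
                     w_hidden=[[0.5, -0.5], [-0.5, 0.5]],
                     w_out=[1.0, -1.0])
print(0.0 < activation < 1.0)  # the "firing rate" is just a number
```

Whether the simulated unit's numerical activation counts as literally sharing a property with a neuron's firing rate is, of course, exactly the contested question.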
The second reason for believing in neural reduction is that there is no reason to doubt that it is in virtue of their brains (or their brains plus something else) that creatures like us implement consciousness or cognition. Putting the parenthetical “plus something else” to the side for the moment, let us entertain briefly how unpromising non-neurocentric theories have been. Mental properties are had by organisms either in virtue of the whole organism or some proper part of it, and it is easy to see that it can’t be the whole organism. Amputees retain their mentality, and while my appendectomy doesn’t exactly count as an amputation, comparing its relative effect on one’s mentality to the potential effects of a lobotomy is like comparing nothing to something. That the seat of our soul is some proper part of us is old news, but the appendix never had a chance and the Aristotelian coronary hypothesis was rejected long ago. So much of what we know about where drugs need to go to go to work and what brain injuries impair what mental functions has tipped the scales pretty clearly in favor of neuro-centrism. But, must it be merely neuro-centric? Can’t it be exhaustively neural? Here we have to pause to consider various embodied, embedded, and externalist proposals for including the body and even chunks of the environment of the organism as part of the supervenience base of the organism’s mental properties. There are a couple of things to say about this. The first is that none of it removes the brain from the center of the story. The second is to echo Fodor’s (1989) suggestion that we individuate neural properties widely.
The third reason for believing in neural reduction is that no reductive research program has been as productive as neurocentric ones. One might even be so bold as to suggest that non-neurocentric reductionists have no research program at all. There have been, in recent decades, three major proposals that have been physicalistic without reducing mentality, à la behaviorism, to the behavior of whole organisms: classic computationalism, connectionism, and (certain versions of) dynamic systems theory. Classicism got wedded, in many people’s minds, to non-reductive physicalism, largely due to the influence of Fodor (1974) and Putnam (1967). Some dynamic systems proposals were of a specifically neural character (e.g. Freeman 1991), while others looked like warmed-over behaviorism (van Gelder 1995). Either way, dynamic systems theory was confronted with some devastating objections (see Glymour (1997), Grush (1997), and Eliasmith (2001) for a taste). The main point here, though, is not any knock-down refutations of non-neurocentric research programs. The point here is that neurocentric research programs have been massively productive both in theory and in application.
Update: Further discussion of this post on Paul Baxter’s blog [link].
Phisick.com is an online collection of antique medical instruments. In response to my query regarding which item is closest to a brain hammer, curator Laurie Slater writes:
www.phisick.com/a1mmazbr1.htm is a mid 19th century papier-mâché brain (a pretty one) and http://www.phisick.com/a1meh2.htm is a hammer, although one used more for percussion, like a primitive ultrasound - but not on the brain of course. I suppose if you were so inclined http://www.phisick.com/a8qfb1.htm might give an idea where to aim.
http://www.phisick.com/a2snstr1.htm is a brain drill which might have a similar effect to a brain hammer (if tidier). In fact it was a tool used to recover from ‘brain hammer’ type injuries. Drill three holes around a depressed skull fracture. Pass this http://www.phisick.com/a2snsg1.htm through them and saw out a triangle to relieve the pressure on the cerebral cortex and patch up with a metal plate. Good as new!
Then I suppose there is this http://www.phisick.com/a1spr01.htm (hardly a blunt instrument I admit) which would proverbially hammer the brain via sensory overload into thinking of other things. I’m not sure if this would have been considered a cure for mental masturbation… the affliction of many a philosopher. Present company excused of course!
I just got the thumbs-up from David Coady on my proposed contribution to a special issue of Episteme he’s editing on the epistemology of conspiracy theories. The title, “Shit Happens,” comes from, among other places, an epigraph in Brian Keeley’s J. Phil. article “Of Conspiracy Theories.” My article should be of interest to Brain Hammer readers interested in the function and evolution of folk-psychology.
Here’s the abstract:
In this paper I embrace what Brian Keeley calls in “Of Conspiracy Theories” the absurdist horn of the dilemma for philosophers who criticize such “theories”. I thus defend the view that there is indeed something deeply epistemically wrong with conspiracy theorizing: conspiracy theories over-extend intentional explanation and attribute reason where reason does not apply. Along the way I explore some of the cognitive bases for the kind of totalizing intentional explanation of which conspiracy theories are but one instance (much religious thinking constitutes further instances). I speculate as to the evolutionary basis of such explanations. The evolutionary origins of intentional explanation, and thus the niche for which they were adapted, concern tracking relatively small numbers of agents in relatively small social dominance hierarchies. But attempts to apply reason-based explanations to numbers of agents approaching global scales over large chunks of history are as inappropriate as applying them to inanimate objects. Nonetheless, the urge to do so–the urge to theorize conspiratorially–is itself in need of an explanation, and I explore what cognitive or psychological factors might underlie this urge.
A book I co-authored, Cognitive Science: An Introduction to the Mind and Brain, is out now. [Link to Routledge's page for the book.] If you have a mind and/or a brain, but have not yet been introduced to them, this may be the book for you.
Also out now is The Blackwell Companion to Consciousness. With a title like that, you can probably guess it was published by Blackwell. [Link to guess-who's page for the book.] If you are conscious and in need of a companion, this may be the book for you. This book contains many excellent chapters by many excellent people. It also contains a chapter by me: “The Neurophilosophy of Consciousness.” [Link to draft of my chapter.] Here’s the abstract:
The neurophilosophy of consciousness brings neuroscience to bear on philosophical issues concerning phenomenal consciousness, especially issues concerning what makes mental states conscious, what it is that we are conscious of, and the nature of the phenomenal character of conscious states. Here attention is given largely to phenomenal consciousness as it arises in vision. The relevant neuroscience concerns not only neurophysiological and neuroanatomical data, but also computational models of neural networks. The neurophilosophical theories that bring such data to bear on the core philosophical issues of phenomenal consciousness construe consciousness largely in terms of representations in neural networks associated with certain processes of attention and memory.
Regarding the books, both volumes bear handsome cover illustrations of bald heads. Until someone figures out a better way to draw a picture of the mind, we are going to be stuck with bald heads.
PMS WIPS 008 - Anthony Jack, Philip Robbins, and Andreas Roepstorff - The Genuine Problem of Consciousness
Monday, January 1st, 2007
“The Genuine Problem of Consciousness” by Anthony Jack (Washington University), Philip Robbins (Washington University), and Andreas Roepstorff (Aarhus University)
Those who are optimistic about the prospects of a science of consciousness, and those who believe that it lies beyond the reach of standard scientific methods, have something in common: both groups view consciousness as posing a special challenge for science. In this paper, we take a close look at the nature of this challenge. We show that popular conceptions of the problem of consciousness, epitomized by David Chalmers’ formulation of the ‘hard problem’, can be best explained as a cognitive illusion, which arises as a by-product of our cognitive architecture. We present evidence from numerous sources to support our claim that we have a specialized system for thinking about phenomenal states, and that an inhibitory relationship exists between this system and the system we use to think about physical mechanisms. Even though the ‘hard problem’ is an illusion, unfortunately it appears that our cognitive architecture forces a closely related problem upon us. The ‘genuine problem’ of consciousness shares many features with the hard problem, and it also represents a special challenge for psychology. Nonetheless, researchers should be careful not to mistake the hard problem for the genuine problem, since the strategies appropriate for dealing with these problems differ in important respects.