Archive for the ‘Shit Happens’ Category

Free Episteme

Friday, February 15th, 2008

From an email by Leslie Marsh:

To promote Edinburgh University Press’s new content management system,
the Press is offering free access to **EPISTEME: Journal of Social
Epistemology** until the end of March.

See here for table of contents:

EPISTEME homepage:

Unfortunately the latest issue, which is on conspiracy theories, is not available. I blame the Illuminati.

Conspiracy Theories: Special Issue: EPISTEME

Tuesday, January 8th, 2008

Volume 4, issue 2, a special issue of EPISTEME: Journal of Social Epistemology on Conspiracy Theories is now available.
Guest Editor: David Coady

Contents and Abstracts available at:

Journal website:

Included is the following:

Mandik, Pete. Shit Happens

Abstract: In this paper I embrace what Brian Keeley calls in “Of Conspiracy Theories” the absurdist horn of the dilemma for philosophers who criticize such theories. I thus defend the view that there is indeed something deeply epistemically wrong with conspiracy theorizing. My complaint is that conspiracy theories apply intentional explanations to situations that give rise to special problems concerning the elimination of competing intentional explanations.

Crap Occurs

Monday, August 6th, 2007

goddess of discord

Originally uploaded by Barkeater

The latest, and perhaps last, draft of “Shit Happens” is online now (link) with loads of new stuff. Interested Brain Hammer Heads should check for their names in the acknowledgments section.

Conspiracy Theories and Keeping Secrets

Wednesday, July 11th, 2007

The Eye in the Pyramid

Originally uploaded by jovike

According to the working definition of conspiracy theories in “Shit Happens”, it is a necessary condition that the hypothesized conspirators “keep their intentions and actions secret.” Since a central point of “Shit Happens” is that conspiracy theories are universally unwarranted, prima facie warranted conspiracy theories (mainstream explanations of events involving Watergate, al Qaeda, and Nazi Germany) need to be addressed.

We can focus the concern that needs to be addressed in terms of a pair of questions. Aren’t we warranted in the common belief that, e.g., al Qaeda blew up the WTC? And isn’t the common belief (e.g., that al Qaeda blew up the WTC) a conspiracy theory?

The strategy I currently find most appealing is to answer the first question positively and the second negatively. The next question that immediately arises is: why aren’t these prima facie warranted conspiracy theories really conspiracy theories? My answer is that they fail the necessary condition of keeping their intentions and actions secret.

There are several ways in which one can fail to keep secrets. One way is by getting caught and being compelled to testify in a criminal investigation. In this case one may have tried and then failed to keep the secret. A related way is when direct evidence (a video tape of someone building and planting a bomb) renders the secret no longer kept. Another way of failing to keep a secret is illustrated by terrorists who broadcast their involvement in a plot in order to take credit for its success. In this case they fail to keep the secret by no longer even trying to keep it.

A true conspiracy theory attempts to leap over a wall of posited secrecy via attempts at inference to the best explanation. The main problems arise in establishing that the proffered explanation is indeed the best instead of being swamped by multiple equally plausible explanations. In cases that we are warranted in believing, e.g., that al Qaeda planned the 9/11 bombings, we aren’t stuck making such a leap.

What’s the haps?

Friday, June 15th, 2007


Originally uploaded by Pete Mandik

I’ve got a new paper draft up on my website: “Shit Happens“. Comments welcome.

In this paper I embrace what Brian Keeley calls in “Of Conspiracy Theories” the absurdist horn of the dilemma for philosophers who criticize such theories. I thus defend the view that there is indeed something deeply epistemically wrong with conspiracy theorizing. My complaint is that conspiracy theories apply intentional explanations to situations that give rise to special problems concerning the elimination of competing intentional explanations.

Conspiracy Theory and Intentional Explanation

Saturday, June 9th, 2007

Conspiracy theories postulate (1) causal explanations of (2) historical events in terms of (3) intentional states of multiple agents (the conspirators) who, among other things, (4) intended the historical events in question to occur and (5) keep their intentions and actions secret. Each of the five elements of the definition of conspiracy theories gives rise to distinct problems for the believability of any given conspiracy theory. I’m especially interested here in problems arising in connection with the last three elements of the definition.

To set the stage for the problems that the third, fourth, and fifth elements raise for conspiracy theories as explanations, I’d like to briefly review points that can be raised against folk psychology’s usefulness for predictions.

I assume here a symmetrical relationship between prediction and explanation whereby what’s cited in the explanation of an event that has already occurred can just as well have served to predict the event prior to its occurrence and vice versa. Thus, whatever skepticism may be raised about the predictive power of folk psychology has a basis that can also be a basis for skepticism about the explanatory power of folk psychology.

Morton (1996) raises various problems for the view that the function of folk psychology is to serve as a predictive device. Part of his case concerns two features of intentional states that make them especially ill-suited as bases for the prediction of human behavior. Morton discusses these features under the labels of “holism” and “entanglement”.

Morton’s worry about holism is that if one were to predict an action of an agent in terms of beliefs and desires, one cannot do it in terms of a single belief-desire pair but must instead advert to whole systems of belief and desire. Thus, to adapt an example of Morton’s, a prediction that a person will leave the building through the front door cannot be based simply on an attribution to her of a desire to leave and a belief that the front door is the only exit, since one must also rule out the possibility that, for example, she believes the front door to be connected to a trigger for a bomb.

We see that things are even more complicated when we consider what Morton calls “entanglement,” namely, the fact that so many of our intentional states are about the intentional states of others.

Given the relationship between prediction and explanation, holism and entanglement raise problems for intentional explanation as well as for intentional prediction. If someone does leave the building, explaining her leaving in terms of her having a desire to leave will require attributing a whole host of other desires as well as beliefs. And if she leaves the building with friends, entanglement requires us to cite the many beliefs and desires of each of her friends, many of which will be beliefs and desires about the beliefs and desires of the other friends (not to mention people outside of the circle of friends).

Due to the holism of intentional explanation, even when a single agent is involved, the attribution of a single belief-desire pair will be consistent with a wide range of competing intentional explanations that differ with respect to what other beliefs and desires are attributed. Any given attribution of a belief-desire pair is thus highly likely to simply be post hoc. We already know that the event happened, and distinct competing intentional explanations may seem equally plausible with no real basis for choosing between them. Things certainly get no easier when multiple agents and the concomitant occasions for entanglement are thrown into the mix. Further, due to holism and entanglement, for any belief-desire pair attributed, there are equally plausible explanations that don’t attribute that belief-desire pair.

In ordinary cases of intentional explanation, one sort of thing that can sometimes be appealed to for the elimination of alternate hypotheses is the testimony of the agents whose actions are the explananda. We can gain support for various hypotheses concerning what the agents were thinking by asking them what they were thinking. Of course, the utility of such testimony depends largely on a presupposition of veracity. And thus does the fifth element of the definition of conspiracy theory present its special problem, since the aforementioned supposition of truthful testimony is completely out of place when the agents in question are hypothesized to be engaged in various acts of deception.

Fodor on Historical Explanation

Tuesday, May 1st, 2007

Vader for Pedro

Originally uploaded by thorinside.

Jerry Fodor, in “Against Darwinism“, writes:

Very roughly, historical explanations offer not covering laws but plausible narratives; narratives which (purport to) articulate the causal chain of events leading to the event that is the explanandum. Covering law explanations are about (necessary) relations among properties; historical narratives are about (causal) relations among events. That’s why the former support counterfactuals, but the latter don’t.

Historical explanations are as far as I know, often perfectly ok. Certainly they are sometimes thoroughly persuasive, so perhaps they are sometimes true. But, prima facie at least, historical explanations don’t seek to subsume events under laws. `She fell because she slipped on a banana peel.’ Very likely she did; but there’s no law —there’s not even a statistical law— that has `banana peel’ in its antecedent and `slipped and fell’ in its consequent. Likewise, Napoleon lost at Waterloo because it had been raining for days, and the ground was too muddy for cavalry to charge. So, anyhow, I’m told; and who am I to say otherwise? But it doesn’t begin to follow that there are laws that connect the amount of mud on the ground with the outcomes of battles.

I suppose, metaphysical naturalists (of whom I am one) have to say that what happened at Waterloo must have fallen under some covering laws or other. No doubt, for example, it instantiated (inter alia) laws of the mechanics of middle-sized objects. But it doesn’t follow that there are laws about mud so described, or about battles so described, still less about causal connections between them so described; which is what would be required if `he lost because of the mud’ is to be an instance of a covering-law explanation. It likewise doesn’t follow, and it isn’t remotely plausible, that whatever explains why Napoleon lost at Waterloo likewise explains why Nelson won at Trafalgar; i.e. that there are laws about the outcomes of battles as such, of which Nelson’s victory and Wellington’s are both instances. `Is a battle’ doesn’t pick out a natural kind; it’s not (in Nelson Goodman’s illuminating term) `projectible`.


It’s of a piece with their not appealing to covering laws that historical-narrative explanations so often seem to be post hoc. The reason they so often seem to be is that they usually are. Given that we already know who won, we can tell a pretty plausible story (of the too-much-mud-on-the-ground variety) about why it wasn’t Napoleon. But, what with their being no covering law to cite, I doubt that Napoleon or Wellington or anybody else could have predicted the outcome prior to the event. The trouble is that there would have been a plausible story to explain the outcome whoever had won; prediction and retrodiction are famous for exhibiting this asymmetry. That being so, there are generally lots of reasonable historical accounts of the same event, and there need be nothing to choose between them. Did Wellington really win because of the mud? Or was it because the Prussian mercenaries turned up just in the nick of time? Or was it simply that Napoleon had lost his touch? (And while you’re at it, what, exactly, caused the Reformation?)

Contentful and Inefficacious

Friday, February 23rd, 2007


Originally uploaded by Pete Mandik.

The quick and the dirty:

Representational contents are causally efficacious only if representational contents are properties. But not all representational contents are properties. So not all representational contents are causally efficacious.
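The quick-and-dirty argument is a simple modus tollens over a universal claim, and its validity can be checked mechanically. Here is a minimal Lean 4 sketch; the type `Content` and the predicate names `Efficacious` and `IsProperty` are my own labels for the post’s notions, not anything in the original.

```lean
-- A minimal formalization of the quick-and-dirty argument:
--   (1) every causally efficacious content is a property;
--   (2) not every content is a property;
--   so (3) not every content is causally efficacious.
variable {Content : Type} (Efficacious IsProperty : Content → Prop)

theorem not_all_efficacious
    (h1 : ∀ c, Efficacious c → IsProperty c)  -- premise (1)
    (h2 : ∃ c, ¬ IsProperty c) :              -- premise (2)
    ∃ c, ¬ Efficacious c := by                -- conclusion (3)
  obtain ⟨c, hc⟩ := h2
  -- if c were efficacious it would be a property, contradicting hc
  exact ⟨c, fun he => hc (h1 c he)⟩
```

The formalization makes explicit that the conclusion is the existential “some contents are not efficacious,” not the stronger claim that no contents are.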

The slow and the clean:

The bearers of causal efficacy are properties. Something causes something else in virtue of some of its properties and not others. The color of an object, say the redness of a Frisbee, may be inefficacious with respect to the bump on my head. But this does not mean it is inefficacious tout court, since the redness may have the power to make a bull charge.

We may not have to dig very deeply into the notion of content to find some serious challenges to claims of its efficacy. This suggestion is fleshed out further as follows.

The bearers of efficacy are properties. But not all contents are properties.

The key language of the discussion of efficacy involves phrases such as “causal relevance” and “in virtue of” all of which implicate properties as the bearers of causal efficacy. Consider a red Frisbee. It is in virtue of its mass and velocity that it may cause a bump on my head, and in virtue of its redness that it may cause a bull to charge. If mental events are causes, and causes in virtue of their contents, then contents will need to be properties.

However, more will need to be said about contents to justify assimilating them to property-talk. Consider the content of my belief that not all beer is carbonated. What is the content of this attitude? Plausibly, the proposition that not all beer is carbonated. But it is not clear that this is a property. The content of attitudes is supposed to reduce to the meanings of component representations, à la the Language of Thought Hypothesis. Plausibly, the components here would include a predicational representation x is carbonated, the meaning of which is, I suppose, the property of being carbonated. So, in at least one instance, we’ve found a property among the contents, and further it is an efficacious property, but it is not clear that we will be able to do this in every case. Some representations will not have properties as their meanings, as in the cases of the mental analogues of quantifiers and singular terms. And some representations will have inefficacious properties as their meanings, as in the cases of the predicate x is outside of my light cone or, better yet, x is an inefficacious property.

Does the inefficacy of content violate Folk Psychology?

Let us grant that Folk Psychology acknowledges things with content and that these things figure into the causes of behavior and/or bodily motions. But it is not entirely clear that this alone commits Folk Psychology to the efficacy of content. Consider the following attitudes: my belief that there will be a bake sale tomorrow and my desire to go to tomorrow’s bake sale. The content of both of these attitudes is that there will be a bake sale tomorrow. For reasons given above, it is unclear that this content has effects on me. Additionally, we may wonder how anything happening tomorrow can affect something happening today. Further, note that there is a lot more to the Folk Psych story than the contents; the other half of the story contains the attitudes themselves. The bake sale may be tomorrow, but I have the desire today, and it is the desire itself that drives my behavior and whatnot. So, it is not clear that the sorts of considerations from Folk Psych, like “Mandik opened the fridge ‘cuz he thought beer was near,” are evidence that Folk Psych contains an intuition that contents are efficacious. It seems merely to indicate that attitudes are causes.

Shit Happens

Friday, January 5th, 2007

Life During War Time

Originally uploaded by Pete Mandik.

I just got the thumbs-up from David Coady on my proposed contribution to a special issue of Episteme he’s editing on the epistemology of conspiracy theories. The title, “Shit Happens,” comes from, among other places, an epigraph in Brian Keeley’s J. Phil. article “Of Conspiracy Theories.” My article should be of interest to Brain Hammer readers interested in the function and evolution of folk-psychology.

Here’s the abstract:

In this paper I embrace what Brian Keeley calls in “Of Conspiracy Theories” the absurdist horn of the dilemma for philosophers who criticize such “theories”. I thus defend the view that there is indeed something deeply epistemically wrong with conspiracy theorizing: conspiracy theories over-extend intentional explanation and attribute reason where reason does not apply. Along the way I explore some of the cognitive bases for the kind of totalizing intentional explanation of which conspiracy theories are but one instance (much religious thinking constitutes further instances). I speculate as to the evolutionary basis of such explanations. The evolutionary origins of intentional explanation, and thus the niche for which it was adapted, concern tracking relatively small numbers of agents in relatively small social dominance hierarchies. But attempts to apply reason-based explanations to numbers of agents approaching global scales over large chunks of history are as inappropriate as applying them to inanimate objects. Nonetheless, the urge to do so–the urge to theorize conspiratorially–is itself in need of an explanation, and I explore what cognitive or psychological factors might underlie this urge.

Anti-Mind, pt. 2 of 2

Thursday, October 26th, 2006

The world is everything that is the case

Originally uploaded by Pete Mandik.

Does it take a mind to detect a mind? If there could be a principled answer to this question the implications would be huge for the philosophy and science of mind.

Consider that so much of science depends on the unintelligent detection of unintelligents. Hydrogen samples are not particularly intelligent. Further, mechanisms capable of detecting the presence of hydrogen need not themselves be intelligent.

Maybe part of being a natural kind is that the unintelligent detection of instances of that kind is possible. Jerry Fodor has suggested that non-natural kinds like crumpled shirts or doorknobs can only be detected by minds. You have to be the sort of thing that knows a bunch of stuff in order to “light up” in the presence of a door knob.

In the Sterling short story “Swarm,” excerpted in my previous post, the Nest is an asteroid that is mostly just a big super-organism wandering the universe; whenever it is “invaded,” it assimilates the invaders. Most of the diverse species in the asteroid were once representatives of vast space-faring technological cultures that, when they encountered the Nest, were taken over, reduced to unintelligent animals, and integrated into the Nest ecology inside the asteroid. Swarm is an intelligent organism activated on certain occasions for the protection of the Nest. Swarm explains how ultimately useless intelligence and consciousness are, suggests that the Nest is entirely unintelligent, and says that the Nest grows a new Swarm whenever an intelligent invader needs to be dealt with. Once the intelligent invader is dealt with (rendered into a dumb slave animal), Swarm self-destructs, being no longer needed.

It occurred to me that Swarm was to minds what antibodies are to germs, so I coined “anti-mind”. It also occurred to me that if Swarm was right that prior to the activation of Swarm the Nest group organism was truly non-cognitive, then whatever mechanism activates the growth of a new Swarm must itself be an unintelligent mechanism. So, the idea of an anti-mind is the idea of a thing that is not a mind but is capable of detecting minds. But this leads to what strikes me as some pretty interesting philosophical questions: Is there any way a dumb mechanism can detect the presence of intelligence? Can an unconscious mechanism detect the presence of consciousness?

If Dennett is right, intentional systems are detectable only from the intentional stance, which I take to entail that only minds can detect minds. If a lot of qualia-freaks are right, the only way to detect the presence of qualia is to have some yourself, and thus only consciousness can detect consciousness.

If these remarks are correct, the implications for science fiction are obvious: the “anti-mind” in the Sterling story is impossible. But enough about fiction: what about science? If the impossibility of unintelligent detection entails that the kinds that are intelligently detected are non-natural, then is a full-blown science of such kinds thereby doomed?