Archive for June, 2007

Contents, Vehicles, and Transitive Consciousness

Saturday, June 30th, 2007

Trompe l’oeil painting


Robert Lurz’s challenge to the standard view of transitive consciousness is constituted by the following claim from his paper, “Neither HOT nor COLD: An Alternative Account of Consciousness”:

[A] creature can be conscious of its thoughts and experiences simply by being conscious of what it thinks or experiences in having those thoughts or experiences

Here is what the Lurz challenge is supposed to be a challenge to. What lots of people seem to agree on, and this would include Tye, Dretske, Rosenthal, Churchland, Prinz, and me, is that transitive consciousness - consciousness of - is implemented by (certain kinds of) representation in the following way: what one is conscious of is what (certain kinds of) representations are representations of. (What further criteria the representations need to meet is what separates the various authors listed, and thus the use of the parenthetical “certain kinds of”.) Call this the Standard View. On the Standard View, one is conscious of such-and-such only if one mentally represents such-and-such. And if one has a (certain kind of) mental representation of a leafy tree, one is thereby conscious of a leafy tree. Or, in other words, what one is conscious of is the content of a certain representation, in this case the content is a leafy tree.

Now, when one has a state of the sort described in the previous paragraph, what sense could it possibly make to say that one is conscious of the state itself? This I try to spell out on pp. 60-61 of The Subjective Brain in terms of the content/vehicle distinction. Being conscious of a leafy tree involves representing a leafy tree. Being conscious of a representation of a leafy tree must involve representing something more than just the leafy tree, that is, something more than just the content of the representation. And the only candidate for the something more is the vehicle of the representation. Thus one is conscious of the representation itself only if one represents vehicular properties of the representation.

So on what basis can adherents of the Standard View resist Lurz’s position that consciousness of what a state represents suffices for consciousness of the state itself? One kind of response would be to point out that it’s not particularly clear what Lurz’s position even means. Another kind of response would be to point out that it isn’t particularly clear that any arguments have been given for Lurz’s position (or even that he takes himself to have supplied any arguments).

Regarding the first kind of response, concerning the meaning of Lurz’s position: is he asserting that being conscious of what a state represents suffices for being conscious of vehicular properties of the representation? Or is he stipulating that being conscious of the content is another way, distinct from the vehicular way, of being conscious of the representation? I’ll come back to this in a moment.

Regarding whether an actual argument is supplied, Lurz does offer “some intuitive support for this claim” in terms of an analogy concerning paintings. He writes:

It seems plain that in order to see what a particular painting represents, one must see the painting itself. If one does not see the painting itself — say, if one is looking in the wrong direction, or is seeing a different painting, or is blind — then one cannot be said to see what that particular painting represents.

Note that while Lurz says this “seems plain,” it seems plain to me that it doesn’t seem plain at all. If what a particular painting represents is the flight of Icarus and I am looking at some other particular painting which also represents the flight of Icarus, then I can see what the first painting represents without seeing the first painting. I do it by seeing the second painting, which is a representation of the same thing as the first painting. So it looks like I can be aware of what a particular painting represents without being aware of that particular painting. (And if paintings and non-paintings can share contents, then I can be aware of what a particular painting represents without being aware of any particular painting at all.)

To make matters worse, it looks like Lurz agrees with this sort of point. He writes:

Three identical-looking paintings by different artists, for example, may each depict a woman seated before an open window…. [I]n one sense of the phrase “what the painting represents,” the intentional-content sense, what these three paintings represent is the same: a woman seated by an open window.

If what they represent is the same, then I can be aware of what the first represents without ever having had any exposure to the first; I just see the second or the third. Imagine further that, while looking at the second, when I blink it is, unbeknownst to me, replaced by the third. And when I blink again, it is once again replaced. I would continue to be aware of what the painting represents without being aware of which particular painting it is – the second or the third – that I am looking at. Being aware of what a particular painting represented would suffice for being aware of that particular painting only if different particular paintings necessarily represented different particular things. But, as Lurz admits, paintings can share contents. So which particular painting does being aware of a content suffice to make you aware of?

To return to the question of what Lurz’s claim is supposed to mean, specifically whether it is a claim about awareness of vehicles, I note that thinking about typical cases of looking at paintings suggests awareness of vehicles. When I look at paintings I typically notice whether they’re oil or water color, how fat the paint strokes are, etc., and these are vehicular properties of the painting, the properties with which Icarus is represented, not properties that the painting represents.

However, there are atypical cases in which one notices none of the vehicular properties of a painting, and Lurz discusses such cases: trompe l’oeil cases in which, as Lurz points out, we see neither that a painting is present nor the painting as a painting. I might take myself to be looking out a window at a leafy tree when in actuality I’ve been fooled by an incredibly realistic painting. It seems natural to say in such cases that we would not be aware of any vehicular properties of the painting. However, it seems strained to say, as Lurz wants to, that in such cases we are aware of the painting or that we see the painting. Lurz presents his claims about paintings as “intuitive” and I don’t feel the intuitive pull.

So, in summary, the only support offered for Lurz’s challenge to the Standard View is a set of remarks about paintings that are themselves easily resisted.

Subjective Brain Ch. 4

Thursday, June 21st, 2007

Chapter 4 of The Subjective Brain, “The Neurophilosophy of Consciousness,” is here.


THE STORY SO FAR: An account of consciousness needs, to get rolling, a credible answer to the question, “what makes this account an account of consciousness?” and appeals to (Deflated) Transitivity, (Deflated) Transparency, and WIL seem to best get us in the ballpark. A physicalist account of consciousness is going to need to be a reductive physicalist account of consciousness. And if the reductive-physicalist account in question is going to make any kind of use of representation, it had better do so in ways that don’t run afoul of unicorns and their inexistent brethren. It increasingly looks like we need a physicalistic representational account of consciousness that is internalistic. What internal things matter most? My bet is on brains. Time to start making good on the bet.

In this chapter I now turn to examine sample neurophilosophical theories of consciousness. I will raise problems for them to be solved in subsequent chapters where I develop my own neurophilosophical account.

In keeping with the remarks in chapter zero on the definition of neurophilosophy as well as the three questions of consciousness (the question of state consciousness, the question of transitive consciousness, and the question of phenomenal character), the discussion of this chapter will be centered on philosophical accounts of state consciousness, transitive consciousness, and phenomenal character that make heavy use of contemporary neuroscientific research in the premises of their arguments.

There are three philosophers whose work on the neurophilosophy of consciousness I find especially illuminating to examine in concert: Paul Churchland, Jesse Prinz, and Michael Tye. Sections 1, 2, and 3 will be devoted to them, respectively. Section 4 is devoted to initial contrasts and comparisons of the three thinkers. Section 5 is dedicated specifically to contrasts and comparisons regarding phenomenal character, and section 6 discusses problems to be solved in subsequent chapters.

The Corkboards of Cops and Criminals

Wednesday, June 20th, 2007

Sometimes it’s the good guys and sometimes it’s the bad guys, but frequently in movies somebody has an obsessively detailed bulletin board with maps, Polaroids, press clippings, and it looks cool as hell. I hereby call dibs on the title “The Corkboards of Cops and Criminals”. I don’t know what this is going to be a title of (besides this blog post). Perhaps it will be an essay. Perhaps it will be an exhibit. But dibs have been called.

I’m Koo-koo for Kelly Cases. Cocoa Puffs, Not So Much

Wednesday, June 20th, 2007

Cocoa Puffs


Suppose that it’s true that (1) a is F and that (2) b is not F. What would prevent a subject from judging that (3) a and b are distinct? There are several options.

The No-Relevant-Information option: The subject believes neither (1) nor (2)

The Incomplete-Information option: The subject believes either (1) or (2) but not both

The Inferential-Failure option: The subject believes both (1) and (2) but nonetheless fails to infer (3)

Not all of these options are equally appealing.
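For reference, the step from (1) and (2) to (3) is itself logically trivial; it is the contrapositive of the indiscernibility of identicals. Here is a minimal Lean sketch, where the predicate `F` and the terms `a`, `b` are schematic placeholders:

```lean
-- If a is F and b is not F, then a and b are distinct:
-- were a = b, b would inherit F from a, contradicting (2).
example {α : Type} (F : α → Prop) (a b : α)
    (h1 : F a) (h2 : ¬ F b) : a ≠ b :=
  fun hab => h2 (hab ▸ h1)
```

So none of the options can be motivated by any subtlety in the logic itself; the interest lies entirely in the subject’s epistemic situation.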

The Information Deficiency Options

Note that both the No-Relevant-Information option and the Incomplete-Information option are consistent with there having been some prior time in which the subject had the beliefs which, according to the options, the subject currently lacks. So, for example, it would be consistent with the Incomplete-Information option to say of the subject that at time t, the subject perceived and thus believed that (1) a is F; at time t + 1 perceived and thus believed that (2) b is not F; and at time t + 2, when queried about whether (3) a and b are distinct, doesn’t know because the subject has already forgotten either that a is F or that b is not F. It would likewise be consistent with the No-Relevant-Information option that the subject has forgotten at time t + 2 both (1) and (2).

However, insofar as the No-Relevant-Information option and the Incomplete-Information option are cashed out in terms of memory failure, the threat looms that the re-identifiability criterion for concept possession is unsatisfied. Of course, this sort of threat looms only if it is assumed that, for example, time t + 1 was the first and only time the subject had a mental representation of b and the representation in question is atomic. If the alleged belief attributed in (2) is a first and fleeting atomic representation of b, then it is a poor candidate for a concept of b. If so-called demonstrative concepts are supposed to be atomic one-shot representations, then the memory-failure versions of the No-Relevant-Information and Incomplete-Information options point out serious problems for the demonstrative-concepts defense of conceptualism.

The threat to conceptualism posed here by memory failure can be headed off, however, if instead of construing the representations of a and b as demonstratives, we construe them instead as descriptions (definite or otherwise). Thus would the representations be non-atomic and even if b is being represented for the first and last time, its representation is composed of parts each of which may have a life history satisfying the demands of re-identifiability.

Another way of cashing out the No-Relevant-Information and Incomplete-Information options would be in terms of a failure of perception instead of memory. So, even though the subject might be presented with stimulatory conditions potentially conducive to perceiving that a is F and b is not F, the subject nonetheless fails to actually perceive either that a is F or that b is not F. (This may be precisely the sort of thing going on in change blindness.) Insofar as we can cash out the No-Relevant-Information and Incomplete-Information options in terms of perceptual failures, the kind of worry the non-conceptual content proponents want to raise about the representation of a and b gets blocked. This is because it no longer looks like we have a representation of, e.g., b that fails to be a conceptual representation. Insofar as we are relying on the sort of perceptual failure described above, there’s no need to attribute a representation of, e.g., b at all.

The Inferential-Failure Option

Turning now to the Inferential-Failure option, the question arises of how there can be such a failure of inference without raising doubts about whether the subject actually believes (1) and (2) in the first place. There are various ways this might get cashed out, none of which make the Inferential-Failure option a particularly plausible model for Kelly cases.

Way One: Either (1) or (2) is believed non-occurrently. I’m sure that it’s common to have beliefs but not draw the simple logical conclusions of those beliefs precisely because the beliefs are not currently contemplated. Problem: Kelly cases involve occurrent mental states, so standing or abeyant beliefs are poor models.

Way Two: Either (1) or (2) is really complicated, like some proposition from string theory, and thus its deductive consequences are not immediately apparent. Problem: Kelly cases involve experiences of colored paint chips. That ain’t rocket science. Another problem: Why would complication block inference? If it’s due to a load on memory, then this threatens to collapse this response into memory-failure versions of the No-Relevant-Information and Incomplete-Information options.

Way Three: Either (1) or (2) is believed occurrently but non-consciously. Problem: If the topic is non-conscious mental processing, then I’m not interested in the topic anymore. I’m interested in versions of Kelly cases that involve conscious experience. Insofar as Kelly cases involve conscious states, unconscious states are poor models.

I currently can’t think of any other Ways, so I currently can’t get excited about the Inferential-Failure option.

The Bottom Line

The conceptualist has some promising options for providing intellectual models of Kelly cases. Kelly cases can be modeled either in terms of perceptual failure or memory failure. If they are modeled in terms of memory failure, then a demonstrative-concepts defense is no longer available. But a conceptualist response would be available nonetheless.

Intellectual Kelly Cases

Tuesday, June 19th, 2007

Abstract Algebra


One way to approach the question of how conceptualists should best handle Kelly cases is by attempting to construct intellectual analogs of Kelly cases. Since one way of viewing conceptualism is as an attempt to model perception on judgment, we can clarify issues by approximating, in judgment, what’s going on in Kelly cases.

There are a few forms that intellectual Kelly cases might take. I’ll call them the singular form, the definite form, and the general form.

In all three forms, we must construct analogs to both the simultaneous and serial presentations essential to Kelly cases. In all three forms, the analog of the simultaneous presentation will involve a judgment of distinction which will be roughly contemporaneous with some other judgments and the analog of the serial presentation will have these judgments comparatively more spread out in time.

In the singular form, the simultaneous half of the Kelly case will involve the simultaneous judgments that
(1) a is F
(2) b is ~F
(3) ~(a = b)
and the serial half will have the judgments (1) and (2) occurring at separate times and the judgment of (3) withheld.

In the definite form, we replace (1), (2), and (3) with
(1) The F is G
(2) The H is ~G
(3) ~(the F = the H)

In the general form, we use instead
(1) All Fs are Gs
(2) All Hs are not Gs
(3) No Hs are Fs and no Fs are Hs.
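In all three forms, (3) really does follow from (1) and (2); in the general form the inference is a standard second-figure syllogism. A Lean sketch, with `F`, `G`, `H` as schematic predicates:

```lean
-- General form: from "All Fs are Gs" and "All Hs are not Gs"
-- it follows that no Hs are Fs and no Fs are Hs.
example {α : Type} (F G H : α → Prop)
    (h1 : ∀ x, F x → G x)
    (h2 : ∀ x, H x → ¬ G x) :
    (∀ x, H x → ¬ F x) ∧ (∀ x, F x → ¬ H x) :=
  ⟨fun x hH hF => h2 x hH (h1 x hF),
   fun x hF hH => h2 x hH (h1 x hF)⟩
```

Withholding (3), then, cannot be blamed on the inference itself, which is what makes such cases probative about concept possession.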

Regardless of whether we utilize the singular, definite, or general forms, we may express the concern about the satisfaction of a re-identifiability criterion for concept possession along the following lines. If a subject withholds judgment (3), doubts are raised about whether the subject is actually making the judgments attributed in (1) and (2). Related doubts are raised about whether the subject possesses the concepts required to make the judgments in (1) and (2).

Consider, for an example that conforms to the general form, a case in which you are attempting to teach some new concepts to a student. You ask the student, “Are all Fs Gs?” and they answer “yes”. They also answer “yes” to “All Hs are not Gs?” But then you ask, “So, no Hs are Fs and no Fs are Hs?” If, at this point, they answer “no” or “I don’t know”, then you start to have doubts about the student’s conceptual prowess. Perhaps the student is incapable of making simple inferences or of holding more than a few propositions in their short-term memory. Or perhaps the student has not yet mastered the concepts of Fs and Gs and thus, when they answered “yes”, they didn’t really understand what they were answering “yes” to. They weren’t really thinking that all Fs are Gs.

On the other hand, if there ever is an intellectual case that conforms to either the singular, definite, or general forms, that is, if there is ever a case in which a subject can withhold assent to (3) without it being the case that they don’t grasp the concepts required for (1) and (2), then the conceptualist has a model upon which to base a response to the challenge posed by the Kelly cases.

Concepts, Contexts, and Kelly Cases

Friday, June 15th, 2007

In some cases, colors discriminable in simultaneous presentations are indiscriminable in serial presentations. I’ll call such cases “Kelly cases”, for they play a central role in Sean Kelly’s arguments against the conceptual constitution of perception (hereafter, “conceptualism”) (Kelly, S. 2001. “Demonstrative Concepts and Experience” The Philosophical Review, 110, 3: 397-420).

Of course, a more accurate description of Kelly’s target is a demonstrative-concepts defense of conceptualism. My intent here is to defend conceptualism without relying on demonstrative concepts.

Kelly cases raise trouble for conceptualism only if accompanied by certain assumptions. One assumption, discussed quite a bit by Kelly, is a re-identifiability requirement on concept possession: in order to have a concept of something, one must be able to identify that something on separate occasions. Another assumption, discussed very little, if at all, by Kelly, is that the perceptual contents in the simultaneous and serial presentations differ only with respect to their time of presentation.

The first assumption doesn’t bother me too much. I question the second assumption.

There are lots and lots of cases in which the context of presentation messes with the discriminability of the colors presented. One of my favorites involves the color contrast cubes depicted below.

Figure 1. This is awesome.

In this image, the “blue” tiles on top of the left cube and the “yellow” tiles on top of the right cube are actually neither blue nor yellow but the same shade of gray. See a cool animated demonstration of this over at Dale Purves’s lab webpage here.

It’s open, then, for conceptualism to be protected by treating simultaneous and serial presentations as different contexts that give rise to differences in perceptual content. Of course, the question arises of how to characterize the differences conceptually. It would be consistent with conceptualism to say that in the simultaneous half of the Kelly case the concepts applied are a concept of, say, green plus the concept of difference-in-shade, and that in the serial half of the Kelly case there is no application of the concept of difference-in-shade.

Note that the defense of conceptualism sketched here is not the demonstrative-concepts defense of McDowell and Brewer that constitutes Kelly’s main target. I envision that the demonstrative-concepts defense would have to say that in the simultaneous half of the Kelly case the concepts applied are the demonstrative concepts that-shade-1 and that-shade-2 plus the concept of difference, and that in the serial half of the Kelly case there is no application of the concept of difference. The question arises, however, of what is going on besides a failure to notice a difference in the serial half. It must be either that (1) the serial case involves neither that-shade-1 nor that-shade-2, (2) only that-shade-1 is applied, or (3) only that-shade-2 is applied.

An objection to this version of the demonstrative-concepts defense that may be raised at this point is that neither (1), (2), nor (3) would constitute the satisfaction of the re-identifiability condition on conceptual content. So, for example, in (2) the shade identified in the simultaneous half of the Kelly case as that-shade-2 is not being re-identified.

Thankfully, I’m not leaning on demonstrative concepts here, and thus the problem raised is somebody else’s problem.

[UPDATE (6/19/2007): I really don't like the last three paragraphs of this post.]

What’s the haps?

Friday, June 15th, 2007



I’ve got a new paper draft up on my website: “Shit Happens”. Comments welcome.

In this paper I embrace what Brian Keeley calls in “Of Conspiracy Theories” the absurdist horn of the dilemma for philosophers who criticize such theories. I thus defend the view that there is indeed something deeply epistemically wrong with conspiracy theorizing. My complaint is that conspiracy theories apply intentional explanations to situations that give rise to special problems concerning the elimination of competing intentional explanations.

Brain-Hate Link Roundup

Tuesday, June 12th, 2007

There’s been some interesting bloggin’ on brain hate recently. Some notable entries:

There’s discussion here of some of Max Coltheart’s complaints about cognitive neuroscience. Quoted from p. 22 of Coltheart, M. (2004). Brain imaging, connectionism and cognitive neuropsychology. Cognitive Neuropsychology, 2, 21-25, is the following:

No amount of knowledge about the hardware of a computer will tell you anything serious about the nature of the software that the computer runs. In the same way, no facts about the activity of the brain could be used to confirm or refute some information-processing model of cognition.

Re: “The seductive allure of neuroscience explanations” by Deena Skolnick Weisberg, Frank C. Keil, Joshua Goodstein, Elizabeth Rawson, & Jeremy R. Gray, there’s very nice discussion to be found here and here. Link to draft here.

From the article’s abstract:

Even irrelevant neuroscience information in an explanation of a psychological phenomenon may interfere with people’s abilities to critically consider the underlying logic of this explanation. We tested this hypothesis by giving naïve adults, students in a neuroscience course, and neuroscience experts brief descriptions of psychological phenomena followed by one of four types of explanation, according to a 2 (good explanation vs. bad explanation) x 2 (without neuroscience vs. with neuroscience) design. Crucially, the neuroscience information was irrelevant to the logic of the explanation, as confirmed by the expert subjects. Subjects in all three groups judged good explanations as more satisfying than bad ones. But subjects in the two non-expert groups additionally judged that explanations with logically irrelevant neuroscience information were more satisfying than explanations without. The neuroscience information had a particularly striking effect on non-experts’ judgments of bad explanations, masking otherwise salient problems in these explanations.

Conspiracy Theory and Intentional Explanation

Saturday, June 9th, 2007

Conspiracy theories postulate (1) causal explanations of (2) historical events in terms of (3) intentional states of multiple agents (the conspirators) who, among other things, (4) intended the historical events in question to occur and (5) keep their intentions and actions secret. Each of the five elements of the definition of conspiracy theories gives rise to distinct problems for the believability of any given conspiracy theory. I’m especially interested here in problems arising in connection with the last three elements of the definition.

To set the stage for the problems that the third, fourth, and fifth elements raise for conspiracy theories as explanations, I’d like to briefly review points that can be raised against folk psychology’s usefulness for predictions.

I assume here a symmetrical relationship between prediction and explanation whereby what’s cited in the explanation of an event that has already occurred can just as well have served to predict the event prior to its occurrence and vice versa. Thus, whatever skepticism may be raised about the predictive power of folk psychology has a basis that can also be a basis for skepticism about the explanatory power of folk psychology.

Morton (1996) raises various problems for the view that the function of folk psychology is to serve as a predictive device. Part of his case concerns two features of intentional states that make them especially ill-suited as bases for the prediction of human behavior. Morton discusses these features under the labels of “holism” and “entanglement”.

Morton’s worry about holism is that if one were to predict an action of an agent in terms of beliefs and desires, one cannot do it in terms of a single belief-desire pair but must instead advert to whole systems of belief and desire. Thus, to adapt an example of Morton’s, a prediction that a person will leave the building through the front door cannot be based simply on an attribution to her of a desire to leave and a belief that the front door is the only exit, since one must also rule out the possibility that, for example, she believes the front door to be connected to a trigger for a bomb.

We see that things are even more complicated when we consider what Morton calls “entanglement,” namely, the fact that so many of our intentional states are about the intentional states of others.

Given the relationship between prediction and explanation, holism and entanglement raise problems for intentional explanation as well as for intentional prediction. If someone does leave the building, explaining her leaving in terms of her having a desire to leave will require attributing a whole host of other desires as well as beliefs. And if she leaves the building with friends, entanglement requires us to cite the many beliefs and desires of each of her friends, many of which will be beliefs and desires about the beliefs and desires of the other friends (not to mention people outside of the circle of friends).

Due to the holism of intentional explanation, even when a single agent is involved, the attribution of a single belief-desire pair will be consistent with a wide range of competing intentional explanations that differ with respect to what other beliefs and desires are attributed. Any given attribution of a belief-desire pair is thus highly likely to simply be post hoc. We already know that the event happened, and distinct competing intentional explanations may seem equally plausible with no real basis for choosing between them. Things certainly get no easier when multiple agents and the concomitant occasions for entanglement are thrown into the mix. Further, due to holism and entanglement, for any belief-desire pair attributed, there are equally plausible explanations that don’t attribute that belief-desire pair.

In ordinary cases of intentional explanation, one sort of thing that can sometimes be appealed to for the elimination of alternate hypotheses is the testimony of agents whose actions are the explananda. We can gain support for various hypotheses concerning what the agents were thinking by asking them what they were thinking. Of course, the utility of such testimony depends largely on a presupposition of veracity. And thus does the fifth element of the definition of conspiracy theory present its special problem, since the aforementioned supposition of truthful testimony is completely out of place when the agents in question are hypothesized to be engaged in various acts of deception.

Space, Time, and Twinearthability

Friday, June 8th, 2007

kant kan


This is one of those requests-for-references-and-reflections posts. I’d be grateful for thoughts and recommendations concerning the following questions: Which spatial properties are Twinearthable and which are not? Which temporal properties are Twinearthable and which are not?

Regarding “Twinearthability,” I’m following the usage of John Hawthorne’s “Direct Reference and Dancing Qualia.” The way Hawthorne puts it, a concept like “water” is Twinearthable because we can easily imagine an epistemic counterpart that is epistemically just like us but locks onto a different property by “water” than we do (XYZ instead of H2O). A concept is not Twinearthable when beings epistemically just like us would lock onto the same properties that we do. Non-Twinearthable properties are so fully present to the mind that epistemic possibility is a guide to metaphysical possibility.

Properties broadly describable as spatial may differ with respect to their Twinearthability. I consider as relevant the following reflections by Roy Sorensen:

The volume of a two inch cube is eight times the volume of a one inch cube. But the surface of the two inch cube is only four times as large as the surface of the one inch cube. The ratio of surface to volume further decreases when the cube achieves a size of three inches. Now all six sides must be dedicated to maintaining the organism. Thus the geometry of the cubical organism imposes a limit on its growth. Since the volume of the organism is cubed while surface area is squared, the animal must eventually exhaust its ability to feed. The ratio of an organism’s surface area to its volume is an internal relation. Hence, the size of an organism is an intrinsic property.

Size is also an intrinsic property of environments. Doubling everything would not create a duplicate environment. Although the increase would not be detectable by linear measurements (for our rulers would have expanded), the increase would make a difference to planetary orbits and other phenomena governed by geometrical [laws].

Any purely spatial property of an organism is an extrinsic property. Identical twins can be duplicates even though they stand a meter apart. Nor is their duplicate status threatened by rotation. If one spins clockwise while the other spins counter-clockwise, they remain duplicates. If one twin sleeps with his head to the east while the other sleeps with his head to the west, they still wake up as twins. Given this indifference to space, we see that the twins are duplicates even if they are mirror images of each other.

(From Mirror Imagery and Biological Selection, Biology and Philosophy 17/3 (June 2002) 409-422)
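Sorensen’s arithmetic can be verified directly. A quick sketch (side lengths in inches, purely illustrative):

```python
# For a cube of side s: volume = s**3, surface area = 6 * s**2.
# The surface-to-volume ratio is therefore 6 / s, which shrinks as
# the cube grows: doubling the side multiplies volume by 8 but
# surface area by only 4.
def surface_to_volume(s: float) -> float:
    return (6 * s ** 2) / (s ** 3)

for s in (1, 2, 3):
    print(f"side {s}: volume {s**3}, surface {6 * s**2}, "
          f"ratio {surface_to_volume(s):g}")
```

Since the ratio depends only on the cube’s own dimensions, it is insensitive to uniform rescaling of the measuring instruments, which is the sense in which Sorensen treats it as an internal relation.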

I wonder if temporal properties are similarly split with regard to Twinearthability. I wrestled with this a bit in the puzzle raised in “The Slow Switching Slowdown Showdown”, wherein I wondered out loud about how long slow switching would take on a demonically slowed Twinearth. I wonder now about which properties broadly describable as temporal would be Twinearthable and which would not.

Further pointers on space as well as time are welcome.