## Defining “Computation”

I’m about halfway through my first draft of *Key Terms in Philosophy of Mind*, a book under contract with Continuum Books. From time to time I’ll be posting draft entries on *Brain Hammer*, especially for terms whose definitions are controversial or especially difficult to arrive at. Here’s “computation”:

**computation**, the process of arriving at a (typically numerically or symbolically interpretable) state from an initial condition via the application of a set of rules; alternately, rule-governed symbol manipulation. The definition of computation is somewhat vexed, and its historical development has been influenced by the not always congruent concerns of philosophers, mathematicians, and computer scientists. The earliest uses of the term refer to calculation, typically of the sort done by humans solving problems involving numerically represented quantities. The notion of computation came to be associated with the notion of being effectively computable, which involves calculation via procedures that are “mechanical” in the sense of being able to be performed by the application of relatively simple procedures without the utilizations of much insight or ingenuity. This notion was later developed in a way that made it clear how the procedures in question might be literally mechanical, that is, performed by machines. Such notions were made mathematically precise by Turing via the notion of what sorts of things can be done by Turing machines. Part of the history of these notions, and most significant for the philosophy of mind, is the hypothesis that human mental processes are themselves composed of the sorts of rule-governed and mechanistic processes distinctive of computing machines. According to some, the mind literally is a computer. See also ARTIFICIAL INTELLIGENCE; FUNCTIONALISM; TURING MACHINE; TURING TEST; TURING, ALAN.

[Related *Brain Hammer* post: What’s so metaphorical about the computer metaphor?]

This entry was posted on Thursday, February 21st, 2008 at 10:12 am and is filed under Key Terms in Philosophy of Mind.

Part of “effective computation” is that the things manipulated must be digital, not requiring infinite precision.

That looks good, but I think that all you have defined is digital computation, and that there is a second type of computation, namely analog (as in resemblance, not continuous; see Jack Copeland’s entry on the history of computation in the Stanford Encyclopedia of Philosophy).

Is shining a light on a scale model to work out a shadow pattern computation? I think it is: the buildings and the light and the shadows in the scale model are representations of the relevant things in the world, and something (namely the shadow pattern) is calculated. But there are no symbols or rules.

What about a third alternative definition that is more inclusive? Computation is a causal process involving representing vehicles where the content of the vehicles influences the trajectory of the process (see O’Brien, G. and J. Opie (2006). “How Do Connectionist Networks Compute?” Cognitive Processing 7(1): 30-41.)

Ken, some people are talking about analog computation, so I guess Pete wanted to define the generic notion that covers both digital and analog cases. This seems OK, yet there is more to be said about the notion of implementation (and this is where problems start to appear). I won’t even try here, as you probably have a word limit.

Anyway, though Gualtiero Piccinini seems to divorce computation from information processing, I’d claim that computation is exactly a kind of information processing in which the final state is defined based on the initial state and the transition rules. And if you can analyze your information-processing system on all three of Marr’s levels without ignoring most of the essential causal processes in the system, you see computation (even if there are none of the logic gates that Gualtiero loves so much). There is much more to be said about implementation, but this is a short intuition pump.

BTW, see how I defined computationalism here:

http://www.eucognition.org/wiki/index.php?title=Computationalism

Pete,

This seems to be a pretty good definition for the book, although there are some intricacies (as noted in the posts above) that may or may not be relevant for your purposes. For example, (contra Marcin) I don’t think this definition covers analog computation; it’s not clear in what sense the analog computers that were once so popular follow rules. But it’s still true that the term “computation” came to mean digital/discrete computation.

The only thing I would suggest is another sentence or two about the differences in the conceptions of computation that philosophers, computer scientists, and cognitive scientists care about. I don’t think it would necessarily be obvious to a reader of your book what these differences might be, or why they may have come about, or why the computer scientists’ conception(s) wouldn’t trump all others (since, you know, they’re the *computer* scientists!).

Corey

Corey,

It depends on whether you’re a Wittgensteinian who distinguishes between rule-following behavior and behavior that can merely be described with a rule. It’s obvious that you could have non-programmed digital computers with no rules represented to be followed, so I thought Pete didn’t want to use this distinction (non-programmable digital calculators, for example, can have no software level and no explicit rules to follow). Not all digital computers adhere to von Neumann’s architecture, after all.

Thank you, Ken, Liz, Marcin, and Corey, for your helpful feedback. The revised version will no doubt reflect much of your input, and more will likely be said that explicitly addresses issues concerning analog and digital computers.

Here are some points for the purpose of further discussion, if you’re interested.

Ken: I take it that the key characteristic of the digital is that it’s not analog and the key characteristic of the analog is the way in which things are represented (via morphisms). I wouldn’t characterize analog representations as involving infinite precision (the calibrations of analog measuring devices, for example, are going to give you only a finite number of significant digits). And I think the general notion of effectiveness, the one of which both pre-Turing and post-Turing notions are instances, applies to the manipulations of representations and is neutral with respect to whether the representations themselves are analog or digital.

Liz: I guess there’s no harm in calling the illumination of the model a computation, but I likewise guess there’s no harm in calling the representations in question “symbols” and stating the rule followed as “shine light on model and note resultant shadow distributions.” Regarding your offered third alternative, I personally find it appealing but worry that it is a bit too controversial: widespread is the view that it’s vehicle properties, not content properties, that determine the causal interactions between representations in a computation.

Corey and Marcin: I don’t think anyone has a particularly clear idea about how to tell when something does or doesn’t follow rules, but the closest thing to popping open the hood and finding rules hiding therein we’re going to get is with programmables. However, making the presence of an instruction set criterial for digital computation will, as Marcin nicely points out, exclude non-programmable but obviously digital computers.

I think Pete’s reply to Ken is dead on. There is, I think, something to be said for distinguishing between discrete and digital computation, but that’s another matter.

What I had in mind in my earlier post (regarding analog computers) were the mechanical, crank-the-gears machines used to integrate functions and point the anti-aircraft guns at the right spots. While I agree that it’s important to maintain the Wittgensteinian point, there may be something to say about how those non-programmable but obviously digital computers follow rules (I take it that this is what Marcin had in mind when he mentioned implementation). But those analog computers did what they did in virtue of physical properties that, in a real sense, directly modeled whatever it was they were trying to compute. It seems to me that if those things follow rules, we may be very close to trivializing the notion of rule-following (and not just being rule-describable). Maybe those aren’t *really* computers, but for a time they certainly were.

Pete says computation is “the process of arriving at a (typically numerically or symbolically interpretable) state from an initial condition via the application of a set of rules; alternately, rule-governed symbol manipulation.”

This definition has way too many problems. Saying the state is typically numerically or symbolically interpretable is unneeded - try giving me a state that is NOT numerically or symbolically interpretable. What’s the difference between a “condition” and a “state”? Why aren’t we starting from an initial state? As it stands, the definition seems to imply that almost any process is a computation - even a random process like throwing a die over and over. Is “do whatever you want” a rule?

As an old computer scientist, here’s my definition: a computation is any process that (a) begins with an initial state; (b) uses a fixed set of operations to repeatedly transform its current state into its next state; (c) may use an operation to transform an endless series of states into a limit state; and that (d) ends with some final state.

The definition says nothing about the precision of the states (thus the states might be real numbers on a continuum). It says nothing about rules (and thus avoids the vexed question of what is a rule). Instead, it talks about operations. You get to pick anything at all as an operation, so long as it’s a function from states to states. The definition clearly links computation to recursion — an essential feature. The definition allows for transfinite recursion and thus transfinite computation. It is not bound by the Church-Turing limit. The definition demands the existence of a final state and thus ensures effectiveness (though not necessarily finite effectiveness).
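The finitary part of this definition can be sketched as a short program. This is only an illustration of clauses (a), (b), and (d); clause (c), the transfinite limit operation, has no finite analogue and so cannot be run. The particular step function below (the Collatz process) is just a convenient example, not anything from the thread:

```python
def compute(initial_state, step, is_final):
    """Run a computation in the state-transition sense:
    (a) begin with an initial state;
    (b) repeatedly apply a fixed operation (state -> next state);
    (d) stop when a final state is reached.
    (Clause (c), the transfinite limit operation, is not modeled.)"""
    state = initial_state
    while not is_final(state):
        state = step(state)
    return state

# Illustration: the Collatz process starting from 6 — a computation
# in exactly this sense, with no mention of "rules" or symbols.
result = compute(
    6,
    step=lambda n: n // 2 if n % 2 == 0 else 3 * n + 1,
    is_final=lambda n: n == 1,
)
print(result)  # 1
```

Note that nothing here constrains the states to be discrete; the same schema runs with real-valued states, which is part of Eric’s point about precision.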

The notion of effective computability has nothing to do with mechanicalness. It has to do with producing a result. If you could ask God (or one of Turing’s oracles) any question and get His answer (which would be true) in some fixed finite time, you’d have an effective procedure. But it wouldn’t be mechanical.

We now get to “calculation via procedures that are ‘mechanical’ in the sense of being able to be performed by the application of relatively simple procedures without the utilizations of much insight or ingenuity.”

To talk about procedures that are able to be performed by procedures is nonsense. Note that nobody ever talked about “much” insight or ingenuity. Turing makes these ideas clear. So it’s better to just skip this sentence.

Otherwise, all is Groovy.

Cthulhu Ftaghn!

Those are pretty helpful remarks, Eric.

I’m not sure I understand (c), though. How is it consistent with (d)? (“endless” & “final state” tickle my contradiction detectors.)

Relatedly, how is (d) consistent with transfinite recursion?

I don’t get the God point. What’s the effective procedure here? Be God?

Cthulhu Ftaghn!

The point of (c) is to allow a computation to be transfinite. Consider a computation that runs on Zeno moments 0, 1/2, 3/4, 7/8, and so on. Such a computation is an endless series of successor states (at the Zeno moments). But the computation can have an operation that transforms the endless series of successor states into a limit state at time 1. Copeland has done much work on this notion of computation. It’s an old idea: the computation is said to accelerate over the Zeno moments and to produce a result at the limit of that sequence of moments. Thus “accelerating Turing machines”. The computation might terminate at time 1, thus having a final state after an endless series. Final does not entail finite.
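For concreteness, the Zeno schedule can be written out explicitly; this is just the standard geometric-series bookkeeping behind the example:

```latex
\[
  t_n = 1 - 2^{-n}
  \quad (t_0 = 0,\; t_1 = \tfrac{1}{2},\; t_2 = \tfrac{3}{4},\; t_3 = \tfrac{7}{8},\ldots),
  \qquad \lim_{n \to \infty} t_n = 1,
\]
\[
  \sum_{n=0}^{\infty} \left( t_{n+1} - t_n \right)
  = \sum_{n=0}^{\infty} 2^{-(n+1)} = 1.
\]
```

So the endless sequence of successor states occupies only a unit of time, and the limit operation assigns a state at t = 1, after all of them.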

There’s a difference between “effective” and “mechanical”. A mechanical procedure is, basically, just a recursive procedure. An effective procedure is, basically, just one that gets the job done. Thus Turing introduced the idea of an oracle. An oracle correctly answers your (yes / no) question in a fixed finite time. Thus you might have an oracle for arithmetical propositions. For any arithmetical proposition P, you can ask the oracle whether or not P is true (whether or not it’s a theorem of arithmetic). The oracle always answers correctly in finite time. There is no mechanical procedure for deciding the theorems of arithmetic — many are true but unprovable. But, thanks to your oracle, you have an effective way of deciding the theorems of arithmetic.
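A toy sketch of the contrast: the overall procedure is effective because the hard question is delegated to a black box assumed to answer correctly in fixed finite time. The “oracle” below is of course a stand-in, a lookup table over a few propositions whose truth values we already know; no real function could play this role for all of arithmetic, which is exactly the point:

```python
def decide(proposition, oracle):
    """Effective but not mechanical: delegate the undecidable
    question to an oracle assumed to answer correctly in
    fixed finite time."""
    return oracle(proposition)

# Hypothetical stand-in for the oracle — a lookup table, for
# illustration only. No mechanical procedure could decide
# arithmetical truth in general.
fake_oracle = {
    "2 + 2 = 4": True,
    "there is a largest prime": False,
}.get

print(decide("2 + 2 = 4", fake_oracle))                 # True
print(decide("there is a largest prime", fake_oracle))  # False
```

The recursion in `decide` is trivial; all the power is in the oracle, which is why the procedure counts as effective without being mechanical.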

I’m not sure I totally follow. So is the oracle computing?

Maybe the oracle is computing and maybe it isn’t. That makes no difference. All that matters is that the oracle can do things that you can’t do mechanically.

There’s something fucked up in here somewhere:

You have said:

“The notion of effective computability has nothing to do with mechanicalness”

&

“The definition clearly links computation to recursion — an essential feature.”

&

“A mechanical procedure is, basically, just a recursive procedure.”

It kind of looks like we can derive from this that computation is simultaneously mechanical and not mechanical. But maybe I’m interpreting something incorrectly here. What’s up?

Maybe this is useful:

Hartley Rogers, *Theory of Recursive Functions and Effective Computability*, MIT Press, 1987, pp. 1–2.

You can look at this at Amazon with their excerpt feature.