Math vs. Natural Language and the Mind-Brain

Much of the recent discussion on “Your Brain is Reading This” has focused on what sorts of substantive conclusions (if any) can be drawn about cognitive neuroscience based on reflections on ordinary language. One contributor to that thread is neuroscientist Eric Thomson, and I’m reminded of remarks he’s made over at the Brains blog about the relative poverty of natural language (as opposed to mathematical theories) for capturing neural and mental phenomena.

I’ve been thinking about this a lot more recently and want to link to these past comments of Eric’s. For starters, there’s his dream curriculum for a cognitive science program here. But for real food for thought, there are his remarks here on the differences between two groups of researchers who don’t collect their own data. Here’s an excerpt:

Unfortunately, philosophy training is not very helpful for thinking about data or coming up with precise theories of how brains work. Many philosophers I talk to think that theoretical neuroscience is just philosophical neuroscience, but there are really two groups of people who don’t collect their own data: on one hand, the theoretical neuroscientists (Sejnowski, Abbott, Hopfield, etc.), who are trained in lots of mathematics (typically they come from physics), and on the other, the philosophers, who typically use natural language to think about brains. I think the philosophical branch of the armchair neuroscientists has done, and will do, very little to push neuroscience in fruitful directions. The mathematical branch of the armchair dwellers, though, will continue to bear fruit.

While it is possible, I just don’t see neuroscience becoming data rich and theory poor: theoretical neuroscience is exploding, especially as theoretical physicists are realizing that it is much easier to find jobs in biophysics than in string theory. Theories to explain given data are a dime a dozen. While the amount of data is quite daunting, if you ask an experimentalist for their speculations about their data, you typically won’t find any shortage. However, they will tend to be quite cautious, journal editors tend to cut such speculations out of papers, and experimentalists don’t want to come off as mushy theorists in their presentation of data. There is a strong selection effect that makes it look like there aren’t lots of theories. Also, in practice, it is typically experimentalists who come up with predictions that can actually be tested: this is very hard to do even if you are an experimentalist with an understanding of the nuances of the techniques.

Also, while I don’t think philosophical naturalists should necessarily be doing experiments, as I mentioned above, they would be better served by learning more mathematics and actually analyzing some data.

20 Responses to “Math vs. Natural Language and the Mind-Brain”

  1. Just a quick comment (not so much about ‘Your brain is reading this’ but rather about the role of philosophers in theory building more generally).

    For what it’s worth, I agree with much of what is said in the post. Nonetheless, it seems to me that there is one group of folk who spend a good deal of their time reflecting on natural language yet who can nonetheless be expected to contribute useful insights to theoretical neuroscience. I have in mind the linguists.

    As you know, linguistics (and specifically GB and Minimalist syntax, as well as conceptualist semantics of the sort advocated by Jackendoff) aims to provide a theory of structure for certain circumscribed types of mental processing. It too is constrained by data — albeit data of a different sort than what the neuroscientist is accustomed to. Having a correct theory of linguistic competence to draw on when trying to arrive at a theoretical characterization of the underlying wetware is, I would imagine, quite handy. (Or is that wrong?)

    Moreover, there are plenty of philosophers out there trained in linguistics. They can be expected to take part in the conversation as well.

  2. Randy says:

    I don’t see it as the job of philosophers to do any kind of armchair science. Philosophy is an autonomous activity distinct from the sciences.
    The only kind of truth philosophy can address is conceptual truth.
    A philosopher shouldn’t be engaged in presenting scientific theories unless they are also a trained scientist.

  3. Eric Thomson says:

    Thanks for posting that. In retrospect, I largely agree with what I said back then. In theory you might expect neurophilosophers to actually be better at coming up with predictions and syntheses than the scientists doing the experiments: experiments take up a ton of time and energy, while neurophilosophers can devote that same energy to becoming more widely read in the literature than the primary researchers. This could lead to all sorts of interesting cross-pollination and idea earthquakes that are very useful for the neuroscientists.

    I think the reasons it doesn’t work out this way in practice are manifold.

    1. As I mentioned, it is hard to come up with experiments to test theories. New ideas come very quickly, even to the practicing scientist (e.g., for a recent speculation about cortical computation, see my recent neuro blog post). Coming up with testable ideas, though, is just hard, and experimentalists are more in tune with what is realistically testable. This is also a shortcoming of the mathematical theoretical neuroscientists: sure, if we could measure the currents in a million cells simultaneously, that would be great, and we’d be able to test your hypothesis (Mr Physicist), but give us something we can work with!

    2. Professional neurophilosophers have institutional constraints on their productivity. They have to teach about Plato to undergrads, for instance. This siphons off a great deal of time. Also, teaching and taking neuroscience classes, and going to the colloquia and journal clubs in the neuroscience department, builds and maintains a kind of neuroscience intuition, a biological grounding that is a bit harder to get when stuck in the wrong department.

    I also found, in grad school, that I had to waste a lot of time defending the very project of neurophilosophy, which is unproductive. Nobody in my lab wastes my time trying to argue about whether our terminology conforms to ordinary language, thank goodness.

    3. Also, as alluded to in my quote, philosophers’ formal training is in logic rather than practical math such as differential equations, statistics, programming, linear algebra, and the like. It is hard to learn that stuff after getting a PhD in philosophy. Even simple computer simulations are a great way to test intuitions about neural systems, so more neurophilosophers need to run simulations (note that a lot already do simulations and heavy computational work, and work in robotics and the like: they are often the most original thinkers, in my opinion).

    4. The best ideas often come from interacting directly with data. Almost every time I collect new data, I see things I didn’t expect. Before the experiments, I had what I thought was a thorough set of predictions about what I might see. The data upends all that and shows me something completely different, which forces me to think in novel directions. I have really come to appreciate data as the primary engine of conceptual novelty in neuroscience. There is also the platitude that data provides useful constraints on one’s speculation. This is also true of simulations.

    5. The best philosophy comes when the science is done. Philosophers are trained in clear thinking about topics. Clearly thinking about a topic which is nowhere near resolved by the scientists (e.g., the psychology of mathematical knowledge) tends to lead to very clear descriptions of works in progress that are very likely to be jettisoned within a decade. Philosophy of special relativity is very interesting and well-developed partly because the science is so stable.

    Philosophical psychology is something I basically look at as a series of episodes of clear thinking about different fads in psychology. From behaviorism to cognitive psychology to connectionism to dynamical systems thinking. We have a lot of very clear thinking about a bunch of rapidly evolving disciplines.

    Note this all assumes the goal is to contribute to our understanding of how brains work. For that, I still think it is best to just be a neuroscientist (either experimental or (mathematical) theoretical). Now, I disagree with Randy about philosophers needing to just sit around analyzing concepts. I’m not sure what I think philosophers should do, but it’s got to be more than what Ayer envisioned.

  4. Eric Thomson says:

    Note I think perhaps the goal shouldn’t be to provide new brain theory as much as to use neuroscience in ways philosophers uniquely can: to bring it to bear on already existing philosophical controversies (e.g., debates about reductionism). This is the case with psychology data, as shown by the recent PMS submission, which was very interesting.

    Also note that there are exceptions to all the negative Debbie Downer stuff I mentioned above! There is good neurophilosophy that offers substantive theories about how brains work, and how they implement certain capacities. E.g., Grush, Mandik, and surely others.

  5. Eric Thomson says:

    I think Luke makes a good point. Indeed, since linguistics mostly makes use of formalisms that are very logic-like, it is a field that philosophers can readily consume.

    The key, though, is the idea that we have something close to a “correct” theory of linguistic competence. I sort of lump Chomskian linguistics in as one of the fads I mentioned. Fads can be right, of course, but it is clear that people jumped on the Chomsky bandwagon a bit prematurely, and things are far from settled. This is the problem with substantive and interesting psychological theories.

    On the other hand, I don’t want to be a stick in the mud. Cross pollination is hard, but not impossible.

  6. I understand the hesitance regarding various philosophical fads. I completely agree that it’s important to check one’s enthusiasm about what happens to be front and centre in a given field at the moment.

    Still, I think one insight to take away from Chomsky’s work is that cognitive scientists can profitably engage in top-down research. And one good way to do that is to work on grammars: theories that specify the structure and the systematic differences between representations our mind/brains manipulate in order to perform some circumscribed set of tasks. Building grammars requires constant contact with the relevant high-level data. And I think everything that Eric has said about the importance of getting one’s hands dirty carries over.

    Importantly, grammars need not have anything to do with language per se. For example, I read Biederman’s work on visual object recognition to be an instance of the type. Having a provisional theory of basic ‘geons’ before trying to sort out how these might be implemented seems like a useful way of constraining bottom-up work. In fact, it seems essential. (The example is slightly dated, but I hope the point is nonetheless on target.)

    Grammars can be useful even if we know that they are not ultimately ‘correct’. An early version of a grammar can structurally approximate the final, correct account. And the discrepancies may not matter if the lower level theories being constrained are themselves quite provisional. Pulvermuller’s (2002) Neuroscience of Language makes use of old-fashioned phrase structure grammar to very good effect. PSG is close enough to GB (or whatever you prefer) to do the job for his purposes. And since the aim is to take a good stab at building a bridge between a linguistic theory and a neuronal one, the latest linguistic details don’t really matter all that much.

    Philosophers should (I would think) be every bit as competent as computer scientists and linguists at working on grammars — ones constrained by conceptual and empirical evidence as well as precise enough to be testable one way or another. Anyway, so it seems to me.

  7. Eric Thomson says:

    Luke: In practice it’s not clear how useful having the top-level theory in place is for the neuroscientist, but I understand the sentiment.

    Philosophers should (I would think) be every bit as competent as computer scientists and linguists at working on grammars

    You’d think, much like you’d think philosophers would be better at coming up with neuroscientific word-models (as opposed to mathematical models) than experimental neuroscientists. In practice, probably for a lot of similar reasons as mentioned above, it doesn’t seem to work this way.

    But at least linguistics meshes better with the core competences of philosophers: very logic-like models. It’s a simple step from the predicate calculus to Syntactic Structures.

  8. Eric Thomson says:

    Note that this distinction I made above between word-models and mathematical models is probably not my creation. It is something we computational neuroscientists talk about fairly often. It was written up at some point by Eve Marder in a nice little perspective article in Nature Neuroscience.

  9. Eric Thomson says:

    Ummm… “probably not” above should read “definitely not”!

  10. Eric Thomson says:

    One more thing: for many of the reasons stated above, neuroscientists make crappy philosophers! I’ve been working on some simple philosophical ideas for like two years (initially posted here), and just haven’t had the time to fully develop them. And the ideas aren’t even that complicated!

  11. Eric - I might have to ask you some time what you think natural language and mathematics *are* that gives rise to this deep difference in their descriptive powers. But that’s probably a long discussion.

    On another topic: what you write about semantics and content really interests me (it’s a topic I find fascinating). It’s probably better if I pursue that issue on ‘Brains’ though.

  12. Eric Thomson says:

    I think it’s pretty simple, Luke. It’s likely the precision and predictive power of a single equation that gives it its power relative to natural language. E.g., compare the Hodgkin-Huxley equations describing the behavior of a neuron to a natural language description of how neurons fire action potentials.

    Plus, natural language models are basically clear expressions of intuition. With a mathematical model, you can do simulations that often strongly violate your intuitions about how that model will behave: this provides very useful feedback, almost like data, on one’s word models.
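    Eric’s point about simulations surprising us can be made concrete with a toy example. The sketch below is mine, not from the thread: a bare-bones leaky integrate-and-fire neuron (all parameter values are standard textbook-style choices, not anything Eric specified). A verbal word-model like “more input current means more spikes” misses what even this trivial simulation makes obvious: firing switches on abruptly at a sharp threshold current.

```python
# A minimal leaky integrate-and-fire neuron, integrated with forward Euler.
# Parameters (membrane time constant, resistance, thresholds) are generic
# textbook-style values chosen for illustration only.

def simulate_lif(i_input, t_max=500.0, dt=0.1, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    """Simulate for t_max ms; return the number of spikes fired."""
    v = v_rest
    spikes = 0
    for _ in range(int(t_max / dt)):
        # Membrane equation: dV/dt = (-(V - V_rest) + R_m * I) / tau
        v += dt * (-(v - v_rest) + r_m * i_input) / tau
        if v >= v_thresh:           # threshold crossed: spike and reset
            spikes += 1
            v = v_reset
    return spikes

# With these parameters the steady-state voltage is V_rest + R_m * I, so
# the neuron is silent for any current below 1.5 and fires above it --
# a discontinuity the vague word-model glosses over.
for i in (1.0, 1.4, 1.6, 2.0):
    print(i, simulate_lif(i))
```

    Running it shows zero spikes at 1.0 and 1.4 but sustained firing at 1.6 and 2.0: a case where a few lines of simulation deliver the kind of “almost like data” feedback on intuition that Eric describes.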

  13. Pete Mandik says:

    I’d offer that the deep difference in descriptive power is due largely to math being deliberately engineered to have deep descriptive power whereas natural language was “designed” to do all sorts of things, with descriptive power being of only relatively minor importance to the “designer”.

  14. Josh Weisberg says:

    Very interesting discussion.

    One question that’s perhaps relevant for philosophers is how to connect these mathematical models with phenomena described in ordinary language–and this ties it back to the previous discussion. Starting very broadly, we want to know how the brain works, as Eric said. Next, treading very lightly to avoid mereological “fallacies,” the brain is the organ that allows us to perceive, remember, plan, act, sense, feel, reason, etc. If we wish to develop a mathematical model of these processes, we need some idea of what we are talking about, and some means of effectively operationalizing these terms to both capture what we initially meant (as much as possible) and to allow for fruitful modeling. It is in this stage, I think, that well-trained and empirically savvy philosophers can contribute. Why is this a model of memory? How does this model help explain perception, as opposed to something else? And so on.

    Also, there is the question of “psychological reality” — is this a model of what actually goes on in the head, or is it just one of a number of formal models that accurately capture the data? This requires some understanding of just what the models are saying, but it can probably be done without full-blown knowledge of differential equations and so on.

    In addition, there’s the question of connecting neuroscientific results back to folk-psychological categories, for the purpose of saying why ordinary folk think the way they do about the mind and themselves. If these models are true, what does that say about who we are? That still requires a knowledge of empirical methods, but it connects back to philosophy “proper”. And it’s here that the debate about ordinary language really comes to the fore, with folks like Randy contending (I take it–correct me if I’m wrong) that the empirical results have nothing to add to the person-level categories of our ordinary ways of speaking and thinking about ourselves. And folk like Pete and me hold that the empirical results can and do inform those issues–that there is no “autonomous activity distinct from the sciences” for philosophy to engage in, at least not with much hope of success.

  15. Eric Thomson says:

    Good point Josh. I think philosophers can be pretty good at keeping scientists honest and constrained in their interpretations of their own models.

    That Hurley book is not a good example, but I think Chalmers is. Zeki also tempered some of his claims about the disunity of consciousness, and even started citing Kant when he seemingly backtracked. I wonder if a philosopher got ahold of him.

    It works out really well in physics, where models are very well established. In neuroscience there is one model that is very well established: the Hodgkin-Huxley model. If there is a paradigm that unifies neuroscience, that would be it. Unfortunately it is a rather complicated set of four nonlinear differential equations, and such things are hard to analyze. A great book that gives the low-down on such models is Spikes, Decisions, and Actions by Hugh Wilson (it assumes comfort with basic calculus and perhaps some linear algebra).

    Unfortunately the HH models aren’t particularly metaphysically interesting and say nothing about minds, so they probably won’t attract much attention from philosophers. But they’d be a good case study in the emergence of a “paradigm”, and in the emergence of a mechanistic perspective on an abstract mathematical model in biology; they are what we mean when we talk about ‘biologically realistic’ models. (On the second point: at first Hodgkin and Huxley had no idea what the biology was behind their model; it was almost 20 years after their initial publications in 1952 before channels were established as the mechanism of ion flow across membranes.)
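    To make the “four nonlinear differential equations” concrete, here is a minimal sketch (my own, not from the thread) that integrates the standard textbook Hodgkin-Huxley equations with forward Euler, using the usual squid-axon parameters in the modern −65 mV resting convention. “Biologically realistic” here just means that the conductances and gating variables correspond to measurable membrane quantities.

```python
from math import exp

def hh_spike_count(i_ext, t_max=100.0, dt=0.01):
    """Integrate the Hodgkin-Huxley equations for t_max ms under a constant
    injected current i_ext (uA/cm^2); return the number of spikes, counted
    as upward crossings of 0 mV."""
    # Maximal conductances (mS/cm^2), reversal potentials (mV), capacitance
    g_na, e_na = 120.0, 50.0
    g_k,  e_k  = 36.0, -77.0
    g_l,  e_l  = 0.3,  -54.4
    c_m = 1.0

    # Voltage-dependent rate functions for the three gating variables
    a_m = lambda v: 0.1 * (v + 40.0) / (1.0 - exp(-(v + 40.0) / 10.0))
    b_m = lambda v: 4.0 * exp(-(v + 65.0) / 18.0)
    a_h = lambda v: 0.07 * exp(-(v + 65.0) / 20.0)
    b_h = lambda v: 1.0 / (1.0 + exp(-(v + 35.0) / 10.0))
    a_n = lambda v: 0.01 * (v + 55.0) / (1.0 - exp(-(v + 55.0) / 10.0))
    b_n = lambda v: 0.125 * exp(-(v + 65.0) / 80.0)

    # Start at rest, with each gate at its steady-state value
    v = -65.0
    m = a_m(v) / (a_m(v) + b_m(v))
    h = a_h(v) / (a_h(v) + b_h(v))
    n = a_n(v) / (a_n(v) + b_n(v))

    spikes, above = 0, False
    for _ in range(int(t_max / dt)):
        # The four coupled nonlinear ODEs: membrane voltage plus m, h, n
        i_na = g_na * m**3 * h * (v - e_na)
        i_k  = g_k * n**4 * (v - e_k)
        i_l  = g_l * (v - e_l)
        v += dt * (i_ext - i_na - i_k - i_l) / c_m
        m += dt * (a_m(v) * (1.0 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1.0 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1.0 - n) - b_n(v) * n)
        # Count a spike on each upward crossing of 0 mV
        if v > 0.0 and not above:
            spikes += 1
        above = v > 0.0
    return spikes

print(hh_spike_count(0.0))   # quiescent at rest
print(hh_spike_count(10.0))  # repetitive firing under sustained current
```

    Even this crude integration shows the qualitative behavior Eric alludes to — silence at rest, repetitive firing under sustained current — and why the model resists pencil-and-paper analysis: everything interesting lives in the coupling between the voltage equation and the three gating equations.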

  16. Pete - thanks for your reply. To be honest, I still find the situation pretty puzzling though. I’m not sure that math *is* designed to have deep descriptive power. Or perhaps I’m not sure what it’s designed to be descriptive *of*.

    The physicist Steven Weinberg (1986) notes that many mathematical structures were discovered and elaborated in great detail “long before any thought of physical application arose. It is positively spooky how the physicist finds the mathematician has been there before him or her.” Similar sentiments have been expressed by Wigner, Einstein, and Galileo among others.

    Now, don’t get me wrong: I’m no platonist (in part for the reasons my paper explores). But the fact that you and Eric are (so completely) right about the descriptive power of mathematics and the relative impotence of natural language descriptions is still pretty weird.

  17. Pete Mandik says:

    Hi Luke,

    I’m not terribly confident of my worth as a philosopher of mathematics, but it strikes me as defensible that something can be fine-tuned for descriptive power without there being any particular target it is designed to describe. Take, for example, the developments in mathematical logic over the past hundred years or so: so much work was put into how to deal with truth that didn’t require a specific ontology of what the sentences or predicates were true of.

    Contrast this with natural language, where saying things that are true is only one of many of its uses.

  18. Hi Pete,

    It must be possible for a biological entity to develop a representational system and to fine-tune its descriptive power in useful ways without any particular entity acting as the intended target. After all, as you say, we seem to do it. You have to admit though that coming up with a plausible cognitive architecture capable of doing this sort of thing is hard. (Fodor’s ‘The mind doesn’t work that way’ looms large.)

    I suppose natural languages provide us with one sort of template to work from. At least in some respects. We already have some sense of what a scientific theory of the relevant representational system looks like. We know that languages (properly speaking) are internal, individual and intensional. We know that a great deal of their structure is innate and varies according to identifiable parameters. There are hypothetical boxologies available too. And we know which bits of brain may plausibly implement them. Etc. We can at least start to imagine what fine-tuning one’s NL semantic representations might look like.

    The available models of the math faculty (subitizer, accumulator, logarithmic number line) look like they’re off to a good start. But they still fall far short of explaining the sort of fine-tuning you write about. They are also mute on the issue you started with: namely, why the math faculty is so useful in scientific contexts while the language faculty is frequently actively misleading. (Oxbridge word ninjas and all that.)

    I know that’s not particularly enlightening but there we are. Maybe somebody else has some good ideas.

  19. Pete Mandik says:

    Hi Luke,

    Here are some further thoughts.

    I think it’s worth distinguishing, in the current context, mathematical capacities like subitizing etc., which I’m happy to regard as likely innate and modular, from mathematics itself, which is a humongous cross-cultural artifact that people have been fine-tuning with much deliberation for thousands of years. It’s also worth distinguishing a likely innate and modular natural language capacity from any particular natural language, neither of which has been subjected to any fine-tuning worth regarding as deliberate.

    So, when I offer that math is better for science than language because of the deliberate fine-tuning that the former has received, it’s the cultural artifact that I have in mind.

    When a scientist uses math to kick all sorts of ass in describing the natural world, I doubt that she is using her mathematical faculty instead of her language faculty. Her ability to work skillfully with the cultural artifact that is mathematics likely draws on both kinds of cognitive modules. I doubt that much insight about why math is better than natural language for science is going to come from an understanding of the innate biological mechanisms that we can distinguish as being more mathematically-oriented versus more language-oriented.

    I, of course, have nowhere near a satisfying explanation of why math is so much better for science. But these are my hunches about where to look.

  20. Luke Jerzykiewicz says:

    Thanks Pete. That’s really useful to me. I’ll definitely need to bear it in mind.

    I just love this use of philosophy, by the way: philosophy as ‘guide for where to look.’ :-)