From NewScientistSpace: “‘Babelfish’ to translate alien tongues could be built”
Such a “babelfish”, which gets its name from the translating fish in Douglas Adams’s book The Hitchhiker’s Guide to the Galaxy, would require a much more advanced understanding of language than we currently have. But a first step would be recognising that all languages must have a universal structure, according to Terrence Deacon of the University of California, Berkeley, US.
[...] Deacon argues that all languages arise from the common goal of describing the physical world. That limits the way a language could be constructed, he concludes.
Deacon argues that no matter how abstract a symbol becomes, it is still somehow grounded in physical reality, and that limits the number of relationships it can have with other symbol-words. In turn, this defines the grammatical structure that emerges from stringing words together.
If that is true, then in the distant future it might be possible to invent a gadget that uses complex software to decode alien languages on the spot, Deacon said. He presented his ideas on Thursday 17 April at the 2008 Astrobiology Science Conference in Santa Clara, California, US.
Testing the theory might be tough because we would have to make contact with aliens advanced enough to engage in abstract thinking and the use of linguistic symbols.
The lack of aliens does indeed make that a tough nut to crack. Also problematic is the lack of a physical “grounding” relation that would serve to distinguish between reference to rabbits, undetached rabbit parts, and the cosmic complement of a rabbit. Good luck, exolinguists!
Fig. 1. Stick this in your ear hole.
My recollection of Douglas Adams’s description of the Babelfish is that it fed off the brainwaves of the speaker and secreted telepathic translations into the brain of the listener. Regarding the ‘gavagai’ problem, that just kicks the problem upstairs: specifying determinate contents for alien brain states is not obviously easier than specifying determinate contents for their utterances.
However, perhaps one can appeal to a strategy outlined recently by Paul Churchland (Churchland, P. (2001). “Neurosemantics: On the Mapping of Minds and the Portrayal of Worlds.” In The Emergence of Mind, ed. K. E. White. Milan: Fondazione Carlo Elba, pp. 117–47). The gist of Churchland’s suggestion is that the neural activation spaces of distinct brains may be uniquely mapped to one another in spite of large differences between the brains’ fine-grained structure. This is alleged to provide an objective basis for measuring similarities of content in the respective neural representations.
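The basic idea behind mapping activation spaces can be sketched in a toy example. This is only a minimal illustration, not Churchland’s own formalism: it assumes that neural states can be treated as points in a metric space, and the “brains,” concept labels, and activation vectors below are invented for the purpose. Two systems whose individual states look nothing alike can still share the same within-space distance structure, and that shared second-order structure is what would underwrite the mapping.

```python
import math

def dist(u, v):
    """Euclidean distance between two activation vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def distance_matrix(states):
    """Pairwise distances among a brain's activation states."""
    return [[dist(u, v) for v in states] for u in states]

# Hypothetical brain A: four concept-states as activation vectors
# (all names and numbers are made up for illustration).
brain_a = {
    "rabbit":     (1.0, 0.0, 0.2),
    "rabbit-leg": (0.9, 0.1, 0.3),   # close to "rabbit"
    "carrot":     (0.0, 1.0, 0.5),
    "sky":        (0.0, 0.1, 2.0),
}

def rotate_xy(v, theta):
    """Rotate a vector in the xy-plane: a stand-in for 'different wiring'."""
    x, y, z = v
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y, z)

# Hypothetical brain B: every activation vector is rotated, so no
# individual state matches brain A's -- the fine-grained coding differs.
brain_b = {k: rotate_xy(v, 1.1) for k, v in brain_a.items()}

dm_a = distance_matrix(list(brain_a.values()))
dm_b = distance_matrix(list(brain_b.values()))

# Yet the two distance matrices agree entry by entry, because rotation
# preserves distances: the similarity *structure* is shared.
max_gap = max(abs(a - b)
              for row_a, row_b in zip(dm_a, dm_b)
              for a, b in zip(row_a, row_b))
print(max_gap < 1e-9)  # prints True
```

The rotation is of course a cartoon of “large differences in fine-grained structure”; the philosophical question is whether real brains, alien or otherwise, share enough second-order structure for a unique mapping to exist at all.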
Even if this Churchlandish proposal is correct, huge hurdles remain before it could be harnessed in the service of a Babelfish-esque technology. Scanning an alien brain and then adjusting my own to resemble it, and thus to token representations with similar contents, may suffice for me to think like an alien, but it wouldn’t suffice for me to have thereby translated the alien’s thoughts into my own. Consider: if someone zapped a monolingual English speaker with a ray that turned them into a monolingual Chinese speaker, the zapped speaker would be no closer than before to understanding how to translate Chinese into English.
Fig. 2. By the way, my Babelfish tells me that the cover of his book says “To Serve Man”. Nice!