The Turing Centenary Conference was held this June at the University of Cambridge. Cambridge had special significance for Alan Turing: he spent his formative undergraduate years there and returned shortly after completing his PhD. The conference brought together many well-known researchers from theoretical computer science, mathematical biology, and philosophy — a fitting celebration of the diversity of Turing's interests. One strand of ideas arising from the conference I found particularly interesting: the definition of intelligence.

Defining intelligence

What is intelligence? This is, of course, an extremely tricky question. Turing's contribution was to propose an operational definition that is consistent with our intuitions about intelligence. The Turing test holds that if a computer can convince a human questioner that it is human, then it is intelligent. The basic assumption behind this test is that verbal behavior is the hallmark of intelligence, an idea advocated by many thinkers, including Descartes. Other philosophers, such as Ned Block, argue that even if a machine passes the test, it is still not necessarily intelligent. Consider the following "Aunt Bertha" machine [1]. There are only a finite number of sentences that any human is likely to use in conversation, and in a given amount of time a human can utter only a limited number of them. The conversing machine could therefore store a giant lookup table in its memory, containing a reasonable reply for every possible sequence of input sentences. The Aunt Bertha machine would be capable of holding a sensible conversation, since it knows what to say in every situation. Yet it cannot be called "intelligent" in any sense we would intuitively accept.
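The lookup-table idea can be made concrete with a tiny sketch. Here the table maps an entire conversation history to a canned reply; the sentences, the table contents, and the fallback reply are all hypothetical illustrations, not part of Block's argument.

```python
# A toy "Aunt Bertha" machine: conversation reduced to a lookup table.
# The key is the whole conversation so far; the value is the canned reply.
# All sentences here are made-up examples.

lookup_table = {
    (): "Hello, dear! How are you?",
    ("Hello, dear! How are you?", "I'm fine, thanks."): "Lovely weather today.",
    ("Hello, dear! How are you?", "Not great."): "Oh dear, what's wrong?",
}

def aunt_bertha_reply(history):
    """Return the canned reply stored for this exact conversation history."""
    return lookup_table.get(tuple(history), "How interesting! Tell me more.")

history = []
history.append(aunt_bertha_reply(history))   # machine opens the conversation
history.append("I'm fine, thanks.")          # human replies
print(aunt_bertha_reply(history))            # -> Lovely weather today.
```

A real Aunt Bertha machine would need one table entry for every possible conversation, which is exactly where the storage argument below bites.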

A crucial feature, and flaw, of the Aunt Bertha machine is that it requires astronomically large storage capacity. In English there are at least 1000 sentences that people are likely to use in a conversation. In a half-hour conversation, one is likely to utter at least 30 sentences (one per minute). So the total number of combinations is truly astronomical: 1000^30, or 1 followed by 90 zeros! A typical computer has less than 10^12 bytes of memory. All the computers in the world combined could store only a minuscule fraction of the verbal lookup table. This gives us a way out of the Turing test's limitations: we add the requirement that the machine must have physically realistic memory capacity. This is the Neo-Turing test, and the hypothetical Aunt Bertha machine fails it.
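The arithmetic above is easy to check directly. This sketch just reproduces the estimate in the text; the per-entry size of one byte is a deliberately generous simplification.

```python
# Back-of-the-envelope check of the Aunt Bertha storage estimate.
sentences = 1000          # plausible sentences in conversational English
turns = 30                # sentences uttered in a half-hour conversation

table_entries = sentences ** turns     # 1000^30 = 10^90 possible conversations
print(table_entries == 10 ** 90)       # True

# Even at one byte per entry, the table dwarfs all memory on Earth:
# a typical computer holds under 10^12 bytes.
typical_computer = 10 ** 12
print(table_entries // typical_computer)   # ~10^78 computers' worth of storage
```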

One idea I discussed with other conference attendees is using energy expenditure as a metric of intelligence [2]. Energy expenditure reflects memory storage, since writing to and searching through a large database costs energy. Imagine a series of devices (machines or living organisms) that perform equally well at a given task, such as speech, solving a puzzle, or drawing. To determine their relative intelligence, one could measure the total energy each expends, for example by metering the electricity supplied to a machine. If most of the devices rely on unintelligent brute force, the average energy expenditure will be high. But if a particular device uses significantly less energy to attain a similar result, then it can be considered "intelligent" for that task. If the task is sufficiently difficult, like understanding a language, or spans a diverse range of topics, then we say the device has "general intelligence". By this criterion, question-and-answer machines such as IBM's Watson, which beat some of the world's best Jeopardy! players, are not intelligent, since they use a tremendous amount of energy to achieve something humans can do almost as well with far less. Likewise, the chess-playing Deep Blue is not generally intelligent. Reassuringly, humans are intelligent by this energy metric, since our metabolic requirements are comparable to those of other, less intellectually productive animals.
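One simple way to operationalize this metric: among devices that all succeed at a task, flag those that spend far less energy than the group average. The device names, energy figures, and the factor-of-ten threshold below are all made-up illustrations, not measurements from the discussion.

```python
# Sketch of the energy metric of intelligence. Among devices that perform
# a task equally well, a device is "intelligent" for that task if its
# energy use is far below the group mean. All numbers are hypothetical.

devices = {
    "lookup-table machine": 5.0e6,   # joules spent on the task
    "brute-force searcher": 2.0e6,
    "human":                1.0e3,
}

def intelligent_devices(energy_by_device, factor=10.0):
    """Return devices using at least `factor` times less energy than the mean."""
    mean = sum(energy_by_device.values()) / len(energy_by_device)
    return [name for name, e in energy_by_device.items() if e * factor <= mean]

print(intelligent_devices(devices))   # -> ['human']
```

The threshold `factor` is a free parameter; any serious version of the metric would also have to control for how well each device performs the task, not just whether it succeeds.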

Intelligent slime mold?

The project I presented at the conference is connected to these topics of intelligence and computation [3]. It involved using a simple, microscopic organism, the slime mold, to solve puzzles that can be hard even for humans, and our work was to analyze mathematically how the slime mold solves them. Some of my colleagues performed the following experiment [4]: place individual corn flakes (a food the slime mold likes to eat) on a table so that their positions mimic the locations of cities around Tokyo. A single slime mold cell is placed in the middle and allowed to grow. Remarkably, after 24 hours the slime mold grows into a network of paths that is close to the most efficient transportation network connecting these cities. If you want to travel between these cities while minimizing driving time, you should follow the slime mold rather than the human engineers! Does this make slime mold intelligent?

James Zou is a PhD student at the Harvard School of Engineering and Applied Sciences.


[1] Stuart Shieber. The Turing Test: Verbal Behavior as the Hallmark of Intelligence. MIT Press, 2004.

[2] Personal communication.

[3] Anders Johansson and James Zou. A slime mold solver for linear programming problems. Lecture Notes in Computer Science, 2012, 7318.

[4] Tero A, Takagi S, Saigusa T, Ito K, Bebber D, Fricker M, et al. Rules for biologically inspired adaptive network design. Science, 2010, 327:439-442.
