Arguably, and it would be a tough argument to win if you took the other side, computers have had a greater impact on civilization than any other machine since the wheel. Sure, there was the steam engine, the automobile and the airplane, the printing press and the mechanical clock. Radios and televisions also made their share of societal waves. But look around. Computers do everything TVs and radios ever did. And computers tell time, control cars and planes, and have rendered printing presses pretty darn near obsolete. Computers have invaded every realm of life, from work to entertainment to medicine to education: Reading, writing and arithmetic are now all computer-centric activities. Every nook and cranny of human culture is controlled, colored or monitored by the digital computer. Yet merely 100 years ago, no such machine existed. In 1912, the word computer referred to people (typically women) using pencils and paper or adding machines.
Coincidentally, that was the year that Alan Turing was born. If you don’t like the way computers have taken over the world, you could blame him.
No one did more to build the foundation of computer science than Turing. In a paper published in 1936, he described the principle behind all of today’s computing devices, sketching out the theoretical blueprint for a machine able to implement instructions for making any calculation.
Turing didn’t invent the idea of a computer, of course. Charles Babbage had grand plans for a computing machine a century earlier (and even he had precursors). George Boole, not long after Babbage, developed the underlying binary mathematics (originally conceived much earlier by Gottfried Leibniz) that modern digital computers adopted. But it was Turing who combined ideas from abstract mathematical theory and concrete mechanical computation to describe precisely how, in principle, machines could emulate the human brain’s capacity for solving mathematical problems.
“Turing gave a brilliant demonstration that everything that can be reasonably said to be computed by a human computer using a fixed procedure can be computed by … a machine,” computer scientist Paul Vitányi writes in a recent paper (arxiv.org/abs/1201.1223).
Tragically, though, Turing didn’t live to see the computer takeover. He died a victim of prejudice and intolerance. His work lived on, though, and his name remains fixed both to the idealized machine he devised and to a practical test for machine intelligence, a test that foreshadowed powers that today’s computers have begun to attain.
Turing’s machine
Born in London on June 23, 1912, Turing grew up in an era when mathematics was in turmoil. Topics like the nature of infinity, set theory and the logic of axiomatic systems had recently commandeered the attention of — and confused — both practitioners and philosophers interested in the foundations of mathematics. Constructing an airtight logical basis for proving all mathematical truths had been established as the ultimate goal of mathematical inquiry.
But in 1931, Austrian logician Kurt Gödel dashed that hope, proving that some true statements could not be proved (within any mathematical system sufficiently complex to be good for anything). In other words, no system built on axioms could be both complete and internally consistent — you couldn’t prove all true statements about the system by deductions from its axioms.
A second deep question remained, though. Even if not all true statements can be proved, is there always a way to decide whether a given mathematical statement is provable or not?
Turing showed the answer to be “no.” He wasn’t the first to figure that out; as he was finishing his paper, the American logician Alonzo Church at Princeton published his own proof of such “undecidability.” Turing’s triumph was not in the priority of his claim, but rather in the creative way his proof was constructed. He proved the “no” answer by inventing his computer.
He didn’t actually build that computer (at first, anyway), nor did he seek a patent. He conceived a computational machine in his imagination — and outlined the essential principles by which it would work — to explore the limits of mathematics.
Turing’s machine was deceptive in its conceptual simplicity. Its basic design consisted of three parts: a limitless length of tape, marked off by squares on which symbols could be written; a read-write “head” that could inscribe symbols on the tape and decipher them; and a rule book to tell the machine what to do depending on what symbol the head saw on the tape.
These rules would tell the machine both how to respond to a given symbol and which rule to consult next. Suppose, for instance, the head detects a 1 on the tape. A possible rule might be to move one square to the left and write a 1; or move one square to the right and write a 0; or stay on that square, erase the 1 and leave the square blank. By following well-thought-out rules, such a mechanism could compute any number that could be computed (and write it as a string of 0s and 1s).
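That scheme is simple enough to capture in a few lines of code. Below is a minimal sketch of such a machine in Python; the state names, symbols and the little erase-the-1s rule book are illustrative inventions rather than anything Turing wrote down, but the three parts he described are all present: tape, head and rule book.

```python
# A minimal sketch of a Turing machine. The "tape" is a dictionary of
# square -> symbol (unwritten squares read as blank), the "head" is an
# integer position, and the "rule book" maps (state, symbol) to an action.

BLANK = " "

def run(rules, tape, state, head=0, max_steps=1000):
    """Follow the rule book until the machine enters the 'halt' state.
    Each rule has the form (state, symbol) -> (write, move, next_state),
    where move is -1 for left, 0 for stay, +1 for right."""
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, BLANK)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    # Read back the written portion of the tape, left to right.
    return "".join(tape[i] for i in sorted(tape)).strip()

# An illustrative rule book: scan right, erasing a run of 1s, then halt.
erase_ones = {
    ("scan", "1"):   ("0", +1, "scan"),    # saw a 1: overwrite with 0, move right
    ("scan", BLANK): (BLANK, 0, "halt"),   # ran off the end of the 1s: stop
}

tape = {0: "1", 1: "1", 2: "1"}            # the tape initially reads 111
print(run(erase_ones, tape, state="scan")) # -> 000
```

The tape is stored as a dictionary so it can grow without bound in either direction, standing in for Turing's limitless ribbon of squares.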
One of the prime consequences of Turing’s analysis was his conclusion that some numbers could not be computed. He adopted Gödel’s device of assigning a number to every possible mathematical statement and then showed that this inability to compute all numbers implied that the provability of some statements could not be decided. (And Turing showed that his proof of undecidability was also equivalent to Church’s more complicated proof.) Turing’s result was immediately recognized as exceptional by his professor at the University of Cambridge, who advised Turing to go to Princeton for graduate school and work with Church.
Turing’s imaginary computer (christened by Church the “Turing machine”) offered additional lessons for future computer scientists. Depending on the type of calculation you wanted to perform, you could choose from Turing machines with different sets of instructions. But, as Turing showed, you have no need for a roomful of machines. A portion of one computer’s tape could contain the rules describing the operations needed for carrying out any particular computation. In other words, you can just give that machine a rule book (today, you’d call it a program) that tells it what to do. Such a “universal Turing machine” could then be used to solve any problem that could be solved.
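The sketch above shows this in miniature: the run() loop never changes, only the rule book handed to it does. (A genuine universal Turing machine goes a step further and reads its rule book off the tape itself, but the spirit is the same.) Swapping in a different, equally hypothetical rule book turns the very same machine into a different computer:

```python
# A different rule book for the same run() machine sketched earlier:
# a hypothetical unary incrementer that appends a 1 to a run of 1s.
append_one = {
    ("scan", "1"):   ("1", +1, "scan"),    # skip over the existing 1s
    ("scan", BLANK): ("1", 0, "halt"),     # at the first blank: write a 1 and stop
}

tape = {0: "1", 1: "1", 2: "1"}            # the number 3, written in unary
print(run(append_one, tape, state="scan")) # -> 1111 (unary 4)
```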
During his time at Princeton, Turing discussed these ideas with the mathematician John von Neumann, who later articulated similar principles in describing the stored program general purpose computer, the model for digital computers ever since. Today’s computers, whether Macs or PCs or teraflop supercomputers, are all Turing machines.
“Von Neumann realized that Turing had achieved the goal of defining the notion of universal computing machine, and went on to think about practical implementations of this theoretical computer,” writes Miguel Angel Martín-Delgado of Universidad Complutense in Madrid in a recent paper (arxiv.org/abs/1110.0271).
Computing thoughts
Turing’s thoughts about his machine went well beyond the practicality of mixing math and mechanics. He was also entranced by the prospect of machines with minds.
To specify which rule or set of rules to follow, Turing assigned his machine a “state of mind.” More technically, he called that state a “configuration.” After each operation, the rules specified the machine’s configuration; the configuration in turn determined what rule the machine should implement next. For instance, in configuration “B,” if the head is positioned over a blank square, the instruction might be to write a 0 on the square, move one position to the right and then assume configuration C. In configuration C, a head positioned over a blank square might be instructed to write a 1, move one square to the right and then assume configuration A.
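In the earlier sketch, a configuration is simply the state label carried through each rule. The two configurations described above translate directly; configuration A is not spelled out in the passage, so the rule given for it below is an assumption, chosen only to skip a square and send the machine back to B:

```python
# Configurations B, C and A as a rule book for the run() sketch above.
# The rule for A is assumed (not specified in the text): leave the square
# blank, move right, and become configuration B again.
configurations = {
    ("B", BLANK): ("0", +1, "C"),    # in B over a blank: write 0, move right, become C
    ("C", BLANK): ("1", +1, "A"),    # in C over a blank: write 1, move right, become A
    ("A", BLANK): (BLANK, +1, "B"),  # assumed rule: skip a square, return to B
}

print(run(configurations, {}, state="B", max_steps=12))  # -> 01 01 01 01
```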
When Turing referred to the machine’s configuration as its “state of mind,” he really did consider it analogous to the state of mind of a human computer, using a notepad, pencil and rule book rather than tape, head and program. Turing’s imaginary machine demonstrated that the computing abilities of the person and the mechanical computer were identical. “What he had done,” wrote his biographer, Andrew Hodges, “was to combine … a naïve mechanistic picture of the mind with the precise logic of pure mathematics.”
Turing believed that people were machines — that the brain’s magic was nothing more than physics and chemistry “computing” thoughts and behaviors. Those views emerged explicitly years later, when he devised the now-famous test of artificial intelligence that goes by his name. To analyze whether machines can think, Turing argued, the question must be posed in a way that enables an empirical test. As commonly described, the Turing test involves a human posing questions to an unseen respondent, either another human or a computer programmed to pretend to be human. If the computer succeeds in deceiving the interrogator, then — by Turing’s criteria — it qualifies as intelligent.
Actually, Turing’s proposal was a bit more elaborate. First, the interrogator was to pose questions to two unseen humans — one man, one woman — and attempt to determine which one was which. After several trials, either the man or the woman was to be replaced by a computer and the game repeated, this time with the interrogator attempting to tell which respondent was human. If the interrogator succeeded no more often in this task than when the respondents were both human, then the machine passed the thinking test.
Since Turing’s paper appeared, in 1950, multiple objections to his test have been raised (some of which Turing anticipated and responded to in the paper). But the test nevertheless inspired generations of computer scientists to make their machines smart enough to defeat chess grandmasters and embarrass humans on Jeopardy! Today you can talk to your smartphone and get responses sufficiently humanlike to see that Turing was on to something. He even predicted a scenario similar to something you might see today on a TV commercial. “One day ladies will take their computers for walks in the park and tell each other ‘My little computer said such a funny thing this morning!’ ” he liked to say.
Turing seeded a future in which machines and people interact at a level that is often undeniably personal. But he was not around to participate in the realization of his imaginations. Four years after that paper on artificial intelligence appeared, he was dead.
A surviving vision
During World War II, Turing had been the key scientist in the British government’s code-breaking team. His work on cracking the German Enigma code was, of course, a secret at the time, but later was widely recognized as instrumental in the Allies’ defeat of Germany. After the war, Turing returned to computer science, eventually developing software for a sophisticated (at the time) programmable computer at the University of Manchester.
While in Manchester, he composed his paper on the artificial intelligence test. It was also there that he ran up against the lack of intelligence in the British criminal code. During a police investigation of a break-in at Turing’s home, he acknowledged that he knew the culprit’s accomplice from a homosexual encounter. And so Turing became the criminal, prosecuted for “gross indecency” under a law banning homosexual acts. Upon his conviction, Turing chose the penalty of chemical castration by hormone injection rather than serving a term in prison. His security clearance was revoked.
Two years later, Turing’s housecleaner found him dead in bed, a partly eaten apple at his bedside. It was officially ruled a suicide by cyanide. At the age of 41, the man who played the starring role in saving Western democracy from Hitler became the victim of a more disguised form of evil.
In his tragically truncated life, Turing peered more deeply into reality than most thinkers who had come before him. He saw the profound link between the symbolisms of mathematical abstraction and the concrete physical mechanisms of computations. He saw further how computational mechanisms could mimic the thought and intelligence previously associated only with biology. From his insights sprang an industry that invaded all other industries, and an outlook that today pervades all of society. Science itself is infused with Turing’s information-processing intuitions; computer science is not merely a branch of the scientific enterprise — it’s at the heart of the enterprise. Modern science reflects Turing’s vision. “He was,” wrote Hodges, “the Galileo of a new science.”