by Michio Kaku
But one day VIKI asks the key question: what is humanity’s greatest enemy? VIKI concludes mathematically that the worst enemy of humanity is humanity itself. Humanity has to be saved from its insane desire to pollute, unleash wars, and destroy the planet. The only way for VIKI to fulfill its central directive is to seize control of humanity and create a benign dictatorship of the machine. Humanity has to be enslaved to protect it from itself.
I, Robot poses these questions: Given the astronomically rapid advances in computer power, will machines one day take over? Can robots become so advanced that they become the ultimate threat to our existence?
Some scientists say no, because the very idea of artificial intelligence is silly. There is a chorus of critics who say that it is impossible to build machines that can think. The human brain, they argue, is the most complicated system that nature has ever created, at least in this part of the galaxy, and any machine designed to reproduce human thought is bound to fail. Philosopher John Searle of the University of California at Berkeley and even renowned physicist Roger Penrose of Oxford believe that machines are physically incapable of human thought. Colin McGinn of Rutgers University says that artificial intelligence “is like slugs trying to do Freudian psychoanalysis. They just don’t have the conceptual equipment.”
It is a question that has split the scientific community for over a century: can machines think?
THE HISTORY OF ARTIFICIAL INTELLIGENCE
The idea of mechanical beings has long fascinated inventors, engineers, mathematicians, and dreamers. From the Tin Man in The Wizard of Oz, to the childlike robots of Spielberg’s Artificial Intelligence: AI, to the murderous robots of The Terminator, the idea of machines that act and think like people has captivated us.
In Greek mythology the god Hephaestus (whom the Romans called Vulcan) forged mechanical handmaidens of gold and three-legged tables that could move under their own power. As early as 400 BC the Greek mathematician Archytas of Tarentum wrote about the possibility of making a robot bird propelled by steam power.
In the first century AD, Hero of Alexandria (credited with designing the first machine based on steam) designed automatons, one of them with the ability to talk, according to legend. Eight hundred years ago the engineer Al-Jazari designed and constructed automatic machines such as water clocks, kitchen appliances, and musical instruments powered by water.
In 1495 the great Renaissance Italian artist and scientist Leonardo da Vinci drew diagrams of a robot knight that could sit up, wave its arms, and move its head and jaw. Historians believe that this was the first realistic design of a humanoid machine.
The first crude but functioning robot was built in 1738 by Jacques de Vaucanson, who made an android that could play the flute, as well as a mechanical duck.
The word “robot” comes from the 1920 Czech play R.U.R. by playwright Karel Capek (it derives from robota, which means “drudgery” in Czech and “labor” in Slovak). In the play a factory called Rossum’s Universal Robots creates an army of robots to perform menial labor. (Unlike ordinary machines, however, these robots are made of flesh and blood.) Eventually the world economy becomes dependent on these robots. But the robots are badly mistreated and finally rebel against their human masters, killing them off. In their rage, however, the robots kill all the scientists who can repair and create new robots, thereby dooming themselves to extinction. In the end, two special robots discover that they have the ability to reproduce and the potential to become a new robot Adam and Eve.
Robots were also the subject of Metropolis, one of the most expensive silent movies ever made, directed by Fritz Lang in 1927 in Germany. The story is set in the year 2026, and the working class has been condemned to work underground in wretched, squalid factories, while the ruling elite play aboveground. A beautiful woman, Maria, has earned the trust of the workers, but the ruling elite fear that one day she might lead them to revolt. So they ask an evil scientist to make a robot copy of Maria. Eventually, the plot backfires because the robot leads the workers to revolt against the ruling elite and bring about the collapse of the social system.
Artificial intelligence, or AI, is different from the technologies we have discussed so far in that the fundamental laws that underpin it are still poorly understood. Although physicists have a good understanding of Newtonian mechanics, Maxwell’s theory of light, relativity, and the quantum theory of atoms and molecules, the basic laws of intelligence are still shrouded in mystery. The Newton of AI probably has not yet been born.
But mathematicians and computer scientists remain undaunted. To them it is only a matter of time before a thinking machine walks out of the laboratory.
The most influential person in the field of AI, a visionary who helped to lay the cornerstone of AI research, was the great British mathematician Alan Turing.
It was Turing who laid the groundwork of the entire computer revolution. He visualized a machine (since called the Turing machine) that consisted of just three elements: an input tape, an output tape, and a central processor (such as a Pentium chip) that could perform a precise set of operations. From this he was able to codify the laws of computing machines and precisely determine their ultimate power and limitations. Today all digital computers obey the rigorous laws laid down by Turing. The architecture of the entire digital world owes a great debt to Turing.
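To make the idea concrete, here is a minimal sketch of a Turing-style machine in Python. The rule table, tape alphabet, and the little example program (a unary incrementer) are illustrative choices, not anything drawn from Turing’s paper; the sketch simulates the essentials of his scheme, a tape, a read/write head, and a finite table of rules.

```python
# A minimal Turing-machine simulator: a tape, a read/write head, and a
# finite rule table mapping (state, symbol) -> (new symbol, move, new state).
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape indexed by head position
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example program: append one '1' to a block of 1s (unary increment).
rules = {
    ("start", "1"): ("1", "R", "start"),  # skip over the existing 1s
    ("start", "_"): ("1", "R", "halt"),   # write a 1 on the first blank, halt
}
print(run_turing_machine(rules, "111"))  # prints "1111"
```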
Turing also contributed to the foundation of mathematical logic. In 1931 the Viennese mathematician Kurt Gödel shocked the world of mathematics by proving that there are true statements in arithmetic that can never be proven within the axioms of arithmetic. (For example, the Goldbach conjecture of 1742 [that any even integer greater than two can be written as the sum of two prime numbers] is still unproven after over two and a half centuries, and may in fact be unprovable.) Gödel’s revelation shattered the two-thousand-year-old dream, dating back to the Greeks, of proving all true statements in mathematics. Gödel showed that there will always be true statements in mathematics that are just beyond our reach. Mathematics, far from being the complete and perfect edifice dreamed of by the Greeks, was shown to be incomplete.
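The Goldbach conjecture itself illustrates the gap between checking and proving: a few lines of Python can verify it for every even number up to some bound, but no amount of such checking amounts to a proof covering the infinitely many cases beyond. A minimal brute-force check (the function names are illustrative):

```python
def is_prime(n):
    # trial division; fine for small numbers
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n):
    # return one pair of primes summing to the even number n, if any exists
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

# Every even number from 4 to 10,000 has a pair -- evidence, not proof.
assert all(goldbach_pair(n) for n in range(4, 10001, 2))
print(goldbach_pair(100))  # (3, 97)
```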
Turing added to this revolution by showing that it is impossible to know, in general, whether a Turing machine will ever finish a given computation or grind on forever. And if a computer would take an infinite amount of time to compute something, then whatever you are asking it to compute is not computable. Thus Turing proved that there are true statements in mathematics that are incomputable, that is, forever beyond the reach of computers, no matter how powerful.
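Turing’s argument can be paraphrased in modern code. Suppose, for the sake of contradiction, that someone handed us a perfect function halts(program, data) that always answers correctly. The self-referential sketch below (the names are illustrative, and the oracle is deliberately left unimplemented) would defeat any such function, so none can exist:

```python
def halts(program, data):
    """Hypothetical oracle: True iff program(data) eventually halts.
    Turing proved that no such function can ever be written."""
    raise NotImplementedError

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts about the
    # program when it is fed its own source.
    if halts(program, program):
        while True:    # oracle said "halts", so loop forever
            pass
    return "done"      # oracle said "loops forever", so halt at once

# troublemaker(troublemaker) contradicts the oracle either way: whichever
# answer halts() gives about it is wrong. The halting problem is therefore
# undecidable, and some mathematical truths are forever incomputable.
```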
During World War II, Turing’s pioneering work on code breaking arguably saved the lives of thousands of Allied troops and influenced the outcome of the war. The Allies were unable to decode the secret Nazi messages encrypted by a machine called the Enigma, so Turing and his colleagues were asked to build a machine that could break the code. Turing’s machine was called the “bombe” and was ultimately successful. Over two hundred of his machines were in operation by the end of the war. As a result the Allies could read secret Nazi transmissions and hence fool the Nazis about the date and place of the D-Day landings. Historians have since debated precisely how pivotal Turing’s work was in the planning of the invasion of Normandy, which finally led to Germany’s defeat. (After the war, Turing’s work was classified by the British government; as a result, his pivotal contributions were unknown to the public.)
Instead of being hailed as a war hero who helped turn the tide of World War II, Turing was hounded to death. One day his home was burglarized, and he called the police. Unfortunately, the police found evidence of his homosexuality and arrested him. Turing was then ordered by the court to be injected with sex hormones, which had disastrous effects, causing him to grow breasts and inflicting great mental anguish. He committed suicide in 1954 by eating an apple laced with cyanide. (According to one rumor, the logo of Apple Computer, an apple with a bite taken out of it, pays homage to Turing.)
Today, Turing is probably best known for his “Turing test.” Tired of all the fruitless, endless philosophical discussion about whether machines can “think” and whether they have a “soul,” he tried to introduce rigor and precision into discussions about artificial intelligence by devising a concrete test. Place a human and a machine in two sealed boxes, he suggested. You are allowed to address questions to each box. If you are unable to tell the difference between the responses of the human and the machine, then the machine has passed the “Turing test.”
Scientists have written simple computer programs, such as ELIZA, that can mimic conversational speech and hence fool most unsuspecting people into believing they are speaking to a human. (Most human conversations, for example, use only a few hundred words and concentrate on a handful of topics.) But so far no computer program has been written that can fool people who are specifically trying to determine which box contains the human and which contains the machine. (Turing himself conjectured that by the year 2000, given the exponential growth of computer power, a machine could be built that would fool 30 percent of the judges in a five-minute test.)
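Programs in ELIZA’s lineage work by keyword spotting and canned templates, with no understanding anywhere in the loop. A toy sketch of the technique in Python (the patterns below are invented for illustration, not ELIZA’s actual script):

```python
import re

# A toy ELIZA-style responder: spot a keyword pattern, then echo the
# user's own words back inside a canned template. Nothing is "understood".
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
    (r"\b(always|never)\b", "Can you think of a specific example?"),
]

def respond(utterance):
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when no pattern matches

print(respond("I feel lonely today"))    # Why do you feel lonely today?
print(respond("It is about my mother"))  # Tell me more about your mother.
```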
A small army of philosophers and theologians has declared that it is impossible to create true robots that can think like us. John Searle, a philosopher at the University of California at Berkeley, proposed the “Chinese room test” to prove that AI is not possible. In essence, Searle argues that while robots may be able to pass certain forms of the Turing test, they can do so only because they blindly manipulate symbols without the slightest understanding of what they mean.
Imagine that you are sitting inside a sealed room and you don’t understand a word of Chinese. Assume you have a rule book that allows you to rapidly look up Chinese characters and manipulate them. If a person passes you a question in Chinese, you merely manipulate these strange-looking characters according to the book, without understanding what they mean, and pass back credible answers.
The essence of his criticism boils down to the difference between syntax and semantics. Robots can master the syntax of a language (e.g., manipulating its grammar, its formal structure, etc.) but not its true semantics (e.g., what the words mean). Robots can manipulate words without understanding what they mean. (This is somewhat similar to talking on the phone to an automatic voice message machine, where you have to punch in “one,” “two,” etc., for each response. The voice at the other end is perfectly capable of digesting your numerical responses, but is totally lacking in any understanding.)
Physicist Roger Penrose of Oxford, too, believes that artificial intelligence is impossible; mechanical beings that can think and possess human consciousness are impossible according to the laws of the quantum theory. The human brain, he claims, is so far beyond any possible creation of the laboratory that creating humanlike robots is an experiment that is doomed to fail. (He argues that in the same way that Gödel’s incompleteness theorem proved that arithmetic is incomplete, the Heisenberg uncertainty principle will prove that machines are incapable of human thought.)
Many physicists and engineers, however, believe that there is nothing in the laws of physics that would prevent the creation of a true robot. For example, Claude Shannon, often called the father of information theory, was once asked the question “Can machines think?” His reply was “Sure.” When he was asked to clarify that comment, he said, “I think, don’t I?” In other words, it was obvious to him that machines can think because humans are machines (albeit ones made of wetware rather than hardware).
Because we see robots depicted in the movies, we may think the development of sophisticated robots with artificial intelligence is just around the corner. The reality is much different. When you see a robot act like a human, usually there is a trick involved, that is, a man hidden in the shadows who talks through the robot via a microphone, like the Wizard in The Wizard of Oz. In fact, our most advanced robots, such as the robot rovers on the planet Mars, have the intelligence of an insect. At MIT’s famed Artificial Intelligence Laboratory, experimental robots have difficulty duplicating feats that even cockroaches can perform, such as maneuvering around a room full of furniture, finding hiding places, and recognizing danger. No robot on Earth can understand a simple children’s story that is read to it.
In the movie 2001: A Space Odyssey, it was incorrectly assumed that by 2001 we would have HAL, the super-robot that can pilot a spaceship to Jupiter, chat with crew members, repair problems, and act almost human.
THE TOP-DOWN APPROACH
There are at least two major problems scientists have been facing for decades that have impeded their efforts to create robots: pattern recognition and common sense. Robots can see much better than we can, but they don’t understand what they see. Robots can also hear much better than we can, but they don’t understand what they hear.
To attack these twin problems, researchers have tried to use the “top-down approach” to artificial intelligence (sometimes called the “formalist” school or GOFAI, for “good old-fashioned AI”). Their goal, roughly speaking, has been to program all the rules of pattern recognition and common sense on a single CD. By inserting this CD into a computer, they believe, the computer would suddenly become self-aware and attain humanlike intelligence. In the 1950s and 1960s great progress was made in this direction, with the creation of robots that could play checkers and chess, do algebra, pick up blocks, and so forth. Progress was so spectacular that predictions were made that in a few years robots would surpass humans in intelligence.
At the Stanford Research Institute in 1969, for example, the robot SHAKEY created a media sensation. SHAKEY was a small PDP computer placed above a set of wheels with a camera on top. The camera was able to survey a room, and the computer would analyze and identify the objects in that room and try to navigate around them. SHAKEY was the first mechanical automaton that could navigate in the “real world,” prompting journalists to speculate about when robots would leave humans in the dust.
But the shortcomings of such robots soon became obvious. The top-down approach to artificial intelligence resulted in huge, clumsy robots that took hours to navigate across a special room that contained only objects with straight lines, that is, squares and triangles. If you placed irregularly shaped furniture in the room the robot would be powerless to recognize it. (Ironically, a fruit fly, with a brain containing only about 250,000 neurons and a fraction of the computing power of these robots, can effortlessly navigate in three dimensions, executing dazzling loop-the-loop maneuvers, while these lumbering robots get lost in two dimensions.)
The top-down approach soon hit a brick wall. Steve Grand, director of the Cyberlife Institute, says that approaches like this “had fifty years to prove themselves and haven’t exactly lived up to their promise.”
In the 1960s scientists did not fully appreciate the enormity of the work involved in programming robots to accomplish even simple tasks, such as identifying objects like keys, shoes, and cups. As Rodney Brooks of MIT said, “Forty years ago the Artificial Intelligence Laboratory at MIT appointed an undergraduate to solve it over the summer. He failed, and I failed on the same problem in my 1981 Ph.D. thesis.” In fact, AI researchers still cannot solve this problem.
For example, when we enter a room, we immediately recognize the floor, chairs, furniture, tables, and so forth. But when a robot scans a room it sees nothing but a vast collection of straight and curved lines, which it converts to pixels. It takes an enormous amount of computer time to make sense out of this jumble of lines. It might take us a fraction of a second to recognize a table, but a computer sees only a collection of circles, ovals, spirals, straight lines, curly lines, corners, and so forth. After an enormous amount of computing time, a robot might finally recognize the object as a table. But if you rotate the image, the computer has to start all over again. In other words, robots can see, and in fact they can see much better than humans, but they don’t understand what they are seeing. Upon entering a room, a robot would see only a jumble of lines and curves, not chairs, tables, and lamps.
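One way to see why rotation is so punishing: the most naive recognition scheme, comparing an image pixel by pixel against a stored template, collapses the moment the object turns. A small illustration in Python (the tiny binary “images” are made up for the example; real vision systems are far more elaborate, but the brittleness is the point):

```python
import numpy as np

# A stored "template" for an object, as a tiny binary image: a crude "T".
template = np.array([
    [1, 1, 1],
    [0, 1, 0],
    [0, 1, 0],
])

def similarity(image, template):
    # fraction of pixels that agree: the crudest possible matcher
    return float(np.mean(image == template))

rotated = np.rot90(template)  # the very same object, turned 90 degrees

print(similarity(template, template))  # 1.0, a perfect match
print(similarity(rotated, template))   # about 0.33, though nothing changed
```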
Our brain unconsciously recognizes objects by performing trillions upon trillions of calculations when we walk into a room, an activity we are blissfully unaware of. The reason we are unaware of everything our brain is doing is evolution. If we were alone in the forest with a charging saber-toothed tiger, we would be paralyzed if we were aware of all the computations necessary to recognize the danger and plan an escape. For the sake of survival, all we need to know is how to run. When we lived in the jungle, it simply was not necessary for us to be aware of all the ins and outs of how our brain recognizes the ground, the sky, the trees, the rocks, and so forth.
In other words, the way our brain works can be compared to a huge iceberg. We are aware of only the tip of the iceberg, the conscious mind. But lurking below the surface, hidden from view, is a much larger object, the unconscious mind, which consumes vast amounts of the brain’s “computer power” to understand simple things surrounding it, such as figuring out where you are, whom you are talking to, and what lies around you. All this is done automatically without our permission or knowledge.
This is the reason that robots cannot navigate across a room, read handwriting, drive trucks and cars, pick up garbage, and so forth. The U.S. military has spent hundreds of millions of dollars trying to develop mechanical soldiers and intelligent trucks, without success.
Scientists began to realize that playing chess or multiplying huge numbers required only a tiny, narrow sliver of human intelligence. When the IBM computer Deep Blue beat world chess champion Garry Kasparov in a six-game match in 1997, it was a victory of raw computer power, but the experiment told us nothing about intelligence or consciousness, although the game made plenty of headlines. As Douglas Hofstadter, a cognitive scientist at Indiana University, said, “My God, I used to think chess required thought. Now, I realize it doesn’t. It doesn’t mean Kasparov isn’t a deep thinker, just that you can bypass deep thinking in playing chess, the way you can fly without flapping your wings.”