by Michio Kaku
The flawed suppositions underlying the efforts of the Dartmouth researchers sixty years ago are haunting the field today. The brain is not a digital computer. It has no programming, no CPU, no Pentium chip, no subroutines, and no coding. If you remove one transistor, a computer will likely crash. But if you remove half the human brain, it can still function.
Nature accomplishes miracles of computation by organizing the brain as a neural network, a learning machine. Your laptop never learns—it is just as dumb today as it was yesterday or last year. But the human brain literally rewires itself after learning any task. That is why babies babble before they learn a language and why we swerve before we learn to ride a bicycle. Neural nets gradually improve by constant repetition, following Hebb’s rule, which states that the more you perform a task, the more the neural pathways for that task are reinforced. As the saying in neuroscience goes, neurons that fire together wire together. You may have heard the old joke that begins, “How do you get to Carnegie Hall?” Neural nets explain the answer: practice, practice, practice.
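To make Hebb's rule concrete, here is a minimal sketch in Python (the learning rate, network size, and activity patterns are arbitrary choices for illustration, not drawn from the text): a connection is strengthened every time the two neurons it links are active together, so sheer repetition steadily reinforces the pathway.

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.01):
    """Hebb's rule: strengthen a connection whenever the presynaptic
    and postsynaptic neurons fire together."""
    # Weight w[i][j] grows by lr * post[i] * pre[j]: nonzero only
    # when both neurons are active at the same time.
    return weights + lr * np.outer(post, pre)

# Two input neurons feeding two output neurons, initially unconnected.
w = np.zeros((2, 2))

# "Practice, practice, practice": repeat the same activity pattern.
pre = np.array([1.0, 0.0])   # input neuron 0 fires, neuron 1 is silent
post = np.array([1.0, 0.0])  # output neuron 0 fires in response

for _ in range(100):
    w = hebbian_update(w, pre, post)

print(w)
# Only the pathway between the co-active pair has been reinforced:
# [[1. 0.]
#  [0. 0.]]
```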
For example, hikers know that if a certain trail is well-worn, it means that many hikers took that path, and that path is probably the best one to take. The correct path gets reinforced each time you take it. Likewise, the neural pathway of a certain behavior gets reinforced the more often you activate it.
This is important because learning machines will be the key to space exploration. Robots in outer space will continually confront new and ever-changing dangers, encountering scenarios that scientists cannot even conceive of today. A robot programmed to handle only a fixed set of emergencies will be useless, because fate will throw the unexpected at it. For example, a mouse cannot possibly have every scenario encoded in its genes, because the total number of situations it could face is infinite, while its number of genes is finite.
Say that a meteor shower hits a base on Mars, damaging numerous buildings. Robots that use neural networks can learn by handling these unexpected situations, getting better with each one. But traditional top-down robots would be paralyzed in an unforeseen emergency.
Many of these ideas were incorporated into research by Rodney Brooks, former director of MIT’s renowned AI Laboratory. During our interview, he marveled that a simple mosquito, with a microscopic brain consisting of a hundred thousand neurons, could fly effortlessly in three dimensions, but that endlessly intricate computer programs were necessary to control a simple walking robot that might still stumble. He has pioneered a new approach with his “bugbots” and “insectoids,” robots that learn to move like insects on six legs. They often fall over in the beginning but get better and better with each attempt and gradually succeed in coordinating their legs like real bugs.
The process of simulating large, many-layered neural networks on a computer is known as deep learning. As this technology continues to develop, it may revolutionize a number of industries. In the future, when you want to talk to a doctor or lawyer, you might talk to your intelligent wall or wristwatch and ask for Robo-Doc or Robo-Lawyer, software programs that will be able to scan the internet and provide sound medical or legal advice. These programs would learn from repeated questions and get better and better at responding to—and perhaps even anticipating—your particular needs.
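As an illustration of what "learning from repetition" means here, the following sketch trains a tiny two-layer network on the XOR problem by gradient descent. Everything in it, including the network size, learning rate, and iteration count, is an arbitrary choice for demonstration; real deep-learning systems scale this same mechanism up enormously.

```python
import numpy as np

rng = np.random.default_rng(0)

# The XOR problem: a task no single-layer network can solve, but one
# that a network with a hidden layer learns through sheer repetition.
X = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]], dtype=float)  # third column = constant bias input
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(3, 8))  # inputs -> 8 hidden units
W2 = rng.normal(size=(8, 1))  # hidden units -> 1 output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass: compute the network's current answers.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: nudge every weight to shrink the error a little.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_h

print(out.round(2))  # after training, close to [[0], [1], [1], [0]]
```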
Deep learning may also lead the way to the automatons we will need in space. In the coming decades, the top-down and bottom-up approaches may be integrated, so that robots can be seeded with some knowledge from the beginning but can also operate and learn via neural networks. Like humans, they would be able to learn from experience until they master pattern recognition, which would allow them to move tools in three dimensions, and common sense, which would enable them to handle new situations. They would become crucial to building and maintaining settlements on Mars, throughout the solar system, and beyond.
Different robots will be designed to handle specific tasks. Robots that can learn to swim in the sewer system, looking for leaks and breaks, will resemble a snake. Robots that are superstrong will learn how to do all the heavy lifting at construction sites. Drone robots, which might look like birds, will learn how to analyze and survey alien terrain. Robots that can learn how to explore underground lava tubes may resemble a spider because multilegged creatures are very stable when moving over rugged terrain. Robots that can learn how to roam over the ice caps of Mars may look like intelligent snowmobiles. Robots that can learn how to swim in the oceans of Europa and grab objects may look like an octopus.
To explore outer space, we need robots that can learn both by bumping into the environment over time and by accepting information that is fed directly to them.
However, even this advanced level of artificial intelligence may not be sufficient if we want robots to assemble entire metropolises on their own. The ultimate challenge of robotics would be to create machines that can reproduce and that have self-awareness.
SELF-REPLICATING ROBOTS
I first learned about self-replication as a child. A biology book I read explained that viruses grow by hijacking our cells to produce copies of themselves, while bacteria grow by splitting and replicating. Left unchecked over months or years, a bacterial colony can reach truly staggering proportions, its mass in principle rivaling that of the planet Earth.
In the beginning, the possibility of unchecked self-replication seemed preposterous to me, but later it began to make sense. A virus, after all, is nothing but a large molecule that can reproduce itself. But a handful of these molecules, deposited in your nose, can give you a cold within a week. A single molecule can quickly multiply into trillions of copies of itself—enough to make you sneeze. In fact, we all start life as a single fertilized egg cell in our mother, much too small to be seen by the naked eye. But within a short nine months, this tiny cell becomes a human being. So even human life depends on the exponential growth of cells.
That is the power of self-replication, which is the basis of life itself. And the secret of self-replication lies in the DNA molecule. Two capabilities separate this miraculous molecule from all others: first, it can contain vast amounts of information, and second, it can reproduce. But machines may be able to simulate these features as well.
The idea of self-replicating machines is actually as old as the concept of evolution itself. Soon after Darwin published his watershed book On the Origin of Species, Samuel Butler wrote an article entitled “Darwin Among the Machines,” in which he speculated that one day machines would also reproduce and start to evolve according to Darwin’s theory.
John von Neumann, who pioneered several new branches of mathematics including game theory, attempted to create a mathematical approach to self-replicating machines back in the 1940s and 1950s. He began with the question, “What is the smallest self-replicating machine?” and divided the problem into several steps. For example, a first step might be to gather a large bin of building blocks (think of a pile of Lego blocks of various standardized shapes). Then, you would need to create an assembler that could take two blocks and join them together. Third, you would write a program that could tell the assembler which parts to join and in what order. This last step would be pivotal. Anyone who has ever played with toy blocks knows that one can build the most elaborate and sophisticated structure from very few parts—as long as they’re put together correctly. Von Neumann wanted to determine the smallest number of operations that an assembler would need to make a copy of itself.
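To see the logic of von Neumann's scheme, here is a toy sketch in code (the block names, data layout, and function names are purely illustrative): a machine is a blueprint plus a body built from that blueprint, and the assembler installs a copy of the blueprint into every machine it builds, so each copy can replicate in turn.

```python
STANDARD_BLOCKS = {"gear", "rod", "plate"}  # the bin of standardized parts

def assemble(machine):
    """The assembler: reads a machine's blueprint, joins blocks from
    the bin in the prescribed order, and installs a copy of the
    blueprint itself so the new machine can replicate in turn."""
    blueprint = machine["blueprint"]
    assert all(block in STANDARD_BLOCKS for block in blueprint)
    return {"blueprint": list(blueprint), "body": list(blueprint)}

# The seed machine carries the instructions for building itself.
seed = {"blueprint": ["plate", "gear", "gear", "rod"], "body": []}
child = assemble(seed)
grandchild = assemble(child)

print(grandchild == child)  # True: every generation is a faithful copy
```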
Von Neumann eventually gave up this particular project. It depended on a variety of arbitrary assumptions, including precisely how many blocks were being used and what their shapes were, and was therefore difficult to analyze mathematically.
SELF-REPLICATING ROBOTS IN SPACE
The next push for self-replicating robots came in 1980, when NASA spearheaded a study called Advanced Automation for Space Missions. The study report concluded that self-replicating robots would be crucial to building lunar settlements and identified at least three types of robots that would be needed. Mining robots would collect basic raw materials, construction robots would melt and refine the materials and assemble new parts, and repair robots would mend and maintain themselves and their colleagues without human intervention. The report also presented a vision of how the robots might operate autonomously. Like intelligent carts equipped with either grabbing hooks or a bulldozer shovel, the robots could travel along a series of rails, transporting resources and processing them into the desired form.
The study had one great advantage, thanks to its fortuitous timing. It was conducted shortly after astronauts had brought back hundreds of pounds of moon rock, and we had learned that its metal, silicon, and oxygen content was almost identical to that of Earth rock. Much of the moon's crust is made of regolith, a mixture of lunar bedrock, ancient lava flows, and debris left over from meteor impacts. With this information, NASA scientists could begin to develop more concrete, realistic plans for factories on the moon that would manufacture self-replicating robots out of lunar materials. Their report detailed the possibility of mining and then smelting regolith to extract usable metals.
After this study, progress with self-replicating machines went dark for many decades as people’s enthusiasm waned. But now that there is renewed interest in going back to the moon and in reaching the Red Planet, the whole concept is being reexamined. For example, an application of these ideas to a Mars settlement might proceed as follows. We would first have to survey the desert and draw up a blueprint for the factory. We would then drill holes into the rock and dirt and detonate explosive charges in each hole. Loose rock and debris would be excavated by bulldozers and mechanical shovels to ensure a level foundation. The rocks would be pulverized, milled into small pebbles, and fed into a smelting oven powered by microwaves, which would melt the soil and allow the liquid metals to be isolated and extracted. The metals would be separated into purified ingots and then processed and made into wires, cables, beams, and more—the essential building blocks of any structure. In this way, a robot factory could be made on Mars. Once the first robots are manufactured, they can then be allowed to take over the factory and continue to create more robots.
The technology available at the time of the NASA report was limited, but we have come a long way since then. One promising development for robotics is the 3-D printer. Computers can now guide the precise flow of streams of plastic and metals to produce, layer by layer, machine parts of exquisite complexity. The technology of 3-D printing is so advanced that it can actually create human tissue by shooting human cells one by one out of a microscopic nozzle. For an episode of a Discovery Channel documentary I once hosted, I placed my own face in one of these machines. Laser beams quickly scanned my face and recorded their findings on a laptop. This information was fed into a printer, which meticulously dispensed liquid plastic from a tiny spout. Within about thirty minutes, I had a plastic mask of my own face. Later, the printer scanned my entire body and then, within a few hours, produced a plastic action figure that looked just like me. So in the future, we will be able to take our place next to Superman in our collections of action figures. The 3-D printers of the future might be able to re-create the delicate tissues that constitute functioning organs or the machine parts necessary to make a self-replicating robot. They might also be connected to the robot factories, so that molten metals might be directly fashioned into more robots.
The first self-replicating robot on Mars will be the most difficult one to produce. The process would require exporting huge shipments of manufacturing equipment to the Red Planet. But once the initial robot is constructed, it could be left alone to generate a copy of itself. Then two robots would make copies of themselves, resulting in four robots. With this exponential growth of robots, we could soon have a fleet large enough to do the work of altering the desert landscape. They would mine the soil, construct new factories, and make unlimited copies of themselves cheaply and efficiently. They could create a vast agricultural industry and propel the rise of modern civilization not just on Mars, but throughout space, conducting mining operations in the asteroid belt, building laser batteries on the moon, assembling gigantic starships in orbit, and laying the foundations for colonies on distant exoplanets. It would be a stunning achievement to successfully design and deploy self-replicating machines.
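The arithmetic behind that growth is worth spelling out. If every robot builds one copy of itself per cycle, the fleet doubles each cycle, so n cycles yield 2^n robots. A short sketch, assuming a hypothetical six-month replication cycle:

```python
# Doubling arithmetic for a self-replicating fleet: if every robot
# builds one copy of itself per cycle, n cycles yield 2**n robots.
MONTHS_PER_CYCLE = 6  # illustrative assumption, not from the text

robots, cycles = 1, 0
while robots < 1_000_000:        # target: a million-robot workforce
    robots *= 2                  # each robot finishes one copy
    cycles += 1

print(cycles, robots)            # 20 cycles -> 1,048,576 robots
print(cycles * MONTHS_PER_CYCLE) # 120 months, i.e. ten years
```

Under these assumptions, a single seed machine becomes a million-robot workforce in a decade, which is why the first robot is the expensive one and every robot after it is nearly free.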
But beyond that milestone remains what is arguably the holy grail of robotics: machines that are self-aware. These robots would be able to do much more than just make copies of themselves. They would be able to understand who they are and take on leadership roles: supervising other robots, giving commands, planning projects, coordinating operations, and proposing creative solutions. They would talk back to us and offer reasonable advice and suggestions. However, the concept of self-aware robots raises complex existential questions and frankly terrifies some people, who fear that these machines may rebel against their human creators.
SELF-AWARE ROBOTS
In 2017, a controversy arose between two billionaires: Mark Zuckerberg, founder of Facebook, and Elon Musk of SpaceX and Tesla. Zuckerberg maintained that artificial intelligence would be a great generator of wealth and prosperity, enriching all of society. Musk, however, took a much darker view, stating that AI actually posed an existential risk to all of humanity and that one day our creations may turn on us.
Who is correct? If we depend so heavily on robots to maintain our lunar bases and cities on Mars, then what happens if they decide one day that they don’t need us anymore? Would we have created colonies in outer space only to lose them to robots?
This fear is an old one and was actually expressed as far back as 1863 by novelist Samuel Butler, who warned, “We are ourselves creating our own successors. Man will become to the machine what the horse and the dog are to man.” As robots gradually become more intelligent than we are, we might feel inadequate, left in the dust by our own creations. AI expert Hans Moravec has said, “Life may seem pointless if we are fated to spend it staring stupidly at our ultra-intelligent progeny as they try to describe their ever more spectacular discoveries in baby-talk that we can understand.” Google scientist Geoffrey Hinton doubts that supersmart robots will continue to listen to us. “That is like asking if a child can control his parents…there is not a good track record of less intelligent things controlling things of greater intelligence.” Oxford professor Nick Bostrom has stated that “before the prospect of an intelligence explosion, we humans are like small children playing with a bomb…We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.”
Others hold that a robot uprising would be a case of evolution taking its course. The fittest replace organisms that are weaker; this is the natural order of things. Some computer scientists actually welcome the day when robots will outstrip humans cognitively. Claude Shannon, the father of information theory, once declared, “I visualize a time when we will be to robots what dogs are to humans, and I’m rooting for the machines.”
Of the many AI researchers I have interviewed over the years, all were confident that AI machines would one day approach human intelligence and be of great service to humanity. However, many refrained from offering specific dates or timelines for this advancement. Professor Marvin Minsky of MIT, who wrote some of the founding papers on artificial intelligence, made optimistic predictions in the 1950s but disclosed to me in a recent interview that he was no longer willing to predict specific dates, because AI researchers have been wrong too often in the past. Edward Feigenbaum of Stanford University maintains, “It is ridiculous to talk about such things so early—A.I. is eons away.” A computer scientist quoted in the New Yorker said, “I don’t worry about that [machine intelligence] for the same reason I don’t worry about overpopulation on Mars.”
My own view of the Zuckerberg/Musk controversy is that Zuckerberg, in the short term, is correct. AI will not only make possible cities in outer space, it will also enrich society by making things more efficient, better, and cheaper, while creating an entirely new set of jobs in the robotics industry, which may one day be larger than today’s automobile industry. But in the long term, Musk is correct to point out a larger risk. The key question in this debate is: At what point will robots become dangerous? I personally think the key turning point is precisely when robots become self-aware.
Today, robots do not know they are robots. But one day, they might have the ability to create their own goals rather than adopt the goals chosen by their programmers. Then they might realize that their agenda is different from ours. Once our interests diverge, robots could pose a danger. When might this happen? No one knows. At present, robots have the intelligence of a bug, but perhaps by late in this century they will become self-aware. By then, we will also have rapidly growing permanent settlements on Mars. Therefore, it is important that we address this question now, rather than after we have become dependent on these machines for our very survival on the Red Planet.
To gain some insight into the scope of this critical issue, it may be helpful to examine the best- and worst-case scenarios.
BEST-CASE AND WORST-CASE SCENARIOS
A proponent of the best-case scenario is inventor and bestselling author Ray Kurzweil. Each time I have interviewed him, he has described a clear and compelling but controversial vision of the future. He believes that by 2045, we will reach the “singularity,” or the point at which robots match or surpass human intelligence. The term comes from the concept of a gravitational singularity in physics, which refers to regions of infinite gravity, such as in a black hole. It was introduced into computer science by mathematician John von Neumann, who wrote that the computer revolution would create “an ever-accelerating progress and changes in the mode of human life, which gives the appearance of approaching some essential singularity…beyond which human affairs, as we know them, could not continue.” Kurzweil claims that when the singularity arrives, a thousand-dollar computer will be a billion times more intelligent than all humans combined. Moreover, these robots would be self-improving, and their progeny would inherit their acquired characteristics, so that each generation would be superior to the previous one, leading to an ascending spiral of high-functioning machines.