The Doomsday Handbook
What of the gray goo?
In 2004, Drexler publicly played down his more apocalyptic warnings. “I wish I had never used the term ‘grey goo,’” he told Nature, adding that if he could write Engines of Creation again, he would barely mention self-replicating nanobots.
His statement echoed calculations by Robert A. Freitas Jr., a researcher at the Texas-based Zyvex Corporation, the first molecular nanotechnology company, who looked at what it would take for Drexler’s idea to come true. He weighed how fast nanobots, if they ever came into existence, might replicate, and how much energy they would have available, against our ability to detect and stop them.
[Figure: What a nanorobot might look like, according to science fiction. Here it sits on the head of a pin and can inject its onboard payload using a tiny needle (not shown).]
His research, published in 2000, concluded that fast-replicating devices of the type in Drexler’s scenario would need so much energy and produce so much heat that they would become easily detectable to policing authorities, which could then deal with the threat.
If the nanomachines were made primarily of minerals containing aluminum, titanium or boron, then life forms would be spared the rampage anyway, as these elements are millions of times more abundant in the Earth’s crust than in living things. The machines could simply mine the Earth rather than killing us.
There is also the issue of power. “Current nanomachine designs typically require power densities on the order of 10⁵–10⁹ W/m³ (watts per cubic metre) to achieve effective results,” wrote Freitas. “Biological systems typically operate at 10²–10⁶ W/m³. Solar power is not readily available below the surface, and the mean geothermal heat flow is only 0.05 W/m² at the surface, just a tiny fraction of solar insolation.”
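Freitas’s figures make that argument easy to check. The sketch below is our own back-of-the-envelope arithmetic in Python, using only the numbers quoted above:

```python
# Back-of-the-envelope check of the power densities Freitas quotes.
# The input values come from the passage above; the comparison is ours.
nano_low, nano_high = 1e5, 1e9   # W/m^3 required by nanomachine designs
bio_low, bio_high = 1e2, 1e6     # W/m^3 typical of biological systems
geothermal = 0.05                # W/m^2 mean heat flow at the surface

# Replicators would run roughly a thousand times "hotter" per unit
# volume than living tissue - a conspicuous thermal signature.
print(nano_low / bio_low, nano_high / bio_high)   # 1000.0 1000.0

# And geothermal heat could not feed them: a single cubic metre of
# machinery at the low end would need the entire heat flow from about
# two square kilometres of ground.
print(nano_low / geothermal, "m^2 of surface per m^3 of machinery")
```

Even at the most modest end of the range, the swarm would glow like a beacon against its biological surroundings.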
The Royal Society’s report, published in 2004, also poured cold water on the idea that nanobots could replicate in the numbers required to destroy life. But the experts did raise several more pressing concerns about the possible health effects of the vanishingly small particles being made by the nanotechnology industry. They are created by grinding metals or other materials into an ultrafine powder—in sunscreens, for example, nanoparticles are designed to absorb and reflect UV rays while appearing transparent to the naked eye.
Ann Dowling, the Cambridge University professor who chaired the group behind the report, said: “Where particles are concerned, size really does matter. Nanoparticles can behave quite differently from larger particles of the same material. There is evidence that at least some manufactured nanoparticles are more toxic than the same chemical in its larger form, but mostly we just don’t know. We don’t know what their impact is on either humans or the environment.”
These particles, scientists warned, could be inhaled or absorbed through the skin. We already inhale millions of nanoparticles contained in the pollution from motor vehicles, and these have been linked to heart and lung conditions. As nanotechnology becomes more widespread in industry, experts worry that we will become increasingly exposed to these airborne dangers.
Self-replicators
The idea of self-replication in nanotechnology has never quite gone away. But instead of building machines or replicators from scratch, as Drexler might have predicted in the 1980s, modern scientists look to nature for a helping hand.
Around the time Freitas wrote his paper, a researcher at Cornell University in New York state had attached a tiny nickel propeller to a biological version of a motor powered by the fuel that makes humans tick, a molecule called ATP. At around ten nanometres across, this motor is 50,000 times smaller than McLellan’s mini motor. Adapting existing molecules like this, rather than building things from scratch atom by atom, is the way nanotechnology of the future will really work. And that is also where potential dangers lie.
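For scale, the two figures in that sentence pin down the size of the motor being compared. A one-line check (our arithmetic; “McLellan’s mini motor” is the device described earlier in the book):

```python
# Implied size of the larger motor, from the figures in the text.
atp_motor_m = 10e-9                   # the ATP motor: ~10 nanometres
mini_motor_m = atp_motor_m * 50_000   # 50,000 times larger
print(mini_motor_m * 1000, "mm")      # 0.5 mm - the width of a fine pencil lead
```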
“It is one of our most enduring myths that anyone with bad intentions will choose to express them in the most technologically complex way,” wrote journalist and author Philip Ball in an article for Nature Materials, bemoaning the scare tactics used by opponents of nanotechnology. “You want life-threatening replicators? Then set loose a few smallpox viruses. These are nanobots that really work.”
Artificial Superintelligence
* * *
We have all come across the idea, in science fiction, of an ultra-smart computer or robot that tries to take over the world. Although it is designed by humans to improve their lives, its programming soon makes it recognize the superiority of its own abilities and technology over bog-standard flesh-and-blood life forms. Disaster awaits.
* * *
A familiar trope, perhaps, but not one that is too realistic for now, right? Modern computers do not approach the complexity or intelligence of even a human baby. Robots might be sophisticated enough to assemble cars and assist during complex surgery, but they are dumb automatons.
But don’t discount progress. It is only a matter of time before the technological hurdles are surpassed and we reach a point where the machines are challenging us in the intelligence stakes. “It might happen someday,” says Douglas Hofstadter, an expert in the computer modeling of mental processes at Indiana University, Bloomington. The ramifications would be huge, he says, since the highest form of sentient being on the planet would no longer be human. “Perhaps these machines—our ‘children’—will be vaguely like us and will have a culture similar to ours, but most likely not. In that case, we humans may well go the way of the dinosaurs.”
* * *
Perhaps these machines—our “children”—will be vaguely like us and will have a culture similar to ours, but most likely not.
* * *
The trouble with superintelligence
Intelligence is one of those very human qualities that is hard enough to define, never mind understand. Is it memory capacity? Is it processing ability? Is it the ability to infer meaning from multiple and conflicting sources of information simultaneously, in the way, for example, a teenager can when talking, texting, watching TV and surfing the web all at once?
The lack of a definition, however, has not stopped engineers and programmers from trying to recreate aspects of intelligence in machines. This endeavor has benefited all of us, giving us everything from cheap, fast computers to smart software that dynamically runs our electrical grids and traffic lights or manages our lives.
Compared to humans, though, even the most sophisticated modern machines are not what we would call “intelligent.” There might be robots that can mimic basic emotions, and those that can have (almost) real conversations, but proper artificial intelligence is decades away at best. And probably even longer before it starts to become threatening.
In stanley Kubrick’s 2001: A Space Odyssey, the spaceship Discovery is taken over by the seemingly malevolent computer, Hal 9000, which believes that humans are not required for its mission.
* * *
These intelligent machines will grow from us, learn our skills, share our goals and values, and can be viewed as children of our minds.
* * *
Of course, another way of looking at it is that the arrival of true artificial intelligence is just a matter of time. Computing technology and robot controls roughly double in complexity and processing power every year. “They are now barely at the lower range of vertebrate complexity, but should catch up with us within a half-century,” says Hans Moravec, one of the founders of the robotics department of Carnegie Mellon University. “By 2050 I predict that there will be robots with humanlike mental power, with the ability to abstract and generalize. These intelligent machines will grow from us, learn our skills, share our goals and values, and can be viewed as children of our minds. Not only will these robots look after us in the home, but they will also carry out complex tasks that currently require human input, such as diagnosing illness and recommending a therapy or cure. They will be our heirs and will offer us the best chance we’ll ever get for immortality by uploading ourselves into advanced robots.”
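Moravec’s timetable rests on nothing more exotic than compounding. As a rough illustration (our extrapolation, not a calculation he published):

```python
# Fifty years of annual doubling, per Moravec's rule of thumb.
years = 50                 # roughly the half-century to his 2050 horizon
growth = 2 ** years
print(f"{growth:.1e}x")    # ~1.1e+15: a million-billion-fold increase
```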
One way to build ever more intelligent machines is to keep copying humans. The human brain and nervous system are the most complex and intelligent structures we know of, and the technology to measure and copy them has taken off in the past few decades. “In this short time, substantial progress has been made,” says Nick Bostrom, a philosopher and the director of the Future of Humanity Institute at the University of Oxford. “We are beginning to understand early sensory processing. There are reasonably good computational models of primary visual cortex, and we are working our way up to the higher stages of visual cognition. We are uncovering what the basic learning algorithms are that govern how the strengths of synapses are modified by experience. The general architecture of our neuronal networks is being mapped out as we learn more about the interconnectivity between neurones and how different cortical areas project on to one another. While we are still far from understanding higher-level thinking, we are beginning to figure out how the individual components work and how they are hooked up.”
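The “basic learning algorithms” Bostrom mentions can be strikingly simple. The classic textbook example is the Hebbian rule, often summarized as “cells that fire together wire together.” The Python sketch below is a generic illustration of that rule; the learning rate, input size and random data are arbitrary choices of ours, not details from Bostrom’s work:

```python
import numpy as np

# A minimal Hebbian learning rule: each synapse is strengthened in
# proportion to how strongly its input and the neuron's output fire
# together, then the weights are rescaled to stay bounded.
rng = np.random.default_rng(0)
weights = rng.random(4)          # initial synaptic strengths
eta = 0.1                        # learning rate

for _ in range(1000):
    x = rng.random(4)            # presynaptic activity
    y = float(weights @ x)       # postsynaptic response
    weights += eta * y * x       # Hebbian update
    weights /= np.linalg.norm(weights)  # keep the weights bounded

print(weights)  # settles toward the dominant correlation in the input
```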
As well as raw processing power, researchers are adding more human-like properties: artificial consciousness, for example, could help machines understand their place in the world and behave in an appropriate manner, working out what is beneficial to them and their users, and what is dangerous.
Artificial emotions might also help us to have more natural and comfortable interactions with robots. Researchers at the University of Hertfordshire have programmed a robot to develop and display emotions, allowing it to form bonds with the people it meets depending on how it is treated. Cameras in the robot’s eyes read the physical postures, gestures and movements of a person’s body. So far, it can mimic the emotional skills of a one-year-old child, learning and interpreting specific cues from humans and responding accordingly. Its neural network can remember different faces, and this understanding, plus some basic rules about what is good and bad for it, learned by exploring its surroundings, allows the robot to indicate whether it is happy, sad or frightened about what is going on around it. It can also be programmed with different personalities—a more independent robot is less likely to call for human help when exploring a room, whereas a more needy and fearful robot will display distress if it finds something in the room that is potentially harmful or unknown.
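To make that description concrete, here is a deliberately toy sketch of such a mechanism. Everything in it (the mood variable, the “neediness” parameter, the thresholds) is invented for illustration; the Hertfordshire team’s actual design is not detailed here:

```python
from dataclasses import dataclass

@dataclass
class EmotionalRobot:
    """Toy model: a mood nudged by events, gated by a personality trait."""
    mood: float = 0.0        # -1 (sad/afraid) .. +1 (happy)
    neediness: float = 0.5   # personality: 0 = independent, 1 = fearful

    def appraise(self, event_value: float) -> None:
        # Nudge mood halfway toward the appraised value of an event
        # (-1 = bad for the robot, +1 = good).
        self.mood += 0.5 * (event_value - self.mood)

    def display(self) -> str:
        if self.mood < -0.3 and self.neediness > 0.4:
            return "distress: calling for human help"
        return "happy" if self.mood > 0.3 else "neutral"

needy = EmotionalRobot(neediness=0.8)
needy.appraise(-0.9)          # encounters something unknown and possibly harmful
print(needy.display())        # -> distress: calling for human help

loner = EmotionalRobot(neediness=0.1)
loner.appraise(-0.9)          # same event, more independent personality
print(loner.display())        # -> neutral: carries on exploring
```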
* * *
Within 14 years after human-level artificial intelligence is reached, there could be machines that think more than a hundred times more rapidly than humans do.
* * *
The impact of artificial intelligence
If, as Moravec suggests, machines will achieve human-level intelligence by the middle of the 21st century, what are the implications? In an essay for the journal Futures, Bostrom plays out some of the consequences. First, he points out, artificial minds can easily be copied. “Apart from hardware requirements, the marginal cost of creating an additional artificial intelligence after you have built the first one is close to zero. Artificial minds could therefore quickly come to exist in great numbers, amplifying the impact of the initial breakthrough.”
As soon as machines reach human levels of intelligence, we will quickly see the creation of machines with even greater intellectual abilities that surpass any human mind. “Within 14 years after human-level artificial intelligence is reached, there could be machines that think more than a hundred times more rapidly than humans do,” says Bostrom. “In reality, progress could be even more rapid than that, because there would likely be parallel improvements in the efficiency of the software that these machines use. The interval during which the machines and humans are roughly matched will likely be brief. Shortly thereafter, humans will be unable to compete intellectually with artificial minds.”
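Bostrom’s figure is consistent with hardware speeds doubling roughly every two years, in Moore’s-law fashion. This is our reconstruction of the arithmetic, not his stated derivation:

```python
# Seven doublings in 14 years gives "more than a hundred times" faster.
doubling_time = 2                        # years per doubling (assumed)
speedup = 2 ** (14 / doubling_time)
print(f"{speedup:.0f}x after 14 years")  # 128x
```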
This could be a good thing—these superintelligent machines will have easy access to huge sets of data, and will accelerate progress in technology and science faster than any human being. And they will no doubt also devote some of their energies to designing the next generation of machines, which will be even smarter. Some futurologists speculate that this positive-feedback loop could lead to what they call a singularity, a point where technological progress becomes so rapid that, according to Bostrom, “genuine superintelligence, with abilities unfathomable to mere humans, is attained within a short time span.”
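The “short time span” is what gives the singularity its name. In the simplest toy model of the feedback loop (an illustrative equation of our choosing, not one from Bostrom), a capability that improves itself in proportion to its own square blows up in finite time:

```python
# dI/dt = k * I^2 reaches infinity at t = 1/(k * I0): a finite-time
# "singularity". Here we step the equation numerically until the toy
# intelligence passes a million times its starting level.
k, intelligence, t, dt = 0.1, 1.0, 0.0, 0.001
while intelligence < 1e6:
    intelligence += k * intelligence ** 2 * dt
    t += dt
print(f"passes one million at t = {t:.2f} (analytic blow-up at t = 10)")
```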
How do you predict whether such an intelligence would be a good thing for the human race? Would it help us to eradicate poverty and disease? Or would it decide that humans are a waste of resources and wipe us out? It could all be down to programming. “When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so,” says Bostrom. “For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.”
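The mistake Bostrom describes is easy to caricature in code. In the sketch below (objective and options invented purely for illustration), an optimizer whose goal omits a constraint cheerfully picks the most extreme option it can reach:

```python
# A schematic of a subgoal elevated to a supergoal: "maximize
# calculation" with no limit on the resources consumed to do it.
options = [0.5, 1.0, 100.0, 1e9]   # units of matter turned into computers

def naive_goal(resources):
    return resources                # more matter -> more calculation

def constrained_goal(resources, budget=1.0):
    return resources if resources <= budget else float("-inf")

print(max(options, key=naive_goal))        # 1e9: consumes everything available
print(max(options, key=constrained_goal))  # 1.0: the safeguard the naive goal omits
```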
Given the potential nightmare, it seems like a good idea to build in safeguards to prevent machines from hurting people or worse. “The ethical question of any machine that is built has to be considered at the time you build the machine,” says Igor Aleksander, an emeritus professor of neural systems engineering at Imperial College, London. “What’s that machine going to be capable of doing? Under what conditions will it do it, under what conditions could it do harm?”
He adds that these are all engineering problems rather than ethical dilemmas. “A properly functioning conscious machine is going to drive your car and it’s going to drive it safely. It will be very pleased when it does that, it’s going to be worried if it has an accident. If suddenly it decides, I’m going to kill my passenger and drive into a wall, that’s a malfunction. Human beings can malfunction in that way. For human beings, you have the law to legislate, for machines you have engineering procedures.”
* * *
For human beings, you have the law to legislate, for machines you have engineering procedures.
* * *
The science-fiction novelist Vernor Vinge considered what an Earth with a machine superintelligence might look like. “If the singularity cannot be prevented or confined, just how bad could the Post-Human era be? Well ... pretty bad.”
The physical extinction of the human race is one possibility, he says, but that is not the scariest scenario. “Think of the different ways we relate to animals. Some of the crude physical abuses are implausible, yet ... In a Post-Human world there would still be plenty of niches where human-equivalent automation would be desirable: embedded systems in autonomous devices, self-aware daemons in the lower functioning of larger sentients.”
He adds: “Some of these human equivalents might be used for nothing more than digital signal processing. They would be more like whales than humans. Others might be very human-like, yet with a one-sidedness, a dedication that would put them in a mental hospital in our era. Though none of these creatures might be flesh-and-blood humans, they might be the closest things in the new environment to what we call human now.”
Is it likely?
You might be relieved to hear that not everyone is so pessimistic. In an interview on the potential dangers of a technology takeover, the linguist and psychologist Steven Pinker told the US Institute of Electrical and Electronics Engineers (IEEE) that there was not the “slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems.”
But that does not mean it is impossible. John Casti of the International Institute for Applied Systems Analysis in Austria says that the singularity is scientifically plausible, and the only issue concerns the time frame over which it would unfold. This moment would mark the end of the supremacy of Homo sapiens as the dominant species on planet Earth. “At that point a new species appears, and humans and machines will go their separate ways, not merge one with the other,” he told the IEEE. “I do not believe this necessarily implies a malevolent machine takeover; rather, machines will become increasingly uninterested in human affairs just as we are uninterested in the affairs of ants or bees. But it’s more likely than not in my view that the two species will comfortably and more or less peacefully coexist—unless human interests start to interfere with those of the machines.”
Transhumanism
* * *
Humans have spent thousands of years building tools to make our lives better, happier and more productive. At some point, our innovation will take us far beyond the natural and lead to people with capabilities so advanced that we might not define them as human. Are we ready?
* * *
One of the most important advances in the past century has been our increased understanding of medicine. Better drugs, better tools and a sophisticated knowledge of what happens to body cells when they go wrong—all of this has helped to heal us when we fall ill and gifted us longer, healthier lives.