by Joel Garreau
When others were just beginning to think that stand-alone desktop personal computers might someday be a big deal, Joy imagined an entirely different world. In it, intelligence would be embedded in everything from telephones to shoes to doors to eyeglasses, all talking and thinking with each other in a fertile jungle of information, often without the intervention of humans. A camera, for example, might talk to a printer without having to go through a PC. Everything was simple, like using a lamp or a telephone. The complexity—as with the electricity or telephone systems—was managed by the network Sun served. In 1988 this crystallized in Sun’s enduring slogan, “The network is the computer.”
Joy has developed themes that have shaped his life. His watchword is “simple”—avoid the puffed-up and esoteric. Another important concept for him has been openness. He has found great power in letting the world freely understand the underpinnings of his work in order to communally and cooperatively tinker with and improve on it. This open-source philosophy contrasts markedly with the proprietary and jealously guarded approach of outfits such as Microsoft. Finally, there is the importance of networks. Nourish the network. Trust the network. The network collectively can do things beyond the power or even imagination of any individual.
Joy enjoys a reputation in Silicon Valley as thoughtful and level-headed. “Nobody is more phlegmatic than Bill,” says Stewart Brand, the Internet pioneer. “He is the adult in the room.”
That’s why it came as such a shock in March 2000 when this godfather of the Information Age predicted “something like extinction” of the human race within the next generation. Most extraordinarily, he blamed it on the accelerating pace of technological change he had helped create. He intended his warning to be reminiscent of Albert Einstein’s famous 1939 letter to President Franklin Delano Roosevelt alerting him to the possibility of an atomic bomb.
In a vast, 24-page spread in Wired magazine entitled “Why the Future Doesn’t Need Us,” Joy announced that, to his horror, he found advancing technology poses a threat to the human species. “I have always believed that making software more reliable, given its many uses, will make the world a safer and better place,” he wrote in the article, on which he worked for six months. “If I were to come to believe the opposite, then I would be morally obligated to stop this work. I can now imagine that such a day may come.”
At the peak of the Internet boom, when technologists of Joy’s stature were beating movie stars to the covers of magazines, his conversion went off like a thunderclap on a sunny day.
“I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil,” he wrote. He was horrified by “a surprising and terrible empowerment of extreme individuals.” Joy reviewed the prospects for the genetic, robotic, information and nano technologies and was aghast. Genetic technology, he noted, makes possible the creation of “white plagues” designed to kill widely but selectively—attacking only people of a targeted race, for example. It also holds out the possibility of engineering our evolution into “several separate and unequal species . . . that would threaten the notion of equality that is the very cornerstone of our democracy.” Robots more intelligent than humans could reduce the lives of their creators to that of pathetic zombies. “A robotic existence would not be like a human one in any sense that we understand,” Joy wrote. “The robots would in no sense be our children . . . on this path our humanity may well be lost.” Nanotechnology holds out the possibility of the “gray goo” end-of-the-world scenario, in which devices too tough, too small, and too rapidly spreading to stop, suck everything vital out of all living things, reducing their husks to ashy mud in a matter of days. “Gray goo would surely be a depressing ending to our human adventure on Earth,” he wrote, “far worse than mere fire or ice, and one that could stem from a simple laboratory accident. Oops.”
“Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups,” he wrote. “Knowledge alone will enable the use of them.” Nuclear weapons require heavy industrial processes and rare minerals. They take the resources of, at the very least, a rogue state. With a variety of GRIN technologies, by contrast, one bright but embittered loner or one dissident grad student intent on martyrdom could—in a decent biological lab, for example—unleash more death than ever dreamed of in nuclear scenarios. It could even be done by accident. Joy called these “weapons of knowledge-enabled mass destruction.” What really alarmed him about these GRIN weapons was their “power of self-replication.” Unlike nuclear weapons, these horrors could make more and more of themselves. Let loose on the planet, the genetically engineered pathogens, the super-intelligent robots, the tiny nanotech assemblers and of course the computer viruses could create trillions more of themselves, vastly more unstoppable than mosquitoes bearing the worst plagues.
This belief—that one strike and you’re out—is, in short, what I have come to call The Hell Scenario. “The only realistic alternative I see is relinquishment,” he concluded, “to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge.”
“There’s nothing optional about this, you see,” Joy says, staring stoically across his long conference table in an unadorned room in the outback of the American West. “It’s like death and taxes. We have this problem. It will get us.” Joy is The Hell Scenario’s most celebrated apostle.
William Nelson Joy was born on November 8, 1954, in what then were the ragged fringes of metropolitan Detroit. The area around Fourteen Mile and Orchard Lake Roads in which he grew up was then called Farmington. Now more tony, the area is pleased to be called Farmington Hills. But at the time, you could get a four-bedroom two-story Colonial with a basement and a humidifier on a quarter-acre lot for $29,000, which is what his family did. His dad, William, was a counselor and vice principal in the Detroit schools who taught business courses such as accounting and typing, as did his mother, Ruth. By the time he was three, Bill was already reading. His dad took him to the elementary school, where he sat on the principal’s lap and read him a story. As a result, Joy started school early. Then he skipped a grade. He drove adults nuts, asking lots of questions. He escaped into books. He wanted to have a telescope to look at the stars. Since there was no money for him even to make one, he checked out library books on telescope making and read them instead.
His mother was of Swedish extraction, so he was raised Lutheran, but to no lasting effect. “The scars are invisible,” he jokes. As a teenager, he wished to be a ham radio operator. It was a kind of predecessor to the Internet, he remembers—very addictive, and quite solitary. His family didn’t have the money to buy him the equipment, but cost aside, his mother was appalled. Joy—who was not exactly living up to his name even then—was not going to spend his life in the basement with headphones on. “I was antisocial enough already,” he recalls her feeling. “Oh yes, I looked like Clark Kent with my hair greased back and the glasses. But I didn’t have the alter ego.” He did not go to his high school prom.
Ruth Joy died when he was 18. Her death was particularly hard on his younger brother and sister. A few days after his mother died, his father said something that stuck in his mind: “Sorry you kids have to go through this.” Joy still thinks about that, because unlike his father’s generation—which had grown up before antibiotics and which had been sent off to war—he is impressed by how many young people today have no firsthand experience with death or disease. He believes they are unprepared for the evils that he fears will come.
Growing up, Joy had few close friends, but he did discover the great writers who speculated about the future. He especially remembers Robert A. Heinlein’s Have Spacesuit—Will Travel, and, like so many others of his temperament, Isaac Asimov’s I, Robot. Thursday nights his parents went bowling and the kids stayed home alone. That was the night of Gene Roddenberry’s original Star Trek. It made a big impression on Joy. He seized on the notion that humans had a future in the cosmos, and it was like a Western, with big heroes and big adventures. He devoured Roddenberry’s vision of the centuries to come as one of strong moral values, embodied in codes such as the Prime Directive:
As the right of each sentient species to live in accordance with its normal cultural evolution is considered sacred, no Star Fleet personnel may interfere with the healthy development of alien life and culture. Such interference includes the introduction of superior knowledge, strength, or technology to a world whose society is incapable of handling such advantages wisely. Star Fleet personnel may not violate this Prime Directive, even to save their lives and/or their ship unless they are acting to right an earlier violation or an accidental contamination of said culture. This directive takes precedence over any and all other considerations, and carries with it the highest moral obligation.
“This had an incredible appeal to me,” he recalls. “Ethical humans, not robots, dominated this future, and I took Roddenberry’s dream as part of my own.” Now, in The Hell Scenario, the possibility looms that humans already may be violating the Prime Directive—on their home planet.
At the age of 16, in 1971, Joy embraced the advanced courses of math majors at the University of Michigan. When he discovered computers, he never turned back. The computer had a clear notion of correct and incorrect, true and false, and Joy found this “very seductive.” He liked clear answers about what was right and wrong. He took few liberal arts courses.
Joy got a job programming early supercomputers and discovered their amazing power to simulate reality. In grad school at Berkeley in the mid-1970s, he stayed up late, inventing new worlds. He recalls “writing the code that argued so strongly to be written.” In his novel The Agony and the Ecstasy, Irving Stone described Michelangelo as feeling he was only releasing the statues from the stone, “breaking the marble spell.” Joy vividly remembers exactly this kind of ecstatic moment. The software emerged the same way. It was as if the truth, the beauty, was already there in the machine, waiting to be freed. Staying up all night seemed a small price to pay. He recalls it as the “rapture of discovery.”
Joy tells these stories to make clear he is not a Luddite. He insists he has always had a strong belief in the value of the scientific search for truth. The Industrial Revolution, in his view, immeasurably improved everyone’s life; he always saw himself as part of this inevitable march of truth and progress. Sun hardware powers a significant portion of the Net to this day. Joy’s RISC processors are still at the heart of the company’s $10 billion server business.
The turning point for Joy came on September 17, 1998, at the Resort at Squaw Creek, near North Lake Tahoe, an exquisite mountain locale in northern California’s Sierras half a day from Silicon Valley. There he ran into Ray Kurzweil, whom up until then Joy had known only as a famous inventor. He and Kurzweil were speakers at different sessions of something called the Telecosm Conference, which at the time was one of the hottest tickets in geekdom. Financiers with what seemed like unlimited funds flocked to this gathering to worshipfully gather crumbs of wisdom from digital visionaries.
At this conference Joy first heard Kurzweil’s prediction that computers will supersede humans in the next step of evolution. Joy was in the bar of the hotel with John Searle, a professor of philosophy of mind at the University of California at Berkeley, who scoffs at Kurzweil’s idea that machines can become conscious. Kurzweil wandered in, and he and Searle picked up their wrangle. As Joy listened to them debate whether The Heaven Scenario was feasible, a possibility occurred to Joy that was entirely different from the one those two were discussing.
Joy buys the first half of The Heaven Scenario. He views it as obvious that the fundamental reality shaping our future is The Curve. That Indian summer day, however, Joy remembers thinking—oh my God, this could go just the opposite way. The outcome could be terrible. Instead of The Curve going straight up to Heaven, it could be the mirror image and go straight to Hell. This was the moment of conception for what ultimately would be Joy’s “Why the Future Doesn’t Need Us.”
Later, reading page proofs of Kurzweil’s Spiritual Machines, he came to the passage in which Kurzweil, discussing the new Luddite challenge, quotes another author’s scenarios:
First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.
If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.
On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite—just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone’s physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes “treatment” to cure his “problem.” Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or make them “sublimate” their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they will most certainly not be free. They will have been reduced to the status of domestic animals.
Joy remembers nodding in unease as he read this. But then he flipped the page. His anxiety changed to horror. He discovered he was reading the words of Theodore Kaczynski in The Unabomber Manifesto. Kaczynski was everything Joy found despicable. “I am no apologist for Kaczynski,” Joy writes. “His bombs killed three people during a 17-year terror campaign and wounded many others. One of his bombs gravely injured my friend David Gelernter, one of the most brilliant and visionary computer scientists of our time. Like many of my colleagues, I felt that I could easily have been the Unabomber’s next target. Kaczynski’s actions were murderous and, in my view, criminally insane. He is clearly a Luddite, but simply saying this does not dismiss his argument; as difficult as it is for me to acknowledge, I saw some merit in the reasoning in this single passage. I felt compelled to confront it.”
It’s eerie the extent to which Joy’s Hell Scenario mirrors Kurzweil’s Heaven Scenario. Both are driven by The Curve. Joy readily anticipates astonishing increases in the information drivers of the GRIN technologies. “A calculation that, on a normal computer, would take the age of the universe, on a quantum computer it would take, like, a second. It would be exponentially better,” he says. Both see the increase in information power clearly driving vast change in our ability to become masters of genetics, robotics and nanotechnology. In fact, when it first came out, Joy devoured Eric Drexler’s book announcing the possibility that nanotechnology could be made to work. He was a speaker at one of the first gatherings of Drexler’s organization, the Foresight Institute, in 1989.
Where Kurzweil and Joy totally and diametrically diverge is on what happens to humanity as a result. Like Kurzweil, Joy can’t see any possibility for the future other than the scenario he lays out. “If we create widely dispersed technology for this kind of mass destruction, we would have grave consequences. It’s almost a tautology. How could that be wrong? If you gave a million people their own personal atomic bombs, would some of them go off?” he asks.
If anything, Joy’s gloom has deepened in the years since “Why the Future Doesn’t Need Us” first appeared. When the Wired piece detonated, New York publishers threw money at him to turn it into a book. Some would-be authors might be thrilled to put a gloss on what they had already written, spending a few months padding and thickening it sufficiently to justify a $25 cover price. Not Bill Joy. When he gets down to business, he gets down to business. So here we are years later and still no book, though not for lack of effort on his part. He has entire rooms full of research material. “You know, there is this old quote that says, ‘Extraordinary claims require extraordinary proof,’” he says.