Valley of the Gods


by Alexandra Wolfe


  For now, in 2015, he was working as a director of engineering, primarily getting machines to understand what scientists called “natural” language. Computers still weren’t as good as humans at figuring out the context of questions and speech. They could scan the words in an article and 56 percent of the time figure out that Barack Obama was the president of the United States, while a human would read the same article and reach that conclusion with near certainty. Kurzweil was developing software that he hoped would enable computers to understand language conceptually rather than just by key words—with the near-term goal of creating a better, more conversational search function for Google.

  But the inventor had made the unlikely possible before. He had invented the first print-to-speech reading machine for the blind, to name just one achievement. It was his bestselling books The Age of Spiritual Machines: When Computers Exceed Human Intelligence (1999) and The Singularity Is Near (2005), however, that really made his name. In How to Create a Mind: The Secret of Human Thought Revealed, published in 2012, he described how to build a synthetic extension of the brain that would connect it to the cloud. He thought that nanobots would one day travel to our brains through our capillaries and that blood-cell-sized computers would connect to the cloud the same way that iPhones do.

  Before the publication of How to Create a Mind, Kurzweil met with Google’s then CEO Larry Page to give him a copy and pitch him on investing in a company he wanted to create based on the ideas in the book. Page was interested but instead persuaded Kurzweil to start it at Google, where he could use the company’s resources while maintaining his independence. (Since then, Google has built a veritable artificial intelligence laboratory, hiring artificial intelligence researcher Geoffrey Hinton and in 2014 acquiring the British company DeepMind Technologies—renaming it Google DeepMind—which combines techniques from machine learning and neuroscience to build algorithms. But Google, of course, went apoplectic when Kurzweil spoke these thoughts out loud; the company wanted to separate his ideas from its own mission and goals. God forbid they reflect on Google’s own aspirations, which certainly did not publicly include reengineering humans, its valuable customers.)

  Some of Kurzweil’s ideas hit the mainstream. Spike Jonze, the director of Her, the 2013 film about a man who forms a relationship with an intelligent (and female-sounding) operating system, has said that Kurzweil’s writings inspired him to write, direct, and produce the movie. Still, Kurzweil found some fault in the film’s development of the futuristic operating system, voiced by Scarlett Johansson. With her high level of emotional understanding, he said, she also should have had a virtual body.

  These days, Kurzweil’s actual body looks its age, even with his intravenous nutrient intake and “real age” of forty. He still wants to reprogram our biology, too, a project he said began with the Human Genome Project and now includes regenerating tissue through stem cell therapies and 3-D printing new organs.

  Kurzweil was well aware of the darker side of a more technological future. “Technology has always been a double-edged sword,” he said. Fire helped humans improve their lives, but it also burnt down villages. And while he thought that technology could reprogram our biology away from disease, it could also fall into the hands of terrorists who might reprogram colds into deadly viruses. (“We’re not defenseless against that,” he added, having spent time helping the US Army come up with a program to combat biological threats.) However, he was reassured by the ubiquity of networked technology. With smartphones in the hands of billions of people, crowds could organize to deal with many problems, he thought.

  In any case, Kurzweil planned to be around to see whatever the future held. “The goal is to live indefinitely,” he insisted. As a backup plan, he would preserve his body cryogenically. But, he said, “The goal is not to need to.”

  • • •

  Across the country at Yale, another computer scientist who knew his fair share about artificial intelligence, David Gelernter, considered Kurzweil the Antichrist, seeing mostly the darker side of his contemporary’s vision. Gelernter found Kurzweil’s predictions not only depressing and nihilistic but also dangerous.

  After all, Gelernter was not a typical computer scientist. Most days, he stood at an easel near a wide window in his Woodbridge, Connecticut, home and painted. His two pet parrots flew around a house filled with stacks of books and papers. The birds screeched sporadically, and every now and then one popped up from behind the couch to say “Peekaboo.” There were no gadgets in sight, aside from a desktop computer barely visible in an adjacent office.

  “I hate computers, and I refuse to play with them,” he said, as he geared up to write one of his many attacks on Kurzweil, this next one called “The Closing of the Scientific Mind.” “Any success I’ve had in computing is because I fit so badly in the field,” he explained with a laugh. He thought that using computers should be more logical. “I want software to work in thirty seconds,” he said.

  Gelernter had just launched a new company called Lifestreams. It was an attempt to make computers more human rather than the other way around. Lifestreams would make desktops more intuitive and narrative: information would be organized chronologically, not scattered across the screen as icons and confusing drop-down menus.

  Years ago, a first try at commercializing his ideas had ended in failure, but Gelernter was used to setbacks. In 1993 he was the target of a mail bomb sent by Theodore “Ted” Kaczynski, known as the Unabomber, who between 1978 and 1995 conducted a campaign of domestic terrorism against people involved in developing technology. The explosion disfigured Gelernter’s right hand and damaged his right eye. In 1997 he wrote Drawing Life: Surviving the Unabomber, a book about living through the trauma. More than two decades after the attack, Gelernter was still physically uncomfortable. He moved around his living room slowly but didn’t complain about his ailments.

  He was troubled that people would describe the Unabomber as “sick” or a disturbed “genius” yet hesitated to call him “evil.” It prompted Gelernter to ask, “What does it mean when a culture no longer believes in evil?” and “What happens to a society that has lost its ability to react morally in a crisis?”

  In Drawing Life, Gelernter turned his own pain, disfigurement, and subsequent recovery into a metaphor for the state of the country. He criticized the United States for losing the resources that helped him to mend—religion, family, art—and argued that American culture focused on sensationalizing crime rather than on teaching courage and character.

  Artificial intelligence, he thought, was incapable of knowing about character, or bravery, or anything that made us qualitatively human. Gelernter’s personal slogan for his company was “By humans for humans.” Humanity, he thought, would never be replaced by machines. Our subjective, conscious experiences could never be programmed, he said. With what we knew so far about computers, he said, there was no way they could ever be conscious. They didn’t get built—or turn on, even—without human intent.

  If humans had built computers to be helpers of humanity, he thought, he never would have had to restart his old company to make them more intuitive. Mirror Worlds Technologies, founded in the 1990s, never got off the ground commercially and ran out of money in 2004. Ironically, Gelernter started seeing his early ideas pop up in Apple products. He believed that three of Apple’s features—Cover Flow, Time Machine, and Spotlight, for flipping through CD covers, backing up files, and performing searches, respectively—looked like the software he had invented. Although Gelernter never sued Apple himself, a lawyer discovered an email that the late Apple founder and CEO Steve Jobs had sent to a handful of his lieutenants about Mirror Worlds, saying, “It may be something for our future, and we may want to secure a license ASAP.” (Apple never did get a license.)

  That sentence became the basis of a lawsuit filed by Mirror Worlds’ patent holders. (Gelernter and his coinventor, Eric Freeman, had sold their patent as a condition of the company’s funding in the 1990s, though Gelernter retained a 2 percent stake, minus costs, in any award from the lawsuit. Gelernter never saw the email from Jobs until the trial itself.) In 2010 a jury found in favor of Mirror Worlds and awarded its patent holders $625 million in damages, one of the top five patent awards in US history. Six months later, however, a judge overturned the verdict. An appeal was unsuccessful, and in June 2013 the US Supreme Court declined to hear the case. Finally, in July 2016, Apple paid a $25 million settlement for the Cover Flow and Time Machine patents.

  Still, Gelernter admitted that his ideas might not have been viable a decade earlier anyway; Apple had simply made use of them at the right time. “The technology was not ready, the graphics weren’t ready, and people’s state of mind was not ready.” Now, he thought, the world might finally be able to see his vision. “It was F. Scott Fitzgerald who said there are no second acts in American lives,” he said. “But this is a second act.”

  Whether or not he could stop Ray Kurzweil and his ilk was another matter. But by trying to change the direction in which artificial intelligence was used, Gelernter hoped to sway people away from the idea that humans would evolve into robots and to help them reclaim individual will and identity. His anti-Kurzweil worldview is put forth less in his software than in his book The Tides of Mind (2016).

  • • •

  John Burnham fell on the more humanist side of the spectrum. “The concern with AI is that AI is a machine,” he said. “People created it with a utility function set of values, which sounds all well and good, but that assumes that intelligence is enslaved to a utility function set of values, and assumes those two ideas are compatible—that you can have an intelligent being without free will. But the only example of an intelligent being—us—is one that has free will.”

  By 2016, Burnham had swung so far in the direction of qualia that religion was guiding his choices more than ever. The following year, he would be confirmed as a Catholic. (After a year at Thomas More, he returned to Dartmouth in fall 2016 to study math.) Silicon Valley, he said, had given him an entirely new perspective on humanity, one that made him value what it actually meant to be conscious—as a human in particular. He was now fixated on all that was nonprogrammable about us.

  At Thomas More College, he wrote an essay on human beings’ unique perspective on time. In some ways it was the flip side of Kurzweil’s ideas, but not entirely dissimilar:

  “Man cannot by nature be purely body, since he is not fully explicable through the corporeal. Man apprehends, at least in some small way, part of Divine revelation. To an angel, a being living in Eternity, prophetic revelation is irrelevant, since they perceive the whole of Time at once. To an animal, not possessing the means to imagine forms outside of time, prophetic revelation is impossible. Man is uniquely defined as being, according to Aquinas, ‘composed of soul and body,’ and this union is exemplified most clearly in those phenomena that comprise the intersection of the Eternal and the Temporal. Man is the only creature that can both look out of Time at Eternity and, through revelation, look into Time from Eternity.”

  From an inverse vantage point, Kurzweil echoed Burnham’s thoughts. “Soon we’ll live forever, take over the universe, control the universe,” he said. The Singularity, he said, would bring us closer to the supernatural. Already he could teleport, after all—at least as a hologram. “The universe is not very intelligent, so eventually we’ll take over the solar system, and in essence we’ll be Godlike.” He popped another one of his supplement pills and got into his Lexus hybrid proto-self-driving car. Once we break the speed of light, “we’ll be closer to God than ever before,” he said as he careened around a corner onto the highway. “We will be God.”

  Conclusion

  Of the first class of 2011 fellows, including Laura Deming, John Burnham, Paul Gu, and James Proud, only about one in ten ended up going back to college, according to Danielle Strachman, the program’s original director. But when that did happen, she said, the fellows did it in a determined way. Eden Full created the SunSaluter solar panel system but had always planned to return to Princeton University. When Full got back, however, she was so tired of the required courses, such as rhetoric and cultural studies—subjects she felt she had already experienced firsthand during her travels in Africa—that she ended up dropping out again.

  Most of the fellows who went back to school, Strachman said, used the academic experience for a reason: to apply it to something else they were working on as entrepreneurs. “What we saw was really deliberate,” she emphasized.

  Some, however, went back for the structure and social life. Noor Siddiqui, Burnham’s ex-girlfriend who had tried to start a charity company to combat poverty, felt she was missing out on college and the friendships it could provide. While frat parties were never her thing, she was afraid of being out on her own, away from home, without, at least so far, a successful company to show for it.

  Noor’s first idea had been the ambitious goal of trying to end poverty—the idea she had applied with, in which she’d match poor people in third world countries with Western employers. But then, a year into her fellowship, she came up with a new company called Remedy, inspired by her sister, a medical student who complained that more should be done for patients in the ambulance on the way to the hospital rather than only after they arrived.

  Her next idea entailed emergency responders using Google Glass, eyeglasses with browsing and wireless capabilities, or their cell phones to let doctors in hospitals see what was happening in the ambulance and provide live support to the paramedics. The mobile display would send video, images, and GPS coordinates so that physicians could advise remotely on what treatment to begin en route.

  In April 2014 Siddiqui started testing the system at Harvard and the University of Pennsylvania. The technology, to be called Beam, would assign numbers to patients in ambulances and then create an interface for surgeon supervisors. With one tap on a mobile phone, she said, an expert could see what was happening remotely. It would be the first product Remedy put out. Eventually the company would make wearable health care technology, but first it was working with Google Glass. That way, she hoped to go beyond emergency situations and allow doctors to see more patients more often in places without access to decent medical care. Still, Beam was slow to take off, and Noor wasn’t raising the capital she needed. The nineteen-year-old decided to go back to school and enroll at Stanford University while she attempted to keep her company going on the side.

  She and Burnham were still in touch every now and then, and she called him before she made her decision to go to college. Her choice was somewhat influenced by his—to go to Dartmouth—even though that wasn’t where he ended up.

  • • •

  After a low period out west, John Marbach, who had originally tried to build an online education platform, had also gone back to college, at Wake Forest in North Carolina. He wished he’d stayed in school all along. But he was among the few. The other fellows were still trying to make it in the tech industry, with varying degrees of success.

  The biggest success story, actually, was based in India: Ritesh Agarwal, a young 2013 Thiel fellow whose chain of budget hotel rooms, Oyo Rooms, was valued in 2016 at around $400 million. Another was Dylan Field, a 2012 Thiel fellow and former Brown University student who had raised nearly $18 million to start Figma, a design software company meant to compete with Adobe.

  In the end, the Thiel Fellowship came to reflect the inherent cycles of Silicon Valley, or of any place immersed in the uncertainties of entrepreneurship. Like the many PC makers before them, or the search engines that came before Google, a few of the fellows succeeded but most did not. The program was a microcosm of the history of the area. The 2011 class, save for Proud and Gu, hadn’t fared as well financially as Agarwal and Field. Dale Stephens had capitulated to the idea that kids still wanted to go to college. He changed his UnCollege program into a gap-year program rather than a replacement for university. His site and seminars offered advice on how to tell your parents why you wanted to take a gap year rather than go straight to college, with points such as “Have a plan.” It encouraged students to “intellectually misbehave.”

  Stephens asked, “Why not spend that time understanding who you are and what you want to do with life?” before entering college. His program cost $16,000, including room and board. Coaches helped students learn whatever they wanted, guided by their own interests. “Learning in its pure form is not for everyone,” he conceded. “Some people want some guidance.”

  Some colleges offered credit for his program, but most admissions officers rolled their eyes. While Stephens had become one of the best-known Thiel fellows, he’d had a hard time raising money. He thought that venture capitalists preferred funding companies with big ideas and high valuations rather than those with cash flow, which he purported to have, claiming, “We were making money and delivering a product or service. We don’t just have an algorithm.” It was easier for him to raise money back east, actually, where investors wanted to see a bottom line.

  • • •

  Danielle Strachman was all too aware of this phenomenon. Having worked with frustrated fellows for five years, she was eager to try her own hand at backing them. In early 2015 she left the fellowship to start her own fund for young entrepreneurs. She called it the 1517 Fund, a reference to the Protestant Reformation and to Martin Luther’s claim that the Church shouldn’t be charging people to have a relationship with God. She thought that was the ethos of the fellowship too: people could pray to their own gods, or to none. Maybe they could even be them. Looking back on the Thiel Fellowship, Strachman thought that it had changed the conversation about higher education and its importance.

 
