
Life After Google

by George Gilder


  The Brownian molecules discovered by Einstein presumably do not plan or intend their paths; the speakers and surfers do. Capturing both in its Olympian statistics, Markov is a superb tool. But it should not be elevated to a system of the world.

  The current Google-era system of the world sees randomness everywhere, from a random walk on Wall Street or on Main Street or down the Vegas Strip with gambler’s ruin wrapped in Markov chains, or through geological time in evolution, or through the history of “inevitable” invention, or across the wastes and wealth of the World Wide Web. Happenstance and history look the same. Signal is statistically similar to noise. Everything looks random, from white light to white noise.

  The working assumption of the prevailing system of the world is that what looks random is random. As Shannon knew, however, in principle a creative pattern of data points, reflecting long and purposeful preparation and invention in the real world of imagination and will, is indistinguishable from a random pattern. Both are high-entropy, unexpected. Parsing of random patterns for transitory correlations fails to yield new knowledge. You cannot meaningfully study the market with an oscilloscope registering the time domain gyrations. You need a microscope, exploring inside the cells of individual companies to find the pure tones of true technology advance.

  Since Einstein used the concept to calculate the spontaneous jiggling of molecules, Markov chains accelerated to gigahertz frequencies have enabled scientists to dominate a world economy ruled by chaotic money creation from central banks. Now, in the Google system of the world, technologists imagine that computer velocity conveys computer intelligence, that if you shuffle the electrons fast enough you can confer consciousness and creativity on dumb machines.

  The idea, however, that human brains, the world’s most compact and effective thinking systems, are actually random machines is not really very bright. Markov models work by obviating human intelligence and knowledge. Whether analyzing speech without knowing the language (Shannon and Baum), gauging the importance of webpages without knowledge of either the pages or the evaluators (Page and Brin), measuring the performance of computing machines while ignoring 99 percent of the details of the system (A. L. Scherr), investing in stocks and bonds with no awareness of the businesses that issue them (Renaissance), or identifying authors without any knowledge of what they wrote or even the language they write in (Markov himself), these procedures are marked and enabled by their total lack of intelligence. You use big data statistics and Markov probability models when you don’t know what is really happening. Markov models are idiot savants that can predict either a random pattern or a planned process without the slightest understanding of either. For its future, the industry must move beyond them.
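The point about Markov models "obviating intelligence" can be made concrete in a few lines. A minimal sketch of a bigram Markov chain (all names and the sample text are illustrative, not from any system named above): the model records only which token follows which, and generates text from those counts alone, with no grasp of meaning.

```python
import random
from collections import defaultdict

def build_chain(words):
    """Count bigram transitions: which word has been seen following which."""
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain, picking each next word by observed frequency alone."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

text = "the chain predicts the next word from the last word alone".split()
chain = build_chain(text)
print(generate(chain, "the", 6))
```

The chain never stores what a word means, only what tends to come next, which is exactly why the same machinery applies equally to speech, webpages, or securities prices.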

  At one point during my interview, Mercer challenged the prevailing regime of fractional-reserve banking. Citing the libertarian economist Murray Rothbard, he suggested that in an ideal system, the maturities of assets and liabilities would match.

  This is the view of an outside trader, governed by the Markovian present. The maturities do not match in almost any banking system because of the divergence between the motivations of savers and the sources of the value of savings. Savers attempt to preserve their wealth while having it still available in a liquid form, where they can retrieve it whenever they wish. But that very wealth of savings, for its perpetuation and expansion, is dependent on long-term investments in perilous processes of learning—real investments in companies and projects that can fail and go bankrupt at any time.

  The role of finance is to transform savers’ quests for security and liquidity into the entrepreneur’s necessarily long-term illiquidity and acceptance of risk. If banks and other institutions don’t perform this role, economic growth flags and stagnation sets in.

  All wealth is ultimately a product of long-term investment based on knowledge and discovery. There is no way to escape the inexorable conflict between savers who want liquidity and investors who constantly destroy it with enduring investments.

  These are the systole and diastole at the heart of capitalist saving and investment when money is a measuring stick rather than a magic wand for governments. Cowed by the threat of government computer surveys of their trading patterns, which have criminalized real inside investment, the new hedge fund industry is turning this relationship on its head. It now abides by the rule “Don’t invest in anything you know about.” With learning prohibited, the current algorithms make almost no investments and generate no enduring wealth. Instead, by accelerating trades in the oceans of currencies and short-term securities—the $280 trillion of global debt—hedge funds contribute liquidity and feed on its turbulence.

  Pushed to the limits of velocity, Markov produces only “gold” as wealth rather than real gold as the measure of wealth. But it was real gold, the measuring stick of wealth rather than Midas-touch wealth itself, that served as an outside oracle of value during the ascent of capitalism.

  Operating in chaotic global markets without a gauge of gold, Renaissance is proud that it enjoys no subsidies or special support from government. But by computing more rapidly and more voluminously than its rivals, Renaissance is a supreme arbitrageur of the constant market distortions caused by capricious government.

  Google, on the other hand, escapes market irrationality and price discovery through its strategy of giving most of its goods away for free. Both Google and Renaissance have found ways to flee the remorseless truth-telling and knowledge-expansion of real markets and long-term investments. Both these strategies will ultimately fail, because they are susceptible to the lesson of Midas.

  Midas’s error was to mistake gold, wealth’s monetary measure, for wealth itself. But wealth is not a thing or a random sequence. It is inextricably rooted in hard-won knowledge over extended time.

  CHAPTER 9

  Life 3.0

  Among pines and dunes at the edge of a peninsula overlooking Monterey Bay stand the historic rustic stone buildings of Asilomar. Once a YWCA camp, and still without televisions or landlines in its guest rooms, this retreat is separated by an eighty-mile drive from Silicon Valley. Here in early January 2017 many of the leading researchers and luminaries of the information age secretly gathered under the auspices of the Foundational Questions Institute, directed by the MIT physicist Max Tegmark and supported by tens of millions of dollars from Elon Musk and Skype’s co-founder Jaan Tallinn.

  The most prominent participants were the bright lights of Google: Larry Page, Eric Schmidt, Ray Kurzweil, Demis Hassabis, and Peter Norvig, along with former Googler Andrew Ng, later of Baidu and Stanford. Also there was Facebook’s Yann LeCun, an innovator in deep-learning math and a protégé of Google’s Geoffrey Hinton. A tenured contingent consisted of the technologist Stuart Russell, the philosopher David Chalmers, the catastrophe theorist Nick Bostrom, the nanotech prophet Eric Drexler, the cosmologist Lawrence Krauss, the economist Erik Brynjolfsson, and the “Singularitarian” Vernor Vinge, along with scores of other celebrity scientists.1

  They gathered at Asilomar preparing to alert the world to the dire threat posed by . . . well, by themselves—Silicon Valley. Their computer technology, advanced AI, and machine learning—acclaimed in hundreds of press releases as the Valley’s principal activity and hope for the future, with names such as TensorFlow, DeepMind, Machine Learning, Google Brain, and the Singularity—had gained such power and momentum that it was now deemed nothing less than a menace to mankind.

  In 1965 I. J. Good, whom Turing taught to play Go at Bletchley Park while they worked on cracking the Enigma cipher, penned the first (and still the pithiest) warning:

  Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of those intellectual activities, an ultra-intelligent machine could design even better machines. There would unquestionably be an “intelligence explosion” and the intelligence of man would be left far behind.2

  “Thus,” Good declared, “the first ultra-intelligent machine is the last invention that man need ever make, provided that it is docile enough to tell us how to keep it under control.”3 The message of the Asilomar experts was that keeping it under control is still an unsolvable problem. When a new supreme intelligence emerges, it is hard to see how an inferior human intelligence can govern it. As Musk put it, “It’s potentially more dangerous than nukes.”4 Stephen Hawking pronounced: “The development of full artificial intelligence could spell the end of the human race.”5

  Tegmark explains why a “breakout,” in which the machines take over the commanding heights of the society and economy, is almost inevitable. When Homo sapiens came along, after all, the Neanderthals had a hard time, and virtually all animals were subdued. The lucky ones became pets, the unlucky lunch.

  Asilomar was unveiling an industry on the march across the second half of Kurzweil’s exponential chessboard.6 Everyone should watch out. New robotic kings would be popping up all over the board. “For any process whose power grows at a rate proportional to its current power,” explained Tegmark, “its power will keep doubling at regular intervals, in ultimately exponential explosions.”7 To the Googleplex intellectuals, mathematics is essentially a doomsday machine.
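Tegmark's formula is simply the definition of exponential growth: if a quantity obeys dP/dt = kP, its closed-form solution is P(t) = P0·e^{kt}, which doubles over every interval T = ln 2 / k. A quick numeric check (the growth rate here is an arbitrary illustrative value):

```python
import math

k = 0.5                  # illustrative growth rate in dP/dt = k * P
T = math.log(2) / k      # doubling time implied by that rate

def power(t, p0=1.0):
    """Closed-form solution of dP/dt = k * P with P(0) = p0."""
    return p0 * math.exp(k * t)

# The quantity doubles over each interval of length T, at any starting time.
for t in (0.0, T, 2 * T):
    ratio = power(t + T) / power(t)
    print(f"t={t:.2f}  P={power(t):.3f}  ratio over next T: {ratio:.3f}")
```

Whether such doubling ever describes machine "power" in the real world is, of course, exactly what is in dispute; the mathematics itself is unremarkable.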

  Another possibility is that blather like this, which reveals the grandiose fatuity of contemporary “genius,” could discredit the prevailing system of the world.

  Cynics—and there are some on the fringes of the AI shrine—might consider this secret meeting an ingenious publicity campaign for Silicon Valley’s most touted products. It was certainly a splendid sendoff for Tegmark’s tome, Life 3.0: Being Human in the Age of Artificial Intelligence, and a rousing launch for his Future of Life Institute. Secret meetings, particularly if packed with hundreds of famously loquacious celebrities, tend to generate far more attention than public ones do, and this summit was no exception.

  What tribute to one’s transformative brilliance could be more thrilling than the warning that your inventions threaten to attain consciousness and reduce human beings to patronized pets? The Asilomar Statement of AI Principles, signed by eight thousand scientists, representing a 97 percent consensus—including a passel of Nobel laureates and Hawking—echoed the billowy affirmations of Google’s own “Don’t Be Evil” precepts and the statement of principles of Burning Man. “Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization. . . . An arms race in lethal autonomous weapons should be avoided.” One wonders what the 3 percent of dissenters had to say.

  All in all, however, the statement was a bland summation of the new Silicon Valley system of the world, in which human beings are no longer the supreme intelligence or the significant inventors. Even new laws of physics, according to Tegmark, will have to come from AI. “Given that a super-intelligent computer has the potential to dramatically supersede human understanding of computer security, even to the point of discovering more fundamental laws of physics than we know today, it’s likely that if it breaks out, we’ll have no idea how it happened. Rather it will seem like a Harry Houdini breakout act, indistinguishable from pure magic.”8

  The advocates of super-AI believe that it can propel human intelligence out into the cosmos in the form of silicon digital devices, escaping the limits of space exploration by vulnerable carbon-based human beings. Ultimately the breakout will sweep into the galaxy, with the intelligent machines contriving ever more powerful rockets bearing ever more miraculous minds and bionic bodies. Tegmark speculates about what that will look like: “after spending billions of years as an almost negligibly small perturbation on an indifferent lifeless cosmos, life suddenly explodes onto the cosmic arena as a spherical blast wave expanding near the speed of light, never slowing down, and igniting everything in its path with the spark of life.”9 In Tegmark’s new creation story, digital machines become the dominant form of life.

  In the face of this new revelation from Silicon Valley, I decided to consult the most experienced and sophisticated of the Asilomar attendees, Ray Kurzweil, who for the past five years has been director of engineering at Google. Despite his reputation as one of the most extreme figures in the movement, I knew him to be an equable and undismayed master of the technology. When I asked him about Tegmark, he seemed somewhat abashed, as if he knew that this line of thought from his alma mater, MIT, was not going as he might have wished.

  Kurzweil has been studying and fashioning forms of artificial intelligence ever since he was a fourteen-year-old prodigy-protégé of MIT’s Marvin Minsky, a career that encompasses the entire history of the field. In late 2017, Kurzweil confided that he has been consulting his mentor for new insights on the fast developing technology. With a mischievous twinkle in his eye, he said he was surprised to discover that Minsky has become more articulate and responsive recently—perhaps surprising in view of the inconvenience of his death two years ago.

  Conducting what he calls a “semantic search” of all ten of Minsky’s compendious books—that is, searching for specific associative meanings rather than blind “key words”—Kurzweil was able to get answers instantly from the deceased AI legend. Kurzweil has used the same program to explore his own works and rediscover insights that had slipped away over time, presumably displaced in memory by the newer concepts of his semantics program. If you have Gmail on your smartphone, you have seen the fruits of Kurzweil’s semantic breakthroughs in the three proposed responses underneath each new email you receive.

  To the more intoxicated of the Asilomar congregants, semantic search is a “super-human” capability, surpassing a search by sequence of words, which can fail if the words are not recalled exactly. By rendering each word as a cluster of synonyms and associations in longer sequences up a hierarchy of meanings, Kurzweil’s “semantic search” operates as a vast computer acceleration of a cumbersome human perusal of a pile of texts.
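The contrast between exact keyword matching and synonym-expanded matching can be sketched in miniature. In this toy (the synonym table is a hand-built stand-in for the learned associations a real semantic system would use, and none of this is Kurzweil's actual code), a literal search misses a paraphrase that the expanded search finds:

```python
# Hand-built synonym clusters standing in for learned word associations.
SYNONYMS = {
    "car": {"car", "auto", "automobile", "vehicle"},
    "fast": {"fast", "quick", "rapid", "speedy"},
}

def keyword_search(query, documents):
    """Return documents containing at least one literal query word."""
    terms = set(query.split())
    return [d for d in documents if terms & set(d.split())]

def semantic_search(query, documents):
    """Expand each query word to its synonym cluster before matching."""
    expanded = set()
    for word in query.split():
        expanded |= SYNONYMS.get(word, {word})
    return [d for d in documents if expanded & set(d.split())]

docs = ["a rapid automobile", "a slow bicycle"]
print(keyword_search("fast car", docs))    # misses the paraphrase
print(semantic_search("fast car", docs))   # finds it via synonyms
```

The gain is real but mechanical: the expansion table does the work, and the program still understands nothing, which is Gilder's point about extension rather than replacement.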

  As Kurzweil acknowledges, semantic search is an “extension of human intelligence” rather than a replacement for it. A human being reinforced by AI prosthetics is less likely, not more likely, to be ambushed by a usurper digital machine. Semantic search delays the machine-learning eschaton.

  Also at Google in late October 2017, the DeepMind program launched yet another iteration of the AlphaGo program, which, you may recall, repeatedly defeated Lee Sedol, the five-time world champion Go player. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks trained by immersion in records of human expert moves and by reinforcement from self-play. The blog Kurzweil.ai now reports a new iteration of AlphaGo based solely on reinforcement learning, without direct human input beyond the rules of the game and the reward structure of the program.

  In a form of “generic adversarial program,” AlphaGo plays against itself and becomes its own teacher. “Starting tabula rasa,” the Google paper concludes, “our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.”10

  The claim of “superhuman performance” seemed rather overwrought to me. Outperforming unaided human beings is what machines—from a 3D printer to a plow—are supposed to do. Otherwise we wouldn’t build them. A deterministic problem with few constraints—a galactic field to plow—Go is perfectly suited to a super-fast computer. Functioning at millions of iterations per second, the machine soon reduces all human games of Go ever played to an infinitesimal subset of its own experience. It may be said to “discover” millions of solutions beyond human reach just as a space probe may “discover” regions of space beyond human ken. But speed of iteration is not the same as intelligence.

  Because Go is a game of pure strategy without differentiated pieces like chess’s, a computer can grind through its solutions by brute iteration, even though Go’s solution space is vastly larger than chess’s. The Asilomar eschatologists miss the difference between computing speed and intelligence, between programmable machines and programmers.
  Tegmark makes the case as well as it can be made that the attainments of AI programs—“Watson” the quiz-show winner and occasionally superior medical diagnostician; Deep Blue the chess champion; Google’s DeepMind game players, which learned to outperform human players from scratch in dozens of electronic games; the face-recognizers; the natural language translators; the self-driving car programs—portend a super-intelligence that will someday be so superior to the human mind that we will no more comprehend its depths than a dog grasps the meaning of our own cerebrations. It is just a matter of time. Although shunning the dystopian interpretation, Kurzweil boldly offers a date: 2049. Tegmark likes to quote Edward Robert Harrison: “Hydrogen, given enough time, turns into people.” People, given enough time, presumably turn into Turing machines, and Turing machines are essentially what people used to call “God.” He isn’t shy about the godlike powers this super-AI will have: “Whatever form matter is in, advanced technology can rearrange it into any desired substances or objects, including power plants, computers, and advanced life forms.”8

  Life 3.0 and Asilomar are declarations of principles for a post-human age. The conclusion is that the last significant human beings are the inventors of super-intelligent AI. People like Hassabis, Norvig, LeCun, and Page. Pay them tribute while you can and hope that they will be indulgent if you sign up for their movement. Life 3.0 is silicon-based and machine-generated.

  Like everyone in the movement, from Page to Kurzweil, Tegmark is a sophisticated modern man who recognizes that there are many imponderables. In his book, he even imagines some persons being allowed to opt out of a relatively benevolent AI regime and set up human-only zones.

 
