
Breakout: Pioneers of the Future, Prison Guards of the Past, and the Epic Battle That Will Decide America's Fate


by Newt Gingrich


  The fears that drive this perverse agenda can be fairly described as apocalyptic. A 2013 article in Rolling Stone titled “Goodbye Miami: Why the City of Miami is Doomed to Drown” is representative of what keeps these zealots up at night. Writing from an imagined vantage point at the end of the twenty-first century, the article envisions a parade of catastrophes due to global warming, from highways that “disappeared into the Atlantic” to the dumping of “hundreds of millions of gallons of raw sewage into Biscayne Bay” to storm waters that “took weeks…to recede.” The metropolis we know “became something else entirely: a popular snorkeling spot where people could swim with sharks and sea turtles and explore the wreckage of a great American city.”73

  Rolling Stone was far from the first to promote this kind of alarmism. Al Gore’s documentary An Inconvenient Truth features maps depicting the large swaths of the world that will be flooded if we don’t immediately stop using the fuels on which our economy depends.

  Proponents of the climate-change theory might well be correct that temperatures will increase and sea levels will rise in the future. It is, however, important to be clear about a few points. First, all of the catastrophes the global-warming alarmists prophesy are based on computer models that are supposed to simulate how weather works. The predictions of warmer temperatures, the rise in ocean levels, the frequency of extreme weather—they’re all based on computer programs that forecast the weather fifty or even a hundred years from now. But of course, we have no such precise understanding of how the climate works. We know that weather is a mind-bogglingly complex phenomenon, so much so that no scientist would claim to know with any precision what the weather will be a month from now. (Such a forecast, after all, would be quickly disprovable.) Yet when the time horizon is a few decades out, the modelers are somehow more comfortable with their forecasts.

  Climatologists’ computer models have a long history of being spectacularly wrong. In his book Chaos: Making a New Science, James Gleick recalls, “The fifties and sixties were years of unreal optimism about weather forecasting. Newspapers and magazines were filled with hope for weather science, not just for prediction but for modification and control.… There was an idea that human society would free itself from weather’s turmoil and become its master instead of its victim.”74 John von Neumann, the father of modern computing and one of the greatest minds of the twentieth century, promoted this hope in frequent lectures about the promise of computers. With sufficiently powerful machines, Gleick writes, “Von Neumann imagined that scientists would calculate the equations of fluid motion for the next few days. Then a central committee of meteorologists would send up airplanes to lay down smoke screens or seed clouds to push the weather into the desired mode.”75

  In reality, even the most advanced computer models fall far short of such precision. The reason, Gleick observes, is that “for small pieces of weather—and to a global forecaster, small can mean thunderstorms and blizzards—any prediction deteriorates rapidly. Errors and uncertainties multiply, cascading upward through a chain of turbulent features, from dust devils and squalls up to continent-size eddies that only satellites can see.”76 Weather forecasters cannot predict the temperature even a few weeks in advance. Yet climate-change theorists are predicting aggregate temperatures to a tenth of a degree at the end of the twenty-first century.77
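  The error cascade Gleick describes is easy to see even in a toy model. The short Python sketch below (an illustration of the general principle only, not a model any climatologist actually uses) iterates the logistic map, a one-line equation famous among mathematicians for its chaotic behavior, from two starting points that differ by one part in ten billion:

    # Toy demonstration of sensitive dependence on initial conditions,
    # the error cascade Gleick describes. This is an illustrative sketch,
    # not a real weather or climate model.

    def logistic_map(x: float, r: float = 4.0) -> float:
        """One step of the logistic map: x -> r * x * (1 - x)."""
        return r * x * (1.0 - x)

    a = 0.4            # "true" initial condition
    b = 0.4 + 1e-10    # measurement off by one part in ten billion

    for step in range(1, 51):
        a, b = logistic_map(a), logistic_map(b)
        if step % 10 == 0:
            print(f"step {step:2d}: A = {a:.6f}  B = {b:.6f}  gap = {abs(a - b):.6f}")

  At step ten the two runs still agree to roughly six decimal places; by step fifty the gap is as large as the values themselves. No amount of additional computing power fixes this, because the divergence is a property of the equations, which is precisely the wall the forecasters of von Neumann’s era ran into.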

  Global-warming alarmists forget that their models are human creations, vague abstractions trying to predict events in the real world. They are not facts. This was a lesson I learned early in my career. In the 1970s, when I first taught environmental studies, we used the popular book The Limits to Growth. It related how a group called the Club of Rome had developed a computer program and fed all sorts of data about economic growth and natural resources into it, producing predictions of disastrous shortages. There is, however, a principle in computing known as GIGO—“garbage in, garbage out.” Every prediction of The Limits to Growth eventually proved wrong. Every projected shortage was averted. The price signals of a free economy worked. New technology and entrepreneurial creativity created solutions faster than the problems developed.

  Another prophet of doom, Paul Ehrlich, achieved fame at about the same time with The Population Bomb, a collection of some of the most stupendous miscalculations ever published.

  None of this is to say that the scientists who predict global warming are ignorant or malicious. No doubt they craft the best models they can. But the limits to our ability to understand infinitely complex systems demand a big dose of humility. Indeed, the impermissibility in polite company of questioning the computer models is reason enough to be skeptical. Such hubris is antithetical to the scientific method.

  Adapting to Change

  Another important point is too often overlooked in the climate change debate. Climatologists are climatologists. They are not political scientists, economists, or engineers. So while they might be able to tell us what the weather will be, they are not the people to tell us what we must do about it. Yet the discussion usually jumps directly from what the “science” says to why we should accept the fantastic remedies prescribed by the environmental Left—which always involve impoverishing the country and increasing government power.

  The assumption is that the only reasonable response to an increase in temperature of a couple of degrees and a rise in sea level of a few inches is to prevent them from happening—to control the climate and the seas. There is an alternative, however. It’s how human beings have responded to weather and the oceans for hundreds of thousands of years: adapt to them.

  When you think of the Netherlands, you probably envision sweeping landscapes dotted with windmills. The Dutch erected thousands of them in the sixteenth and seventeenth centuries. Perhaps you have assumed they built them to grind wheat. Of course, the windmills had agricultural uses, but their main purpose was to pump water—not for drinking or irrigation, but to keep the country dry. About a third of the Netherlands lies below sea level. Without human intervention, much of the country would be underwater, including the entire city of Amsterdam.

  The Dutch ingeniously constructed a system of dikes, dams, canals, and drainage ditches to expel the sea from their country’s broad plains. The windmills pumped water from low-lying fields into drainage canals. They were the breakout technology of their day.

  Much of the country would be under water today if not for the complex system that walls off the sea. Over a period of nearly eighty years starting in 1920, the Netherlands constructed a system of twenty-two giant dams, dikes, and flood barriers. Rotterdam lies twenty feet below sea level, walled off from the water by a pair of massive swinging gates.78 The same basic technique has kept the country dry for more than four hundred years (although there have been a few severe floods). In all those centuries, the Dutch never considered lowering the sea.

  Many of the most densely populated places on earth have taken a cue from the Netherlands to expand their available real estate. In 2010, construction crews on the site of the World Trade Center in Manhattan discovered the remains of an eighteenth-century ship. Archaeologists determined that it had been hoisted onto the docks that once marked the edge of the island and abandoned there. Soon after, in the 1790s, New Yorkers began extending the city into the Hudson River. “By the 1830s,” Archaeology reported, “the area had been completely filled and the new shoreline lay 200 yards west of its original location at modern Greenwich Street. Over the decades, the earth brought in for the shoreline extension—and the trash discarded there—completely covered up the ship.”79

  Indeed, humans have transformed the island of Manhattan dramatically since Henry Hudson discovered it in 1609. The skyscrapers that house the offices of Standard & Poor’s, Morgan Stanley Smith Barney, and Bank of New York Mellon all stand where the river once flowed. Farther uptown, what is now Stuyvesant Town and Peter Cooper Village (neighborhoods composed of fifty-six high-rise apartment buildings) would have been sitting in fifteen feet of water in 1609. A huge swath of Chelsea, too—almost everything west of where the High Line is today—would have been under water. The major thoroughfares that run the length of the city, FDR Drive, the West Side Highway, and the Henry Hudson Parkway, would have stood some distance out into the river. Even much of the inland part of the island was a tidal estuary.80 Lower Manhattan’s Battery Park City, about ninety-two acres of land, sprang out of the water as recently as the 1970s.

  In Boston, the story is even more dramatic. Most of what is now Boston was under water when the first English settlers arrived in 1630. The Boston Garden, Fenway Park, Faneuil Hall and Quincy Market, the Prudential Building, and Copley Square would all have been under water in the Puritans’ Boston. In fact, the city’s residents have been pushing back the sea almost since the town was founded. A massive landfill project beginning in the early 1800s cut the top off Beacon Hill to begin filling in the Back Bay. Over the course of the nineteenth century, Bostonians continued to bring in earth to extend the shoreline, dramatically altering the city’s shape by 1900. The Boston area’s natural contours are completely unrecognizable on a modern map.81

  It’s impossible to know whether the computer models’ projections of slightly warmer temperatures and slightly higher seas will turn out to be accurate over the course of our century. But it is easy to see that many of the worst catastrophes the global-warming alarmists forecast—such as Miami’s or Manhattan’s turning into Atlantis—will never come to pass. That’s because even if the models prove correct, human beings won’t sit idle while the sea level climbs a few inches. We’ll adapt, just as we have for centuries.

  Anyone who thinks the next breakout is inevitable should consider the fanatical determination of today’s green prison guards, who would impose trillions of dollars in taxes and regulations in their quest to keep the planet’s climate from changing.

  CHAPTER SIX

  BREAKOUT IN TRANSPORTATION

  The Pain of Gridlock

  The unbearable commutes that many of us endure ten times a week are powerful testimony to the American work ethic. Given the traffic congestion that brings many of the nation’s cities to a halt twice a day, it’s remarkable that America’s drivers don’t snap.

  One of the senior members of our staff recently moved with his family to Richmond, Virginia, from a home near our office just outside Washington, D.C. Several days a week, he now makes the trip to Washington for work. Without traffic, his drive of about eighty miles would take an hour and a half on I-95. But the highway from Richmond to Washington is never without traffic. As our last meeting of the day wraps up and he checks Google Maps on his iPhone, the line tracing his commute has frequently turned deep red (indicating standstill traffic), and the estimated trip home is well over four hours. Anyone who has ever been to Washington knows that the city saves its worst gridlock not for Congress but for its drivers.

  The gridlock epidemic extends far beyond our nation’s capital. It clogs cities from coast to coast. A CBS News report last year quoted an Atlanta man who described a daily commute that would probably be illegal if inflicted on Guantanamo detainees: “I live about thirty-two miles west of my office,” he said. “The entire drive takes anywhere from one to two hours each way. What makes it brutal is that the road seems to drive directly at the sun. If you are not prepared, this sun can be literally blinding. When cars turn towards the sun, traffic comes to a screeching halt. Many are unprepared, which leads to wrecks and slowdowns. Going home, the entire process is repeated as the sun going down makes traffic a nightmare.”1

  Millions of Americans endure these monstrous commutes every day. The average commute in San Francisco is thirty-two minutes each way. In Boston, it’s thirty-three minutes. In New York, thirty-nine minutes.2 For 3.2 million of us, the drive to work is more than ninety minutes. And 16.5 million drivers, or 12.5 percent of all commuters, leave home before 6:00 a.m.3 Much of this time is spent at a standstill or alternately accelerating and slamming on the brakes, coffee sloshing back and forth in its cup holder. In 2009 the average American commuter spent thirty-four hours sitting in traffic congestion. That adds up to $87.2 billion in wasted fuel and lost productivity—$750 per traveler. In big cities with large commuting populations, it was even worse: seventy hours a year lost to congestion in Los Angeles, sixty-two in Washington, fifty-seven in Atlanta.4 Breaking out of the gridlock prison would be an enormous improvement in the quality of life of millions.

  The hours behind the wheel are more than an inconvenience. Each year, forty thousand Americans die in car accidents, and two million are injured—a number equivalent to the population of Houston.5 Car accidents are the leading cause of death for Americans under thirty-five. There’s a good chance that driving a car is the most dangerous thing you do regularly, even if you don’t think about it.

  Ending those traffic deaths—not to mention traffic gridlock—would be almost as big a breakthrough as curing breast cancer or preventing heart attacks. But it would mean making the biggest changes to the automobile since the Model T Ford.

  The Self-Driven Future

  When we left Sebastian Thrun, the Google vice president and founder of Udacity, he was working to undermine the education establishment and bring a high-quality college degree to the average American for less than 10 percent of the current cost. But you may recall that Udacity is only the latest of Thrun’s potentially world-changing projects. The first Stanford course he offered online, which attracted 160,000 participants, was on artificial intelligence. Thrun has spent much of his career on AI and machine learning, and he began working on a variety of self-driving automobiles for competitions run by the Defense Advanced Research Projects Agency (DARPA) after he arrived at Stanford in 2003.

  The DARPA Grand Challenge, as the agency called it, sought to jump-start the development of genuinely autonomous vehicles in the hope of converting a large portion of military ground forces to drone vehicles. As the Defense Department was reminded during two ground wars in the Middle East, resupply convoys, emergency medical evacuations, and intelligence or forward observation missions are dangerous business. Indeed, many of the casualties in Iraq and Afghanistan have been inflicted on supply convoys traveling through hostile territory. DARPA is eager to take human beings out of these perilous tasks.

  When Thrun began working on Stanford’s Grand Challenge entry in 2004, DARPA had already held the first off-road race through the barren, sometimes mountainous landscape of the Mojave Desert. No team had managed to make it more than seven and a half miles down the 150-mile course. The 2005 race followed a similar route, and the Stanford team, led by Thrun, entered a modified cobalt blue Volkswagen Touareg R5 that they called Stanley (not exactly a Humvee but true to Thrun’s German roots). The team equipped Stanley with GPS, a camera, five laser rangefinders, and a radar system, all of which fed data into six computers in the trunk. These machines, running custom software, drove the car.6

  Although it was up against sturdier off-road vehicles, including a number of Humvees and a sixteen-ton, six-wheeled Oshkosh truck, Stanley was the front-runner almost from the beginning of the race. DARPA staggered the start times in order to give each vehicle some space, but little more than halfway through, the organizers had to push the pause button on Stanley for several minutes (briefly suspending both the SUV and the clock) because it was getting too close to Carnegie Mellon’s entry, a bright red Humvee called H1ghlander. Less than six minutes after Stanley resumed the race, it caught up with H1ghlander again, and again DARPA officials paused the vehicle—this time for even longer. Eventually, five and a half hours into the race, DARPA paused H1ghlander so Stanley could pass. The Stanford team won first place, finishing in just under seven hours.7 Four other teams finished as well, including Carnegie Mellon (where, incidentally, Thrun had helped lead the Robot Learning Laboratory before joining the faculty at Stanford).

  Five vehicles driving autonomously through 130 miles of desert mountains marked a sharp improvement over the first race, in which the competitors barely got out of the gate. But it was still a long way from street-ready. As anyone who has ever driven a car understands, the real challenge is dealing with everybody (and everything) else on the road. In the 2005 race, the robots didn’t have that problem at all; DARPA officials had paused them whenever they came close to another vehicle. Nor had they faced any of the other hazards that human drivers routinely navigate, like fallen branches, bicyclists, joggers, or children chasing balls into the road.

  In 2007, Thrun and his team at Stanford entered the DARPA Urban Challenge, which took autonomous driving from the desert to a fake town, a much more complex course, complete with traffic, stoplights, traffic signs, and various hazards. For this more forgiving terrain, Thrun and his colleagues used a blue Volkswagen Passat wagon called Junior, which they equipped with fancier sensors capable of composing a detailed model of the car’s surroundings (including, now, moving objects). Competitors now had to handle intersections, roundabouts, merges, one-way roads, parking spots, encounters with other autonomous cars (some of which were liable to do strange things), and more. DARPA rules permitted vehicles to stop and analyze a given situation (as a human driver might occasionally do) for no more than ten seconds.8

  Competing vehicles had to complete a set of tasks within six hours while other competitors were also on the road. DARPA scored the teams not solely on time but also on how well the cars handled the various trials thrown at them. Six teams finished the day, and Stanford’s Junior earned second place (this time edged out by Carnegie Mellon). The Passat’s average speed was 13.7 mph.9

  The DARPA challenges had provided proof of concept, showing that autonomous cars could eventually become smart enough to handle many of the complex tasks of driving on civilian roads. But there was still a long way to go between a controlled test course, even that of the Urban Challenge, and the chaotic California roadways.

 
