by Tom Wheeler
Once a week while I was FCC chairman, I would go to the agency’s SCIF (Sensitive Compartmented Information Facility) for an intelligence briefing about activities occurring on the networks that interconnect our nation and the world. The conclusion was clear: when everything is connected, then everything is vulnerable.
The training and software necessary to mount a cyberattack are easily available on the network itself. Without any difficulty, it is possible to purchase software that, when pointed at a target, automatically starts hacking into it. On the open market, zero-day vulnerabilities (previously unknown, exploitable flaws in software) can be purchased to attack the software that runs servers, routers, and the devices that connect to them. It is even possible to use a Google-like web search to identify IoT devices so that they can be hacked and repurposed into cyberattack tools controlled from afar, with the legitimate user none the wiser.
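A minimal sketch of such a device search, assuming the publicly available shodan Python package and a hypothetical API key (the query and key here are illustrative, not an endorsement of any result they might return):

```python
# Sketch: enumerating internet-exposed IoT devices with a Shodan-style
# search engine. Assumes the `shodan` Python package (pip install shodan)
# and a hypothetical API key; shown only to illustrate how little effort
# such discovery takes.
import shodan

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder, not a real key
api = shodan.Shodan(API_KEY)

# Query for internet-facing devices that identify themselves as webcams.
results = api.search("webcam")
print(f"Exposed devices found: {results['total']:,}")
for match in results["matches"][:5]:  # first five hits
    print(match["ip_str"], match.get("org", "unknown org"))
```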
The internet is like the old New England commons: a shared space accessible to all for the greater good of all. Yet history has also shown the tragedy of the commons: individual action without collective responsibility tends to result in a muddy, overgrazed commons of diminished value. The commons of the internet invites similar abuse, amplified not just by greed but by a new definition of warfare.
This is the consequence of internet design decisions made decades ago. “If we had waited to build a perfectly secure network, there never would have been an internet,” Bob Kahn, one of the fathers of the internet, once told me.
Because the original design of the internet assumed mutual trust among users, it is riddled with systemic security problems. The application boom triggered by the web expanded those weaknesses with an economic focus built on speed to market, not secure design.
We live with the hydra-headed consequences of those decisions. There was neither malfeasance nor misfeasance, but the result has been that our infrastructure carries both legitimate and harmful traffic. By one estimate, for instance, malicious botnets (computers controlled without their owners’ knowledge) account for 30 percent of internet traffic.22
A key challenge in dealing with cybersecurity is the broad and diverse range of ways and places in which the network can be exploited.
Cyberattacks are a threat to infrastructure. A cyberattack on Ukraine’s power grid in 2015 left 700,000 people without electricity for hours.23 In 2013, Iranian hackers attacked a dam outside New York City.24 In 2016, a U.S. court convicted a Russian of attacks that caused more than $169 million in losses to 3,700 financial institutions.25
Cyberattacks enable information warfare. During the 2016 U.S. presidential election, intelligence agencies identified Russian-instigated hacks. By collecting and releasing private information from individuals and political institutions, the attackers shaped the discussion of issues in the election, and perhaps its outcome.
Cyberintelligence collections purloin intellectual property. General Keith Alexander, the former director of the National Security Agency, described attacks by foreign entities on U.S. intellectual property as “the greatest transfer of wealth in history.”26 Once inside a network, with the push of a key it is possible to download in a matter of seconds information that, had it been stolen in hard copy, would fill multiple eighteen-wheel trailers. Such attacks aren’t limited to foreign attackers—they can be as American as baseball. In 2015, the scouting director of the St. Louis Cardinals was convicted of hacking into the Houston Astros’ computers to collect scouting and player personnel information.27
As the internet went mobile it became personal, opening up the opportunity to exploit individuals. The ability to reach your contacts, calendar, and all your activity on the internet is only the beginning of mobile vulnerability. Because mobile phones are typically synced to your PC, they are an inviting path into your desktop, and from there into everything else in your digital life. Addicted to that connected watch on your wrist? Think of it as yet another attack vector where infectious software can be implanted by proximity to a bad guy’s network (think of the “other networks” that show up on your phone but you ignore), then jump to your PC and, from there, get a free ride anywhere on the internet. And infected mobile devices can literally walk right past perimeter-protecting security to attack a target.
As the IoT grows to connect tens of billions of microchips, encompassing everything from security cameras to light bulbs, it opens new avenues of attack. By one estimate, it takes only six minutes from the time a device is connected to the internet for it to be discovered and infected so that, without its owner knowing, it is under the command of someone else.28
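How might one verify such a figure? A minimal sketch, assuming a host directly reachable from the internet, simply listens on a commonly scanned port and times the first unsolicited probe:

```python
# Sketch: timing how quickly a newly exposed device is probed. Listens on
# telnet (port 23), a favorite target of IoT-scanning botnets, and reports
# how long the first unsolicited connection takes to arrive. Assumes a
# host directly reachable from the internet; binding a port below 1024
# typically requires elevated privileges.
import socket
import time

LISTEN_PORT = 23

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", LISTEN_PORT))
server.listen(1)

start = time.monotonic()
conn, addr = server.accept()  # blocks until the first probe arrives
elapsed_min = (time.monotonic() - start) / 60
print(f"First probe from {addr[0]} after {elapsed_min:.1f} minutes")
conn.close()
server.close()
```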
We have failed to collectively address the cybersecurity problem. Both market forces and government have been insufficient and ineffective. Networks can increase security through more secure protocols, monitoring, and filtering. Device manufacturers can make security—including supply-chain reliability—a forethought rather than an afterthought. Government could step up with thoughtful and comprehensive legislated policies rather than relying on administrative interpretations of pre-digital-age statutes.
“Anytime you have a dependency on the internet, we’re gonna be playing catch-up in reaction to defending our networks,” former director of national intelligence James Clapper warned.29 That we have such a dependency is a given. The necessity of getting beyond catch-up continues to grow.30
We survived the nuclear threat and the scourge of chemical weapons through policies of containment. But containment is antithetical to the distributed nature of the internet.
Cyber vulnerabilities are like the classic science fiction tale in which the helpful robot turns avenging attacker. That any computer can connect to any other computer in the world—the essence of the internet—means it is possible to compromise, take control of, and exploit any connected system, and to do so at scale. “The Cold War is over,” wrote the Washington Post’s David Ignatius. “The cyber war has begun.”31
Epilogue
There really can be no close to this technological travelogue. What’s more, the trip continues at an accelerating pace. That is what makes an appreciation of the relationship of tomorrow and yesterday so darn important.
One of the late Alvin Toffler’s delightful practices was the occasional convening of an evening of fascinating individuals, each with his or her perspective on technology and its impact. In an earlier era it might have been described as a “salon.” It was at one such event, on a spring evening in Washington, D.C., that, after more than an hour of spirited discussion, I observed how our conversation had not touched on any of the topics with which the participants were involved. Instead, we had been pursuing philosophical and theological themes.
Some of the participants that evening were advancing new technologies, others were manipulating the human genome, yet others were deep in national security. But when the participants hung up their lab coats and turned off their computers they, like everyone else, were searching for anchors in a storm of change that, in many ways, they were helping to create.
“Isn’t it interesting,” I commented, “that a room full of change creators is searching for truths to bring perspective to their change?” There were only two places we could turn for this shelter, I suggested: faith and history. Faith has always provided the perspective that “there’s something bigger than me,” and history is the collected experiences of people like us as they dealt with their own (surprisingly similar) challenges. Our faith, in fact, is inseparable from our history. We study the ancient scriptures for meaning in our modern lives because they tell the stories that provide insight into the universal human condition.
As the collected stories of the human journey, history offers the fundamental lesson that the challenges we face today aren’t unique. No matter how much we flatter ourselves with self-absorption, we are but the continuation of the human saga.
I hope that notion remains this book’s takeaway: that our networked revolution is technologically iterative and sociologically similar to the previous network revolutions of history. The stories that brought us to this point and are defining our future are incredibly interesting. What is special for us is that we inhabit a time when the combined forces of history and technology converge to, yet again, challenge us with change.
We know the stories that led us to this moment. We know how the actions of those who dealt with history’s changes created our today. Now we are in a historic moment of our own, and it’s our turn to guide how new technology determines the future.
Notes
Prologue
1. Perhaps because “breaking things” was a bit too aggressive for a company undergoing public scrutiny, Facebook has changed its motto to “Move fast with stable infrastructure.”
2. “Digital Transformation Is Racing Ahead and No Industry Is Immune,” Harvard Business Review, July 19, 2017.
3. Benjamin Mullin, “The Associated Press Will Use Automated Writing to Cover the Minor Leagues,” Poynter.org, June 30, 2016.
4. In 1983, 91.8 percent of people twenty to twenty-four years old had a driver’s license; in 2014 the figure had fallen to 76.7 percent. “Recent Decreases in the Proportion of Persons with a Driver’s License across All Age Groups,” University of Michigan Transportation Research Institute, updated April 3, 2018 (www.umich.edu/~umtriswt).
5. “Smart Tampon? The Internet of Every Single Thing Must Be Stopped,” Wall Street Journal, May 25, 2016.
6. This wonderfully descriptive alliteration was first suggested by my friend Blair Levin.
7. Walter A. McDougall, Throes of Democracy (New York: HarperCollins, 2008), p. 106.
8. Rudi Volti, Society and Technological Change (New York: St. Martin’s Press, 1995), p. 17.
9. David Sarno, “Murdoch Accuses Google of News ‘Theft,’ ” Los Angeles Times, December 2, 2009.
10. Vicar of Croydon, preaching at St. Paul’s Cross, cited in Gertrude Burford Rawlings, The Story of Books (New York: D. Appleton and Co., 1901).
11. Henry David Thoreau, Walden (1854; Princeton University Press, 1971), p. 60.
12. Clifford Allbutt, “Nervous Diseases and Modern Life,” Contemporary Review (London) 67 (1895), p. 214.
13. Roslyn Layton, “Does the Internet Create or Destroy Jobs? A Snapshot from the Global Debate on Digitally Enabled Employment,” AEIdeas (blog), American Enterprise Institute, December 29, 2014.
Chapter 1
1. I was fortunate to call Paul Baran a friend. He was an exceedingly humble man. The concept of packet switching was first theorized by Leonard Kleinrock of MIT in 1961. J. C. R. Licklider, also of MIT, had envisioned a “galactic network” of interacting computers in 1962. In parallel with Baran’s work (and unbeknownst to each other), Donald Davies and Roger Scantlebury were doing similar work in the United Kingdom; it was this work that led to the term “packet switching,” a more elegant description than Baran’s “hot potato.”
2. Let’s stipulate at the outset that numerous networks have affected the patterns of our lives. Water and electricity redefined daily life. Highways and airways recast physical movement. The focus of this book is the networks that have inexorably led to the marriage of computing and communications.
3. It is possible to consider everything from a writing stylus to the wheel as some form of technology. The term “technology-based network,” however, applies to a network enabled by mechanical (or electromechanical) capabilities.
4. Manuscripts were moved from one scriptorium to another during the Middle Ages, but that did not constitute a “network” in the sense that the information was broadly exposed and disseminated.
5. A very few exceptions relied on sound or sight to move information faster than human travel. Drum signals, smoke signals, or flashes of light could all signal over distance but were for one reason or another constrained in their application.
6. See Tom Standage, The Victorian Internet: The Remarkable Story of the Telegraph and the Nineteenth Century’s On-line Pioneers (New York: Berkley Books, 1999).
7. The term “electronic” is used as defined in Webster’s New World Dictionary, 3rd College Edition (New York: Simon & Schuster, 1988): “1. of electrons, 2. operating, produced, or done by the action of electrons or by devices dependent on such action.”
8. Michael Clapham, “Printing,” in A History of Technology, vol. 3: From the Renaissance to the Industrial Revolution, ed. Charles Singer, E. J. Holmyard, A. R. Hall, and Trevor Williams (Oxford University Press, 1957), p. 37. Cited in Elizabeth L. Eisenstein, The Printing Press as an Agent of Change (Cambridge University Press, 1979), p. 45.
9. Cited in W. Brian Arthur, The Nature of Technology (New York: Free Press, 2009), p. 17. Arthur calls this “combinatorial evolution” and applies it to technology.
10. Interview by Judy O’Neill, March 5, 1990, Menlo Park, Calif. See “An Interview with Paul Baran,” OH 182, Charles Babbage Institute, University of Minnesota, Minneapolis (https://conservancy.umn.edu/handle/11299/107101).
11. Angus Maddison, “World Population, GDP and Per Capita GDP, 1–2003 AD,” chart, Groningen Growth and Development Centre.
12. “Speed of Animals: Horse, Equus ferus caballus” (http://www.speedofanimals.com/animals/horse).
13. The calculation assumes the typical letter consisted of around 2,200 words and the typical English word contains, on average, 4.79 characters, as calculated by Peter Norvig at Google. See Ed Vielmetti, “What Is the Average Number of Letters for an English Word?,” Quora.com, February 11, 2015. If each character requires the standard 8 bits of data, then the letter would contain approximately 84,000 bits (2,200 × 4.79 × 8 = 84,304). If it took a month for the letter to reach its destination, then the effective throughput would be 0.03 bits per second (84,304 bits ÷ 30.4 days/mo. = 2,773 bits/day ÷ 24 hours/day = 116 bits/hour ÷ 60 min./hour = 1.9 bits/min. ÷ 60 sec./min. = 0.03 bits per second). See Principia Cybernetica (http://pespmc1.vub.ac.be/TECACCEL.html).
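For readers who want to check the arithmetic, a short Python sketch reproduces the calculation above (the inputs simply restate the note’s figures):

```python
# Reproducing the calculation above: the effective throughput of a
# letter that takes a month to arrive.
words_per_letter = 2_200
chars_per_word = 4.79   # average English word length (Norvig)
bits_per_char = 8

bits_per_letter = words_per_letter * chars_per_word * bits_per_char
seconds_per_month = 30.4 * 24 * 60 * 60

print(f"{bits_per_letter:,.0f} bits per letter")         # 84,304 bits
print(f"{bits_per_letter / seconds_per_month:.3f} bps")  # ~0.032 bits/second
```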
14. John F. Stover, The Routledge Historical Atlas of the American Railroads (New York: Routledge, 1999), p. 21.
15. Menahem Blondheim, News over the Wires: The Telegraph and the Flow of Public Information in America, 1844–1897 (Harvard University Press, 1994), p. 17 (chart).
16. Standage, Victorian Internet, p. 57.
17. It takes a little over two seconds to tap in the Morse code for one average-size 8-bit character, thus producing a transmission rate that is approximately 3 bits per second.
18. Wireless speeds remain significantly slower than wired speeds, but as wireless connections increasingly reach broadband speeds, the value of the connection is enhanced by its ubiquity.
19. Increasingly, the decision making of individual hubs takes the user to a centralized corporate hub such as Google or Facebook, where unique decision making is replaced by algorithms programmed to hold users’ interest and keep them online so they will see more paid messages. While this diminishes the individual’s role in determining what flows in and out, the network continues to vest that power at the edge.
Chapter 2
1. For example, in France the twelfth-century cleric Peter Waldo tried to launch a reform movement. In fourteenth-century England John Wycliffe led a similar charge.
2. Stephen J. Nichols, The Reformation: How a Monk and a Mallet Changed the World (Wheaton, Ill.: Crossway, 2007), p. 30.
3. Elizabeth Eisenstein, The Printing Revolution in Early Modern Europe (Cambridge University Press, 1983), p. 151.
4. Luther at times hid behind unnamed friends who, he said, had given his works to printers. It was a thin veil.
5. Nicole Howard, The Book: The Life Story of a Technology (Johns Hopkins University Press, 2009), p. 58.
6. In 1508 the printer Johannes Rhau-Grunenberg was enticed to move his operations to Wittenberg. The new university in the town required printed texts. The founding of Wittenberg University at the turn of the century, in fact, had created a speculative bubble in the printing business. Five printers set up shop in the small German town, thus creating a supply in excess of demand. The Wittenberg printing bubble burst, but the basic level of demand remained, and Herr Rhau-Grunenberg was recruited to meet the needs of a more rational market. See Andrew Pettegree, The Book in the Renaissance (Yale University Press, 2010), p. 92.
7. Ibid.
8. Statement to his brother, Giuliano, as quoted in William Samuel Lilly, The Claims of Christianity (1894), p. 19.
9. Pettegree, Book in the Renaissance, p. 93.
10. Nichols, Reformation, p. 29.
11. Pettegree, Book in the Renaissance, p. 94.
12. Recent scholarship has begun to ask whether the story of tacking the theses to the church door was apocryphal. Irrespective of such a debate, there is no doubt about the role played by the printing press in the propagation of Luther’s ideas. In fact, the possible absence of a public posting from which the theses could be copied even suggests a more complicit arrangement between the monk and the printers.
13. Lucien Febvre and Henri-Jean Martin, The Coming of the Book (London: Verso, 1997), p. 290.
14. Rossiter Johnson, ed., The Great Events by Famous Historians, vol. 8, The Later Renaissance: From Gutenberg to the Reformation (London: Aeterna Publishing, 2010), p. 18.
15. There is an ongoing debate over whether the printing press created the Reformation. While it is an interesting historical exercise, it does not add to our understanding here. The fact of the matter was that Luther’s writing, as disseminated by the network of printers, fell on a fertile field waiting to be watered.
16. Pettegree, Book in the Renaissance, p. 95.
17. Febvre and Martin, Coming of the Book, p. 294.
18. John Man, Gutenberg: How One Man Remade the World with Words (Hoboken, N.J.: John Wiley & Sons, 2002), p. 273.
19. Ibid., p. 276.
20. Gutenberg was not alone in this quest; history records the efforts of others, such as Coster of Haarlem in Holland (and probably forgets the role of many others). Clearly, however, Gutenberg’s was the first large-scale implementation and (thanks to his legal problems) the best documented.