by Tom Wheeler
Throughout history we have seen the power of connectivity to change lives. Whether our new connectivity fulfills its potential to expand opportunity for all, or simply deepens inequality, remains in contention.
Each of the chapters in this book closed with a story of how the particular technology it discussed resurfaced as a key part of a subsequent network revolution.
Johannes Gutenberg’s envisioning of information in its smallest parts returns in the language of the internet.
Charles Babbage’s lament, “I wish to God these calculations had been executed by steam,” opened the path to automated computing.
Samuel Morse’s “flash of genius,” while a triumph of narcissism over fact, nonetheless opened an era of electronically disembodied information.
And for the past half century, the modern extensions of these network revolutions have been churning to deliver the network forces that will determine our future.
Today, with a degree of anxiety, we observe the approach of newer effects of our network evolution. The takeaway from the previous network revolutions is that no one was prescient. Everyone, from the humblest individual to the greatest scientist to those whose innovations harnessed the new technology, was making it up as they went. Each was assembling a giant jigsaw puzzle without benefit of the picture on the top of the box.
History is told in seemingly clear-cut conclusions generated by a set of facts that appear self-evident. Historians know how things turned out; contemporaries, like us, who are living through the upheaval have no such advantage. One thing we know from history, however, is that change is unruly, unstructured, and unpredictable. Decision-making in real time is messy and insecure.
I began this book by challenging the assumption that we are living through the most transformative period of history. We now turn to how technology has put us on a course capable of crossing that threshold.
Nine
Connecting Forward
Chapter 1 began with a discussion of Paul Baran’s 1964 description of the move from a centralized network to a distributed network. After fifty years, Baran’s concept has been widely implemented and is the structural revolution on which our future will operate.
A principle of this book is that it is never the primary network technology that is transformational but rather the secondary effects made possible by that technology. The previous chapter sampled some of the ongoing effects of Baran’s network concepts. This chapter looks at how the distributed digital network underpins the four network-based forces that will define the future: a new generation of the web, artificial intelligence, distributed trust, and cybersecurity. Together they possess the potential to deliver a level of network-enabled commercial and cultural transformation that just might rival that of the last great network-driven revolution in the mid-nineteenth century.
The union of low-cost computing and ubiquitous connectivity across a low-cost distributed digital infrastructure has the capability to deliver us fully into the third great network-driven transformation. For as much as we talk about living in an information age, we have yet to experience a period to rival the Industrial Revolution. Our networks, for instance, while shaping many daily activities, have yet to have an industrial-age-level impact on the heart of the economy: the productivity of the creation of goods and services.
In his masterful The Rise and Fall of American Growth, economist Robert Gordon compared the growth in productivity of the years 1920–70 with the years that followed. The average annual growth in productivity per hour dropped from 2.8 percent in the mid-twentieth century to 1.6 percent as the digital era emerged.1
The early days of the internet’s growth had a significant impact on economic productivity, but it was not sustained. Between 1996 and 2004, American productivity rode on the back of ubiquitous personal computing and the web to increase annually by an average of 3.1 percent.
But then productivity growth fell away. In the period 1970–2014 the average annual growth in productivity per hour was below that of the post–Civil War era. Certainly, the network continued to have an impact on our personal lives, but being able to order a pizza on a smart phone is a far cry from the expansion of efficiency in the production of goods and services at the heart of the economy.
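To see what is at stake in those numbers, consider a simple back-of-the-envelope calculation. The Python sketch below is illustrative only: the growth rates come from the figures cited above, and the fifty-year horizon is an arbitrary choice made to show how quickly the gap widens once annual rates compound.

```python
# Illustrative only: compound the annual productivity growth rates cited above
# over a fifty-year span to show how much the gap widens over time.

def compound_growth(annual_rate_pct: float, years: int) -> float:
    """Return the cumulative multiple after compounding an annual rate."""
    return (1 + annual_rate_pct / 100) ** years

for rate in (2.8, 1.6, 3.1):
    multiple = compound_growth(rate, 50)
    print(f"{rate:.1f}% per year for 50 years -> output per hour grows {multiple:.1f}x")

# 2.8% a year roughly quadruples output per hour in fifty years;
# 1.6% a year only slightly more than doubles it.
```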
The culprit was the network itself—and the new economic rewards structure it created.
With the introduction of the World Wide Web around 1990, the internet became usably accessible to mere mortals like you and me. While the early digital network had created the ability to access diverse databases of information, its use was limited to an internet priesthood and nerdy hobbyists. Tim Berners-Lee’s creation of the web broke through those barriers by creating a common protocol for finding and displaying information from disparate databases across disparate networks in a simple request/response format.
The first iteration of the web (Web 1.0) gave us browsers with which to seamlessly search the world’s information. Accompanying that capability, and driving its adoption, was the opportunity to sell advertising associated with the information. Adoption of the web coincided with and helped fuel the 1996–2004 period of internet-driven productivity growth.
About a dozen years later, Web 2.0 democratized the network by allowing anyone to create and deliver information. It was the birth of social media. Consumer-facing, as opposed to productivity-enhancing, activities dominated the web. The economic model of selling to businesses a consumer’s self-expressed interests took off to become the dominant economic model of Web 2.0.
Outside social media, the new business model did little to improve the production efficiency of converting inputs into outputs of goods and services—the basic measure of productivity. Despite all the innovations we have seen in our personal lives—from Facebook, to Netflix, to Waze—productivity growth slowed.2
There is a fundamental difference between Facebook or Netflix on your smart device and a revolution in the core production capabilities of the economy. “Economic growth since 1970 has been simultaneously dazzling and disappointing,” Robert Gordon observed, because “advances since 1970 have tended to be channeled into a narrow sphere of human activity having to do with entertainment, communications, and the collection and processing of information. For the rest of what humans care about—food, clothing, shelter, transportation, health, and working conditions both inside and outside the home—progress has been slow.”3
But the cavalry is on the way. The distributed digital network has become the infrastructure for game-changing and productivity-enhancing uses of the network. It all starts with a new iteration of the web itself.
Web 3.0
Creating Value by Orchestrating Intelligence
The new Web 3.0—called the semantic web by Berners-Lee—stands the web’s traditional request-response structure on its head. As Moore’s law enables microchips to be built into everything, wireless connections provide access to the functionality of an ever-growing (N+1) number of chips and the intelligence they generate. Rather than the request-and-response of earlier versions of the web, in which existing information was discovered and displayed, Web 3.0 orchestrates intelligence to create something new.
Using the web to deliver a movie or a Facebook post is the transportation of information that has already been created. Web 3.0, in contrast, is the orchestration of a flood of intelligence from connected microchips. It is, for instance, the difference between a connected car and an autonomous vehicle.
The cars we drive today are wirelessly connected in the manner of earlier iterations of the web: there is an ongoing request-and-reply of information into and out of the automobile. Autonomous vehicles, in contrast, are full of microprocessors generating intelligence that must be orchestrated with that of other vehicles, road signs, weather sensors, and a myriad of other inputs. That orchestration creates a new product (the safe coexistence and cooperation of vehicles), and that product creates new productivity (more efficient uses of highways and roads).
Each autonomous vehicle is expected to produce 25 gigabytes of data per hour—the equivalent of a dozen HD movies.4 Former Intel CEO Brian Krzanich estimated that in one day, a single car will generate about as much data as 3,000 people do in a similar period today.5
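What “orchestrating intelligence” means in practice can be suggested with a toy sketch. The Python below is purely illustrative: the source names, the readings, and the rule of simply taking the most conservative speed are all invented. The point is only that several independent streams of intelligence are combined into a single new decision, rather than one database answering one request.

```python
# A toy illustration, not any vendor's actual system: combine several
# independent streams of intelligence into one new decision.
from dataclasses import dataclass

@dataclass
class Reading:
    source: str              # where the intelligence came from (invented names)
    safe_speed_mph: float    # the top speed this source currently considers safe

def orchestrate(readings: list[Reading]) -> float:
    """Drive at the most conservative speed any input reports."""
    return min(r.safe_speed_mph for r in readings)

inputs = [
    Reading("posted_road_sign", 65.0),
    Reading("weather_feed", 45.0),            # heavy rain reported
    Reading("lead_vehicle_telemetry", 52.0),  # car ahead is slowing
]
print(f"target speed: {orchestrate(inputs):.0f} mph")  # -> 45 mph
```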
The autonomous vehicle is but one example of how the distributed network becomes a platform for new applications as the intelligence being produced by innumerable nodes throughout the network is connected so that it can be manipulated to create new products and drive productivity.
As we have seen, the earlier versions of the web produced brief efficiency gains. Putting computers with internet access throughout an enterprise increased productivity through the improved transmission of and access to information, but the next productivity jump was elusive. Web 3.0’s semantic capabilities, however, promise continually increasing productivity improvements to accompany the exponential increase in intelligence generated by connected microchips.
The move from transporting preexisting information to orchestrating new intelligence to produce new products and services will redefine the economics of the network from push to pull. Thus far, the business model of the web has been dominated by pushing information to targeted users and selling that capability to advertisers. Web 3.0 redefines value creation as pulling the intelligence created by tens of billions of connected microchips so that it may be manipulated to create new products and capabilities.6
The first act of Web 3.0 has been dubbed the internet of things (IoT).
Consider, for instance, how connected microchips change the industrial process. A company such as Boeing must orchestrate the activities of more than 28,000 suppliers in seventy countries, not to mention products that are continually on the move. By including inexpensive microchips in component parts, Boeing can track the shipment of parts from suppliers for just-in-time arrival. Once on the assembly floor, wireless sensors read the whereabouts of the parts and instruct automated equipment. Then, when the finished product leaves the plant, the same kind of connected intelligence provides global, real-time tracking and tracing of the aircraft and its components.
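A highly simplified sketch of that kind of tracking might look like the following Python. The part numbers, locations, and dates are invented, and this is not Boeing’s actual software; the point is only that a stream of tag reads from connected chips becomes a real-time answer to the question “where is everything right now?”

```python
# Hypothetical part numbers, locations, and dates; not Boeing's actual software.
from datetime import date

# Each tag read reported over the network: (part_id, location, date seen)
tag_reads = [
    ("wing-spar-0042", "supplier_dock", date(2024, 3, 1)),
    ("wing-spar-0042", "port_of_entry", date(2024, 3, 9)),
    ("wing-spar-0042", "assembly_floor", date(2024, 3, 14)),
    ("fastener-lot-77", "supplier_dock", date(2024, 3, 2)),
]

def latest_location(part_id: str) -> str:
    """Return the most recently reported location for a part."""
    reads = [r for r in tag_reads if r[0] == part_id]
    return max(reads, key=lambda r: r[2])[1] if reads else "never seen"

for part in ("wing-spar-0042", "fastener-lot-77"):
    where = latest_location(part)
    status = "ready for assembly" if where == "assembly_floor" else f"in transit ({where})"
    print(f"{part}: {status}")
```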
It’s not just in industrial settings where IoT enhances productivity. Sensors monitoring sunlight, humidity, and ground moisture help agricultural operations protect crops and maximize yield. When the produce is shipped, sensors track its movement to market as well as how fast it is ripening. The latter information is especially valuable for productivity, as it can trigger cooler refrigeration or even delivery to a closer intermediary market.
IoT can even help reduce your water bill. In the typical public water system in the United States, 16 percent of the water put into the system never reaches the consumer.7 Intelligent sensors placed strategically throughout the water system can identify and report leaks that otherwise would go unnoticed. A new product—real-time monitoring of the water system—thus results in increased productivity of the water system to save ratepayers money.
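A minimal sketch of that idea, with hypothetical districts, flow figures, and an arbitrary cutoff, might compare the water pumped into each district with the water metered at customers’ taps and flag any persistent gap for inspection.

```python
# Hypothetical districts and flow figures; the 10% cutoff is an arbitrary choice.
districts = {
    # district: (gallons pumped into the district, gallons metered at consumers)
    "north":  (1_000_000, 975_000),
    "center": (1_200_000, 990_000),   # roughly 17.5% unaccounted for
    "south":  (800_000, 784_000),
}

LEAK_THRESHOLD = 0.10  # flag any district losing more than 10 percent

for name, (pumped, metered) in districts.items():
    loss = (pumped - metered) / pumped
    flag = "INSPECT FOR LEAKS" if loss > LEAK_THRESHOLD else "ok"
    print(f"{name}: {loss:.1%} unaccounted-for water -> {flag}")
```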
Whether in industrial, agricultural, or smart-city applications, the opening act of Web 3.0, the internet of things, creates a new information product: the real-time awareness of everything going on. The application of that information increases productivity.
The old economic model of the web had to be principally consumer-facing because it depended on advertising dollars. Making money with Web 3.0 will be different, as intelligence becomes a raw material used to create new products that will in turn make the connected things more productive. The business question of Web 3.0 then becomes “What can I build?” as opposed to “What can I sell?”
Artificial Intelligence
Our Network Resembles Our Brains
As Charles Babbage struggled to explain his analytical engine in nineteenth-century terms, he described the first computer as “eating its own tail” because it based one calculation on the results of preceding calculations. It is a description that could also be applied to what we today label “artificial intelligence.” And Babbage’s mechanization of human reason raised, in the Victorian era, the same kinds of existential issues that have emerged around artificial intelligence today.
The term artificial intelligence (AI) was coined in the mid-1950s by John McCarthy, who went on to become a longtime Stanford professor.8 Since then it has been used and abused in popular culture and endless science fiction thrillers. Computer science legend Ray Kurzweil forecasted that by 2045, the ability of AI to continuously improve on itself would result in the “singularity”—machine-based superintelligence greater than human intelligence.9 For the foreseeable future, however, our reality will be shaped by multiple levels of evolutionary computer intelligence that will creep into daily life.
One level of computer intelligence—often called machine learning—is the ability of machines to sift through great quantities of information to, Babbage-like, inform subsequent activities. Amazed that after you type only part of your search query, Google completes it for you? Pleased by how Amazon recommends books on topics of interest to you? Happy your radiograph is being read quickly and precisely? All of these are examples of “intelligent” machines accessing databases of previous searches, prior purchases, and earlier diagnoses to provide an answer.
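A toy version of the first of those examples suggests how little “thinking” is involved. The Python below, with an invented log of past queries standing in for a real search history, completes a partial search simply by ranking previously seen queries that share the typed prefix.

```python
# An invented log of past queries stands in for the real search history.
from collections import Counter

past_queries = Counter({
    "weather today": 120,
    "weather tomorrow": 80,
    "web 3.0 definition": 15,
    "watson jeopardy": 9,
})

def autocomplete(prefix: str, k: int = 3) -> list[str]:
    """Suggest the k most frequent past queries that start with the prefix."""
    matches = [(q, n) for q, n in past_queries.items() if q.startswith(prefix)]
    matches.sort(key=lambda pair: pair[1], reverse=True)
    return [q for q, _ in matches[:k]]

print(autocomplete("we"))
# -> ['weather today', 'weather tomorrow', 'web 3.0 definition']
```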
The concept of an intelligent machine came to the nation’s popular attention in 2011 when IBM’s Watson computer beat two human champions on the TV game show Jeopardy. While it was referred to in shorthand as the computer “thinking,” Watson really wasn’t thinking. The format of the game show lent itself to a computer parlor trick where preloaded information produced answers that made the computer appear to be thinking.
The secret to Jeopardy, former champion Richard Cordray once told me, is a multistep preparation process. First, contestants must identify facts that lend themselves to the “What is …” answer format of the show. Then they must amass notebooks full of this information and learn those facts. It is an activity perfectly suited for an entry-level intelligent computer like the Watson of 2011. The basis of the computer’s victory was that manmade algorithms followed Cordray’s technique, but in instantly accessible digital code rather than stacks of notebooks (Watson’s data included, for instance, the entire contents of Wikipedia). The algorithms would identify what was being asked, search the database for relevant information, and proffer an answer.
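A drastically simplified sketch of that three-step technique might look like the following, with an invented fact base and a crude word-overlap match standing in for Watson’s far more sophisticated algorithms.

```python
# Invented fact base; a crude word-overlap score replaces Watson's real algorithms.
fact_base = {
    "longest river in africa": "the Nile",
    "author of moby-dick": "Herman Melville",
    "element with atomic number 79": "gold",
}

def answer_clue(clue: str) -> str:
    clue_words = set(clue.lower().split())
    # Steps 1 and 2: identify what is being asked by finding the stored fact
    # whose description shares the most words with the clue.
    best_key = max(fact_base, key=lambda k: len(clue_words & set(k.split())))
    # Step 3: proffer the answer in the show's "What is ..." format.
    return f"What is {fact_base[best_key]}?"

print(answer_clue("This precious metal has atomic number 79"))
# -> "What is gold?"
```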
The next generation of intelligent computing programmed the computer to derive conclusions from data, not just spit out answers. “Conclusions” are different from “answers” in that the output is determined from the data being observed rather than from what has been programmed in. It is the difference, for instance, between the computer being told that the Nile is the longest river in the world (an answer) and the computer searching geographic databases to determine which river is longest (a conclusion).
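In code, the distinction is easy to see. In the sketch below, which uses approximate river lengths purely for illustration, the first version merely recites a stored fact, while the second derives the same statement from the data it examines.

```python
# Approximate river lengths, used only to illustrate "answer" versus "conclusion".

# An "answer": the fact is simply stored and recited.
stored_answer = "The Nile is the longest river in the world."

# A "conclusion": the same statement is derived from the data being examined.
river_lengths_km = {"Nile": 6650, "Amazon": 6400, "Yangtze": 6300, "Mississippi": 3766}
longest = max(river_lengths_km, key=river_lengths_km.get)
derived_conclusion = f"The {longest} is the longest river in the world."

print(stored_answer)
print(derived_conclusion)  # identical sentence, but reached from the data itself
```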
As machines evolve from following specific instructions to processing diverse information, they begin to look like they are thinking—but they are not. What they are doing, however, is behaving in ways that mimic the neural network patterns of the human brain.
The brain is essentially an input-output system, just like a computer. Roughly 86 billion neurons receive, process, and transmit inputs (for example, the stove is hot). The network connecting these neurons allows the brain to draw upon and assemble all these inputs to trigger an output response. Intelligent computing creates an artificial neural network of software and hardware to replicate the same process of harnessing diverse, distributed, but interconnected inputs to produce a conclusion.
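A single artificial neuron, the building block of such a network, can be sketched in a few lines. The weights below are hand-picked rather than learned, and the “hot stove” inputs are only an illustration of how weighted inputs are combined into an output signal.

```python
# Hand-picked weights, not learned ones; inputs stand in for sensory signals.
import math

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    """Weighted sum of inputs pushed through a squashing (sigmoid) function."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # an output signal between 0 and 1

# Two inputs: [how hot the surface feels, whether the hand is touching it]
hot_stove = [0.9, 1.0]
cool_counter = [0.1, 1.0]
weights, bias = [6.0, 1.0], -4.0

print(f"withdraw-hand signal (hot stove):    {neuron(hot_stove, weights, bias):.2f}")
print(f"withdraw-hand signal (cool counter): {neuron(cool_counter, weights, bias):.2f}")
```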
As I stood in Boeing’s new Composite Wing Center in Seattle—a building that could house twenty-five football fields—huge machines laid the strips of carbon-fiber-reinforced polymer used in the wings of Boeing’s next-generation 777. To build the struts that run the length of the almost 118-foot-long wing, the machines painstakingly lay down 120 layers of carbon-fiber composite tape. The result is stronger yet lighter and more flexible than traditional metal struts.
My visit took place just after the new facility opened and before actual production began. From high above the floor, I watched half a dozen white-coated inspectors with magnifying glasses and flashlights poring over every millimeter of the gigantic strut, checking for “gaps and [over]laps” in the most recent layer of composite tape. By feeding the “gaps and laps” into a database, the computer “learns” from its mistakes and “teaches” itself to eliminate the problems in subsequent runs.
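The feedback loop can be sketched schematically. The fields and the adjustment rule below are invented and are not Boeing’s software; the point is simply that logged defects from one run become the correction applied to the next.

```python
# Invented fields and adjustment rule; a schematic of the feedback loop only.
defect_log = [
    {"layer": 87, "position_m": 12.4, "kind": "gap"},
    {"layer": 87, "position_m": 31.0, "kind": "gap"},
    {"layer": 92, "position_m": 5.2,  "kind": "lap"},
]

tape_overlap_mm = 0.0   # current (hypothetical) machine setting

gaps = sum(1 for d in defect_log if d["kind"] == "gap")
laps = sum(1 for d in defect_log if d["kind"] == "lap")

# Simple corrective rule: gaps suggest too little overlap, laps too much.
tape_overlap_mm += 0.05 * (gaps - laps)
print(f"gaps={gaps}, laps={laps} -> overlap adjustment for next run: {tape_overlap_mm:+.2f} mm")
```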
The aircraft flying on these new wings might someday similarly be piloted by machine intelligence. The autopilots in use today are like the early Watson: loaded with instructions for what to do in specific circumstances. Confronted by something new, however, the autopilots default to human pilots to solve the unforeseen with their biological neural networks. Store all the learned experience of the humans in artificial neural network computers, however, and the automatic pilot could conceivably manage unanticipated circumstances in the same manner as human pilots.
At University College London, researchers have built an autopilot that calls on ten separate artificial neural networks, one for each different aircraft control (throttle, pitch, yaw, and so on). These systems then collect the digital records created by human pilots in simulators to determine how they respond to various circumstances, including the unpredictable. Using this database, the new autopilot builds a collection of information from which to draw in making decisions.
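Structurally, the idea can be sketched as one model per flight control, each drawing on recorded human responses. In the toy Python below, a simple nearest-neighbor lookup stands in for each neural network, and the simulator records are invented.

```python
# Invented simulator records; nearest-neighbor lookup stands in for each network.
simulator_records = {
    # control: list of (observed airspeed in knots, recorded pilot setting)
    "throttle": [(140, 0.80), (180, 0.55), (220, 0.35)],
    "pitch":    [(140, 4.0), (180, 2.5), (220, 1.0)],   # degrees nose-up
}

def predict(control: str, airspeed_kt: float) -> float:
    """Respond the way a human pilot did in the most similar recorded situation."""
    records = simulator_records[control]
    nearest = min(records, key=lambda rec: abs(rec[0] - airspeed_kt))
    return nearest[1]

for control in simulator_records:
    print(f"{control} at 165 kt -> {predict(control, 165)}")
```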
Utilizing the huge amounts of data required for machine learning demands significant computing power. Accomplishing this has become a Gutenberg-like reassembly of diverse pieces of information, done at high speed. Interestingly, it draws on microchip designs created for video games. Graphics processing unit (GPU) chips were originally developed because realistic video games required breaking large blocks of descriptive information into smaller pieces to be processed in parallel and reassembled on the fly into the finished product. It is exactly this parallel processing of large amounts of data that is necessary for machine learning.10
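The pattern itself is easy to illustrate on an ordinary CPU, even though real GPU code looks quite different. The sketch below breaks a stand-in “image” into chunks, processes the chunks simultaneously, and reassembles the result.

```python
# A CPU-based analogy for the GPU pattern: split, process in parallel, reassemble.
from concurrent.futures import ProcessPoolExecutor

def brighten(chunk: list[int]) -> list[int]:
    """A stand-in per-pixel operation applied independently to every element."""
    return [min(value + 40, 255) for value in chunk]

if __name__ == "__main__":
    pixels = list(range(256)) * 4                     # a stand-in "image" of 1,024 values
    chunks = [pixels[i:i + 256] for i in range(0, len(pixels), 256)]

    with ProcessPoolExecutor() as pool:               # each chunk is handled in parallel
        processed = list(pool.map(brighten, chunks))

    image = [value for chunk in processed for value in chunk]  # reassemble the pieces
    print(len(image), image[:5])                      # -> 1024 [40, 41, 42, 43, 44]
```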