by Tom Wheeler
Machine AI, therefore, isn’t really “thinking” but is rather a mathematical activity designed by thinking humans for the rapid mining and manipulation of information. True AI is a broader, more powerful concept whereby a machine becomes “smart” through its own intelligence. Thus far, it is more a concept than a reality (which is not to say it cannot become one). Nonetheless, it has sparked its own debates. From White House workshops to the cautions of visionaries, the technical possibility and ethics of true AI remain a front-burner item. Long before we get to true AI, however, we will be forced to deal with the creeping effects of its early iterations.
During a meeting about Argentina’s plans for next-generation connectivity, the country’s communications minister raised the issue that has accompanied technological innovation throughout history and now circles around intelligent computing: what about the impact on jobs? Specifically, he referenced a 2013 study by two University of Oxford professors forecasting that 47 percent of the jobs in the United States were at risk of being replaced by machine intelligence.11
Such concerns have historically accompanied technological upheavals. The term “Luddite” entered the English language, for instance, when early-nineteenth-century English weavers protested the Industrial Revolution’s automation of weaving looms. Named for the probably fictitious Ned Ludd and characterized by the smashing of looms, the movement galvanized around the fear that skilled textile workers would be replaced by machines requiring only unskilled operators.
Such automation produced the industrial age, a period when, indeed, sector-specific jobs were lost. However, the basic skills required in most jobs meant a worker could lose his job in one sector but still put his skills to work in another. The automation may have displaced workers from specific jobs, but it ended up being beneficial for the overall economy—and job creation—since the increased productivity led to lower prices, which drove up demand for products, and thus demand for workers.
Concern about job-killing automation resurfaced in the mid-twentieth century as computers began to come out of their sanctuaries. As far back as 1961, President Kennedy warned, “The challenge of the sixties is to maintain full employment at a time when automation is replacing men.”12 Two decades later, when PCs began appearing on desktops, the fear of being automated out of work returned.13 Today, intelligent computing has reignited the issue yet again.
The challenge of an automated economy is not simply the replacement of lost jobs; it is the reallocation of workers and skills. The widespread adoption of bank automated teller machines (ATMs), for instance, substituted software and computer chips for a job done by humans. The number of bank tellers declined dramatically. This, however, reduced the operating cost of each bank branch. As a result, banks opened more branches. The new branches didn’t need as many tellers, but they did need sales and customer service personnel. The new job skills were different from those of a teller, but not that different, and they were harder to automate. In the end, automating the teller’s job led to an overall increase in bank employment.14
Intelligent machines also hold the potential to address some systemic problems, including, perversely, a potential shortage of workers. Over the next decade, an estimated 3.5 million manufacturing jobs will open.15 Yet the growth of the labor force available to fill those jobs is down by two-thirds, owing to retiring baby boomers and lower birth rates.16 Something is going to have to fill the void.
Automation could also mean the return of activities that for decades have been siphoned away by lower-wage countries. Why go to the expense and bother of assembling something offshore if it can be just as inexpensively assembled by intelligent machines domestically? Even if foreign countries adopt the technology themselves, they will still lose their cost advantage. Since foreign intelligent machines will presumably have the same capabilities as domestic machines, there is no economic gain from exporting the activity in the first place.17 Operating the machines will likely represent a new occupation demanding new skills, and it will mean jobs returning home.
It is important to be neither too rosy nor too dark about the challenges of machine learning and AI. It is doubtful the world will sit calmly by and not respond to the challenge of automation. Replacing 47 percent of jobs with machines may be a defensible assumption in academic and economic models, but this nightmare scenario can occur only if everyone sits on their hands. Because machine intelligence will be implemented neither instantaneously nor in isolation, economic and political structures will have an opportunity to respond.
The ATM example is instructive. The jobs that were lost were rote activities that could be automated. The jobs that were created required creativity and the ability to solve problems and make decisions. Responding to intelligent machines isn’t so much about creating new jobs as it is about new occupations.18 Such a situation challenges us to reassess educational preparation to deprioritize skills that can be automated and reprioritize training for intellectual skills that can’t be automated.
Automation also challenges us to revisit national policies built on industrial age assumptions. Workers’ rights, for instance, have evolved as we’ve moved from the shop floor to the digital economy. Increased productivity became the rationale to cut the workweek by one third. Government policy and collective bargaining rebalanced the employer-employee relationship that industrialization had skewed. Without a doubt, similar adjustments will be necessary as we deal with the reality of machine intelligence.
But the greatest workers’ right is the right to be prepared, both when entering the job market and while continuing in it. One study projects that 60 percent of the 3.5 million industrial jobs that will become available over the next decade will go unfilled because our education system isn’t producing individuals with the requisite technical, computer, and problem-solving skills.19 And when the CEO of AT&T tells employees to spend five to ten hours per week expanding their skills to “retool yourselves,” the right to preparation takes on lifelong proportions.20
Machine intelligence will be essential to handling the tsunami of data from Web 3.0. Both the flood of product-producing data and its automated applications will reprise the experience of upheaval and angst that we have seen technology-driven change create throughout this book. How we think about production, education, workforce allocation, and economic models must change. It was the pressure of earlier network revolutions that forced the revolutionary education and labor policies that are today the accepted status quo. That experience is important to remember. Our technology may be new, but dealing with the effects of the change it creates is familiar.
Blockchain Trust
A Distributed Network Creates Distributed Trust
About forty years after Johannes Gutenberg’s printing press, Venetian mathematician Luca Pacioli’s book (see chapter 2) introduced double-entry bookkeeping to a world beyond the merchants of Venice. With his common principles of double-entry accounting, Pacioli established the basis for a coordinated banking system rooted in counterparty trust. A bank in a distant city knew it was safe to transfer money into a particular account based on a trusted relationship with the originating bank, a relationship that began with the knowledge that both banks were keeping score the same way. The banks then built their businesses by charging a fee to use that trusted relationship.
All the world’s transactions—not just cash transactions—have at their heart the need for trust between the parties. Unchanged over the centuries, the delivery of that trust has been a hierarchical, hub-and-spoke system. Every time we pull out a credit card to make a purchase, for instance, that classic trust mechanism goes to work, and the banking intermediaries make money. The credit card company sells to the merchant the trust that the bill will be paid, while at the same time selling to the customer the trust that the merchant will honor the plastic. The credit card company’s service functions, just like the railroad hub or telephone switchboard, are a centralized activity.
As Web 3.0 orchestrates the intelligence from tens of billions of microchips, counterparty trust becomes even more important. The validity of the source of the intelligence, and of the intelligence itself, will need to be affirmed in real time. The slow centralized trust mechanism will no longer be adequate to verify the request for and creation of the flood of information from N+1 microchips.
Creating the interparty trust required by any transaction—whether trading stocks, purchasing goods and services, or any other movement of an asset—has traditionally been a hierarchical activity built around a centralized ledger. The migration of activity to the edge of the network has created both the need for a more efficient trust-building mechanism and the vehicle for its delivery.
Generically described as “blockchain,” the distributed network has enabled a system of distributed ledgers. These distributed ledgers can do for internet transactions what the web did for the internet: bring a layer of simplicity and increased performance. The web made it possible to find and link to information in the vast morass of the internet. Blockchain creates similar links among ledgers that record value.
The earliest iteration of this distribution of trust was Bitcoin, a non-state-authorized pseudo-currency. Since any currency is a fiction backed up by the trust that the value is as represented, Bitcoin simply mimicked this online by moving the trust validation function out of a centralized governmental structure onto the distributed network. The pseudo-currency moves over the network in peer-to-peer transactions without the need for a centralized trust validator, but with validation nonetheless.
To accomplish this, each bitcoin transaction is identified by a long string of characters (a cryptographic hash, written as a hexadecimal code). Tracking these codes replaces the institutional validator with what amounts to a decentralized super-spreadsheet accessible to anyone participating. The idea surfaced in a 2008 white paper signed by a still unknown person or group called “Satoshi Nakamoto.” In place of handling financial transactions the same way as switching boxcars, that is, bringing everything to a central sorting and validation function, the white paper described a distributed ledger technology that follows the path of the distributed network to move activity outward to the multiplicity of points at the edge of the network.
The new distributed ledger technology became known as blockchain. What these distributed ledgers do is replace central databases of who owns what and who owes what with a network of duplicate databases holding the same information. Because they are networked, these databases are constantly updating each other with information on the latest transaction. It amounts to a synchronized global ledger of all executed transactions that is securely distributed across multiple physical locations. It tracks, verifies, and records all transactions, putting them on permanent display to inform all other transactions. Every transaction is a “block,” and every block, when recorded in the ledger, creates a new asset-allocation reality by which the next block is measured—thus the term blockchain, a continuous chain of new blocks of information that are constantly updating reality.
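To make the chain concrete, here is a minimal sketch in Python; the function names and transaction format are illustrative inventions, not any real blockchain’s interfaces. Each block commits to a hash of the block before it, so altering any recorded transaction breaks every link that follows.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents; any change to the block changes this value."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    """Append a block that commits to the hash of the block before it."""
    previous = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": previous, "transactions": transactions})

def verify(chain):
    """Recompute every link; one altered entry invalidates all later blocks."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
add_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify(ledger))   # True
ledger[0]["transactions"][0]["amount"] = 500   # tamper with recorded history
print(verify(ledger))   # False: the chain no longer reconciles
```

A real distributed ledger adds a consensus mechanism so the networked copies agree on each new block; the sketch shows only the data structure, in which each new block is measured against the one before it.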
When the distributed network is used to enable distributed ledgers, trust is created not by holding proprietary information but through a collaborative understanding in which everyone knows what everyone else owns, owes, and is doing.
Luca Pacioli turned accounting into math in which capital is always a credit, cash is always a debit, and the two sides must reconcile (assets = liabilities + equity). Pacioli’s ledgers were authenticated by a mercantile officer, just as today’s ledgers are authenticated by trust-checking auditors. Blockchain’s giant collaborative ledger replaces those costly and time-consuming trust-checkers with algorithms that constantly validate the implementation of transactions.
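Pacioli’s rule can itself be expressed as a check a program can run. A toy sketch, with invented account names and amounts, of the double-entry principle that the books reconcile only when total debits equal total credits (the identity assets = liabilities + equity in another form):

```python
# Toy double-entry ledger: every entry debits one account and credits another.
# Account names and amounts are invented for illustration.
entries = [
    {"debit": ("cash", 100), "credit": ("capital", 100)},   # owner invests cash
    {"debit": ("inventory", 40), "credit": ("cash", 40)},   # goods bought for cash
]

def books_balance(entries):
    """Pacioli's check: total debits must equal total credits."""
    debits = sum(entry["debit"][1] for entry in entries)
    credits = sum(entry["credit"][1] for entry in entries)
    return debits == credits

print(books_balance(entries))  # True: the ledger reconciles
```

Blockchain’s validating algorithms are, in spirit, this same reconciliation run continuously across the whole network rather than by a single auditor.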
Look at the credit card example again. When you present a piece of plastic, the merchant determines whether to trust that you will pay by running the card through a terminal linked to the credit card issuer. The credit card company performs a simple lookup to verify that the user is creditworthy and returns a “trust it” message (or the dreaded “I’m sorry, but your card has been declined”). For providing this service, the card company charges the merchant a percentage of the transaction.
But what if instead of going to the credit card company’s proprietary database, the information request went to a networked set of shared ledgers? In this instance, the algorithm created by the merchant’s inquiry would cause the distributed spreadsheets to update, deduct the appropriate amount from the customer’s account, and credit the merchant’s account (or alternatively decline the transaction). A proprietary information-hoarding activity would be replaced by collaborative sharing of information.
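A sketch of what that shared-ledger settlement might amount to, again in Python with invented names rather than any real payment network’s interfaces: the merchant’s inquiry either moves value between accounts on the shared ledger or returns a decline.

```python
# A toy shared ledger: every participant sees the same balances.
ledger = {"customer": 120, "merchant": 0}   # illustrative balances

def settle(ledger, payer, payee, amount):
    """Debit the payer and credit the payee, or decline if funds are short."""
    if ledger.get(payer, 0) < amount:
        return "declined"
    ledger[payer] -= amount
    ledger[payee] += amount
    return "trust it"

print(settle(ledger, "customer", "merchant", 50))    # 'trust it'
print(ledger)                                        # {'customer': 70, 'merchant': 50}
print(settle(ledger, "customer", "merchant", 500))   # 'declined'
```

The design point is that no single company owns the lookup: the same update runs against every networked copy of the ledger, so the answer is collaboratively verified rather than purchased from a proprietary database.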
Because it is handled electronically on a network with declining marginal costs, the expense of blockchain validation plummets. Moving from a proprietary to a collaborative ledger also reduces the fees extracted for access to a centralized database. Just as the distributed delivery of a phone call replaced the high cost of a centralized system for communications, so does the use of a distributed ledger replace the high cost of conducting a transaction through a dedicated trust agent. At the same time, fraud and the cost of resolving disputes are reduced, since the super-spreadsheet knows all the customer and merchant information, not just the information that might be in one company’s records.
While blockchain’s first application was Bitcoin, the technology can record and report on any kind of transaction. Any item that can be tracked by a ledger can be secured with a distributed ledger. Because everything has a supply chain coming from somewhere and going somewhere else, everything can be tracked.
Counterfeit drugs, for instance, are a life-and-death threat when unscrupulous middlemen violate their position of trust and substitute look-alike pills for the real thing. With blockchain’s ledger tracking and recording every transfer, however, the compromise can be caught. Similarly, the serial numbers etched into diamonds can be easily and openly tracked, making the stones harder to resell if stolen, and therefore less attractive to thieves, while also exposing illegal extraction practices. And for assets whose provenance is essential to their value, as in the art business, tracking the item on a blockchain ledger guards against both theft and forgery. Distributed ledger technology can secure anything that can be entered into such a ledger, from the serial number of an expensive piece of electronics to the tracking number on livestock.
Blockchain also is a potential solution to the problems created by the electronic network’s peer-to-peer redistribution of assets such as music. Before the peer-to-peer transaction capabilities of the internet, record labels provided the trust function to protect copyrights by controlling distribution and preventing copying. The arrival of digital music storage and peer-to-peer networking bypassed that function, and with it payments to artists and composers. However, if music were delivered peer to peer using blockchain, a file on the ledger would exist for each piece of music. Each file could then even have its own unique rules (for instance, “play once for free, thereafter debit the player’s account and credit the artist’s”).21 And blockchain is tailor-made for micropayments for each play, something that has been impossible due to the high cost of centralized settlement systems.
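As a hypothetical sketch of what the per-file rule quoted above might look like as code, with invented names and balances held in cents:

```python
# Hypothetical per-track rule: first play free, then a micropayment per play.
# All names are invented; balances are in cents to avoid rounding issues.
track = {"title": "Example Song", "artist": "artist", "price_cents": 1}
plays = {}                                   # play counts per (listener, title)
accounts = {"listener": 500, "artist": 0}    # balances in cents

def play(track, listener):
    """Enforce the file's rule: play once for free, thereafter debit the
    player's account and credit the artist's."""
    key = (listener, track["title"])
    plays[key] = plays.get(key, 0) + 1
    if plays[key] == 1:
        return "free play"
    if accounts[listener] < track["price_cents"]:
        return "declined"
    accounts[listener] -= track["price_cents"]
    accounts[track["artist"]] += track["price_cents"]
    return "paid play"

print(play(track, "listener"))   # 'free play'
print(play(track, "listener"))   # 'paid play'
print(accounts)                  # {'listener': 499, 'artist': 1}
```

Because each play settles directly against the shared ledger, a one-cent payment is economical; no centralized settlement system has to take its cut along the way.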
Just possibly, the same kind of distributed ledger model could help each of us recover control of our private information. Today, huge centralized databases suck up personal information and sell it. Just as blockchain can create a micropayments structure for music, it can create individual micropayments every time a piece of our personal information is used. Platform companies such as Google and Facebook make huge profits by brokering the sale of such personal information. Blockchain can create a means for individuals to control that information, determine what to make available, and even receive payment for it. If blockchain can eliminate centralized credit card companies’ brokering of personal credit information, it could do the same for all other information and put consumers back in control of their privacy.
As the distributed network expands the flow of data and the universe of transactions, a fast, low-cost means for validating those transactions becomes essential. Blockchain takes Pacioli’s principle of a standardized protocol providing trust and disperses it throughout a distributed network to expand the provision of trust necessary for a successful transaction.
Cyber Vulnerability
When Everything Is Connected, Everything Is Vulnerable
From the beginning of time, network pathways have been avenues of attack. Primitive cultures mounted attacks by following animal paths. Alexander and Caesar conquered the world using roads and waterways; “all roads lead to Rome” for a reason. Britannia ruled the waves to keep open the trade routes of the empire. And in the twentieth century the nations of the world simultaneously used land, sea, and air pathways to deliver the bloodiest decade in history.
The digital pathways of the twenty-first century are no exception. Therefore, we should not be surprised that the new digital networks have become the new pathways for attack.
Exploitation of the new network ranges from criminal activity to intelligence collection to acts of war. The openness, easy accessibility, and interconnection of a multiplicity of digital networks aid and abet these diverse exploitations.
As the nature of what is connected has changed, the opportunity and incentive for cyberattacks have increased. The early digital networks simply connected computing machines; the security threat was the mischief of hackers or vandals. As data storage became both costless and essential, customized attacks on corporate and governmental databases became a tool of espionage, blackmail, and exploitation. Then, as networks began connecting people, the world of mobile devices, social networks, and personally identifiable services became a target-rich environment for criminals and nation-states. Finally, the internet of things’ tens of billions of connected microchips have added tens of billions of new attack vectors.