Solomon's Code
“It had to be Connor all along,” Emily fumed. “Can’t you see that this is exactly what he wants? He wants us fighting. He wants to get back into your life.” She’d stormed out the door and Ava heard her slip about halfway down the stairs. Emily unleashed a string of venom on her way out the front door and onto the street. She didn’t come back that night, and the next morning Ava sent out the alert to their shared network of friends, trying to keep it casual and not spark too much concern.
“Where the hell did you go?” she asked when Emily walked in.
“I went to see a few people,” Emily said. “You don’t want to know who.”
Ava knew enough not to ask; Emily would never reveal those contacts. They sat in silence for almost an hour after Emily had showered, drinking cabernet and trying to figure out what had happened and how it could happen so fast. Ava finally broke the silence, asking her PAL to turn on the lamp and then looking at Emily. “This isn’t Connor,” she said. “He was really bitter when he left, but never about you.”
“I know,” Emily said, her jaw tight. “None of this made any sense, so my people hacked through all our shared systems to see if they could find anything. Turns out a vile little piece of code opened a couple of our PAL apps to a phantom data stream. Someone was watching all our information and feeding in fake data. Our apps did a pretty good job of recognizing the quirks, but this guy was pretty good, really subtle. Eventually, though, your Bouncer Bot started sending some fake streams to different places to see if it could identify the attacker. Someone was trying to drive us apart, but it wasn’t Connor.”
Emily was trembling now. Ava, sobbing, couldn’t figure out if that made the invasion better or worse. “Is it fixed?” she asked.
“Yeah, my contacts cleared it out after setting a trap and turning everything over to the investigators,” Emily said. “The FBI already had your metadata, and they’d been tracking this guy for months now. Their code forensics team got a few hints about him a while back, and then their artificial identity teams started tracking his digital footprints. Apparently, he loves to drink Woodford Reserve bourbon, and they used that to track him to some small town in Canada. They predicted where he’d come back across the border and nailed him. He’s Russian, but he went to high school here in the United States.”
The next day, as Ava walked back up the hill from work to home, the recollection made her knees buckle. Years ago, Connor introduced her to an old Russian school friend at one of his parties. She glanced around, relieved no one had seen her stumble. Her mind was racing. Her gut still told her Connor would never do it, but she had to know. She pulled out her PAL and told her Bouncer Bot about the memory.
“I’m aware of the connection,” it replied, jarring Ava by speaking, as usual, in Connor’s voice. “We’re investigating a recent meeting between the two of them. You must know the probability that Connor did this, while quite low, is still there.”
Ava got home and went straight to the half-empty bottle of cabernet on the counter. She reclined on the sofa, her mind slowly starting to ease with the hope that investigators would make an arrest and this whole thing would end. Who knows what will happen with Emily, she thought, but either way I need to hear it from Connor himself. She added him to the invitation list for the party to celebrate their lost friend, Leo. Something about the decision felt certain, and Ava laid her head back on the arm of the couch.
The lights had automatically dimmed and the speakers quietly sounded the hints of waves lapping the beach. She had already dozed off when the lock clicked open. Emily was home.
8
A World Worth Shaping
The morning arrived wet and cold, and the January day in 1215 wouldn’t get much better for King John of England. Having just returned from France, battle worn and financially strapped, he now faced angry barons in his own backyard. They wanted to end his unpopular vis et voluntas (“force and will”) rule over the realm. So, to appease them and retain his throne, the king and the Archbishop of Canterbury brought twenty-five rebellious barons together in London to negotiate a “Charter of Liberties.” The charter would enshrine a body of rights and serve as a check on the king’s discretionary power.
The meeting that winter’s day launched an arduous process, one fraught with tension and a struggle for power and moral authority. But, by that June, they had hammered out an agreement. It provided the barons greater transparency and representation in royal decision-making, limited taxes and feudal payments, and even established some limited rights for serfs. The famous Magna Carta emerged an imperfect document, teeming with special interest provisions and pitted against other, smaller charters, but it established one of the world’s first foundations for human rights.
It would take another 300 years and multiple iterations for the Magna Carta to gain its stature as a reference for property rights, fair taxation, judicial processes, and a supreme law of government. Legally challenged throughout the centuries, the contentious document nonetheless prompted a dialogue about more democratic governance beyond England’s borders. When settlers arrived on the shores of North America, they established their own charters for the colonies, followed eventually by the Constitution and its Bill of Rights, which brought to fruition the ideals seen in the Magna Carta and established them for every citizen, regardless of title and birth.
Today, we regard the Magna Carta less as a historical example of binding legal code and more as a watershed moment in humanity’s advancement toward an equitable relationship between power and those subject to it. But it also marks the beginning of an unruly period in history, a transition that included the movement of people between continents and across oceans, the emergence of new political structures, and the many deadly conflicts that would arise between developing nation-states in an increasingly connected world. It set the stage for dialogue between powers, leading eventually to the Renaissance, the Enlightenment, and constitutional democracy, which rose out of much bloodshed over the course of centuries. Though mistakes were certainly made, it is probably also fair to say that—at least for Western countries—it provided fertile ground for the institutions and political and regulatory governance frameworks that eventually led to economic growth in trade and investment, as well as human and civilizational growth through the dialectic between powers in societies on the European and American continents.
As we delegate more judgment and decision-making to AI-based systems, we face some of the same types of questions about values, trust, and power over which King John and his barons haggled centuries ago. How will humans, across all levels of power and income, be engaged and represented? Which social problems, made solvable by powerful intelligent systems, should we prioritize? How will we govern this brave new world of machine meritocracy, so AI reflects the values and interests of society and leads to answers for humanity’s most difficult questions? Moreover, how will we balance conflicting values, political regimes, and cultural norms across the world’s diverse societies?
These questions take us well beyond technology and economics. Cognitive computer systems now influence almost every facet of life in most of the world’s largest industrialized countries, and they’re quickly pervading developing economies, as well. But we can’t put the genie back in the bottle—nor should we try. As we note throughout this book, the benefits of artificial intelligence and its related advanced technologies can transform our lives for the better, leading us to new frontiers in human growth and development. We stand at the threshold of an evolutionary explosion of opportunity for the betterment of the human experience, unlike anything in the last millennium. And yet, explosions and revolutions are messy, murky, and fraught with ethical perils. We need to harness the vast potential of AI systems for human and economic growth, but to do so we need to consider the social ripple effects that cognitive systems, like the millions of bacteria and viruses in our lives, generate in our brains and our societies. Hence, as we set new applications out into the wild, we also need to put in place the key elements of an enabling infrastructure—the digitization of sociocultural norms with respect for their diversity, appropriate sensor and data collection technologies, governance institutions, and the policies and laws that will regulate development and deployment.
Because cognition pervades all areas of life, these structures can’t be built by technical, commercial, or government experts alone, nor can they be prescribed in sweeping policy measures that don’t comprehend the kaleidoscopic variety of AI applications and their potential. We need a range of historical, anthropological, sociological, and psychological expertise that can encompass the diversity of thinking about entire value systems in communities and how advanced technologies will influence them. We need to figure out which application areas raise which issues, and then neither over- nor under-restrict them if we want to realize their potential. And most of all, we need to make sure that the widest array of voices is heard and their values respected, so AI can be deployed for the common good of all humanity.
To accomplish this, we have proposed and have started working on a modern digital Magna Carta for the Global AI Economy—an inclusive, collectively developed charter that embodies the objectives and responsibilities that will guide the ongoing development of AI-related technologies and lay the groundwork for the future of human-machine coexistence. By integrating economic, social, and political contexts, this charter should begin to shape the collective values, power relationships, and levels of trust that we, as humans in this shared world, will expect of the systems and the people and organizations that control them.
We will need a living, breathing charter, one malleable enough to adjust to the inevitable disruptions that AI will generate. It should establish a global governance institution with a mutually accepted verification and enforcement capacity. We have dubbed it the “Cambrian Congress” to signify that the explosion of human growth opportunity could be akin to the explosion of life during the Cambrian Period in earth’s history. The Congress should involve public, private, NGO, and academic institutions alike, with a collective aim of supporting human empowerment through agreed-upon norms. It should encourage the levels of transparency and explicability that foster greater trust and parity between human and machine intelligences. And it should foster a greater, symbio-intelligent relationship that enhances our humanity and its potential, not just machine efficiencies.
After all, the use of artificial intelligence and its cousin technologies can facilitate great advancement and terrible consequence in equal measure. Without a charter or a congress to develop and oversee it, not only might we fail to mitigate the serious threats to our safety and security, we also might miss out on an unprecedented opportunity to enhance our collective humanity and understanding.
BUILDING ON A RICH NETWORK OF EXISTING INITIATIVES
Sometimes, an idea just arrives before its time, and sometimes the innovations can’t come fast enough. Sepp Hochreiter has seen both sides of that dilemma. Hochreiter is head of the Johannes Kepler University Linz Institute of Bioinformatics, but in AI circles he’s known for his invention of “long short-term memory” (LSTM). The process essentially allows an AI system to remember certain information without having to store every little bit and byte of data it consumes. When Hochreiter and Jürgen Schmidhuber tried to publish the concept back in the mid-1990s, no one was interested. Conferences even rejected the research paper. Now, LSTM makes almost every kind of deep neural network more effective, providing a platform for feasible autonomous vehicles and vastly more efficient speech-processing applications.
While that idea sat in obscurity, Hochreiter says, he started applying his skills to the deep data needs of bioinformatics, where he could find plenty of work. Now, he’s driving breakthroughs in pharmaceutical research. The potential effectiveness of any new drug must be weighed against its harmful effects, and toxic elements often lurk in the most minute nooks and crannies of the incredibly complex molecular structures that constitute today’s drugs. Hochreiter and his colleagues have developed an AI model that can sniff out many of those toxic substructures. However, it didn’t stop at the known toxicities; it started to discover new, often smaller structures that could produce similar ill effects—substructures previously unknown to the pharmaceutical researchers. “I have a neural network that knows more about the chemistry you’re employing. It won’t retire. It will work day and night. It won’t leave your company. It will stay there,” Hochreiter says. “But you have to turn data into knowledge you can make a decision on. Do I go with this compound or no? You have to transfer the data into knowledge, and from knowledge you make decisions.”
We can take heart from Hochreiter’s discoveries, both for the future of powerful new drugs that can cure disease and for the value of blending historical knowledge with our view of the future. Fortunately for the governance of artificial intelligence, we have a field of precursors from which to draw inspiration and initial guidance. It’s heartening to see new governance initiatives and organizations emerge from commercial, professional, and academic sources, but few of them bring together all those varied interests in any meaningful way—none of them in a manner holistic enough to facilitate the broad agreement needed for an effective global charter. John C. Havens and the committees who developed IEEE’s Ethically Aligned Design report might have produced the most comprehensive global representation of any such effort, having solicited input from developers, ethicists, and other interdisciplinary AI experts from around the world. The document will help establish standards for the technical aspects of AI development for IEEE members. What impact it might have on other spheres remains to be seen.
Others, such as Wendell Wallach, a lecturer at Yale and a senior adviser at the Hastings Center, have proposed ways to spread that thinking and a potential governance structure across broader spheres, including with “governance coordinating committees” that involve government, private industry, and civil society institutions.* Even some of the world’s Digital Barons are looking ahead toward potential outcomes and how to prepare for them. The Partnership on AI is perhaps the most widely known initiative to emerge from the corporate sector, bringing together partners such as Apple, Amazon, Facebook, Google, Microsoft, and IBM, as well as the American Civil Liberties Union, Amnesty International, and other organizations. The Partnership also works with OpenAI, a nonprofit AI research company initially sponsored by Elon Musk, Peter Thiel, and others to find a pathway to safe AI development. In fact, many of the large efforts to emerge from the commercial sector work through a foundation or NGO structure, and often involve multidisciplinary participants, including links to policy makers.
Academically rooted institutions also bring together commercial and political interests, but rarely in a concerted effort to build actual governance structures. In England, some of the world’s great thinkers have convened at the Centre for the Study of Existential Risk and the Future of Humanity Institute, both of which seek to develop strategies and protocols for a safe and beneficial future for advanced technologies. The Leverhulme Centre for the Future of Intelligence has crafted a sort of hub-and-spoke approach, assembling a team of top minds leading efforts around the world and forging connections with policy makers and technologists. Stanford University has launched a 100-year effort, called AI100, to study and anticipate the ongoing, high-level ripple effects of these technologies in all facets of life.
International and quasi-national organizations have jumped into the game as well. The United Nations Interregional Crime and Justice Research Institute has created the Centre for Artificial Intelligence and Robotics. In late 2016, the World Economic Forum founded its Center for the Fourth Industrial Revolution in San Francisco, hoping to create a trusted space where various public, private, and civil sector stakeholders could collectively cultivate the types of policy norms and partnerships needed to foster beneficial development of advanced science and technology. Hoping to help formulate some reliable, human-centered governance standards that countries around the world might embrace—and using its convening power to help facilitate it—the WEF opened a second center in Japan in 2018 and plans to establish others in China, India, and as many as seven other countries by the middle of 2019. The WEF also established its Council on the Future of AI and Robotics, convening top government, corporate, and academic leaders from around the world to chart a roadmap for the centers’ AI and robotics initiatives and to consider global governance structures for those technologies.
The WEF conceived of the center as a “do tank,” running practical pilot projects to see what works rather than coming up with abstract theories that cannot be acted upon, says Kay Firth-Butterfield, the center’s head of AI. The projects are co-created with partner governments, businesses, civil organizations, and academics with a view to those partner governments piloting the policy and then adopting it afterward, she says. The center’s AI-related teams have already started experimenting with a variety of novel ground-up approaches to technology governance. One such project established a repository where professors and other instructors could upload the curricula they use for teaching ethics and values within AI and computer science programs. By making those available to other teachers around the world, who can customize the instruction for their own situations and cultures and then share those, the initiative could develop global best practices and accommodate the world’s diversity at the same time.
They also supported a remarkable on-the-ground program created by the Center’s drone team, which worked with Rwanda as the partner government. Rural women in Rwanda faced severe risks during childbirth because the country’s medical personnel couldn’t get blood supplies to remote areas if serious bleeding problems arose. So the country started using drones to quickly send blood to clinics where it was needed. As of May 2018, a Silicon Valley firm called Zipline had delivered 7,000 units of blood on more than 5,000 autonomous flights across the country.† However, the Rwandan aviation authorities eventually objected to the unregulated use of drones in national airspace. So, the various parties came together, along with the WEF center, and developed the world’s first performance-based regulations for drone traffic: drone operators must meet certain safety and operational standards, and aviation regulators in turn will account for qualified drone use in national airspace—potentially opening up Rwandan skies to a variety of innovators and their ideas. Several other nations have adopted that standard for autonomous flight in the months since. “Now my colleagues on the drone team are getting requests from other countries to use that model to help commercialize drone deliveries,” Firth-Butterfield says.