Solomon's Code

by Olaf Groth


  5

  The Race for Global AI Influence

  In 2012, representatives from 193 countries gathered in Dubai to hammer out a set of international telecommunications regulations that would govern everything from phone lines to the Internet. Not one of them could’ve reasonably expected unanimous agreement on all the issues they would face. The United States would insist on a free Internet with few controls on content, a stance that would surely be opposed by China, Russia, and many other governments around the world. But after some of the preliminary meetings, US Ambassador Terry Kramer felt the delegates might find common ground on at least eight of the ten topics at hand. And then came what he still calls the “oh-shit moment.”

  In an early meeting to set the foundation for the conference, a Swedish representative argued that the conversation ought to include representatives from NGOs and other civil society organizations. The topic was charged enough and the argument earnest enough that Hamadoun Touré, the secretary general of the UN’s International Telecommunication Union, demanded the Swede’s immediate removal. “I believe [Touré] had good intentions in his heart,” Kramer says, “but how he led the discussion created a problematic negotiating environment.” The signal didn’t kill negotiations outright, but representatives immediately began to solidify their battle lines, adopting a “quid pro quo” mindset on issues so fundamental in nature that one couldn’t easily be traded off for another.

  Still, Kramer saw reason for hope. Despite two issues on which opposed alliances might never find compromise, including content-restricting regulations, virtually all the countries had broad agreement on ideas to fight spam and common cybersecurity threats. Touré might have worked on issues of consensus and built some goodwill, Kramer figures. He didn’t, choosing to force votes when more discussion might have produced results. “I think the net effect of his actions was to make certain nations feel ‘named and shamed’ in a very public setting,” Kramer says. “That was a gross miscalculation regarding reactions from nations like the United States and missed an opportunity to drive towards consensus on critical issues that all nations could align around.”

  It all came to a head on the final evening of the conference, when the ITU leadership allowed a last-minute proposal from Iran to go to a vote without any prior discussion or notice. The Iranian proposal essentially said countries have a sovereign right to regulate the Internet however they’d like. Whether the leadership’s decision was an effort to isolate the United States or not, that’s exactly what it threatened to do. Kramer hesitantly got up, not knowing whether any countries, including allies in Europe, would support his blanket rejection of any proposal to limit a free and open Internet. The vote was called, and fifty-five nations came out and supported the American position. “The US created a system that allows expression and entrepreneurialism that works,” Kramer says. Those supportive countries “were not going to take positions against these key principles.”

  In the end, the International Telecommunications Regulations, as they were called, were approved by a majority of the countries—the United States in the minority and unwilling to accept or sign the treaty. The Internet has not become free and open around the world, as Kramer and American interests had hoped it would. “This is a long game, and if people aren’t up for the long game we’re going to have a bad outcome,” Kramer says. “You have to hope for success longer term.”

  Whether in terms of Internet freedom or the regulation of artificial intelligence, the horizon the United States imagined remains well off in a distant future. American and, to some extent, European interests remain solidly in a minority amongst the global community of nations. Almost any way one might try to establish a convening regulatory body—aligned by type of government or by population, for example—would leave the United States in the minority on these kinds of issues. The one possible exception, organizing by gross domestic product ($1 equals one vote), would expose a vast rift between developed and developing economies. The United States might use the fifty-five countries in its alliance to address issues other countries are advocating, working through other means such as trade negotiations to gain leverage, Kramer suggests. “If you do think there’s leverage and a chance for improvement in those other countries, then you try to push that,” he says.

  Yet, within the community of AI developers itself, some broader consensus seems to be emerging. Concerns about how the world’s diverse mix of cultural norms, political needs, and data streams shape AI development have many observers calling for a universally accepted set of standards. IEEE is leading from a technical standpoint with its IEEE P7000™ standardization projects, which directly relate to issues raised in Ethically Aligned Design, as well as its Global Initiative on Ethics of Autonomous and Intelligent Systems, which focuses on embedding ethical considerations at all stages of AI and robotics development. As John C. Havens coordinates these global efforts, he hopes to push thinking about AI development “Beyond GDP,” so we measure success by more than just gross domestic product expansion. “Our goal is to align individual well-being with societal well-being by integrating applied ethical thinking towards new economic metrics beyond growth and productivity,” Havens says.* A variety of initiatives around the world are striving toward similar goals, as we discuss in the last chapter of this book, but standards have to go beyond the lab, the workbench, or the corporate boardroom. “We need to stop and reflect before we move into a future in which AI systems affect an individual’s agency, identity, or emotion,” Havens says. We need a corporate environment in which an engineer blowing the whistle on lazy, ignorant, or nefarious programming is “lauded for bringing innovation into the cycle.”

  The IEEE almost certainly will adopt a set of standards for its more than 420,000 members in 160 countries. Corporations might even adopt some of Havens’s “beyond GDP” thinking about aligning economic growth with human development, as public pressure builds against careless corporate use of so much personal data. But outside the expectations placed on professional engineers and the developing schools of thought about algorithmic ethics, nation-states will continue to vie for supremacy on a geopolitical level. AI isn’t just an arms race, although it has very definite military and defense manifestations, as we discussed in the previous chapter. It’s a political-cultural race, a battle over cognitive power and its ability to sway mindsets, societies, and economies. The runners in this race include national governments and non-state political actors, but also groups of like-minded individuals, private companies, and other institutions, such as labor unions and educators. Because humans embed their values in AI code, and because we allow those algorithms to make more of our decisions, those systems and the sensibilities of their creators will affect our lives. Will those values come from a group of mostly white and Asian male programmers? Will they come from a central authoritarian government? Or could they develop in a multidisciplinary environment of both private- and public-sector actors that seeks to accommodate the well-being of all humanity?

  DIVERGENT PATHS INTO THE AI FUTURE: THE UNITED STATES, EUROPE, AND CHINA

  The race for AI dominance will play out across a few dimensions. The first is country power and its different drivers; this includes the amount of funding provided to scientists and entrepreneurs, the collection of scarce AI talent, the caliber of research and the fluidity of the entrepreneurial ecosystem, as well as the ability of a country to unify and outwardly project its civil society’s value system and trust. The second dimension builds on this, but blends in individual power as well. This is driven by the ability of citizens to exercise and change their personas and choices freely, the ability to take recourse in the face of mistakes made by AI systems, the existence of off switches or opt-out paths, and a way for citizens to help shape AI governance, all without stifling its growth—a difficult balance, to be sure. Finally, there are drivers of institutional power that will shape AI development: Are data sets sufficiently large, statistically valid, and accurate, and do they comply with local norms of interest representation? For instance, do programmers, companies, and governments have the resolve to codify systems in a way that balances individual and community goals? Do they safeguard privacy, agency, and the power to protect one’s true persona? Is there sufficient technocratic expertise and capacity to govern AI transparently on behalf of citizens, facilitating growth while guarding against abuse?

  This mix of country, individual, and institutional power is hardly new, of course. It resides at the center of most political and economic interactions today, and has since the inception of social contracts and nation-states. Yet, as Henry Kissinger notes in his essay “How the Enlightenment Ends,” AI fundamentally changes these inherently human interactions.† Previously, our interactions forced us to ponder our interpersonal and institutional relationships, reflecting on our values versus theirs, training our critical thinking capabilities, and honing our creative skills to improve these partnerships. Artificial intelligence relieves us of some of those burdens, adding great convenience but also, if we’re not careful, a numbing of deep consideration and decision-making abilities.

  Individuals and institutions will have to evolve new ways to balance efficiency and convenience with the need for an educated and civically fit citizenry. Many people and organizations will stumble away from this fine line, readily accepting the ease and convenience of better technological tools and avoiding the arduous and lonely process of deep reflection. (We already see this happening on social media, in ways that change the actual neurological wiring of our brains.) Pressure will grow for institutions to lower hurdles and create more fluid channels for data, opening the pipeline for the types of relationships, investments, and pronouncements that make for sensational headlines and play to our basest instincts. Yet a truer power might begin to emerge from institutions that balance the convenient with the conscious, focusing as much on human growth as on economic growth.

  Countries that help facilitate this balance will attract the companies and people who appreciate the value of thoughtful design, patient deliberation, and a search for the common good. This does not necessarily point to an advantage for democratic governments or free-market economies. One can already see the nascent stages of such deliberate approaches in a diverse set of countries, from Denmark and Sweden to Singapore and the UAE. Regardless of whether an outsider finds their philosophies agreeable or objectionable, these countries support a coherent philosophy of cognitive growth. They’re instituting a unified approach with a high degree of technocratic support, sophisticated planning, and a scientific and technological competency that, as Parag Khanna points out in his book Technocracy in America, facilitates the growth of economic and political power in today’s world.‡

  People, institutions, and countries will disagree with one another’s approaches to these dimensions, especially when it comes to regulation and data-protection regimes. Nations have battled over these fronts throughout the last wave of globalization, whether about free speech on the Internet, free passage for commercial airline traffic, restrictions on ownership of telecommunications providers, visa regimes for immigration, tariffs on trade, or taxes on multinational corporations. We can already see some even more formative and powerful differences in political and strategic directions taking shape around artificial intelligence, and each of them will have a major impact on the cognitive race ahead.

  The three major AI powers have followed their history, hearts, and minds into the future. The United States’ approach features the “Brawny Barons,” relying heavily on its free-market capitalism and the strong private sector and entrepreneurial community it produced. While critical funding for basic and advanced research comes from government sources, the bulk of innovation and control rests in the hands of the country’s start-ups and giant digital companies. In China, of course, the government and the Party take a more direct hand in the development of advanced technologies—the “Party Protectorate” in place since Communism took hold. The country has its powerful Digital Barons, too, but the lines between corporate and government influence have blurred, and in some cases have been eliminated, after many years of the government letting the Barons run their own show. Western Europe trails both the United States and China, but has recently chosen a middle path: a “Community Commons” in which the lack of homegrown Digital Barons and a measured level of governmental involvement have led to a collective EU-wide effort to balance innovation and individual protection.

  THE EUROPEAN UNION: COMMUNITY COMMONS

  As we discussed in the previous chapter, Europe has no big Digital Barons of its own. Having sought multiple times to limit Google and Microsoft through antitrust proceedings, and possessing all too much historical experience with totalitarianism, the European Union feared that US or Chinese companies might mine citizens’ data and keep the information in jurisdictions traditionally less concerned with individual protections (if for different reasons). Of course, privacy issues concern more than just Europeans. Chinese citizens pushed back on systems that encouraged neighbors to report on one another. And Americans have paid more attention since learning that the National Security Agency spied on “persons of interest” within its own borders after the 9/11 terrorist attacks. In fact, private-sector companies have heightened those concerns as they’ve reported a growing number of data breaches or have actively shared personal information in ways that stretch the bounds of their users’ trust and privacy. Facebook’s data-sharing agreements with Cambridge Analytica and other partners, including Chinese companies, have landed CEO Mark Zuckerberg in the glaring spotlight of US congressional hearings, setting up a confrontation with which neither side is comfortable.

  To its credit, the EU has moved a step beyond other countries and regions in this regard, tying individual protections to a more stringent set of requirements for data and artificial intelligence. The formative General Data Protection Regulation puts more agency over personal data in the hands of the individual. If someone wants their data deleted, a company must do so or face significant fines. The rules also require that companies be able to explain why their systems made a particular decision about a particular person. They need “explainable AI,” a difficult proposition because, as we described, many of the complex neural networks that enable machines to learn can’t tell human operators how or why they reached their conclusions. Making AI explainable seems generally desirable, but it brings certain important drawbacks. From an economic standpoint, it could destroy competitive advantages for commercial designers of these neural networks and therefore reduce investment in useful applications of them. It could also lead to the development and application of only very narrow types of explainable AI systems, while more powerful or beneficial applications emerge in less restrictive markets. The early regulation could sharply limit experimentation.
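  To make “explainable AI” concrete, consider one of the workarounds practitioners commonly reach for when a model cannot account for itself. The sketch below, written in Python with the open-source scikit-learn library, uses an entirely hypothetical model and synthetic data; it illustrates permutation importance, a technique that estimates how heavily a black-box model leans on each of its inputs rather than explaining any single decision:

```python
# A minimal sketch of post-hoc "explainability" (hypothetical model and
# synthetic data, for illustration only): permutation importance estimates
# how much each input feature drives a black-box model's predictions.
# It yields a rough global ranking, not the per-decision explanation
# that a regulator might ultimately demand.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for, say, a lender's credit-scoring data.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": accurate, but unable to say why it decides as it does.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how far accuracy falls;
# large drops flag the features the model leans on most heavily.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: mean importance {result.importances_mean[i]:.3f}")
```

  The gap this exposes matters for regulators: a ranking of inputs averaged over many cases tells an operator what the model attends to, not why it reached a particular conclusion about a particular person, which is closer to what the GDPR contemplates.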

  Damian Borth at DFKI notes that many researchers are torn by the regulations, understanding the clear need for individual protections while lamenting the chill they could put on AI development and deployment across the EU. “We don’t know everything about how an airplane works before we fly in it,” he says. What’s most critical is that we have assurances that the aircraft, pilots, and air traffic control will work effectively and safely. But that, too, requires a certain threshold of regulatory oversight, to ensure that the dire scenarios don’t emerge and potentially trigger an even sharper backlash.

  Governments and regulators need to address the difficulties, including personal privacy issues and the threat AI poses for labor, says Catelijne Muller, a member of the European Economic and Social Committee and rapporteur on artificial intelligence. EU officials have brought together different interest groups in society to discuss the impact on jobs, with labor union officials collaborating alongside corporate executives to gain a better understanding of possible futures. “If we want to benefit, truly benefit from all great potential of this technology, we should address the challenges,” Muller says. “If we don’t address the challenges that are obviously there, one government in the future is going to say they’re going to prohibit this. It’s gone too far. So, I don’t think of this as stifling innovation, but promoting it in a sensible way.”

 
