by Olaf Groth
THE UNITED STATES: LEADERSHIP IN THE BALANCE
In 1987, the eminent economist Lester Thurow delivered an address in Sendai, Japan, and one of his remarks stuck with me (Mark) over the many years since. Thurow noted that, after World War II, the United States led the world in every industry except one: bicycles. Italy took that solitary honor at the time. By the time we wrote this book, that undisputed pole position had eroded, giving way to a more balanced, multipolar global economy, including in some advanced technology fields. The rising geopolitical and technological muscle of China has not yet diminished US leadership in artificial intelligence. American universities, companies, and government-backed initiatives continue to push the frontier of innovation, and the emergence of competing powers still springs, at least in part, from the cutting-edge work done in Silicon Valley, Boston, Seattle, and other US high-tech centers. According to the Chinese Ministry of Education, more than 608,000 students left the country to study overseas in 2017, most of them going to universities in the United States and Europe. The number of students returning to China, often dubbed the “sea turtles,” increased more than 11 percent in 2017, and almost half of those returnees had earned a master’s degree or higher from Western universities that are still considered the cutting edge of breakthrough thinking, both in science and technology and in their socially critical examination.¶¶
The Stanford Center for Advanced Study in the Behavioral Sciences is just one such leading center. After several years leading DARPA during the Obama administration, Arati Prabhakar moved back to Silicon Valley, where she had been a venture capitalist, to take up a fellowship there. Through the fellowship she’s pushing the outer limits of how we model and understand the extreme complexity in today’s world. Much of her advanced research grows out of a fertile blend of experience that’s not uncommon among America’s innovation leaders, and it makes her ideally suited to help drive the story of US science and technology leadership in AI. For example, she’s contemplating the concept of “adaptive regulations” that “allow you to experiment and learn without going too far,” she says at a coffee shop just off campus. She notes that policies and regulations should achieve a degree of consensus and then provide stability so individuals and companies can count on a set of ground rules for a certain period of time. “We won’t ever make the pace of regulation as fast as the pace of technology—we would be whipsawed if we did,” she says, “but we can keep it closer.”
Advanced technology, including AI, could have the greatest impact on humanity as it tackles societal problems and learns about human behavior, Prabhakar says, but “we’re really, really early on that.” While cognitive machines can process staggeringly large arrays of data, none can bring any depth of understanding to the table. So researchers try to build economic and behavioral models that can leverage the massive but narrow computational power of AI in a way that lends itself to better human understanding. But models are inherently inadequate, of course, so we add to and innovate on top of them to make better ones. “If you want to take those next leaps to have richer, more representative models—knowing that you’re not going to emulate everything—how do you do that?” she asks.
At one point, DARPA worked on a program to develop a model that might predict food crises in places like Africa or the Middle East, tracking weather, soil conditions, and several other environmental and human factors. Yet one could never fully model how a government regime might react and thereby jeopardize or facilitate agricultural production, so a deeper model would have to factor in those variables as well. “The overarching narrative with IT today is the ability to tackle scale and complexity we never thought was possible before,” Prabhakar says.
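To make that concrete, here is a minimal sketch of how such an early-warning model could fold environmental and human factors into a single risk score. It is purely illustrative: the feature names, the synthetic data, and the choice of a gradient-boosted classifier are our assumptions for the sake of the example, not a description of the DARPA program itself.

```python
# Illustrative sketch only: a toy early-warning model for regional food crises,
# assuming hypothetical features (rainfall, soil moisture, conflict events, prices).
# It is not DARPA's model; it simply shows how disparate factors can be combined
# into one probability of crisis.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2_000  # synthetic region-month observations

# Hypothetical feature columns: weather, soil, and human factors.
X = np.column_stack([
    rng.normal(60, 25, n),    # rainfall_mm
    rng.uniform(0, 1, n),     # soil_moisture_index
    rng.poisson(2, n),        # conflict_events
    rng.normal(100, 15, n),   # food_price_index
])

# Synthetic ground truth: crises become likelier with drought, conflict, and price spikes.
risk = (X[:, 0] < 40).astype(int) + (X[:, 2] > 4) + (X[:, 3] > 115)
y = (risk >= 2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Probability of crisis for each held-out region-month. A deeper model would add
# governance and policy variables, which this toy version deliberately omits.
print(model.predict_proba(X_test)[:5, 1])
```

The gap Prabhakar points to is visible even in this toy: the hardest variables, such as how a regime might respond, are exactly the ones left out of the feature list.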
In this sort of complex-systems thinking, beyond just the deep technical and research expertise dedicated to AI development, American researchers remain better at defining the “future evolution of AI,” says Tom Kalil, chief innovation officer at Schmidt Futures and the former deputy director for technology and innovation at the White House Office of Science and Technology Policy (OSTP). Yet, a clash of models has ensued, and it’s not entirely clear what will emerge as they interact. “China’s political economy has a greater focus on maximizing national power; America’s political economy is better at the efficient allocation of capital,” Kalil says. “If China is willing to spend whatever it takes to establish a leadership position in technologies such as AI and quantum computing, it may be inefficient but could still be effective. I don’t think America’s political and business leadership have a strategy for dealing with this.”
China still has major institutional challenges and issues with academic performance and integrity, but the return of the “sea turtles” will help mitigate some of them. These returnees bring new technical, managerial, and cultural insights that will enhance China’s AI ecosystem. To the extent that AI breakthroughs will benefit humanity in some ways regardless of their country of origin, this intersectional relationship between the United States, China, and other countries has the potential to improve lives around the world. What begins to trouble some American experts, however, is the degree to which China fuses civil society with defense objectives. A clear divergence is emerging in the values that govern whether and how regular citizens should contribute to national security. The United States, and Western militaries in general, draw a clear distinction between war and peace and between the civic and military domains, whereas China’s People’s Liberation Army tends to see them on a continuum.## Political or economic competition is viewed as part of an ongoing struggle in which every citizen plays a part, even if the country remains far from any outright military clash.
This provokes a great deal of concern about China’s interference in Western democracy and society, and about the poaching of Western intellectual property through otherwise unassuming business or personal relationships. That’s why it is important to distinguish between the traditional concept of an “AI arms race” and the intelligence and counterintelligence operations that countries now conduct against one another on an ongoing basis, says James Andrew Lewis, senior vice president at the Center for Strategic and International Studies. In the race for AI leadership, Lewis says, “military terminology doesn’t make sense. It’s not a war. It’s not an arms race.” The push to drive greater digital innovation in the military sphere is hardly a novel concept. However, he says, the United States very definitely faces a new, politically divergent competitor for economic and cultural influence, a counterforce it hasn’t seen since the Cold War and one that sees the rivalry as a competition across multiple domains, all to be won or lost.
The Office of Science and Technology Policy under President Donald Trump adopted a more libertarian approach to AI-related regulation, both domestically and internationally. Within the United States, the OSTP would seek to reduce barriers for high-tech start-ups, aiming to keep the country at the forefront of entrepreneurialism and innovation, according to public remarks by Michael Kratsios, the deputy assistant to the president for technology policy. In a May 2018 address announcing the creation of the National Science and Technology Council’s Select Committee on Artificial Intelligence, Kratsios said that, in many cases, “the most significant action our government can take is to get out of the way.”*** The administration would not try to solve problems that don’t exist, he said. Rather, it would seek to give the private sector more access to resources such as government labs and data. And while Kratsios partnered with other G7 representatives to declare “the importance of investing in AI R&D and our mutual goal to increase public trust as we adopt AI technologies,” he also stressed that the White House “will not hamstring American potential on the international stage.” Domestically, command-and-control policies cannot keep up with private innovation, and the administration would not bind the country “with international commitments rooted in fear of worst-case scenarios,” he said. “We didn’t roll out the red tape before Edison turned on the first lightbulb.”
As we explore in Chapter 8, reliance on existing international institutions might also result in a lack of expertise and a lack of inclusiveness that could drive Chinese authorities to create their own model of international governance. As it asserts itself, China no longer wants to play under Western rule sets and regimes, and the rulebook for both hot and cold wars has changed. During the Cold War, we dealt with an adversary proficient at psychological warfare, and we certainly had our own influencing techniques, such as Radio Free Europe and the creation of universities in West Germany and the Middle East. Since then, we have conceived of “psych ops” as the business of Madison Avenue rather than Pennsylvania Avenue. “An Egyptian colleague told me, ‘You Americans, you’re hopeless at propaganda,’” Lewis says. “‘You think it’s like selling soda pop.’” That might be giving the United States too little credit, but it’s clear that China and Russia have recently posted a strong track record of multiplying psychological impact through AI and social networks, which bind to our emotional receptors much more effectively than leaflets or in-person seductions. Today’s battleground is the deep tissue of society, in what is now called a “hybrid conflict” waged through civil-military fusion (CMF).
Today, Chinese and Russian military doctrines encompass cognitive and economic actions, seeking to gather more public and private data or to sow confusion in rival societies. “The big changes will be economic,” Lewis says. Efforts to integrate AI and autonomy into weapons systems have gone on for years, so what is more intriguing now is that “your decision making will change as a consumer or business because of the ability to access AI.” To wit, the United States Cyber Command has shifted its perspective on cyber activity from the idea of individual hacks or attacks to one of sustained, sophisticated campaigns to undermine anything from American military power to social cohesion.†††
Of course, that doesn’t mean AI in traditional defense applications doesn’t matter. While China has made great advances in its cultural and economic influence, the United States has retained, for now, an edge on power and smart military technologies. This emerges in part from the Third Offset Strategy initially laid out in 2014 by then secretary of defense Chuck Hagel. The strategy seeks to lead the integration of autonomous and other AI-powered technologies into “warfighting potential” and restore the military’s “eroding conventional overmatch versus any potential adversary, thereby strengthening conventional deterrence,” the Hon. Robert O. Work, himself a former deputy secretary of defense under both the Obama and Trump administrations, writes in a report on US Department of Defense spending on advanced technology.‡‡‡
The report, produced by Govini, a government analytics and big data firm where Secretary Work is a board member, found that unclassified defense spending on AI, big data, and cloud technologies reached $7.4 billion in fiscal 2017, up 32.4 percent from fiscal 2012. While artificial intelligence accounted for just a third of that total, it was the largest contributor to the increase over that five-year span. Major flows of funding went to virtual reality, virtual agents, and computer vision, explains Matt Hummer, Govini’s director of analytics and advisory services and a coauthor of the report. The fastest growth centered on intelligence, surveillance, and reconnaissance, activities that generate massive inflows of audio, video, and other data that AI-powered technologies can sort through and parse. In one DARPA program Hummer describes, natural language processing programs work in conjunction with virtual agents to provide advanced field translation services for the niche dialects soldiers might encounter while deployed overseas. Now, however, the US military could record even harmless chats with unassuming civilians, evaluate them, and then use them to direct military strikes, inadvertently turning civilians into informants and putting bull’s-eyes on their backs. In other applications, the immense amount of video footage gathered by reconnaissance drones can now be evaluated in a far more holistic fashion. Whereas human analysts previously struggled to identify every object of interest in a given frame, machine learning can now analyze contextual images much more quickly and effectively. This is a powerful advantage for a nation that has more military data from recent operations than any other on earth. Both the US and Chinese private sectors have built the huge data sets necessary to train most AI models, but when it comes to military applications, it’s not just any data set that matters. “It’s collecting data in operating contexts,” Hummer says, “and the US has a huge advantage in those spaces.”
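To give a flavor of what that frame-by-frame analysis involves, the sketch below runs a generic pretrained object detector over a single frame of hypothetical drone footage. It is only an illustration: the off-the-shelf torchvision model, the file name, and the 0.8 confidence threshold are our stand-ins, not the purpose-built tooling defense analysts actually use.

```python
# Illustrative sketch: flagging objects of interest in one frame of hypothetical
# drone footage using a generic COCO-pretrained detector (not military tooling).
import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()  # off-the-shelf detector

frame = Image.open("frame_0001.jpg").convert("RGB")        # hypothetical frame path
with torch.no_grad():
    detections = model([to_tensor(frame)])[0]              # boxes, labels, scores

# Keep only confident detections; a real analyst pipeline would map label IDs to
# object classes and track them across frames rather than printing them.
for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.8:
        print(f"class={label.item()} score={score.item():.2f} box={box.tolist()}")
```

The same loop, run across millions of frames collected in real operating contexts, is what turns Hummer’s point about data advantage into a practical edge.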
So, defense spending on both sides won’t decrease in these advanced fields any time soon, especially considering that the results of innovations at DARPA and similar agencies can and often do trickle out into commercial use over time as well. Continuing to push from a leading position on the frontier will help retain both a military and economic edge. And for this, as Hummer points out, big data remains essential.
THE COMING CLASH OF MODELS
This will not be a two- or three-horse race. While the United States and China clearly lead the AI competition, and a well-established second tier that includes the United Kingdom, Russia, and Israel is not far behind, dozens of other countries will play a key role in a future of pervasive AI deployment. The United Arab Emirates have created a government ministry solely dedicated to artificial intelligence, and the Emirates have opened their arms to companies seeking to test ideas like personal drones for transportation and other futuristic advanced technologies. In October 2017 the vice president and prime minister, Sheikh Mohammed Bin Rashid Al Maktoum, appointed twenty-seven-year-old Omar Bin Sultan Al Olama as the first state minister for AI, charging him with the task of making the UAE “the most prepared country for AI” through the “pursuit of future skills, future sciences, and future technologies.” What path the UAE will take exactly—whether regulation-forward like Europe, experimentation-forward like the United States, or decree-forward like China—will reveal itself in the years to come. What is clear, however, is that AI will anchor a more holistic economic-development strategy that aims to establish the Emirates as a hub for futuristic experimentation and investment, including in ideas such as the Hyperloop, a high-speed transportation concept popularized by Elon Musk; the autonomous passenger drones being tested by the Chinese micromultinational EHang; and new projects in desalination and solar energy. Having learned the lessons of their meteoric rise onto the stage of the world economy on the back of oil and high finance, and of the attendant boom and bust cycles, the Emirates are now diversifying into the future. That drive will be further fueled by competition with neighbors like Qatar and Iran, both technologically advanced Middle Eastern powers with rival economic positions, political interests, and allies in the region.
Small, culturally more homogeneous, reasonably cash-rich, and equipped with a solid business infrastructure, the UAE already meets some of the accelerating conditions for AI investment, not unlike Singapore or Denmark. Its centralized government and its emphasis on safety, security, and stability over individual privacy and freedom make it, like China, an open field for experimentation. However, it lacks the rich tradition and ecosystem of digital entrepreneurship inherent in the United States and, as a small nation, the large data pool required to train AI systems. But those mixes of advantages and disadvantages are limiting factors, not outright barriers. Anyone with the right data set, the right expertise, and the right amount of computing power can leap ahead in this race, and their participation will start to reshape the geopolitical doctrines we understand today.
As Parag Khanna, author of How to Run the World and Connectography, suggests, we’re in a new version of the medieval world, where myriad actors—including governments, cities, corporations, NGOs, and individuals—negotiate for power and influence. It remains to be seen whether we turn that into a new renaissance or another global conflict. That was the picture before the new AI spring, too, but data now flows silently across borders, and what’s collected in one country might be processed and audited by a small cadre of highly qualified engineers in another. Those who can attract and help grow that cadre could end up creating a vibrant new renaissance.
Even if their intentions are noble, rarely does that group of entrepreneurs or researchers include experts who can draw conclusions about the cultural, ethical, or legal implications in different countries. So national governments continue to experiment with different types of AI regulation and policy without much concern for the often invisible digital fallout. As we have explained, some are predisposed to a laissez-faire approach, while others will take a more proactive role in regulating the use of personal data and the transparency and explicability of AI processes. Others simply rule by ad hoc decree. This divergence will become more pronounced in the decade to come as the Digital Barons seek to expand their multinational reach in their insatiable drive for data and profit. That will heighten old concerns and generate new ones. Philosophies of regulation, influence, and social and economic participation will conflict—as they should.
Those clashes and their outcomes will coalesce around issues of values, trust, and power. A sustained assault on US society and institutions might prompt the government to militarize its citizens, raising new questions about America’s conception of its moral authority and exceptionalism. Artificial intelligence has moved beyond cyberwarfare and now interferes with the individual lives that make up the fabric of societies. That could threaten America’s deeply valued separation of military and civilian spheres. Notions of trust also will come into conflict, with the United States and Europe likely to lead the development of AI regulatory models that preserve individual control of data and how it’s used. While China takes a different path on individual privacy, Western companies and governments might need to develop new business models that account for much greater individual control and agency—and thus enhance trust.