Solomon's Code


by Olaf Groth


  That trust could play out, then, in the race for geopolitical influence and soft power, with Western countries offering trust-based models but less investment, and China offering more money and infrastructure but less individual control over data and civil rights protection. As Kai-Fu Lee, the CEO of Sinovation Ventures, writes in a New York Times op-ed, that could make developing countries an “economic dependent, taking in welfare subsidies in exchange for letting the ‘parent nation’s’ A.I. companies continue to profit from the dependent country’s users. Such economic arrangements would reshape today’s geopolitical alliances.”§§§

  Meanwhile, we should not assume that US science and technology leadership in cognitive computing and related applications is unassailable. Much of that ill-advised confidence rests on the assumption that China’s academic institutions will take a long time to become as capable as the best Western universities, perhaps as long as it took the United States to establish its own system of research universities. Yet China already has proved itself adept at siphoning off talent and expertise in much the same way the United States did from Europe following World War II. And let us not forget that China has lifted well over 300 million people out of poverty in the last sixty years, nor that it has surpassed the Americas in the number of machine learning research papers in recent years.

  The global competition in AI will raise as many new challenges as it solves; such is the currency of progress. But whatever pivots arise in this AI race, the applications that emerge from the global seedbed of innovation will change our world and benefit humanity in ways we can barely fathom today.

  GEO-COGNITIVE POWER

  Perhaps nothing underscores the deeper civil-military fusion enabled by AI and its cousin technologies better than their application in economic development programs. China has used its Belt and Road Initiative to encourage developing countries to play by its rules, essentially requiring that they bow to its notions of fairness, equity, and justice in exchange for the major investments it offers.¶¶¶ For example, when Kazakhstan joined about seventy other countries that have received Chinese-built infrastructure networks, it quickly saw a boom in traffic for the freshly upgraded ports along the highway and rail routes between the countries. In the first ten months of 2017, rail volumes at one station had doubled from the prior year.### Yet, those gifts came with strings attached. China’s agreements favored its companies, products, and labor, making it more difficult for Kazakh officials to foster opportunities for their country’s small and medium-size businesses. Months before some of the individual projects were implemented, Kazakh residents protested a provision that would’ve opened agricultural land to long-term leases by foreign companies.

  Those countries might have to lean even further toward Beijing’s sensibilities about the deployment and use of AI and related advanced technologies as well, especially given the fact that Chinese technology will ride atop of and be integrated into this new “smart infrastructure.” That sort of geo-cognitive power projection—a technology-enabled race to spread and enhance a nation’s global influence—could have far-reaching effects. China and other superpowers aren’t merely trying to project economic, political, and military power; they also want greater control over what people around the world believe, how they make decisions, and how much they let machines make decisions for them.

  It might be hard to imagine about seventy countries with more than 60 percent of the world’s population acquiescing to China’s social credit system, but the people in those nations, who help produce about a third of global trade and about 40 percent of global GDP, might have to go along with those dictates if they aim to do business with the world’s most populous market. Those governments and residents are coming to depend on the influx of Chinese mobile phones and the country’s willingness to help developing markets build out multimillion-dollar networks, all based on technologies produced by Huawei and other Chinese titans. Artificial intelligence applications undoubtedly will ride on top of that development. In an era of waning US leadership around the world, much less a coherent industrialized-country vision for the global economy, China’s vision might become an attractive alternative.

  It’s virtually impossible to assess all the second- and third-order ripple effects that the emergence of China as a potential geo-cognitive superpower could have on the global economic order, especially because it’s not entirely clear what China will ultimately stand for. The unfolding of the post-World War II era with the American-led Bretton Woods Conference and the UN-style system followed more transparent and predictable lines. And while China is far older than the United States, its experience with Communist rule is just seventy years old—and it was evolving again with Xi Jinping’s consolidation of power in early 2018. It is also not yet as experienced as the United States in designing and influencing global regimes that depend on it and draw countries to it. In addition, despite its large standing army, China has considerably less hard power to project via aircraft carriers and long-range bombers or cruise missiles.

  The United States does have both hard-power projection capabilities and the soft power of its popular culture. It maintains a successful model of digitization among its population that has driven unparalleled economic progress. While that story still holds appeal in many places around the world, it has lost some of its luster, particularly in comparison to the emerging appeal of the Chinese narrative that is based on its own impressive success story. America’s projection of AI applications, ethics, and influence still pervades the many countries in which its diplomatic and private-sector organizations operate. After all, Facebook would be the largest country in the world if subscribers were citizens. But in light of Cambridge Analytica and Facebook’s other scandals, as well as the problems faced by America’s other Digital Barons and its federal agencies, the extent of US influence might be in jeopardy.

  *Interview with the authors via phone, November 21, 2017

  †Henry A. Kissinger, “How the Enlightenment Ends,” The Atlantic, June 2018.

  ‡Parag Khanna, Technocracy in America: Rise of the Info-State (CreateSpace Independent Publishing: January 2017).

  §Robotics and artificial intelligence team, Digital Single Market Policy: Artificial Intelligence, European Commission, Updated May 31, 2018.

  ¶James Manyika, “10 imperatives for Europe in the age of AI and automation,” McKinsey & Company (October 2017).

  #Bhaskar Chakravorti, et al., “The 4 Dimensions of Digital Trust, Charted Across 42 Countries,” Harvard Business Review (Feb. 19, 2018).

  **James Vincent, “Putin says the nation that leads in AI ‘will be the ruler of the world,’” The Verge, Sept. 4, 2017.

  ††Gary King, Jennifer Pan, and Margaret E. Roberts, “How Censorship in China Allows Government Criticism but Silences Collective Expression,” American Political Science Review 107, no. 2 (2013).

  ‡‡Ali Montag, “Billionaire Alibaba founder Jack Ma was rejected from every job he applied to after college, even KFC,” CNBC, Aug. 10, 2017.

  §§Fortune 500 profiles, “No. 21: Jack Ma,” Fortune, Updated July 31, 2018.

  ¶¶Ministry of Education of the People’s Republic of China, 2017 sees increase in number of Chinese students studying abroad and returning after overseas studies, April 4, 2018.

  ##Elsa Kania, “China’s quest for political control and military supremacy in the cyber domain,” The Strategist, Australian Strategic Policy Institute, March 16, 2018.

  ***Summary of the 2018 White House Summit on Artificial Intelligence for American Industry, The White House Office of Science and Technology Policy, May 10, 2018.

  †††Richard J. Harknett, “United States Cyber Command’s New Vision: What It Entails and Why It Matters,” Lawfare, The Lawfare Institute, March 23, 2018.

  ‡‡‡Artificial Intelligence, Big Data and Cloud Taxonomy, Govini, 2017.

  §§§Kai-Fu Lee, “The Real Threat of Artificial Intelligence,” New York Times, June 24, 2017.

  ¶¶¶Nyshka Chandran, “China’s plans for creating new international courts are raising fears of bias,” CNBC, Feb. 1, 2018.

  ###Kemal Kirişci and Philippe Le Corre, The new geopolitics of Central Asia: China vies for influence in Russia’s backyard, The Brookings Institution, Jan. 2, 2018.

  6

  Pandora’s Box

  No matter how careful, a perpetrator always leaves a footprint somewhere. To find it, Jeff Jonas looks in some unusual places.

  Jonas is not your typical data scientist. As of this writing, he was one of just four people who’d participated in every Ironman Triathlon on the world circuit. A system he developed for Las Vegas casinos helped catch the MIT blackjack team made famous in the best-selling book Bringing Down the House and the movie 21. Even his latest venture, Senzing, was part of an unusual move by IBM, which agreed to spin out Jonas and his team into a separate firm that helps clients identify who’s who across all their data sets. The process, called “entity resolution,” helps companies prepare for the EU’s new data-protection rules. But what he really, truly loves is finding that hidden footprint. “My particular passion is for systems that take down bad guys,” Jonas says. “Helping our customers take down some real clever bastards brings me great joy.”

  He does this by uncovering unexpected connections across often-unrelated sets of information. (Among other things, he coded the idea into the Non-Obvious Relationship Awareness [NORA] software he created for casinos.) The approach looks across a range of data sets, including places so far out in left field that nefarious actors don’t think about covering their tracks there. By piecing together relationships across multiple bodies of information, the process can start to develop a trail of evidence in the inconsistencies or discrepancies across various data sets. If you just moved into a new house and the guy across the street says he never travels overseas, you’d never know it’s a lie. But then his wife gets drunk and says he lived in France. You just found an inconsistency. Simple enough in that case, but the challenge gets significantly tougher with criminals employing fancy tradecraft to try to cover their tracks.
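The cross-dataset matching described above can be sketched in miniature: link records from unrelated sources whenever they share an identifying key, then look for contradictions inside the resulting cluster. Everything in this sketch is hypothetical for illustration; the record fields, sources, and matching keys are invented, and this is not Jonas’s actual NORA implementation.

```python
from collections import defaultdict

# Hypothetical records drawn from three unrelated data sets. The neighbor
# "never travels overseas," but other records place him in France.
records = [
    {"id": 1, "source": "interview", "name": "J. Doe", "phone": "555-0101",
     "claims_overseas_travel": False},
    {"id": 2, "source": "property",  "name": "John Doe", "phone": "555-0101",
     "address": "12 Rue X, Lyon, FR"},
    {"id": 3, "source": "airline",   "name": "John Doe", "passport": "P123",
     "address": "12 Rue X, Lyon, FR"},
]

def cluster_by_shared_keys(records, keys=("phone", "passport", "address")):
    """Union-find over records: any two records sharing a key value are
    treated as the same entity. Record 3 is the 'glue' binding 1 and 2."""
    parent = {r["id"]: r["id"] for r in records}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    index = defaultdict(list)  # (key, value) -> record ids
    for r in records:
        for k in keys:
            if k in r:
                index[(k, r[k])].append(r["id"])
    for ids in index.values():
        for other in ids[1:]:
            union(ids[0], other)

    clusters = defaultdict(list)
    for r in records:
        clusters[find(r["id"])].append(r)
    return list(clusters.values())

def flag_inconsistencies(cluster):
    """Flag a cluster where someone denies overseas travel while a linked
    record places them abroad (here, a French address)."""
    denies = any(r.get("claims_overseas_travel") is False for r in cluster)
    abroad = any(r.get("address", "").endswith("FR") for r in cluster)
    return denies and abroad

for cluster in cluster_by_shared_keys(records):
    if flag_inconsistencies(cluster):
        print("inconsistency in cluster:", sorted(r["id"] for r in cluster))
```

The point of the toy example is the one Jonas makes: neither the interview nor the property record is suspicious on its own; the contradiction only appears once a third record binds them together.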

  Outside of someone making an obvious mistake or their accomplices giving them up, investigators have two options. First, they can search for the types of observations the perpetrators would never imagine or expect you to have—shifting into new observational space, as Jonas describes it. Second, investigators can use technology to compute in ways unknown to or unforeseen by the perpetrator. For example, an adversary might know you have a video camera, but he might not know anything about your license plate readers. Jonas’s approach cuts across both of those. “You have to ask what data source might be available to present the contrarian evidence that would be more difficult for the bad actor to control,” he explains. “What is the third record that might provide the glue to bind two other data points? And you might not know it’s of interest until it acts as the glue.”

  Yet, for someone so deeply reliant on data to capture the bad guys, Jonas gets even more animated when talking about the need for individual privacy. He admits he didn’t think much about it when trying to track card counters and other criminals in the Vegas casinos. But of late, he has made data privacy a concurrent pursuit alongside his company and his passion for finding bad guys. He despises the concept of the social credit score in China, calling it “possibly the most evil thing” because it “suppresses dissent and contrarian opinion.” He thinks the EU’s stringent data-protection regulations will likely become the standard for privacy regulations around the world. Yet, he has no problem with the inevitable growth of a surveillance society. “To surveil is to look. It’s not bad. You surveil the street to see if it’s safe to pass,” he explains. “So, the primary point is what data is in your observation space, and do you have a legal and policy right to it.”

  One might debate, for example, whether Cambridge Analytica should’ve had a legal and ethical right to the data it scraped from Facebook for the sake of voter targeting, sparking a scandal about the social media platform’s mishandling of members’ private information. But the other side of it is that all too often, consumers jump right into the fray for irresistible product and service offers or for new digital experiences, willingly giving up their privacy every day for some benefit or convenience in return. The question isn’t whether the AI systems that power the analyses of our lives are good or bad. They’re tools that can be used for multiple simultaneous purposes—and they will continue to be, unless people are willing to read the fine print of the user agreement or opt out, a move that usually leaves them out of the flow of transactions and information.

  For example, providing broad access to patient data might produce amazing breakthroughs for health care, facilitating discoveries at atomic and genetic levels that greatly improve human well-being for all of society. Yet, an individual might want to hold personal health information much more closely, raising tensions in turn about the results an AI system is then able to produce with the smaller set of data available to it. In these and so many other AI examples, the most critical questions center on the value-based decisions about where we draw boundaries between private and public rights. The same sort of distinction holds true in discussions of surveillance and privacy, Jonas says. Surveillance is the mere act of looking. The important concerns are who looks at what and the control citizens have over that decision. Those are questions of privacy and the value individuals and societies place on it. And while security in any country naturally requires surveillance, the extent of that surveillance and the boundaries of privacy can vary widely from one nation to the next.

  Those lines often move, too. People might come to demand broader and more-intensive surveillance as powerful new threats arise, especially since we can safely assume many of today’s costly advanced technologies will become cheaper and commonplace in the future. “What if common, everyday technology could enable a single person to kill 100 million people for $5?” Jonas asks. “What must happen then?” One of his friends, who works on privacy and civil liberties issues, once pushed further on that idea with another intriguing question: In this sort of future, are we better off keeping tighter control over who can access the ubiquitous surveillance, or opening access to the widest set of people? Power corrupts if unchecked by the crowd, his friend suggested, so perhaps we’re better off making sure that ubiquitous surveillance is available to the masses rather than concentrating it. Given the global explosion of data and the existing open-source availability of so much AI code, the ecosystem’s inclination already leans toward broader access. If that continues, as one would expect, the possibility of someone using cheaper and more powerful AI to disrupt our lives could heighten the need for ubiquitous surveillance in turn. It’s an arms race of sorts, based on the vast exponential effects of the technology.

  These already difficult challenges to our fundamental notions of values, trust, and power won’t get any easier as the deeper integration of cognitive machines in our lives triggers new dilemmas and super-charges old ones. We need to consider these “North Star” questions as we help shape a future of beneficial AI. How do we respect and preserve privacy? How do we maintain human choice and agency? How can we ensure justice and fairness? How do we build technologies that enhance human creativity and empathy? Absent a consideration of these and other North Star issues, we leave ourselves ill-equipped to respond to the potentially harmful uses of AI-powered technologies and, just as importantly, ill-prepared to capitalize on the opportunity these powerful tools provide to build a prosperous future for humanity.

  BLISSFUL IN THE FISHBOWL?

  Closed-circuit camera systems have been the norm in UK public safety for decades. Originally intended to detect bomb-setting terrorists of the Irish Republican Army (IRA), which sought independence from British control in Northern Ireland, the surveillance systems have become a widespread tool for crime detection, especially in London and other large cities. British authorities use facial recognition as a means of identifying persons of interest. Moscow employs a similar system, with about 160,000 installed cameras, though only a few thousand are active at any given time for cost reasons.* According to Artem Ermolaev, the spokesperson for the city’s department of information technology, the system covers some 95 percent of the city’s apartment buildings, has an identification-success ratio of approximately 30 percent, and led to six arrests in its initial trial period. Low-light conditions are still tough on the cameras, but that will likely improve in the near term.

  Manindra Majumdar has put camera systems and AI detection systems together in an effort to tackle two of India’s most notorious problems in recent years: untended garbage and public harassment of women. After launching an image-based search and shopping app called GoFind, Majumdar created a new start-up called CityVision AI and submitted bids in two Indian cities to test surveillance systems that identify garbage dumping or potential crimes. (It has since submitted a bid for one surveillance program in Dubai and another in Toronto, where it relocated its base of operations.) In India, the CityVision system tracks areas where people commonly dump refuse, and then alerts authorities when bins need to be collected or informs them about patterns of illicit dumping activity in targeted areas. It also can be used to spot when minor harassment of women begins to escalate and pass along the relevant location data and video for police to review.

  Because the Indian population is so much larger and more diverse than in most other countries, Majumdar says, people more readily accept surveillance as something necessary for safety. Privacy is a different issue, though, so CityVision takes steps to make sure it protects identities and leaves any direct enforcement action in the hands of the sanitation department or police.

  It’s not hard to imagine cities around the globe latching on to these types of technologies, especially as rising urbanization rates increase socioeconomic struggle and heighten the possibility of conflict. The need to manage and de-escalate tensions in crowded places likely will override the need for privacy, at least in public spaces. And as migration, trade, media consumption, and money flows across borders accelerate, governments will interconnect these systems to track individuals and assets moving from one jurisdiction to the next. One day, we might very well see this tracking ability as the hallmark of a safe and trustworthy society.

 
