Solomon's Code


by Olaf Groth


  “You can get autonomous driving engineers in China,” Tong says. “You can get them, but if you want the top ones you have to come here.” He’s convinced that will change over time, as Chinese academic institutions and private companies, backed by billions of dollars from national and regional governments, expand their expertise. But for now, Roadstar.ai can ride on the talent available in the high-tech capital of the world. That’s hardly unique from an American or a Chinese perspective, of course. Waymo, Tesla, and a variety of traditional carmakers all have a significant presence in Northern California and fight tooth and nail for brilliant developers. Some of the battles for top personnel have spilled over into the courtroom.

  Meanwhile, dozens of Chinese companies have set up shop in the United States, building new AI labs and recruiting from US universities and companies. Alibaba has kicked off a reported $15 billion investment in the DAMO Academy,* which will consist of seven labs in major high-tech hubs: Hangzhou, Beijing, Singapore, Moscow, Tel Aviv, Silicon Valley, and the Seattle area. The collaborative effort, headed by Alibaba CTO Jeff Zhang, will include the University of California, Berkeley’s RISE Lab and other major US universities, including MIT, Princeton, and Harvard.

  What might set Roadstar.ai apart, however, is its connection back to China, where the founders recently launched a domestic company to develop and operate its robo-taxi services. “When the car becomes fully autonomous, the Chinese government has to have control of the technology or the car,” Tong says. “So, it has to be someone with a Chinese background or citizen to work on this to make sense for China. That’s the big advantage for us.” And once it’s there, Roadstar.ai will be able to work with a government that’s both receptive and able to quickly create the conditions necessary for autonomous vehicles to work. For example, Hangzhou has started to build out what people are calling the “City Brain.” The city, home to more than 9 million residents, hopes to redesign its entire infrastructure in collaboration with Foxconn, Alibaba, and other high-tech giants. The City Brain will track residents via social media and surveillance technologies, but it also will begin to build out the infrastructure needed to support increasingly autonomous driving—including technologies such as Roadstar.ai’s “man behind the curtain” remote driving centers.

  These sorts of country-to-country differences can shape the direction of technical development of AI applications, but so too can the diverse values, notions of trust, and power relationships found in different parts of the globe. On the technical side, the lack of rapid, centralized infrastructure development means US companies tend to conceive of each autonomous vehicle as a separate entity, rather than as part of a fleet backstopped by a central remote system. On the cultural side, we already have seen the limits of Americans’ notions of values and trust emerge in response to fatal testing errors, most notably in March 2018, when one of Uber’s autonomous cars killed a woman pushing a bicycle across a road in Tempe, Arizona. Most companies declared a temporary moratorium on testing until developers and investigators could pinpoint the cause of the collision, and Tesla would later disclose that one of its Model X SUVs had crashed into a concrete highway divider while its Autopilot system was engaged, killing the driver, the second Autopilot-implicated fatality for the company.†

  The public outcries and potential litigation surrounding these sorts of incidents won’t halt the groundbreaking technological leaps that Tesla, Waymo, Uber, Roadstar.ai, and other autonomous vehicle companies will make, but they could slow deployment and cast a far-reaching shadow across the industry worldwide. In the United States alone, cars kill an average of almost 100 people a day, but the idea of a driverless car killing a person seems far more troubling to most people, regardless of nationality. This deep skepticism puts heightened pressure on the companies developing advanced technologies for self-driving cars, which must achieve far better safety records than the imperfect human drivers they hope to eventually supplant. The responsibility to make safety a top priority is thus universal, requiring that developers make autonomy work across all sorts of road conditions, infrastructures, and regulations.

  Yet, these companies cannot ignore the varied cultural norms and biases within the markets they serve. After all, these intelligent systems will make black-and-white decisions about many of the gray-area aspects of our personal and social systems. The fact that a Google image search for “cute baby” will yield nearly all Caucasian babies in the United States and nearly all Japanese babies in Japan raises serious enough questions about bias and discrimination on its own. But what happens when those biases dictate something even more consequential than search results and aesthetics? How might those considerations affect Tong’s robo-taxi service in Shenzhen or the future of Uber’s autonomous vehicle testing in the United States?

  THE FORCES THAT SHAPE THE WORLD’S DIVERGENT AI JOURNEYS

  Building an effective intelligent system requires capturing not only the immediate objectives of its users and beneficiaries, but also the broader society’s values and norms. A user must trust that the system will make a proper decision on their behalf. That’s especially hard for complex or social tasks, such as evaluating employee performance, booking travel, shopping for nonperishables, or, at a more immediate level, engaging with a conversational assistant about these things. People are often unclear about their own objectives, and high-tech industries do not have a stellar record of understanding how those nuances can change from one moment to the next, let alone one culture to the next. The cultural and political sensibilities embedded in an AI system by developers in one country could power applications in every corner of the world, including in places where those sensibilities are impractical, offensive, or dangerous. That’s because software is sneaky; it seeps into the nooks and crannies of our lives in often subtle, invisible ways.

  We have a difficult enough time understanding how we can best deploy a new smart app on our phones, let alone how someone a mile away, a state away, or half a world away would grapple with the same technology. But we can boil down the overwhelming mix of influences into three essential forces that will shape the development of technologies and how they diverge from one place to another—the quality of data sets; the demographic, political, and economic needs of a country; and the diversity of cultural norms and values.

  The quality and size of data sets, and how they are used to train new applications, will shape the power relationships between companies, governments, and individuals. Google faced a modest uproar when its image searches for “beauty queens” returned photos of only white women. That biased data set offended people in the United States, largely because Google search results are so pervasive they can shape cultural norms and beliefs. Bias against minorities might not stir the same concern in every culture, but the old “garbage-in, garbage-out” adage remains in full effect everywhere: biased, poorly constructed, or incomplete data sets lead to prejudiced or incomplete outputs, potentially excluding people from important civic, political, and economic dialogues, both within and between societies.
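
  To make the “garbage-in, garbage-out” mechanism concrete, consider a minimal sketch in Python of a toy model trained on a skewed data set; the labels and the 90/10 split here are invented for illustration, not drawn from any real search index.

```python
# A toy illustration of "garbage in, garbage out": a model trained on a
# skewed data set reproduces the skew. All numbers here are invented.
from collections import Counter

# Hypothetical training labels for a "beauty queen" image search,
# scraped from a corpus in which 90 percent of examples look alike.
training_labels = ["white"] * 90 + ["nonwhite"] * 10

# The simplest possible "model": always predict the label most common in
# training. Real systems are far subtler, but the failure mode is the same.
model_prediction = Counter(training_labels).most_common(1)[0][0]

print(f"Trained on skewed data, the model returns: {model_prediction}")
# Output: Trained on skewed data, the model returns: white
```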

  Countries often express their demographic, political, and economic needs in national strategies for artificial intelligence. As we note later in this chapter, these different national strategies, whether intentionally or not, sometimes align and sometimes diverge. In the United States and Israel, for example, the military drives much of the basic advanced-technology research, even beyond defense applications. Yet the United States has a far more prominent collection of Digital Barons, giant companies that dominate data and development. China has the same class of titans in Baidu, Alibaba, and Tencent, but the government plays a more active role in their strategic direction. Meanwhile, Japan and Canada take distinctive approaches to the development of AI-powered systems. Japan’s deep background of Shinto belief lends itself to a broader cultural acceptance of alternative forms of life and consciousness, a fact that’s often reinforced in its popular culture. Canada has created a more democratic and inclusive model of AI development thanks to the singular influence of a small group of developers and support from focused government science grants.

  These many divergences stem in part from cultural notions about the balance of power and agency between individuals and institutions or communities. For example, when the global Institute of Electrical and Electronics Engineers (IEEE) released the first version of Ethically Aligned Design, a report about how to integrate values-driven, ethical thinking into every aspect of AI development, many Asian experts noted it was “extremely Western,” says John C. Havens, executive director of the IEEE Council on Extended Intelligence. The second version, released in December 2017, pulled from a broader range of viewpoints and integrated ethical concepts from Confucian, Shinto, and Ubuntu philosophies. An Ubuntu mindset, for instance, focuses on forgiveness and reconciliation over vengeance. It recognizes, in the words of Nobel laureate Archbishop Desmond Tutu, that “my humanity is caught up, inextricably bound up, in what is yours.” Considering those sorts of concepts, Havens says, “completely gets you out of Western thinking.”

  Widening the cultural and economic lens reveals an intriguing mix of AI-development models that have emerged around the globe. They often diverge, but they also overlap with one another, as most countries and regions share some elements even as they follow their own unique pathways. As we note below, the Cambrian Countries feature robust entrepreneurial ecosystems that are tightly linked with their strong academic institutions. The Castle Countries possess some of the world’s most advanced scientific and technical minds, but they haven’t yet developed the type of start-up environment that can build and scale the massive private-sector data titans. The Knights of the Cambrian Era have leveraged military spending and resources to build expertise across an array of AI uses, whether for defense or other purposes. The Improv Artists have found ways to encourage or develop unique applications of advanced technologies that address problems that many developing economies face. And then there are the outlier approaches seen in Japan and Canada, where developers have blazed their own unique trails.

  In the years to come, elements of all these approaches will meld and conflict as researchers interact, companies work across borders, and nations press their ongoing quests for technological, economic, and political influence. We address more of the AI race in the next chapter, but to understand these models and how they might evolve in the future, we first need to understand just how data derives its power—and how massive troves of data have created new Digital Barons that exert an outsized influence on the development, regulation, and public opinion of artificial intelligence.

  AS TRILLIONS OF DOLLARS FLOAT INVISIBLY BY

  It’s critical to understand how these cultural and political forces will direct the pathways AI development takes around the world (as we discuss below), but all those divergences fork off from the central notion that data is power. “Gigabytes? Terabytes? Bah, small potatoes,” Cathy Newman writes in National Geographic.‡ “These days the world is full of exabytes—zettabytes, even.” No one can quantify the precise amount of data generated in a day, but reasonable estimates are staggering. If one gigabyte is roughly akin to the information held on a ten-yard shelf of books, Newman says, the world filled about 2.5 billion of those shelves in the past twenty-four hours. No wonder Harvard Business Review named data scientist as the “sexiest job of the 21st century.”§
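
  Newman’s shelf metaphor reduces to simple arithmetic; as a back-of-the-envelope check, using only the round figures from the text:

```python
# Back-of-the-envelope conversion of Newman's shelf metaphor.
# One ten-yard shelf of books ~ one gigabyte; 2.5 billion shelves per day.
GIGABYTE = 10**9               # bytes
shelves_per_day = 2.5e9        # shelves filled in twenty-four hours

bytes_per_day = shelves_per_day * GIGABYTE
print(f"{bytes_per_day / 10**18:.1f} exabytes of data per day")
# Output: 2.5 exabytes of data per day
```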

  That data comes in all sorts of shapes and sizes, from a Bangalore mother’s online shopping preferences to a tsunami sensor’s readings off the coast of Japan. For Internet companies, “life-pattern” data attracts the most attention. A 2015 McKinsey & Company report forecasts that data from all connected devices will create about $11 trillion in economic value by 2025.¶ Individuals, who contribute a significant portion of that $11 trillion, might want a better understanding of that data and where its value comes from. Consider the lamp that might be shedding light on this page, and then imagine a translucent blue box around it. That box is a “semantic space,” a description of what the lamp consists of, who built it and when, and how they put it together. It might also describe the intended customer segment for that style or the suggested retail price. This semantic space appears when the factory produces the lamp: a digital representation of the lamp’s characteristics that resides in the company’s database. It lets management ensure better quality, encourage more sales, and develop an even better version on the next go-round.
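
  One way to picture that semantic space is as a structured record created alongside the lamp; the sketch below is purely illustrative, with every field name invented rather than drawn from any real manufacturer’s database.

```python
# A hypothetical "semantic space" for the lamp at the moment of manufacture:
# what it is, who built it and when, how it's assembled, and whom it's for.
# Every field name here is invented for illustration.
from dataclasses import dataclass

@dataclass
class SemanticSpace:
    product: str
    built_by: str
    built_on: str                  # ISO date of manufacture
    components: list[str]
    target_segment: str
    suggested_retail_price: float

lamp_record = SemanticSpace(
    product="reading lamp",
    built_by="Factory 12, Line B",
    built_on="2018-03-14",
    components=["brass base", "LED array", "dimmer module"],
    target_segment="home office",
    suggested_retail_price=79.00,
)
print(lamp_record)
```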

  Now, with the addition of more sensors, cheaper memory chips, and greater computing power, as well as new data processing capabilities embedded in the lamp itself, the semantic space tied to each individual lamp continues to evolve long after the sale. The lamp might connect with physical infrastructure, like electricity grids, building walls, and ceilings. It might connect to people in both passive and active ways, potentially monitoring usage patterns to optimize power consumption and convenience. It becomes one of a rapidly growing number of objects and entities that bridge between people and content, making its owner both a producer and a consumer of data. And that data has all kinds of value—social value for friends and family, and commercial value for the lamp company that wants to learn from that evolving semantic space. Combine that with the myriad other streams of data that we and our environments produce every day, and you begin to see the incredible economic value for employers, businesses, utilities, and retailers. Suddenly, McKinsey’s $11 trillion estimate, nearly the size of the entire Chinese economy in 2017, might seem conservative.
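
  Continuing the sketch above, the post-sale evolution of the semantic space might look like a running log of timestamped usage events appended to the original record; the event names and fields below are, again, invented for illustration.

```python
# After the sale, sensor readings extend the lamp's semantic space into a
# living usage record. Event names and fields are hypothetical.
from datetime import datetime

usage_log: list[dict] = []

def record_event(event: str, **details) -> None:
    """Append one timestamped life-pattern observation to the lamp's log."""
    usage_log.append({"at": datetime.now().isoformat(), "event": event, **details})

record_event("switched_on", room_brightness_lux=12)
record_event("dimmed", level=0.4)    # the cozy Friday-night reading level
record_event("switched_off")

# The manufacturer can now mine usage_log for patterns: when the lamp is
# used, at what brightness, and alongside which other connected devices.
print(len(usage_log), "events logged")
```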

  What seems like a laidback Friday night with a book at home is becoming, invisibly in the background, a tremendously complex integration effort for all the data streams that emerge as you switch on that cozy lamp, light the crackling fireplace, and turn on a little smooth jazz. Of course, companies will use that life-pattern data to try to sell you more of the things you like, but they also will use that data to train and refine their AI systems to more accurately predict what you would most enjoy or benefit from at any particular moment. A home assistant might plan meals, suggest an outing for tomorrow, and remind you to connect with Tom, whom you haven’t seen in a while. It might interpret the different ways you move around your house and analyze the intentions behind your patterns, perhaps warming the toilet seat when you awake on a cold winter morning. It might precisely calculate how your nutritional intake integrates with your exercise patterns, suggesting meals or snacks that optimize your fitness. We can’t visualize all the ways these refinements and innovations will change our lives. Hopefully, companies will use these systems to make better products, provide more convenience, and enhance people’s productivity and leisure time, rather than just serve up more advertisements and pad their profit margins. But having loads of data means having loads of power. What happens to trust in a society when all that data and power are concentrated in the hands of a few companies and governments?

  THE DIGITAL BARONS

  The Blob captured the imagination of moviegoers in 1958 and has oozed its way through film culture in the decades since. In the movie, an alien lifeform travels to Earth on a meteorite, hitting the ground somewhere in the woods of rural Pennsylvania. Two teenagers, Steve and Jane, played by the unforgettable Steve McQueen and Aneta Corsaut, see the meteorite crash over the hill and, driving over to investigate, almost hit an elderly man on the road. The anguished man has poked the meteorite with a stick, and now the blob is attached to his hand. Before the doctor can amputate his arm, the blob consumes the man . . . and then the nurse . . . and then the doctor himself. Freshly fed, the blob rolls on and over everyone who crosses its path, gorging itself until it becomes so large it threatens to consume entire buildings and the people within. But our heroes, Steve and Jane, eventually realize the blob doesn’t care for the cold, and the town neutralizes it with fire extinguishers before the Air Force transports it to the Arctic.

  To many of us, this is what the big Internet companies feel like—the giant data blob that rolls over every area of life, from retail to finance, from expert advice to dating, from health services to car sharing. Nothing seems safe; it just keeps growing, and we have no fire extinguisher large enough to freeze it. Fortunately, we mostly reap the benefits of this particular blob, because it makes our lives easier, more convenient, and more interconnected. The more we use their services, the more detail these companies acquire about our lives, and they make sure that their platforms feed our needs and keep us happy. The skeptic might say “addicted,” as Salesforce CEO Marc Benioff does when describing social media users. “Intimate” might be a better term, because it embodies both the opportunities and risks inherent in deeper sets of life-pattern data. The collection and analysis of vast data sets can create greater intimacy between the Internet platforms and their users, as well as between individual users themselves. Google can provide more precise results for that search you were struggling to define. Facebook can help you find and reconnect with a long-lost friend. Amazon can suggest just the right product to supplement that gift for your spouse. Baidu, Alibaba, and Tencent do the same for Chinese users.

  Of course, all that intimacy can produce negative results if companies don’t live up to the data stewardship expected of them by users, societies, and governments. But even when they live up to both cultural and regulatory standards, how much data is enough data? Does that sort of intimacy start to feel uncomfortable if you know that different signals of your romantic evening are being tracked by your thermostat, lamp, surveillance cameras, and smartphone? Beyond their closest friends, few people want to share that kind of intimacy with anyone, let alone the engineers at Google. And most importantly: How much power are we giving others by letting them follow our intimate patterns too closely, happily intermediated by that soft-glowing, energy-saving, house-monitoring lamp?

 
