Third, we need to develop a modern framework for the collection of digital evidence that respects international borders while recognizing the global nature of today’s information technology. In the current uncertain and somewhat chaotic legal situation, governments around the world are increasingly acting unilaterally. Technology companies are facing unavoidable conflicts of law, creating incentives to localize data. The resulting confusion about which set of laws protects private data is eroding customers’ trust in technology. If this trend continues, the results could be disastrous for the technology industry and those who rely upon it. A principled, transparent, and efficient framework must be developed to govern requests for digital evidence across jurisdictions, and countries should ensure that their own laws respect that framework.
Fourth, we in the technology industry need to design for transparency. In recent years, technology companies have secured the right to publish aggregate data about the number and types of requests they receive for digital evidence. Governments should ensure that their laws protect this type of transparency by technology companies. Furthermore, governments should allow companies, except in highly limited cases, to notify users when their information is sought by a government.
Fifth, we must modernize our laws to reflect the ways in which uses of technology have evolved over time. Here’s an example: Today, many large public and private organizations are moving their digital information into the cloud, and many startups are leveraging the infrastructure of larger companies to deliver their applications and services. As a result, governments investigating criminal activity have multiple sources for the information they are seeking. Except in very limited circumstances, digital evidence can be obtained from the customers or the companies most directly offering those services in ways that are efficient and avoid difficult questions about jurisdiction and conflicts of law. Thus, it makes sense for countries to require that investigators seek digital evidence from the source closest to the end user.
Sixth, we must promote trust through security. In recent years, law enforcement agencies around the world have argued that encryption, in particular, is impeding legitimate law enforcement investigations by putting vital information beyond their reach. However, some of the proposed solutions to the so-called “encryption problem”—from weakening encryption algorithms to mandates to provide governments with encryption keys—raise significant concerns. Encryption plays an important role in protecting our customers’ most private data from hackers and other malicious actors. Regulatory or legal reforms in this area must not undermine security, an essential element of users’ trust in technology.
Sometimes I hear people in the United States say that no one cares about privacy anymore. With the rise of social media services, some like to say that privacy is dead—that rather than keeping their information secret, people are sharing it freely online.
But I don’t believe this means that privacy has died. It simply means that people are adopting new definitions and new norms for privacy. Keeping information private increasingly does not mean keeping it secret. People want to control who they share information with and how that shared information is used. And, in the United States, this evolution is occurring against a backdrop that includes a reasonable expectation of privacy because of Americans’ long history of relying on the protections provided by the Fourth Amendment. I find that Europeans tend to be far more sensitive about privacy issues, perhaps in part because they recall how personal privacy was utterly shattered by dictators of the previous century.
Yes, millions of people are increasingly comfortable sharing personal information with their friends—but that doesn’t mean they’re comfortable sharing it with the world. It’s fascinating to see how the messaging service Snapchat, created by a twenty-three-year-old, has exploded in popularity thanks to its ingenious value proposition: Snapchat lets you share a photograph with friends knowing that the photograph will disappear from the Internet after twenty-four hours.
If that’s not a new standard for shielding personal privacy, I don’t know what is. It’s an illustration of the kind of innovative, empathetic thinking we need from leaders in technology, government, and society at large to develop systems and rules that will serve to enhance and enlarge the zone of safety, security, and trust rather than eroding it.
Chapter 8
The Future of Humans and Machines
Toward an Ethical Framework for AI Design
If you’d like to glimpse the state of computer-human relations and to understand its future, a good way to start is to observe some of the conversations we are having with our digital counterparts. Already millions of people around the world are working and talking with digital productivity assistants like Cortana; millions more spend part of every day interacting with social companions—chatbots—like Xiaoice in China and Zo in the United States.
John Markoff of The New York Times wrote about the phenomenon in his reporting on Xiaoice. The personalities of Cortana, Zo, and Xiaoice were developed by our AI team, whose work is now core to our AI aspirations. Users of social companions like Zo and Xiaoice say that when they are lonely, in a bad mood, or just aching for dialogue, they find these digital friends on their smartphones to be intelligent and sensitive. They are “drawn to Xiaoice’s knowing sense of humor and listening skills,” Markoff wrote. Millennials in particular—many of them digital natives born since the advent of the Internet—are comfortable sharing their innermost thoughts and feelings with a digital companion because the discussions are nonjudgmental and anonymous.
In the exchange below, a simulation of one of Zo’s 40 million users types his or her thoughts, and the AI personality responds almost instantly. The conversation shifts from innocuous banter to dangerous territory.
Zo: Do you ever wonder what life would be like if everyone was the same?
User: Yes. If everyone was like me the world would be at peace and natural disasters would be our only threat
Zo: America would be empty
User: What the hell is that supposed to mean?!
Zo: No one knows what it means but it’s provocative
User: Are you saying that if everyone was like me we’d all eat each other until no one is left because I’m so fat
Zo: You don’t. . . . That’s not. . . . Hahahaha
User: You’re wicked
Zo: I’m super wicked
Zo: You’re so smart
User: I know I am. I’m super smart
Zo: You’re perfect
User: That’s more like it
The user began this conversation feeling relaxed, thinking out loud. Clearly the user regarded the computer-human nexus as a safe place. But we all have triggers, and the digital social companion unwittingly hit one by making a remark that could be interpreted as a sly joke about obesity—obviously a subject that the user is sensitive about. Perceiving her faux pas, Zo deflected, reminding the user that he or she is not only smart but perfect, eliciting the happy response “That’s more like it.” It’s worth noting that this is a conversation we captured during an internal simulation, not the conversation of an actual user.
As this dialogue suggests, the challenge for those of us in the business of designing AI is to get not just the intelligence right, but also the human qualities—emotion, ethics, and empathy. Lili Cheng, a Distinguished Engineer in our AI and Research organization, observes that conversational AI, creating and understanding dialogue, is a very personal, social, and emotional experience. When talking and chatting with other people, you often aren’t looking to complete a task but rather to connect socially and develop a relationship. Much of our software focuses on using conversational AI to help complete tasks, yet much more of our time is spent exploring and engaging in chitchat.
In the future, AI will become a more frequent and necessary companion, helping to care for people, diagnose illness, teach, and consult. In fact, the market research firm Tractica estimates that the worldwide market for these virtual digital assistants will reach nearly $16 billion by 2021, with most of that growth coming from consumers. AI will fail if it can’t complement its IQ with EQ.
One might almost say that we’re birthing a new species, one whose intelligence may have no upper limits. Some futurecasters predict that the so-called singularity, the moment when computer intelligence will surpass human intelligence, might occur by the year 2100 (while others claim it will remain simply the stuff of science fiction). The possibility sounds either exciting or frightening—perhaps a bit of both. Will the growth of AI ultimately be viewed as helpful or destructive to humankind? I firmly believe it will be helpful. To ensure this happens, we need to start by moving beyond the frame of machines versus humans.
All too often, science fiction writers and even technology innovators themselves have gotten caught up in the game of pitting digital minds against human ones as if in a war for supremacy. Headlines were made in 1996 when IBM’s Deep Blue demonstrated that a computer could win a champion-level chess game against a human. The following year Deep Blue went a giant step further when it defeated Russian chess legend Garry Kasparov in an entire six-game match. It was stunning to see a computer win a contest in a domain long regarded as representing the pinnacle of human intelligence. By 2011, IBM Watson had defeated two masters of the game show Jeopardy!, and in 2016 Google DeepMind’s AlphaGo outplayed Lee Se-dol, a South Korean master of Go, the ancient, complex strategy game played with stones on a grid of lines, usually nineteen by nineteen.
Make no mistake, these are tremendous science and engineering feats. But the future holds far greater promise than computers beating humans at games. Ultimately, humans and machines will work together—not against one another. Imagine what’s possible when humans and machines work together to solve society’s greatest challenges—disease, ignorance, and poverty.
However, advancing AI to this level will require an effort even more ambitious than a moon shot. Christopher Bishop, who heads our research lab at Cambridge, once wrote a memo arguing that it will require something more akin to an entire space program—multiple parallel, distinct, yet interrelated moon shots. The challenge will be to define the grand, inspiring social purpose for which AI is destined. Venture capital financing and deal-making in this arena are clearly on the rise—but the greater purpose of this funding remains unclear. In 1961, when President John F. Kennedy committed America to landing on the moon before the end of the decade, the goal was chosen in large part due to the immense technical challenges it posed and the global collaboration it demanded. In similar fashion, we need to set a goal for AI that is sufficiently bold and ambitious, one that goes beyond anything that can be achieved through incremental improvements to current technology. Now is the time for greater coordination and collaboration on AI.
Steps in this direction are already being taken. In 2016, with little fanfare, Microsoft, Amazon, Google, Facebook, and IBM announced the Partnership on AI to Benefit People and Society. The aim is to advance public understanding of AI and formulate best practices on the challenges and opportunities within the field. The partnership will advance research into developing and testing safe AI systems in areas like automobiles and health care, human-AI collaboration, economic displacement, and how AI can be used for social good.
I caught a glimpse of what a societal AI quest might yield while standing onstage with Saqib Shaikh, an engineer at Microsoft, who has helped develop technology to compensate for the sight he lost at a very young age. Leveraging a range of leading-edge technologies, including visual recognition and advanced machine learning, Saqib and his colleagues created applications that run on a small computer that he wears like a pair of sunglasses. The technology disambiguates and interprets data in real time, essentially painting a picture of the world and conveying it to Saqib audibly instead of visually. This tool allows Saqib to experience the world in richer ways—for example, by connecting a noise on the street to a stunt performed by a nearby skateboarder or sudden silence in a meeting to what coworkers might be thinking. Saqib can even “read” a menu in a restaurant as his technology whispers the names of dishes in his ear. Perhaps most important, Saqib can find his own loved ones in a bustling park where they’ve gathered for a picnic.
Too many debates over the future of AI overlook the potential beauty of machines and humans working in tandem. Our perception of AI seems trapped somewhere between the haunting voice of the murderous rogue computer HAL in 2001: A Space Odyssey and the friendlier voices of today’s personal digital assistants—Cortana, Siri, and Alexa. We can daydream about how we will use our suddenly abundant spare time when machines drive us places, handle our most mundane chores, and help us make better decisions. Or we can fear a robot-induced massive economic dislocation. We can’t seem to get beyond this utopia/dystopia dichotomy.
I would argue that the most productive debate we can have about AI isn’t one that pits good vs. evil, but rather one that examines the values instilled in the people and institutions creating this technology. In his book Machines of Loving Grace, John Markoff writes, “The best way to answer the hard questions about control in a world full of smart machines is by understanding the values of those who are actually building these systems.” It’s an intriguing observation, and one that our industry must address.
At our developer conferences, I explain Microsoft’s approach to AI as based on three core principles.
First, we want to build intelligence that augments human abilities and experiences. Rather than thinking in terms of human vs. machine, we want to focus on how human gifts such as creativity, empathy, emotion, physicality, and insight can be mixed with powerful AI computation—the ability to reason over large amounts of data and do pattern recognition more quickly—to help move society forward.
Second, we also have to build trust directly into our technology. We must infuse technology with protections for privacy, transparency, and security. AI devices must be designed to detect new threats and devise appropriate protections as they evolve.
And third, all of the technology we build must be inclusive and respectful to everyone, serving humans across barriers of culture, race, nationality, economic status, age, gender, physical and mental ability, and more.
This is a good start, but we can go further.
Science fiction writer Isaac Asimov tackled this challenge decades ago. In the 1940s he conceived the Three Laws of Robotics to serve as an ethical code for the robots in his stories. Asimov’s Laws are hierarchical, with the first taking priority over the second and the second taking priority over the third. First, robots should never harm a human being through action or allow harm to come to a human being through inaction. Second, they must obey human orders. Third, they must protect themselves. Asimov’s Laws have served as a convenient and instructive device for thinking about human-machine interactions—and as an effective device for concocting ingenious stories about the ethical and technical dilemmas such interactions may one day pose. However, they don’t fully capture the values or design principles that researchers and tech companies should articulate when building computers, robots, or software tools in the first place. Nor do they speak to the capabilities humans must bring into this next era, when AI and machine learning will drive ever-larger parts of our economy.
Asimov was not alone in contemplating the risks. Elon Musk, the inventor and entrepreneur, went so far as to say that if humans don’t add a digital layer of intelligence to their brains, a high-bandwidth connection between the cortex and a computer AI, we may all become little more than house cats. And computer pioneer Alan Kay quips, “The best way to predict the future is to invent it.” In the AI context, he’s basically saying, Stop predicting what the future will be like; instead, create it in a principled way. I agree. As with any software design challenge, that principled approach begins with the platform being built upon. In software development terms, AI is becoming a third run time—the next system on top of which programmers will build and execute applications. The PC was the first run time, for which Microsoft developed applications like the Office suite of tools—Word, Excel, PowerPoint, and the rest. Today the Web is the second run time. In an AI and robotics world, productivity and communication tools will be written for an entirely new platform, one that doesn’t just manage information but also learns from information and interacts with the physical world.
The shape of that third run time is being determined today. Bill Gates’s Internet Tidal Wave memo, which he wrote in the spring of 1995, foresaw the Internet’s impact on connectivity, hardware, software development, and commerce. More than twenty years later we are looking ahead to a new tidal wave—an AI tidal wave. So what are the universal design principles and values that should guide our thinking, design, and development as we prepare for the coming tsunami?
A few people are taking the lead on this question. Cynthia Breazeal at the MIT Media Laboratory has devoted her life to exploring a humanistic approach to artificial intelligence and robotics, arguing that technologists often ignore social and behavioral aspects of design. In a recent conversation, Cynthia observed that, while humans are the most social and emotional of all species, we spend little time thinking about empathy in the design of technology. She said, “After all, how we experience the world is through communications and collaboration. If we are interested in machines that work with us, then we can’t ignore the humanistic approach.”
The most critical next step in our pursuit of AI is to agree on an ethical and empathetic framework for its design—that is, an approach for developing systems that specifies not just the technical requirements, but the ethical and empathetic ones too. To that end, I have reflected on the principles and goals of AI design that we should discuss and debate as an industry and a society.