by Olaf Groth
In a world where AI-powered networks create more value and produce more of the products and services we use each day—and do so with less and less human control over designs and decisions—our jobs and livelihoods will change significantly. For centuries, technology has destroyed inefficient forms of manual labor and replaced them with more productive work. But more than at any other time in history, economists worry about our ability to create jobs fast enough to replace the ones lost to AI-driven automation. Our own creations are running circles around us, faster than we can count the laps.
The disruptive impact of AI and automation spans all areas of life. Machines make decisions for us without our conscious and proactive involvement, or even our consent. Algorithms comb through our aggregated data and recognize our past patterns, and the patterns of allegedly similar people across the world. We receive news that shapes our opinions, outlooks, and actions based on the subconscious inclinations we expressed in past actions, or the actions of others in our bubble. While driving our cars, we share our behavioral patterns with automakers and insurance companies so we can take advantage of navigation and increasingly autonomous vehicle technologies, which in return provide us with new conveniences and safer transportation. We enjoy richer, customized entertainment and video games, the makers of which know our socioeconomic profiles, our movement patterns, and our cognitive and visual preferences. Those developers use that information to tailor prices to our personal level of perceived satisfaction, our need to pass the time, or our level of addiction. One person might buy a game at $2, but the next person, who exhibits a vastly different profile, might have to pony up $10.
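To make the mechanics of that kind of profile-based pricing concrete, consider a minimal, purely illustrative sketch in Python. Every element of it (the profile fields, the weights, the $2 floor and $10 ceiling) is an assumption invented for illustration, not any company's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class PlayerProfile:
    """Hypothetical signals a developer might infer about a player."""
    hours_played_per_week: float   # proxy for need to pass the time
    sessions_abandoned: int        # proxy for price sensitivity
    past_purchases: int            # proxy for willingness to pay

def personalized_price(profile: PlayerProfile,
                       base_price: float = 2.0,
                       ceiling: float = 10.0) -> float:
    """Toy pricing rule: start at a base price and raise it for players
    whose behavior suggests higher attachment and spending power.
    An invented illustration of the practice described above,
    not any real company's algorithm."""
    price = base_price
    # Heavier play is read as higher attachment (or addiction).
    price += min(profile.hours_played_per_week, 20) * 0.2
    # A purchase history signals demonstrated willingness to pay.
    price += profile.past_purchases * 0.5
    # Frequent abandonment suggests price sensitivity; discount slightly.
    price -= profile.sessions_abandoned * 0.25
    # Clamp the result between the base price and the ceiling.
    return round(max(base_price, min(price, ceiling)), 2)

# Two players, two prices for the same game:
casual = PlayerProfile(hours_played_per_week=1, sessions_abandoned=4, past_purchases=0)
devoted = PlayerProfile(hours_played_per_week=25, sessions_abandoned=0, past_purchases=8)
print(personalized_price(casual))   # 2.0  — stays at the base price
print(personalized_price(devoted))  # 10.0 — pushed to the ceiling
```

The point of the sketch is not the particular numbers but the asymmetry: two people request the same product, and the system quotes them different prices based on what it has inferred about them.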
None of this means the machines will enslave us and deprive us of free will. We already “opt in” to many deals from Internet companies, often blindly agreeing to the details buried in the fine print because of the benefits we reap in return. Yet, as we continue to opt into more services, we might be doing so for entire sections of our lives, allowing AI to manage complex collections of big and small decisions that will help automate our days, make services more convenient, and tailor offerings to our desires. No longer will we revisit each decision deliberately; we’ll choose instead to trust a machine to “get us right.” That’s part of the appeal. And to be sure, the machine will get to know us in better and, perhaps, more honest ways than we know ourselves, at least from a strictly rational perspective.
Even when we willingly participate, however, the machine might not account for cognitive disconnects between what we purport to be and what we actually are. Reliant on real data from our real actions, the machine could constrain us to what we have been, rather than what we wish to be. Even with the best data, AI developers might fashion algorithms based on their own experiences, unwittingly creating a system that guides us toward actions we might not choose. So, does the machine then eliminate or reduce our personal choice? Does it do away with life’s serendipity? Does it plan and plot our lives so we meet people like us, and thus deprive us of encounters with people who spark the types of creative friction that make us think, reconsider our views, and evolve into different, perhaps better, human beings?
The trade-offs are endless. A machine might judge us on our expressed values—especially our commercial interests—and provide greater convenience, yet overlook other deeply held values we’ve suppressed. It might not account for newly formed beliefs or changes in what we value. It might even make decisions about our safety that compromise the well-being of others, and do so in ways we find objectionable. Perhaps more troubling, a machine might discriminate against less-healthy or less-affluent people because its algorithms focus instead on statistical averages or pattern recognition that favors the survival of the fittest. After all, we’re complex beings who regularly make value trade-offs within the context of the situation at hand, and sometimes those situations have little or no precedent for an AI to process.
Nor can we assume an AI will work with objective efficiency all the time, free of biases and assumptions. While the machine lacks complex emotions or the quirkiness of a human personality with all its ego-shaping psychology, a programmer’s personal history, predisposition, and unseen biases—or the motivations and incentives of his or her employer—might still get baked into algorithms and selections of data sets. We have already seen examples in which even the most earnest efforts have had unintended consequences. In 2016, Uber faced criticism about longer wait times in zip codes with large minority populations, where its demand-based algorithms triggered fewer instances of the surge pricing that would draw more drivers to those neighborhoods.†
So, how will we balance these economic, social, and political priorities? Will public companies develop AIs that favor their customers, partners, executives, or shareholders? Will an AI jointly developed by technology firms, hospital corporations, and insurance companies act solely in the patient’s best interest, or will it also prioritize a certain financial return? Will military drones and police robots begin to act more defensively or offensively when they receive updates, and will those instructions change with every new political administration? Whether in an economic, social, or political context, we will face critical questions about the institutions and people we want to hold responsible and accountable for AI’s intersections with our human lives. Absent that, we will never establish enough trust in artificial intelligence to fully capitalize on the extraordinary opportunities it could afford us.
Too complex, you might say. But whether we answer these questions or turn our backs on them, the influence of machines on our lives will expand. We can’t put the genie back in the bottle. Nor should we try to; the benefits in virtually every scenario could be transformative, and lead us to new frontiers in human growth and development. But make no mistake: We stand at the threshold of an evolutionary explosion unlike anything the planet has seen since the biological eruption of the Cambrian Period. The coming AI explosion will deliver on grand promises and grand risks alike. The trade-offs will be messy, difficult to assess, and even harder to navigate. We will make mistakes and suffer dramatic setbacks. But if we can establish clear guidelines that ensure trust and reliability in thinking machines first, we can prevent the worst mishaps and lay the foundation for a deep examination of the healthiest path for the advancement of AI.
THE THREE Cs OF AI PROGRESSION
Ideally, the path toward a future of ubiquitous artificial intelligence will inspire a range of ideas and policies. To guide what’s bound to be a rapid and unpredictable evolution into this future, however, it helps to think in terms of a living, malleable framework that captures AI’s fundamental progression through the Three Cs—Cognition, Consciousness, and Conscience. The first, cognition, captures the range of brain function, including perception (e.g., object and speech recognition), pattern recognition and sense-making, reasoning and problem-solving, task planning, and learning. Researchers have studied these capabilities for more than six decades, ever since John McCarthy coined the term “artificial intelligence” in 1955. However, cognition without consciousness—a machine’s ability to reflect on what it sees and recommends—can pose serious risks. Unable to reflect on its own actions and existence, a machine cannot evaluate its role and its impact on the human environment. Furthermore, the ability to reflect without a commensurate ability to assess morality could create even greater dangers. In human psychological terms, we’d call such an actor a “sociopath,” or a being without a conscience.
These Three Cs provide useful mileposts along our race toward greater artificial intelligence. AI scientists are sprinting toward a finish line of machine consciousness. Whether they’re just leaving the starting blocks or entering the home stretch depends on which expert you ask. Either way, though, we must instill the machine with a conscience before it reaches consciousness. We accomplish this by applying the Three Cs to our own human endeavors: thinking about the ways AI systems learn about our past, present, and future (cognition); considering how artificial intelligence should reflect on our lives, societies, and economies in the years to come (consciousness); and developing a charter to guide AI development toward a beneficial future for humanity (conscience).
A CRITICAL QUESTION OF TRANSPARENCY
Any credible contemplation of AI’s progression through the Three Cs will eventually run into questions about transparency and the opportunity for independent assessment. Absent a high degree of insight into AI development, algorithms, and data sets, we will have little chance of ensuring that machines follow a model of conscience that safeguards human values. Monitoring the vast opportunities these machines will present, and mitigating the tremendous risks they pose, are necessary steps before AI’s full beneficial potential can be reached.
We cannot accomplish this with meetings of scientists or venture capitalists or hacker sessions in Silicon Valley laboratories. It needs to be an open and inclusive effort. We already see attempts to bring technological development into the sunshine. In one such example, a group of tech luminaries in the United States launched OpenAI, funded the research organization with $1 billion, and set its mission “to build safe AI, and ensure AI’s benefits are as widely and evenly distributed as possible.” However, this and most other efforts focus on technical solutions for an ethical problem. New policies and social solutions depend on more than technology alone; they require a broader societal discourse that tackles the promises and risks of a new era of artificial intelligence. Time is of the essence. Jobs and identities are at stake. As corporations forge alliances and discuss standards and operating procedures, the Digital Barons—such as Facebook, Baidu, Google, and Alibaba—are consolidating considerable power. Will we produce an ethical framework that preserves both the shared and the diverse rights and interests of individual citizens around the world? Will we be able to do so without stifling the boundless opportunities AI could offer humanity? Ultimately, will we nurture the best of both human and machine?
VALUES, POWER, AND TRUST
From the wheel to the internal combustion engine, from the first personal computer to the most sophisticated supercomputer operating today, technology always changes the way we live and work. It redefines how we produce, learn, earn, and interact with each other and the institutions in our lives. Yet, with their ability to replicate human cognitive and physical abilities, AI systems also become unique actors in our everyday lives, whether on their own or when remotely controlled by “the man behind the curtain.” And by introducing such a new and forceful element into our existence, one that more closely imitates us than any technology before it, we raise new types of questions about values, power, and trust in our technologies and fellow humans alike. “The biggest problem with the AI conversation is we talk about what AI is going to do to us,” says Arati Prabhakar, former head of the US Defense Advanced Research Projects Agency (DARPA), which invests in breakthrough technologies for national security. “The real conversation should be about what we’re doing to ourselves as humans” as AI evolves.
Our human conceptions of values, trust, and power will evolve as we allow AI systems to make more decisions on our behalf. From our need to project our individual selves to the world, to the “trolley problem” questions of how autonomous cars should react in situations where someone would die one way or the other, widespread use of thinking machines will force us to consider how we want these systems to represent our diverse values and identities. How focused on us do we want our AI’s decisions to be? Do we want to maximize the benefit for ourselves and our families, or do we want to set our assistants on “mensch mode” and balance our interests with those of others on their own life paths? How do the values of communities and societies get weighed alongside our own? Who gets to decide the mix?
Of course, those decisions relate directly to the power we want to exert on society and on the people around us. Whether from the perspective of individuals, companies, or nation-states, the contest between AI-fueled powers will reshape our lives with or without our active participation. But how that happens also depends on the power balance between human and machine. As companies gather ever more data about our attitudes, behaviors, and preferences, will the algorithms they deploy keep the best interest of their customers and societies in mind, or will profit be the only driver of their decisions?
As of this writing, we have little understanding of what happens inside the AI black box, leaving us in the dark about how and why the system categorizes and represents us the way it does. This will change how our identity is projected into the world and, thus, the influence we might have on it. Perhaps smart algorithms and the people designing them will decide to portray us authentically, with all our flaws included. We might massage our personal profiles on a dating site to make ourselves appear more attractive, but algorithms that process far more data streams might see through our hyperbole and present us more objectively. Doing so might make us more human, more respected, and more trustworthy, but it might also leave us with less control over our place in life.
In this uncertain environment around values and power, our frail sense of trust becomes our most valuable currency in society, more so than even money or knowledge. A lack of transparency and understanding of AI systems will put a heightened premium on credibility and integrity. How humans and machines can assure both remains an open question. In early 2018, in a downtown Berkeley conference room, we typed “Solomon’s Code” and our names into a Google search to find the initial Amazon listing of this book. We sat directly across the table from each other, both on the same Wi-Fi network, entering the exact same terms, and we got different results. One search showed the book atop the page, the other buried it a page later. What explains the difference? Why did the algorithm treat us differently? Does it do the same thing when we search for health treatments, financial advice, or information on political candidates? The explanation might be simple enough given our search histories, and we got a chuckle out of it. But the truth is we can’t always know how these systems are classifying and segmenting us. As algorithms guide more important facets of our lives, we need to trust that the machines will treat us fairly and guide us toward the best possible version of humanity. That will come down to the people who write those algorithms and the ethical frameworks that guide their creativity.
THE HUMAN ELEMENT
These grand questions of humanity take on a special urgency when applied to the most basic and intimate of human concerns—our personal well-being. The health-care industry has already emerged as one of the most prominent laboratories for artificial intelligence. In 2016, for example, IBM announced a partnership that would merge the powerful AI capabilities of its Watson for Oncology system with Quest Diagnostics’ genomic sequencing of tumors. The combination of Watson’s vast research capabilities and Quest’s precise identification of genomes could suggest treatments that would attack the specific cancer mutations of an individual patient, increasing effectiveness and reducing side effects. At the time of the announcement, the companies said the partnership would extend the service to oncologists who serve almost three of every four US cancer patients.
The reach of Watson and other AI systems will extend deeper into health care in the years to come. As machines grow increasingly capable and we, as patients, come to accept the greater role they will play in helping us and our doctors preserve our well-being, the line between machine- and human-directed care will blur. How fully will we rely on the analytical power of systems that can gather, process, and learn from vastly more research and data than any human expert can possibly consume? How will we view a doctor’s expertise in comparison? And how will we balance a coldly objective AI with all the squishy elements that make us human—all the bias, instinct, fear, and willpower that influence our health, for better or worse?
Consider a personal example, in which those distinctly human attributes might have made all the difference when my (Olaf’s) wife, Ann, was diagnosed with breast cancer while carrying our first child. We got married in April 2004 and, while visiting her mother for Christmas that same year, found out she was pregnant. We sat around the holiday table in giddy disbelief; only the two of us were aware of our joy. But less than three months later, before we’d planned to tell our friends and families about the pregnancy, Ann went in for a breast cancer screening. She had a history of benign tumors in her breast tissue, and we’d come to regard these checkups as routine. But when she called later that day, I heard the fear in her voice. The radiologist wanted her to come back to the office. They’d found something and they didn’t want to discuss it over the phone. As we arrived at the doctor’s office, Ann told me she couldn’t face hearing the news directly from the radiologist. She needed me to deliver it in a way she could bear.
The diagnosis confirmed our worst fears, and neither one of us slept much that night. Pregnant women rarely develop breast cancer, and, when they do, it threatens both mother and child. So, when we met with Berlin’s top breast cancer specialist the next day, he said Ann had to terminate the pregnancy and begin chemotherapy immediately. The certainty and bluntness of his recommendation shocked us. Ann stammered out a protest, only to be cut off by a former patient the doctor brought into the office in an unsuccessful attempt to warm his frigid bedside manner. Both of them agreed: That was our only option.
We left the appointment feeling worse than we did going in, so we changed our approach. Ann refused to give up our baby, and together we decided to lean on our friends and family around the world. We lost the pure joy of telling people Ann was pregnant, having to shadow that announcement with news of her cancer, but in return we received an overwhelming outpouring of support and a new peace of mind. If you ask Ann now, she’ll tell you she slept as well that night as she ever has.