  SOLOMON’S CODE

  Humanity in a World of Thinking Machines

  OLAF GROTH

  and MARK NITZBERG

  WITH DAN ZEHR

  To our children, Hannah & Fiona Groth-Reidy and Henry & Cecily Nitzberg:

  may the era of thinking machines empower your humanity.

  CONTENTS

  Foreword

  Introduction

  1: Where Human Meets Machine

  2: A New Power Balance

  3: Think Symbiosis

  4: Frontiers of a Smarter World

  5: The Race for Global AI Influence

  6: Pandora’s Box

  7: Life and Love in 2035

  8: A World Worth Shaping

  Afterword

  Acknowledgments

  FOREWORD

  by Admiral James G. Stavridis

  Since graduating from the US Naval Academy, I have spent forty years in public service, analyzing opportunities and threats and leading large-scale efforts that seize the former and mitigate the latter. I consider myself fortunate to have served with many formidable colleagues in US and allied administrations as we navigated through the Cold War, the rising threat of terrorism in its aftermath, and the accelerating changes across the geopolitical and geoeconomic landscape in the years since. Turbocharged now by digital technologies, this constant process of global transformation has both enabled and threatened peace and prosperity across borders. We have seen technology facilitate a number of key shifts throughout recent history—from a bipolar to a multipolar global order surfacing myriad new actors; from an analog to a digital economy yielding unrivaled degrees of connectivity and anonymity; from technology for the elites to technology for the masses enabling entirely new ways of educating and participating; and from physical labor to knowledge work leading to new distributions in the economics of working and earning.

  We are now living amidst a fifth seismic shift, moving from purely linear computing systems to more cognitive and human-like technologies. The application of neural networks and machine learning techniques has given computer systems the ability to learn with minimal supervision, recognize complex patterns, and make recommendations and decisions on our behalf. Some of these decisions are small, subtle, and merely convenient to our everyday lives; others will have major impacts on people around the world. The application of both types is increasing rapidly, often driven in stealth fashion by both commercial and political actors, most of whom hold honorable intentions and want to make this world a better place. To a great extent, we do get a better world. But even so, far-reaching ripple effects and unintended consequences make these advanced cognitive technologies both a panacea and a Pandora’s box.

  As a Strike Group Commander more than a decade ago, I oversaw the use of remotely piloted aircraft to conduct strikes throughout Iraq, Afghanistan, and the Horn of Africa. These systems were highly effective and reduced the odds of collateral damage. A few years later, as a four-star Admiral and NATO Commander, I oversaw the extensive use of the same technologies in Libya. That 2011 campaign recorded the lowest number of collateral-damage incidents of any major air battle in history. But in every one of those operations we kept a man or woman “in the loop.” It was obvious to me at the time that, sooner or later, we would have to grapple with the crucial issues that emerge if or when we decide to take a human out of the loop. The technical, ethical, and moral issues involved are essential, and the debate that is now emerging has been a long time in the making.

  Similar crucial discussions are unfolding around a wide range of AI-powered technologies, and we all need to engage in these deliberations. As these thinking machines analyze knowledge and synthesize new insights and wisdom, they become powerful command-and-control instruments in the hands of those who understand them. Those who don’t understand these cognitive systems will face a decisive disadvantage, and that imbalance of power illustrates the dark side of this rapidly expanding field. But we must remember that power also lies in the ability of burgeoning AI models to synthesize various data streams and develop workable solutions for wickedly complex problems. Our remarkable human ability to develop technologies that compensate for the limitations of the human brain and all our human foibles is the bright side of this new cognitive era.

  Consider, for example, the complex problem humanity faces in conserving natural resources and preventing climate change. Think about ways we might provide more equitable health care in the United States, where we all too often dismiss the problem as a zero-sum contest between dollars and benefits, regularly overlooking more holistic ways we could lead healthier lives. And what about diversity and productivity at work—a balancing act that seems so difficult we devolve into endless stereotyping and alienation? The sheer complexity of all these major challenges can overwhelm us, and we too often default to reductionist thinking to find quick and easy answers.

  In our lives and careers, all of us must deal with the types of difficult problems that stretch our brains to their limits or beyond. My career brought the same, whether in my roles as a destroyer captain, supreme commander at the helm of NATO forces in Europe, and dean of one of the world’s premier international leadership academies, or in my everyday life as a father and a husband. But what drives success is our ability, as adaptable and malleable human beings, to convert these challenges into new horizons for personal and societal growth. Here, the ability of artificial intelligence and cognitive computing systems to help us make decisions could provide vast benefits.

  Imagine a sophisticated AI-driven system that could diagnose where the world’s next food crisis will occur and then recalibrate the global supply chain to solve the bottlenecks preemptively. It turns out we’re already working on exactly that. Envision AI-powered mental-health interventions that could sharply reduce suicide rates among the mentally ill. There are smart people working on that, too. We are already creating computer vision technologies that help illiterate people participate in the economy. Developers have created ways to teach children languages, math, and other subjects in ways that are more fun, more individualized to their needs, and more effective. Some researchers are creating innovative ways to help employers motivate their employees by making their work more meaningful and providing them a deeper sense of purpose. Others are beating cyber terrorists at their own game, using cognitive computing systems to predict attack patterns and react far faster to breaches that do occur.

  These developments inspire me, but as someone who spent decades protecting and leading people, I know that power and trust can be abused if guided by the wrong values. One must navigate through dark places before reaching the light. That’s why we need to shape the design and application of these cognitive technologies so they serve all humankind, rather than just the powerful and wealthy. Ensuring those beneficial ends, harnessing their potential and circumventing their destructive power, relies on our ability to once again muster the world’s talent and create a coordinated global effort. I encourage you to engage and become part of that formative endeavor.

  We have lots of work ahead of us if we hope to ensure that these systems know us—and vice versa—before we allow them to guide, speak, or act for us. They need to be able to reflect on their own reasoning and, equally important, be able to explain their decisions and actions in ways that humans can understand, especially when life-critical decisions are at stake. They should include mechanisms that check and correct for biases in the data they use for analysis and learning. And they shouldn’t be designed or operated to suppress others, discriminate against minorities, or cheat and take advantage of those who are less digitally experienced.

  Solomon’s Code is the first book I’ve read that fully illustrates how innovation in thinking machines is taking place around the world, and the different ways power, trust, and values are playing out across societies. Olaf Groth and Mark Nitzberg establish a trailhead for our thinking, sketching out the likely pathways by which cognitive systems will influence our lives over the next ten to fifteen years. The authors paint a vision of the grand possibilities and what we have to do to achieve them, but they don’t shy away from the perils and pitfalls we’ll have to navigate along the way. They illuminate a promising future full of interesting, challenging dilemmas, but they stay away from unrealistic and oversimplified utopian or dystopian visions.

  That is the kind of responsible leadership that works with soldiers in the field of battle, with stakeholders in the economy, with students in educational settings, and between all of us as everyday citizens in civic life. The people who design and control the new generations of thinking machines will need to embrace the same kind of leadership if we want to make it to the next horizon of human growth.

  —Admiral James G. Stavridis, author of Sea Power:

  The History and Geopolitics of the World’s Oceans,

  former Supreme Commander at NATO and

  Dean of the Fletcher School at Tufts University

  Introduction

  The once-grandiose tales of artificial intelligence have become quotidian stories. Before the robots started to look and sound human, they automated real jobs and transformed industries. Before AI put self-driving cars and trucks on the highways, it helped find alternate routes around traffic jams. Before AI gave us brain-enhancing implants, it gave us personal assistants that converse with us and respond to the sound of our voices. While previous plotlines for AI promised sudden and sweeping changes in our lives, today’s AI bloom has delivered a transformation one step at a time, not through an apocalyptic blowout.

  Artificial intelligence now pervades our lives, and it’s not going away. Sure, the machines we call “intelligent” today might strike us as rote tomorrow, but the tremendous gains in computing power, availability of massive data sets, and a handful of engineering and scientific breakthroughs have lifted AI from its early Wright Brothers days to NASA heights. And as researchers fortify those underlying elements, more and more companies will integrate thinking machines into their products and services—and by extension, deeper into our daily lives.

  These and future developments will continue to reshape our existence in both radical and mundane ways, and their ongoing emergence will raise ever more questions, not only about intelligence but also about the very nature of our humanity. Those of us not involved in the field can sit passively by and let this unfolding plot carry us wherever it leads, or we can help write a story about beneficial human-machine coexistence. We can wait until it is time to march in the streets, asking governments to step in and protect us, or we can get in front of these developments and figure out how we want to relate to them. That is what this book intends to do: to help you, the reader, confront some of the societal, ethical, economic, and cultural quandaries that an increasingly powerful set of AI technologies will generate. The following chapters illustrate how AI will force us to consider what it means to be intelligent, human, and autonomous—and how our humanity makes us question whether AI might become capable of ethical, compassionate decision-making, of something more than brutal efficiency.

  These issues will challenge our local and global conception of values, trust, and power, and we touch on those three themes throughout Solomon’s Code. The title itself refers to the biblical King Solomon, an archetype of wealth and ethics-based wisdom but also a flawed leader. As we create the computer code that will power AI systems of the future, we would do well to heed the cautionary tale of Solomon. In the end, the magnificent kingdom he built and ruled imploded—largely due to his own sins—and the subsequent breakup of his realm ushered in an era of violent unrest and social decline. The gift of wisdom was squandered, and society paid the price. Our book takes the position that humanity can prosper if we act with wisdom and foresight, and if we learn from our shortcomings as we design the next generation of AI. After all, we are already dealing with new tests that these advanced technologies have presented for our values, trust, and power. Governments, citizens, and companies around the world are debating personal-data protections and struggling to forge agreements on values of privacy and security. Stories about Google’s Duplex placing humanlike automated calls to make reservations with restaurants or Amazon’s Alexa accidentally listening in to conversations have people wondering just how much trust they can put in AI systems. The United States, China, the European Union, and others are already seeking to spread their influence and power through the use of these advanced technologies, accelerating into a global AI race that might help address climate change or, just as easily, lead to even more meddling in other countries’ domestic affairs.

  Meanwhile, on the technological front, as these systems gain more and more cognitive power, they might begin to reflect a certain level of what we would call consciousness, or the ability to metareflect on their actions and their context. We all win if we can first instill a proper conscience in AI developers and the systems they create, so we ensure these technologies influence our lives in beneficial ways. And we can only accomplish this by joining forces, engaging in public discourse, creating relevant new policy, educating ourselves and our children, and developing and following a globally sourced, open code of ethics. Whatever pathway the future of AI might take, we must create an inclusive and open loop that enables individuals and companies to increase productivity, heighten professional and personal satisfaction, and drive our progressive evolution.

  Humanity’s innate and undaunted desire to explore, develop, and advance will continue to spawn transformative new applications of artificial intelligence. That genie is out of the bottle, despite the unknown risks and rewards that might come of it. If we endeavor to build a machine that facilitates our higher development—rather than the other way around—we must maintain a focus on the subtle ways AI will transform values, trust, and power. And to do that, we must understand what AI can tell us about humanity itself, with all its rich global diversity, its critical challenges, and its remarkable potential.

  SOLOMON’S CODE

  1

  Where Human Meets Machine

  People move through life in blissful ignorance. In many ways, our bodies and lives work like a black box, and we consider it a surprise misfortune when disease or disaster strikes. For better or worse, we stumble through our existence and figure out our strengths and weaknesses through trial and error. But what happens as we start to fill in more and more of the unknowns with insights generated by smart algorithms? We might get remarkable new ways to enhance our strengths, mitigate our weaknesses, and defuse threats to our well-being. But we might also start to limit our choices, blocking off some enriching pathways in life because of a small chance they could lead to disastrous outcomes. If I want to make a risky choice, will I be the only one who has agency over that decision? Can my employer discriminate against me because I decided not to take the safest path? Have I sacrificed an improbable joy for fear of a possible misfortune?

  And what happens to us fifteen years from now, when AI-powered technologies permeate so many more facets of our everyday lives?

  The chimes from Ava’s home artificial intelligence system grew louder as she rolled over and covered her head with the pillow. Despite her better judgment, not to mention the constant reminders from her PAL, she’d ordered another vodka tonic at last call. She already hated this day—the anniversary of her mother’s diagnosis thirty years earlier—but the hangover throbbing in her temples was making this morning downright painful. The blinds rising and the bedroom lights growing steadily brighter didn’t help. “Yeah, yeah. I’m up,” she growled as she steadied herself with a foot on the floor. Slowly, she rose and walked toward the bathroom, her assistant quietly reminding her of what she couldn’t put out of her mind this morning no matter how hard she might try: precancer screening today.

  So far, the doctors and their machines hadn’t seen any need for action. But given her mother’s medical history, Ava knew she carried an elevated risk of breast cancer. I’m only twenty-nine years old, she thought, I shouldn’t have to worry about this yet. Her mother was pregnant with Ava when she got her diagnosis, so it came as a complete shock. Her parents agonized over what to do—about the cancer and the baby—until they found a doctor who made all the difference. In the three decades since, progress in artificial intelligence and biomedical breakthroughs had eliminated many of the worst health-care surprises, and medical science seemed to have conquered a couple of new ones in just the last few years. Ava was old enough to remember when AI could identify and predict ailments only half the time. Now, it hit 90 percent thresholds, and most people trusted the machine to make critical decisions (even if they still wanted a doctor in the room).

  Ava snapped back into focus: “Where are my goddamned keys?”

  A patient, disembodied voice reminded her: “You left your keys and sunglasses on the kitchen counter. I’ll order a car for you.”

  She winced. “I gotta change your speech setting,” she said. “You still sound too much like Connor.” No time now. She headed out the door for the doctor’s office. If she could, she would skip the precautionary checkups, but then she’d lose her health insurance. So today, she just had to go through the motions and the machines.

  Just don’t say that word. A couple of hours later, as she sat in the consultation room, the pounding in Ava’s head finally faded. The anxiety didn’t. “Sorry about the delay,” her doctor said as she breezed in and sat down. “Everything looks fine for now, but we’re starting to see some patterns in your biomarkers. The WellScreen check says about 78 percent of patients with your combination of markers and genetic predisposition develop cancer within a decade. It’s about time we looked at some preventive measures.”

 
