Hit Refresh
AI must be designed to assist humanity. Even as we build more autonomous machines, we need to respect human autonomy. Collaborative robots (co-bots) should take on dangerous work like mining, thus creating a safety net and safeguards for human workers.
AI must be transparent. All of us, not just tech experts, should be aware of how the technology works and what its rules are. We want not just intelligent machines but intelligible machines; not just artificial intelligence but symbiotic intelligence. The technology will know things about humans, but the humans must also know about how the technology sees and analyzes the world. What if your credit score is wrong but you can’t access the score? Transparency is needed when social media collects information about you but draws the wrong conclusions. Ethics and design go hand in hand.
AI must maximize efficiencies without destroying the dignity of people. It should preserve cultural commitments, empowering diversity. To ensure this outcome, we need broader, deeper, and more diverse engagement of populations in the design of these systems. The tech industry should not dictate the values and virtues of this future. Nor should they be controlled solely by the small swath of humankind living in the wealthy, politically powerful regions of North America, Western Europe, and East Asia. Peoples from every culture should have an opportunity to participate in shaping the values and purposes inherent in AI design. AI must guard against social and cultural biases, ensuring proper and representative research so that flawed heuristics do not perpetuate discrimination, either deliberately or inadvertently.
AI must be designed for intelligent privacy, embodying sophisticated protections that secure personal and group information in ways that earn trust.
AI must have algorithmic accountability so that humans can undo unintended harm. We must design these technologies for the expected and the unexpected.
Many of these ethical considerations come together, for example, in our digital experiences. Increasingly, algorithms that reason over our previous actions and preferences mediate our human experience—what we read, whom we meet, what we may “like.” All of these suggestions are served to us hundreds of times a day. For me it calls into question what the exercise of free will means in such a world, and how it might affect the many people and communities who receive very different perspectives on the world. What is the role of social diversity and inclusion when it comes to designing content and information platforms? Ideally, we would all have a transparent understanding of how our data is being used to personalize content and services, and we would have control over that data. But as we move increasingly into a complex world of artificial intelligence, that won’t always be easy. How can we protect ourselves and our society from the adverse effects of information platforms—increasingly built on AI—that prioritize engagement and ad dollars over the valuable education that comes with encountering a social diversity of facts, opinion, and context? This is a driving question that needs much more work.
But there are “musts” for humans, too—particularly when it comes to thinking clearly about the skills future generations must prioritize and cultivate. To stay relevant, our kids and their kids will need:
Empathy—Empathy, which is so difficult to replicate in machines, will be invaluable in the human-AI world. The ability to perceive others’ thoughts and feelings, to collaborate and build relationships will be critical. If we hope to harness technology to serve human needs, we humans must lead the way by developing a deeper understanding and respect for one another’s values, cultures, emotions, and drives.
Education—Some argue that because life spans will increase and birth rates will decline, spending on education will decline as well. But I believe that to create and manage innovations we cannot fathom today, we will need increased investment in education to attain higher-level thinking and more equitable education outcomes. Developing the knowledge and skills needed to implement new technologies on a large scale is a difficult social problem that will take a long time to resolve. The power loom was invented in 1810, but it took thirty-five years to transform the clothing industry because of shortages of trained mechanics.
Creativity—One of the most coveted human skills is creativity, and this won’t change. Machines will enrich and augment our creativity, but the human drive to create will remain central. In an interview, novelist Jhumpa Lahiri was asked why an author with such a special voice in English chose to create a new literary voice in Italian, her third language. Her answer: “Isn’t that the point of creativity, to keep searching?”
Judgment and accountability—We may be willing to accept a computer-generated diagnosis or legal decision, but we will still expect a human to be ultimately accountable for the outcomes.
We’ll look more closely at this in the coming chapter, but what is to become of the economic inequality problem that so many people around the world are currently focused on? Will automation lead to greater or lesser equality? Some economic thinkers advise us not to worry about it, pointing out that, throughout history, technological advances have consistently made the majority of workers richer, not poorer. Others warn that economic displacement will be so extreme that entrepreneurs, engineers, and economists should adopt a “new grand challenge”—a promise to design only technology that complements rather than replaces human labor. They recommend, and I agree, that we business leaders must replace our labor-saving and automation mindset with a maker and creation mindset.
The trajectory of AI and its influence on society is only beginning. To truly grasp the meaning of this coming era will require in-depth, multi-constituent analysis. My colleague Eric Horvitz in Microsoft Research, a pioneer in the AI field, has been asking these questions for many years. Eric and his family have personally helped to fund Stanford University’s One Hundred Year Study; at regular intervals for the coming century, it will report on near-term and long-term socioeconomic, legal, and ethical issues that may come with the rise of competent intelligent computation, the changes in perceptions about machine intelligence, and likely changes in human-computer relationships.
In their first report, Artificial Intelligence and Life in 2030, the study panel noted that AI and robotics will be applied “across the globe in industries struggling to attract younger workers, such as agriculture, food processing, fulfillment centers and factories.” The report found no cause for concern that AI is an imminent threat to humankind. “No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future.”
While there is no clear road map for what lies ahead, in previous industrial revolutions we’ve seen society transition, not always smoothly, through a series of phases. First, we invent and design the technologies of transformation, which is where we are today. Second, we retrofit for the future. We’ll be entering this phase shortly. For example, drone pilots will need training; conversion of traditional cars into autonomous vehicles will require redesign and rebuilding. Third, we navigate distortion, dissonance, and dislocation. This phase will raise challenging new questions. What is a radiologist’s job when the machines can read the X-ray better? What is the function of a lawyer when computers can detect legal patterns in millions of documents that no human can spot?
Each of these transitional phases poses difficult issues. But if we’ve incorporated the right values and design principles, and if we’ve prepared ourselves for the skills we as humans will need, humans and society can flourish even as we transform our world.
Writing for The New York Times, cognitive scientist and philosopher Colin Allen concludes, “Just as we can envisage machines with increasing degrees of autonomy from human oversight, we can envisage machines whose controls involve increasing degrees of sensitivity to things that matter ethically. Not perfect machines, to be sure, but better.”
AI, robotics, and even quantum computing will simply be the latest examples of machines that can work in concert with people to achieve something greater. Historian David McCullough has told the story of Wilbur Wright, the bike mechanic and innovator of heavier-than-air flight at the turn of the last century. McCullough describes how Wilbur used everything he could humanly muster—his mind, body, and soul—to coax his gliding machine into flight. The grainy old film, shot from a distance, fails to capture his grit and determination. But if we could zoom in, we’d see his muscles tense, his mind focus, and the very spirit of innovation flow as man and machine soared into the air for the first time, together. When history was made at Kitty Hawk, it was man with machine—not man against machine.
Today we don’t think of aviation as “artificial flight”—it’s simply flight. In the same way, we shouldn’t think of technological intelligence as artificial, but rather as intelligence that serves to augment human capabilities and capacities.
Chapter 9
Restoring Economic Growth for Everyone
The Role of Companies in a Global Society
Michelle Obama, seated directly in front of me in the gallery overlooking the chamber of the House of Representatives, listened intently as her husband delivered his final State of the Union address before a joint session of Congress. It was a poignant night. The political divisions on Capitol Hill that cold winter evening were deep and widening, with a historically bitter presidential race still looming. It had been twenty-eight years since I’d first arrived in the United States, and now, as CEO of Microsoft, I was the guest of the First Lady, following along with tens of millions of others around the world as President Obama somberly outlined some of the key questions his successor would have to address, no matter who he or she turned out to be.
One of the president’s questions felt as though it was addressed directly to me: “How do we make technology work for us, and not against us — especially when it comes to solving urgent challenges like climate change?”
I sensed—or did I imagine?—more than a few eyes searching for my reaction.
The president continued. “The reason that a lot of Americans feel anxious is that the economy has been changing in profound ways, changes that started long before the Great Recession hit and haven’t let up. Today, technology doesn’t just replace jobs on the assembly line, but any job where work can be automated. Companies in a global economy can locate anywhere, and face tougher competition.”
I squirmed a little in my chair. In a few words, the president had expressed some of the anxiety we all feel about technology and its impact on jobs—anxiety that would later play out in the election of President Donald Trump. In fact, just after the election, I joined my colleagues from the tech sector for a roundtable discussion with President-elect Trump who, like his predecessor, wanted to explore how we continue to innovate while also creating new jobs.
Ultimately, we need technological breakthroughs to drive growth beyond what we’re seeing, and I believe mixed reality, artificial intelligence, and quantum computing are the kinds of innovations that will serve as accelerants.
As the son of an economist and as a business leader, I am hardwired to obsess about these problems. Are we growing economically? No. Are we growing equitably? No. Do we need new technological breakthroughs to achieve these goals? Yes. Will new technologies create job displacement? Yes. How, then, can we solve for more inclusive growth? Finding the answer to this last question is perhaps the most pressing need of our times.
In recent decades, the world has invested hundreds of billions of dollars in technology infrastructure—PCs, cell phones, tablets, printers, robots, smart devices of many kinds, and a vast networking system to link them all. The aim has been to increase productivity and efficiency. Yet what, exactly, do we have to show for it? Nobel Prize–winning economist Robert Solow once quipped, “You can see the computer age everywhere but in the productivity statistics.” However, from the mid-1990s to 2004, the PC Revolution did help to reignite once-stagnant productivity growth. But other than this too brief window, worldwide per capita GDP growth—a proxy for economic productivity—has been disappointing, just a little over 1 percent per year.
Of course, GDP growth can be a crude measure of actual improvement in the well-being of humanity. In a panel discussion with me in Davos, Switzerland, MIT management school professor Andrew McAfee pointed out that productivity data fail to measure many of the ways technology has enhanced human life, from improvements in health care to the way tools like Wikipedia have made information available to millions of people anytime, anywhere. Think about it another way. Would you prefer to have $100,000 today or be a millionaire in 1920? Many would love to be a millionaire in the previous century, but your money then could not buy lifesaving penicillin, a phone call to family on the other side of the country, or many of the benefits of innovations we take for granted today.
And so beyond this one measure called GDP, we have practically a moral obligation to continue to innovate, to build technology to solve big problems—to be a force for good in the world as well as a tool for economic growth. How can we harness technology to tackle society’s greatest challenges—the climate, cancer, and the challenge of providing people with useful, productive, and meaningful work to replace the jobs eliminated by automation?
Just the week before that State of the Union in Washington, DC, questions and observations much like those raised by the president had been leveled at me by heads of state during meetings with customers and partners in the Middle East, in Dubai, Cairo, and Istanbul. Leaders were asking how the latest wave of technology could be used to grow jobs and economic opportunity. It’s the question I get most often from city, state, and national leaders wherever I travel.
Part of my response is to urge policymakers to broaden their thinking about the role of technology in economic development. Too often they focus on trying to attract Silicon Valley companies in hopes they will open offices locally. They want Silicon Valley satellites. Instead, they should be working on plans to make the best technologies available to local entrepreneurs so that they can organically grow more jobs at home—not just in high-tech industries but in every economic sector. They need to develop economic strategies that can enhance the natural advantages their regions enjoy in particular industries by fully and quickly embracing supportive leading-edge technologies. But there is often an even bigger problem—they are uncertain about investing in the latest technology, like the cloud. The most profound difference between leaders is whether they fear or embrace new technology. It’s a difference that can determine the trajectory of a nation’s economy.
Take a look at history. During the Industrial Revolution of the nineteenth century, many of the key enabling technologies were originally developed in the United Kingdom. Naturally, this gave Britain a big advantage in the race for economic supremacy. But the fate of other nations was determined in large part by their response to British technological breakthroughs. Belgium dramatically increased its industrial production to a level rivaling that of the United Kingdom by leveraging key British innovations, investing in supporting infrastructure like railroads, and creating a pro-business regulatory environment. As a result of these policies, Belgium emerged as a leader in the coal, metalworking, and textile industries. By contrast, industrial productivity in Spain significantly lagged the rest of Europe as a result of Spain’s slow adoption of outside innovations and protectionist policies that decreased its global competitiveness.
We see the same principle at work in recent history. The African nation of Malawi has been one of the poorest in the world. But in the past decade, Malawi’s rapid adoption of mobile phones has had a powerful positive impact on its development. Economically handicapped by its minimal landline telephone infrastructure, Malawi leapfrogged directly to cellular beginning in 2006 by creating a National ICT for Development policy that encouraged investment in mobile infrastructure and removed barriers to adoption—for example, by eliminating import taxes on mobile phones. As a result, mobile phone penetration has risen dramatically, which in turn has enabled the growth of local mobile payments businesses. With 80 percent of the population “unbanked,” this has made such payments all the more important. Today, Malawi has a higher penetration of mobile payments among mobile phone users than many developed countries.
Likewise, Rwanda’s Vision 2020 initiative has helped to turn around the nation’s economy and education system by promoting greater access to mobile connectivity and the cloud. Startups like TextIt, which enables companies worldwide to engage with their customers through cloud-based SMS and voice apps, represent new hope for growth in this troubled nation.
This question of technology diffusion—the spread of technology—and its impact on economic outcomes has always fascinated me. How can we make technology available to everyone—and then how can we ensure that it works to benefit everyone?
In my quest for an answer, I invited Dartmouth economist Diego Comin to spend an afternoon with me at my office in Redmond, Washington. Professor Comin is soft-spoken and weighs his words carefully, relying on the precision and thoroughness of his knowledge to carry conviction. He has painstakingly studied the evolution of technology diffusion over the last two centuries in countries throughout the world. Comin and economist Bart Hobijn spent years producing the Cross-country Historical Adoption of Technology (CHAT) data set, which examines the time frame over which 161 countries adopted 104 technologies from steam power to PCs. They found that, on average, countries tend to adopt a new technology about forty-five years after its invention, although this time lag has shortened in recent years.
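The calculation behind that forty-five-year figure is straightforward to sketch. The following Python snippet is a minimal, purely illustrative example—the country names, technologies, and years are hypothetical stand-ins, not the actual CHAT data—showing how one would compute the adoption lag for each country-technology pair and then compare mean lags across older and newer inventions to see whether the gap is shortening.

```python
# Illustrative sketch (toy data, not the actual CHAT data set): estimating the
# adoption lag, i.e., years between a technology's invention and a country's adoption.
from statistics import mean

# Hypothetical records: (country, technology, invention_year, adoption_year)
records = [
    ("Belgium", "railroads", 1825, 1835),
    ("Spain",   "railroads", 1825, 1848),
    ("Malawi",  "mobile phones", 1983, 2006),
    ("Rwanda",  "mobile phones", 1983, 1998),
]

# Lag in years for every country-technology pair
lags = [adoption - invention for _, _, invention, adoption in records]
print(f"Average adoption lag across all pairs: {mean(lags):.1f} years")

# Group lags by technology (and its invention year) to see whether
# more recent inventions tend to spread faster.
by_tech = {}
for _, tech, invention, adoption in records:
    by_tech.setdefault((tech, invention), []).append(adoption - invention)

for (tech, invention), tech_lags in sorted(by_tech.items(), key=lambda kv: kv[0][1]):
    print(f"{tech} (invented {invention}): mean lag {mean(tech_lags):.1f} years")
```

On the real data set, the same per-pair lag and per-technology averaging, applied to 161 countries and 104 technologies, is what yields the roughly forty-five-year average and the evidence that the lag has shortened for newer technologies.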
Based on this analysis, Comin agrees that differences between rich and poor nations can largely be explained by the speed at which they adopted industrial technologies. But equally important, he says, is the intensity they employ in putting new technologies to work. Even when countries that were slow to adopt new technologies eventually catch up, it’s the intensity of how they use the technology—not simply the access—that creates economic opportunity. Are the technologies just sitting there or is the workforce trained to get the most productivity out of them? That’s intensity. “The question is not just when these technologies arrive, but the intensity of their use,” Professor Comin told me.