Solomon's Code


by Olaf Groth


  Yet we need not consider apocalyptic scenarios to recognize the profound influence that even narrow AI has on our lives. Headline-grabbing innovations in narrow applications are more and more frequent. Yes, AI still lives in a quirky adolescence. The field is still dominated by models that employ heuristics—rules of thumb or educated guesses—rather than deep innovation in new theoretical frameworks for machine intelligence. And even the most advanced AI today remains a far cry from a science-fiction robot overlord. But the gaps between reality and possibility continue to close.

  This steady progress already facilitates remarkable advances in a variety of fields—sometimes concerning, sometimes convenient, and sometimes life-saving. Cisco, for instance, has researched algorithms that would learn from network traffic and identify Internet users who might be more valuable to service providers, and thus qualify for faster service or other perks. Speech recognition technology in call centers could help improve services by routing customers to agents with a similar personality type.‡‡ Startups and large e-commerce conglomerates are working on machine learning capabilities that facilitate differentiated pricing based on data about playing behavior, location, in-game purchases, spending patterns, and social interactions. Amazon uses deep learning, which clusters shoppers by similar purchase behavior, to cross-sell products, with results that amount to millions of dollars in sales per hour. Upcoming pilot projects for “flying car” passenger drones are planned for Dubai (by Chinese micro-multinational E-Hang) and possibly Dallas (by Uber). These will require air-traffic control AI to avoid collisions and regulations to monitor overbooking, payments, and the like.

  Notwithstanding the fatal collisions that involved autonomous car systems in 2018, proponents note that even regular ground-based autonomous vehicles could produce vastly safer roadways by removing error-prone human drivers from the equation. But guiding and controlling these complex traffic and trading spaces might require AI systems as well. The human brain can’t track millions of drones in the air or autonomous cars in an inner city. So, we also need to consider what happens when an autopilot system or an AI-powered national command-and-control center has to choose between two potentially fatal options. How does a car balance the welfare of its driver against the pedestrians and other drivers around it? And who ought to make that decision?

  WHAT COUNTS AS ARTIFICIAL INTELLIGENCE?

  We’ve adopted a broad view of artificial intelligence for this book, one that covers a range of human cognitive and physical functions. Of course, many key subfields fall within that definition, anything from traditional knowledge representation and problem solving to the cutting-edge machine learning, perception, and robotics innovation we see today. But few of the key developments in AI fit snugly into one small category; instead, they overlap or combine to create capable systems, a blend that includes the points at which AI interacts with human beings. Effective machine learning might rely on perception to gather data, for example, but then it might also employ forms of social intelligence to output what it learns in emotionally palatable ways humans will embrace and find useful.

  But at their core, all the various types of AI technologies share a common goal—to procure, process, and learn from data, the exponential growth of which enables increasingly powerful AI breakthroughs. Geysers of data are springing forth from our billions of smartphones, millions of cars, satellites, shipping containers, toy dolls, electric meters, refrigerators, toothbrushes, and toilets. Virtually anything we can put a microchip in could become a new source of data. And all of it can feed into and train machine-learning algorithms, including deep networks, which use layered data structures that enable some of the most powerful applications of machine learning.§§ Together with reinforcement learning—a method by which a machine processes huge troves of raw data and, through trial and error, confirms or rejects its existing assumptions and learns to perform a task on its own—these models of AI decision-making can lead to extraordinary achievements. Google put the power of its Google Brain deep-learning model to work on foreign-language translations, and virtually overnight it produced a leap in performance greater in magnitude than the old Google Translate system had achieved during its prior ten-year existence. On one standard measure of translation quality, the BLEU score, the best English-French translation ratings were in the high twenties. At that range, a two-point improvement would be outstanding. The new AI system outscored the old by seven points.¶¶
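  To make those BLEU figures more concrete, here is a deliberately simplified, sentence-level sketch of the idea behind the metric, written in Python. The function names and example sentences are invented for illustration; the evaluation cited above used the full corpus-level metric with professionally produced reference translations.

```python
# A simplified, sentence-level approximation of a BLEU-style score on a
# 0-100 scale. Real BLEU is computed over a whole test corpus, usually with
# multiple reference translations per sentence; this sketch only illustrates
# the core idea: overlapping word n-grams plus a penalty for short output.
from collections import Counter
from math import exp, log


def ngrams(tokens, n):
    """Count every n-word sequence (n-gram) in a list of tokens."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def simple_bleu(candidate, reference, max_n=4):
    """Geometric mean of clipped n-gram precisions times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        overlap = sum((ngrams(cand, n) & ngrams(ref, n)).values())  # clipped matches
        total = max(sum(ngrams(cand, n).values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # avoid log(0) below
    geo_mean = exp(sum(log(p) for p in precisions) / max_n)
    brevity_penalty = min(1.0, exp(1 - len(ref) / max(len(cand), 1)))
    return 100 * brevity_penalty * geo_mean


# Purely illustrative sentences, not drawn from the actual evaluation.
reference = "the cat sat quietly on the old mat"
older_output = "cat is sitting on mat"
newer_output = "the cat sat quietly on the mat"
print(f"older system: {simple_bleu(older_output, reference):.1f}")
print(f"newer system: {simple_bleu(newer_output, reference):.1f}")
```

  Because the score compounds several n-gram precisions with a length penalty, gains become hard-won near the top of the range, which is why a two-point improvement would be outstanding and a seven-point jump stood out.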

  These and other AI developments will radically change lives and economies in the next few years alone. As with the major economic transformations of the past, this AI-powered “Fourth Industrial Revolution” will destroy and create millions of jobs worldwide. Occupations we can’t even imagine today will materialize, potentially boosting productivity and lifting our quality of life, but they will also render many other types of jobs obsolete, and we must be prepared for that social fallout. That means our individual power to negotiate with society for our livelihoods and identities will change, and we don’t yet know exactly how and when. We will face significant turmoil as change comes faster than we can adapt—whether as individuals adjusting our life patterns and personal outlooks, or as societies retraining large numbers of people for new skills and new jobs.

  In their December 2016 report on the future impact of AI, former president Barack Obama’s Council of Economic Advisers (CEA) took a stab at what a few of these soon-to-emerge occupations might look like. They projected employment growth in four main areas: people who engage with AI systems (e.g., a new medical professional who guides patients through AI-directed treatment plans); workers who help develop new machines (e.g., computational sociologists or cognitive neuroscientists who study the impact of machine learning on specific groups of people and then work with engineers to improve existing systems or develop new ones); those who supervise existing systems (e.g., monitoring systems that ensure safety and adjudicate ethical conflicts); and an emerging field of workers who “facilitate societal shifts that accompany new AI technologies” (e.g., a new breed of civil engineer who redesigns physical infrastructure for an automated world).

  Ultimately, the transformations spurred by artificial intelligence “will open up new opportunities for individuals, the economy, and society, but they have the potential to disrupt the current livelihoods of millions of Americans,” the report said. In trucking and transportation, where automated vehicles promise to make the roads vastly safer, millions of jobs are on the line. The CEA estimated that automated vehicles could threaten or substantially alter 2.2 million to 3.1 million part- and full-time jobs—not including the ripple effect a decimated transportation industry would have on truck stops, warehouses, and other affiliated industries.

  It’s not just the routine low- and middle-skill tasks that are susceptible to AI disruption, either. Machines could make moot the traditional starting point for freshly minted law school graduates, who typically launch their careers by digging through case law and precedent to support partners further up the food chain. What will a new attorney’s entry-level work look like in ten years, when firms use more reliable and capable AI systems to conduct that research? To be sure, routine low- and middle-wage jobs are most susceptible to AI displacement in the near term, but white-collar jobs are now in the crosshairs of many applications, too.

  We don’t have to look very far down the road to see this upheaval, either. AI already guides so much of what we read, think, buy, and consume. It helps move us and, in the case of health-care AI, keeps us healthy. It’s already pervasive in our devices and increasingly woven into our lives. The potential of our humanity when augmented by artificial intelligence is thrilling. But we need to think now about how humanity will shape its relationship with artificial intelligence—and how much we want AI to shape our lives—in the decades to come.

  THE MESSY HUMAN, THE CLEAN MACHINE

  The official party line of AI developers is that artificial intelligence will augment human capability, intuition, and emotion. IBM Watson for Oncology will complement physicians and experts, not replace them. But as noted by Joe Marks, executive director of Carnegie Mellon University’s Center for Machine Learning and Health, technology development teams almost always focus on the technology first. Consideration of a machine’s interaction with humans comes later.

  Joi Ito, director of the renowned MIT Media Lab, said as much during an October 2016 Wired Q&A with President Obama: “This may upset some of my students at MIT, but one of my concerns is that it’s been a predominately male gang of kids, mostly white, who are building the core computer science around AI, and they’re more comfortable talking to computers than human beings. A lot of them feel that if they could just make that science fiction, generalized AI, we wouldn’t have to worry about all the messy stuff like politics and society. They think machines will just figure it all out for us. . . . But they underestimate the difficulties, and I feel like this is the year that artificial intelligence becomes more than just a computer science problem. Everybody needs to understand that how AI behaves is important. In the Media Lab we use the term extended intelligence, because the question is how do we build societal values into AI?”##

  The high-tech geeks want to get rid of the human element because humans make things messy. And to be fair, theirs is not just a knee-jerk, antisocial inclination; it’s based on legitimate motivations. The volatile optimization processes involved in climate change, energy flows, or other extremely complex systems might stabilize if we removed the vicious political and psychological conflicts that humans interject. But those considerations represent only the first-order effects of AI solutions to complex problems. These machines generate substantial second- and third-order effects that too few of the tech geeks contemplate, and those effects require an open, inclusive, and interdisciplinary discourse.

  This becomes ever-more important as AI begins to collide with value systems around the world. Machines developed by Western scientists will embody biases that might cause undue harm in other societies. Powerful systems developed in China and sent around the world might not reflect the same level of privacy protections and freedom US citizens prefer. How well will the machines integrate the myriad social and cultural health practices that are implicit in the ways social groups interact, especially when those practices haven’t been codified in digital data streams yet? How might values about medical treatment, how it’s delivered, and to whom differ between those who build the system and those subject to its recommendations?

  These considerations will affect our life patterns. So much has been written about the 25 to 50 percent of jobs that AI and automation might eradicate, but economic disruption happens long before we reach those percentages. AI will change millions of jobs before it eliminates them. It will transform what it means to add value. It will reshuffle the match between occupations and the workers best suited for them, requiring new forms of retraining and realignment. For the future doctor advising Ava on her breast cancer options, job requirements might not include annual patient checkups or other routine visits, leaving that instead to the ever-watchful eye of an AI health manager. Rather than basic health analyses, doctors will design broader health solutions and programs—a fundamental shift in primary-care practices that could ripple through the profession in a relatively short ten- to fifteen-year span. Would aspiring doctors, currently selected and groomed for their diagnostic prowess, thrive in a new world of program design and socioemotional coaching? Can today’s doctors re-equip themselves for this emerging reality?

  Regardless of how extensively machines replace human labor, their effects will raise these and similar questions for most occupations. One might imagine a job-matching AI for each profession, or each industry, or even one broad algorithmic powerhouse to optimize the economy of an entire city, state, or country. In our globally connected societies and economies, how will we ensure all these occupational, economic, and cultural systems interact to combat climate change, promote peace, and help citizens live richer, healthier lives? How much of our imperfect, idiosyncratic selves are we prepared to give up to reap the benefits of being perfectly coordinated and orchestrated?

  Whatever the answers, the geeks are correct that artificial intelligence will play a transformative role in virtually every human endeavor in the decades to come—even if their current focus doesn’t yet create an AI that embraces the messiness that makes our humanity both precious and precarious. That will have to happen, however, if we want to have a chance to shape the ways AI will influence human power, values, and trust.

  *Heather Murphy, “Why Stanford Researchers Tried to Create a ‘Gaydar’ Machine,” New York Times, Oct. 9, 2017.

  †Jennifer Stark and Nicholas Diakopoulos, “Uber seems to offer better service in areas with more white people. That raises some tough questions,” The Washington Post, March 10, 2016.

  ‡Casey Ross and Ike Swetlitz, “IBM pitched its Watson supercomputer as a revolution in cancer care. It’s nowhere close,” STAT, Sept. 5, 2017.

  §Megan Molteni, “Thanks to AI, Computers Can Now See Your Health Problems,” Wired, Jan. 9, 2017.

  ¶Machine Learning Market to 2025—Global Analysis and Forecasts by Services and Vertical, The Insight Partners, February 2018.

  #Richard Evans and Jim Gao, DeepMind AI Reduces Google Data Centre Cooling Bill by 40%, DeepMind blog, July 20, 2016.

  **H. A. Haenssle et al., “Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists,” Annals of Oncology, May 28, 2018.

  ††Joon Son Chung et al., Lip Reading Sentences in the Wild, eprint arXiv:1611.05358, Nov. 16, 2016 (also published at the 2017 IEEE Conference on Computer Vision and Pattern Recognition).

  ‡‡Luke Dormehl, “Algorithms: AI’s creepy control must be open to inspection,” The Guardian, Jan. 1, 2017.

  §§Each layer in a deep network holds a set of numbers used to process the data from the layer beneath it. Training the network is a matter of adjusting the layer-to-layer factors each time new data is presented. In the case of object recognition, this is modeled roughly on the neural architecture of the human visual system: the bottom level is the raw data (like pixels in a photo), and the top layer has one node for each “object” to identify, like a cat or a flower. These deep networks—called “deep” because they have more than the two or three layers that researchers could model on limited computers when first conceived in the 1960s—continually improve the system.
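  For a rough sense of what those layers of numbers look like in code, the Python sketch below builds a tiny, untrained two-layer network by hand. The layer sizes, random weights, and stand-in “pixel” values are invented purely for illustration; real recognition networks have many more layers, specialized structure, and weights learned from data rather than chosen at random.

```python
# A minimal sketch of the layered idea described in the note above: each layer
# multiplies the numbers coming from the layer beneath it by a set of weights,
# applies a simple nonlinearity, and passes the result upward. Training would
# adjust these weights each time new data is presented; here they are random
# placeholders, so the output is meaningless until the network is trained.
import random


def make_layer(n_inputs, n_outputs):
    """One layer: a grid of weights connecting every input to every output node."""
    return [[random.uniform(-1, 1) for _ in range(n_inputs)]
            for _ in range(n_outputs)]


def forward(layer, inputs):
    """Weighted sum per output node, passed through a simple nonlinearity (ReLU)."""
    return [max(0.0, sum(w * x for w, x in zip(weights, inputs)))
            for weights in layer]


pixels = [0.2, 0.9, 0.1, 0.7]      # bottom level: raw data (stand-in "pixels")
hidden_layer = make_layer(4, 3)    # an intermediate layer of three nodes
top_layer = make_layer(3, 2)       # top layer: one node per "object" (cat, flower)
scores = forward(top_layer, forward(hidden_layer, pixels))
print(scores)
```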

  ¶¶Gideon Lewis-Kraus, “The Great A.I. Awakening,” New York Times Magazine, Dec. 14, 2016.

  ##Scott Dadich, “Barack Obama, Neural Nets, Self-Driving Cars and the Future of the World,” Wired, November 2016 (Q&A with Barack Obama and Joi Ito).

  2

  A New Power Balance

  The YouTube clip seems innocent enough—no traffic coming down the street as a middle-aged woman in white slacks and a blue-and-gray windbreaker crosses the street despite the “don’t walk” signal. This time, though, her name flashes up on a digital billboard posted near the crosswalk, along with a brief video feed of her jaywalking. These systems, already installed in multiple Chinese cities, will feed her information and a report of the violation back to the authorities. She might have a twenty-yuan (about $3) fine eventually come her way, and it could ding her social credit score. Soon enough, she might get a citation in the form of an instant text message.* It’s all part of a campaign to cut down on traffic-related and jaywalking accidents. (At least one Chinese city has even installed short metal posts along the curbside that spray bursts of water at pedestrians who step into the street against traffic.)

  Yet, the surveillance goes far beyond crosswalks at major city intersections. By December 2017, Chinese authorities had installed around 170 million cameras in cities across the country, each one capturing data and feeding it into systems that conduct facial and gait recognition, threat surveillance, and a range of other behavioral tracking.† The public-facing result might have started with identifying and shaming jaywalkers, an effort by cities to reduce high levels of traffic deaths. From April 2017 to February 2018, the systems caught almost 14,000 jaywalkers in Shenzhen alone, authorities there said.‡ But the country has already trumpeted plans to establish a comprehensive social credit system, a nascent combination of surveillance and credit history that will reward people who do good and dock points for everything from jaywalking and skipped payments to more serious infractions. Citizens with low scores might find themselves barred from travel, business loans, or other amenities. Lucy Peng, the CEO of Ant Financial, an Alibaba division that rolled out an early version of the credit scoring system, said the program “will ensure that the bad people in society don’t have a place to go, while good people can move freely and without obstruction.”§ By the end of April 2018, the program had already blocked people from taking 11.1 million flights and almost 4.3 million high-speed train rides, in addition to all the public shaming of jaywalkers, public notice boards showing faces of debtors, and even cartoons played in movie theaters, according to the Chinese publication Global Times.¶

  No one bats an eye at jaywalkers in Los Angeles. Yet, law enforcement authorities there employ their own sophisticated AI-enabled systems designed to help police identify potential hot spots and identify individuals of interest. One facet of the LAPD’s predictive policing system works up a score for different people. Have a gang affiliation or violent offense in your past? Add five points to your score. Every time an officer stops you and fills out a brief field interview card—even for, say, jaywalking—add another point. In this case, more points bring more scrutiny, and perhaps a greater likelihood of run-ins with police, more field interview cards, and more points.

 
