Solomon's Code

by Olaf Groth


  The context needed to consistently and accurately identify these unlikeliest of cases does not yet exist in consumer-facing AI applications, either. However, as of this writing, some breakthroughs appeared imminent in enterprise applications, including at Intel Saffron. According to Gayle Sheppard, the head of the chipmaker’s Saffron AI Enterprise Software unit, most fault detection in manufacturing identifies the common or probable causes of flaws and failures. Saffron’s memory-based learning and reasoning solutions, a complement to machine learning and deep learning, don’t just identify those likely occurrences, she says; they can drill down to find and explain the outliers—the one-off weaknesses that might cause a part to break or a system to malfunction. If successful, Intel’s one-shot learning AI competency could have huge implications for higher-quality manufacturing, but it might also eventually improve AI platforms that assist individual people in their everyday lives.

  Yet what’s fair and just for the individual might still clash with a broader community’s interests, a friction Cynthia Dwork attempts to address with her concept of “fairness through awareness,” which seeks to guarantee statistical parity while “treating similar individuals as similarly as possible.”¶ It sounds simple enough as an everyday part of life; we simply feel something is fair or unfair, and then try to construct a valid justification for that sense. But putting those concepts into computer code has proven far more difficult. Policies of quotas for women in politics and business in Germany, affirmative action policies for minorities in the United States, and the recruitment of lower castes into governmental jobs in India all attempt to do right by a group that is treated unfairly. And yet, each creates tricky dynamics at the individual level, particularly for those who don’t belong to the group of people who directly benefit from such a program.
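
  To see why this is hard to encode, it helps to look at the formal skeleton of Dwork’s idea. Her full definition works with distributions over outcomes and a task-specific similarity metric; the deterministic toy below is only an illustration of the core constraint, with invented applicants, an invented metric, and an invented scoring rule. It simply asks whether any two people are treated more differently than the metric says they actually differ.

    from itertools import combinations

    # Hypothetical applicants: (years_of_experience, test_score_from_0_to_1)
    applicants = {"a": (4, 0.82), "b": (5, 0.80), "c": (1, 0.35)}

    def distance(x, y):
        # Invented task-specific metric d(x, y): 0 means "identical for this task."
        return abs(x[0] - y[0]) / 10 + abs(x[1] - y[1])

    def score(x):
        # Invented decision score M(x); higher means more likely to be approved.
        return 0.05 * x[0] + 0.7 * x[1]

    def violations():
        # The core constraint, roughly: outcomes should never differ more than
        # the people do, i.e. |M(x) - M(y)| <= d(x, y) for every pair.
        return [(a, b) for (a, x), (b, y) in combinations(applicants.items(), 2)
                if abs(score(x) - score(y)) > distance(x, y)]

    print(violations())  # an empty list means no pair is treated "too differently"

  Even this toy shows where the arguments begin: someone has to choose the similarity metric, and that choice smuggles in exactly the judgments about fairness the formula was supposed to settle.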

  These sorts of attempts to balance group and individual fairness often trigger cries of injustice, such as the US Supreme Court cases brought against affirmative action and diversity initiatives at the University of Michigan and the University of Texas at Austin. In both cases, the court generally upheld the universities’ policies, although not without caveats—in part because the policies might not seem fair from an individual perspective. Often, we also conflate that sense of individual fairness with justice. Such is the case with the popular conception of John Rawls’s theory of justice, in which the American philosopher argued that true justice might emerge from a balance of fairness and self-interest. Rawls put his hypothetical people behind what he called “a veil of ignorance.” In this “original position,” they couldn’t know their own attributes or how they might match up against the rest of the people in their society. So, as they designed principles of justice for the entire community to live by, they couldn’t stack the deck in their favor. No one could know where the distribution of resources and abilities would leave them on the socioeconomic ladder, so everyone would tend to design a system that treats every individual fairly.

  Rawls makes regular appearances in AI writing and punditry, often in relation to the moral status of AI agents as they become increasingly intelligent. But his “veil of ignorance” is also worth noting as a metaphor for the role artificial intelligence might play in justice systems. Humans can never go back and place themselves behind Rawls’s veil of ignorance, but in theory they might develop a set of AI systems that simulate his concept of the “original position.” Such a system, if feasible, might suggest the most just weighting of interests among the accuser, the accused, and the communities in which they live. Of course, many obstacles block this theoretical pathway, not least of which is the unintentional bias that lurks in nearly every data set we compile. Yet, perhaps even an imperfect simulation might help guide a collaboration among law enforcement, victims’ rights groups, watchdogs such as the American Civil Liberties Union, and community activists.

  Regardless, questions of justice and fairness will only get more complicated as artificial intelligence parses out more intricate discrepancies between groups and individuals. Social backgrounds, behavioral traits, academic and job performance, responsible conduct, and past disadvantages—all of these types of data can feed into AI systems. As this mix of information and algorithm reaches levels of complexity beyond human understanding, how will we reassess who really deserves society’s support? As our lives become more digital and AI analyses get more granular, will we lose the subjective context that influences our sense of justice and fairness? In the United States, for example, juries consider all sorts of contextual information about a criminal case, from prior criminal records to witness credibility on the stand. Add the constant evolution of US case law, and even our cut-and-dried notion of strict legal justice is immersed in subjectivity.

  So, at the least, we might start with policies that hold the people, companies, and government entities accountable for the fair and just use of artificial intelligence that directly affects people’s lives. “Generally speaking, the job of algorithmic accountability should start with the companies that develop and deploy the algorithms,” Cathy O’Neil writes in her book, Weapons of Math Destruction. “They should accept responsibility for their influence and develop evidence that what they’re doing isn’t causing harm, just as chemical companies need to provide evidence that they are not destroying the rivers and watersheds around them. . . . The burden of proof rests on companies, which should be required to audit their algorithms regularly for legality, fairness, and accuracy.”#
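
  O’Neil’s proposal raises an obvious practical question: what would such a routine audit actually check? A deliberately simplified sketch, with invented groups and invented lending decisions, gives the flavor; a real audit would also cover legality, data provenance, and accuracy over time, none of which fits in a few lines.

    # Invented audit records: (group, model_approved, actually_repaid)
    records = [
        ("group_a", True, True), ("group_a", True, False), ("group_a", False, False),
        ("group_a", True, True), ("group_b", True, True), ("group_b", False, True),
        ("group_b", False, False), ("group_b", False, True),
    ]

    def audit(group):
        rows = [r for r in records if r[0] == group]
        approvals = sum(1 for r in rows if r[1])
        wrongful_denials = sum(1 for r in rows if not r[1] and r[2])
        return {"approval_rate": approvals / len(rows),
                "wrongful_denial_rate": wrongful_denials / len(rows)}

    for group in ("group_a", "group_b"):
        print(group, audit(group))
    # A large gap between the two groups flags the model for closer human review.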

  “We are just arms merchants,” one tech executive said in a casual conversation with us. He said it in jest, but it illustrates the tragedy of the tech sector today—the people who set out to “change the world” now slide deeper into ethical conundrums and need much stronger governance on these sorts of issues than at any time or in any industry before. It’s incumbent upon us to make that happen.

  ECONOMIC GROWTH VERSUS HUMAN GROWTH

  Let’s not pretend otherwise: The development of AI systems will transform work tasks and displace jobs at unprecedented speed. It also will spur demand for new skills we have yet to imagine. What we don’t know is how people, governments, economies, and corporations will react during this turbulent process. Some countries might opt for an embargo on robotics, others for a tax on them. Still others will limit the amount of analysis and data that companies can keep or the depth at which AI-powered applications can permeate our daily lives. Labor organizations might revolt, with walkouts the likes of which we’ve seen historically among farmers in France or autoworkers in the United States. A new generation of Luddites might seek to destroy the machine or to drop off the grid and withdraw into analog safe havens. Some economists and politicians suggest a universal basic income (UBI), providing a threshold amount of income to every citizen to help support people who lose their jobs to AI and other automation. Others argue workers could use part of their income to take an ownership stake in the machines that disrupt their jobs. Truck drivers, for example, could own and profit from the autonomous rig that replaces them.

  But alongside the anxious reactions and calls for safety nets against job loss, we’ll also start to realize that cognitive machines are taking on many of the tedious tasks that make today’s work so banal for so many. About 85 percent of workers worldwide say they feel “emotionally disconnected from their workplaces,” according to a 2017 Gallup survey. What exactly are we trying to preserve? Perhaps it makes more sense to design and train workers for AI-augmented tasks that lead to increased productivity, greater stimulation, and higher purpose. Perhaps a symbio-intelligent relationship with AI systems affords workers more time to pursue the types of work that fulfill them and drive better results for their employers, or simply gives them more free time to be happy, sipping daiquiris on the beach.

  Across all cultures and societies, people love to create things and express themselves. The rules and conventions for that self-expression might vary, but creativity and fulfillment often manifest themselves in work. Tapping that potential energy could unleash a new wave of productivity and development, lifting standards of living and innovation around the globe. Adobe Systems, the company behind Photoshop, Illustrator, and a range of software used by artists and designers the world over, has deployed AI to help remove the tedium of the creative process. From systems that automatically eliminate the “red eye” in photos to tools that can realistically swap photo backgrounds to suit the needs of an advertising campaign, Adobe’s advancements allow people to spend more time on the creative parts of their jobs. Changes that once took weeks might now take a few minutes. “This enables productivity. This is the efficiency we’re talking about,” Dana Rao, the company’s vice president of intellectual property and litigation, told California lawmakers in March 2018. Creative professionals are “still using their skill,” Rao said. “They’re still using their intelligence, and their job just got a lot easier.”

  Naturally, there will be a dark side to all this. People already use AI-based creativity tools to misrepresent, defraud, or mislead. (We even say doctored photos were “photoshopped,” after all.) By the spring of 2018, several digitally manipulated videos of former president Barack Obama had gone viral, all of them looking fairly realistic but none using anything he’d actually said. Some were good-natured, some not so benign, but each had manipulated his facial expressions, mouth movements, and voice to look authentic to a casual observer. Skilled digital craftspeople can put together many kinds of visual evidence, such as fake news videos, that make us believe things that never happened. They act as movie directors, but with the sole goal of changing our views, mindsets, discourse, and decisions. However, there are technologies to counter such fakes, including Digimarc digital watermarks in images or frames of video. And forgery will get harder as technologies emerge to make changes trackable. A filmmaker might put packets of movie scenes into a blockchain, which cannot be edited without leaving a trace. The use of such verification and ledger technologies—distributing confirmation and record keeping across a wide group of users—will limit the chances of manipulation and fraud.
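
  The ledger idea is easier to picture with the basic mechanism spelled out. What follows is a minimal sketch of a hash chain, the building block such systems rest on, not a description of any particular product: each scene’s digital fingerprint is folded into the next entry, so quietly altering an earlier scene breaks every entry that follows.

    import hashlib

    def chain(scenes):
        # Each entry mixes the previous entry with the current scene's fingerprint,
        # so no earlier scene can be altered without changing every later entry.
        entries, prev = [], b""
        for scene in scenes:
            prev = hashlib.sha256(prev + hashlib.sha256(scene).digest()).digest()
            entries.append(prev.hex())
        return entries

    original = [b"scene-1 footage", b"scene-2 footage", b"scene-3 footage"]
    published = chain(original)

    # A quiet edit to the second scene breaks its entry and every entry after it.
    tampered = chain([original[0], b"scene-2 doctored", original[2]])
    print([a == b for a, b in zip(published, tampered)])  # [True, False, False]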

  Still, the threats of nefarious use will do nothing to slow the uptake of all kinds of AI-powered tools. Historically, and by their nature, corporations have an incentive to use machines operated by fewer workers who are highly skilled. Many nations, particularly the United States, lack the critical economic and policy incentives that would spur companies to educate and transform their workforces. From the corporate perspective, AI increases incentives to spend less on human labor by increasing worker productivity. Already, virtually every company of any appreciable size is thinking about how to integrate AI into its operations. Global venture capital investment in artificial intelligence and machine learning doubled to $12 billion in 2017 from $6 billion the prior year, according to KPMG.**

  If historical trends are any indication, a significant portion of that investment will transform the types of skills companies demand. In 2017, an Oxford University study grabbed headlines, noting that 47 percent of US jobs are “at risk” of being automated in the next twenty years. A McKinsey report released that December said about half the world’s jobs already were “technically automatable” with existing technologies. It estimated as many as 375 million workers would have to shift to new occupations by 2030, just over a decade later.†† Other researchers and think tanks take a more granular view of job disruption, in some cases with different results. The Berkeley Work and Intelligent Tools and Systems (WITS) working group, in which we participate, explores how we can go about shaping the world and the workplace in an age of intelligent tools. The interdisciplinary collaboration takes a task-based view, among other things, of the technological transformation of work. A separate German study suggests smart technologies will not lead to dramatic unemployment, but will have a structural impact on task composition and employment in certain types of jobs. The impact will be negative on manufacturing, potentially positive on services, and will likely affect men more than women.‡‡ A few think tanks and consultancies take a more optimistic view. For example, a January 2018 report from Accenture suggests businesses, by investing in state-of-the-art human-machine collaboration, could boost revenues 38 percent and increase employment levels 10 percent by 2022.

  Ultimately, the question isn’t whether jobs will change and workers will be displaced—many undoubtedly will. And it won’t even take a superintelligence to do it; the evolution of the narrow AI agents we already see in 2018 will automate many more workplace tasks. The question is how quickly these transformations will occur, whether we can keep up with them (especially when it comes to education and workforce training), and whether we can develop the imagination to see what sorts of new opportunities will arise with the changes. We won’t run out of work, though. As Tim O’Reilly, the founder and CEO of O’Reilly Media, says in his video called “Why We’ll Never Run Out of Jobs,” we’ll always have more problems to solve.§§ But adapting to the new nature of work will require imagination and preparation. The Luddites were right that industrialization threatened their lives and well-being, but they didn’t have the imagination to see beyond the initial disruption. Most companies see the work before them and what a thinking machine can do better and more cost-effectively, but they don’t look ahead at the skills the workplace will need tomorrow—things like man-machine teaming manager, data detective, chief trust officer, or the eighteen other “jobs of the future” that Cognizant laid out in a 2018 report.¶¶

  Yet, imagination only goes so far, even when equipped with stockpiles of resources to facilitate it. If US companies needed nothing more than cash to prepare themselves and the American workforce for this future, they would’ve already been doing it with the $1.8 trillion in cash reserves they had on hand at the start of 2017.## Companies have valid strategic rationales for retaining this much liquidity, which allows them to respond quickly to disruption and fuel research and development for new products and services. However, investments in innovative concepts for the future of working, earning, and learning will require a long-term focus on the needs of global and national economies. Driven by short-term shareholder interests, few companies have an effective incentive to imagine an undefined future.

  Policy makers could choose a wiser course aimed at achieving both greater productivity and competitiveness for corporations while getting our workforce ready for the Fourth Industrial Revolution. For starters, they could create incentives and encourage public-private partnerships that spur corporate investment in the development of and training for defensible jobs of the future—in fields such as clean energy, technical design, and 3-D manufacturing. Governments could consider similar incentives for investment in civil and business infrastructure, including innovative transportation solutions and the revival of older manufacturing hubs for a new economy. And they could apply the same incentive logic to investments in affordable housing, so San Francisco, Shanghai, Berlin, Mumbai, and the other global economic hotspots are welcoming to more aspiring workers.

  Unfortunately, in their current forms, few national strategies are doing anything to get workers there. So, we also need to educate people for those jobs, many of which will require skills or combinations of skills unheard of in today’s workplace. Public-private partnerships could define the outlines of future job categories, build hybrid online/offline training models with project-based learning, and offer credit-based upskilling programs with nano-courses and education certificates. They might create integrated corporate apprenticeship programs of the sort that German companies, such as BMW and Volkswagen, have developed at home and have brought to the United States. Workers with lower skill backgrounds could earn extra tax credits for participating in these programs and taking the courageous step to upgrade their skills, perhaps even receiving a universal basic income to support them as they go.

  With a combined corporate-and-labor “relief and reskill” program, governments could transform the existing mindset of competition between humans and technologies and move toward greater, integrated productivity through future-resilient jobs—and thus establish themselves as trailblazers for a high-tech future that enables human potential. But even within the structures of today’s workplace, cognitive technologies might help spur greater productivity for companies and greater rewards for workers by providing more insight into the murky recesses of our motivations and intentions. They might even put our own subconscious to work on our behalf. For example, One2Tribe, a Polish start-up led by Wojciech “Wojtek” Ozimek, helps clients motivate employees with an AI platform that analyzes personalities and then provides rewards to encourage more sales or better call resolutions. The firm employs a mix of psychology and computer science expertise, but one of the biggest insights came by simple trial and error. Unless workers can opt in or out, they object to a system that nudges behavior in such a personal way, Ozimek says. So, One2Tribe requires that its clients only use the system on a voluntary basis. The rewards typically get about 60 percent of eligible employees to participate, he says.

  The platform works much like a flow model in video games, carefully balancing the challenge with the reward, Ozimek explains. But it goes a step further, with its psychology experts testing everything from real-time responses to actual brain function, so they can better understand the challenge-reward relationship, identify the most effective approach for each individual, and then update it on the fly. The timing between challenge and reward is especially crucial, he says. One worker on a task might produce better results with a larger weekly goal, while another might produce more when getting smaller rewards on a daily basis. The system typically distributes a sort of virtual currency employees can exchange for other items. “We create AI to balance goals with demands,” Ozimek says. “We take into consideration the skills of the person, their personality traits, and then we try to create a motivational scenario.”
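
  To make the idea concrete, here is a purely illustrative sketch of that kind of challenge-reward balancing; it is not One2Tribe’s actual system, and the thresholds and numbers are invented. The target creeps up when a worker clears it comfortably, eases off when they miss, and pays out bonus points when performance sits in the “flow” band in between.

    def next_goal(goal, completed, points):
        # Toy flow-style adjustment: keep the challenge near the edge of ability.
        ratio = completed / goal
        if ratio >= 1.2:                      # cleared easily: raise the bar
            return round(goal * 1.15), points
        if ratio < 0.8:                       # struggling: lower the bar
            return max(1, round(goal * 0.9)), points
        return goal, points + 10              # in the flow band: award bonus points

    goal, points = 20, 0
    for resolved_calls in [26, 30, 24, 18, 21]:   # invented weekly results for one worker
        goal, points = next_goal(goal, resolved_calls, points)
        print(goal, points)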

 
