
Solomon's Code


by Olaf Groth


  Both Behaivior and Woebot intercede in some of the most intimate and “human” aspects of our lives—our mental health and our most powerful cravings. Those interactions can work only on a foundation of trust, so both firms take pains to make sure the user has control over the experience. But both firms also prove that these systems can dredge up ever deeper insights into ourselves—our cognition, our intelligence, and even our emotion. It’s easy enough to dismiss the idea of a machine understanding our emotions, says Pamela Pavliscak, the founder of Change Sciences and a faculty member at the Pratt Institute. However, Pavliscak, who studies the human-machine relationship using a combination of ethnography and data science, notes that humans don’t do an especially good job at identifying emotion in others, either. “The more I looked into emotion AI,” she says, “the more I saw potential for it to help us humans learn more about emotions we don’t know.”

  Currently, technology only skates on the surface of emotion, partly because our emotions don’t always manifest themselves in physically measurable ways. We portray our state of mind in an array of signals. Some are clearly recognizable, such as speech or body language; some only send subconscious hints of what we feel; and others never get expressed at all. Furthermore, emotion has a cultural component, accruing layers of meaning with the accretion of memory and within specific contexts over time. The technology has its own limits as well. “As someone who gets motion sickness,” Pavliscak says, “I still wish GPS knew not to give me a winding route.” In fact, one might imagine an AI-powered app that could provide more than just a preferred physical route. For example, Pavliscak imagines one that could sense the emotional climate of a particular roadway, giving you a choice between a faster route whose drivers emit an angry vibe, or another option that takes a few more minutes but has a peaceful, calm sensibility about it.

  An AI system’s ability to process that vastly varied data could help us learn about emotion in new ways, Pavliscak suggests, but it could also be manipulated to exploit us. Either way, it will influence our behavior, often without us really knowing why. Inside that black box, how does the machine really regard us? Can we create a machine that steps beyond intelligence into a more humanlike meta-reflection about us, some kind of quasi-consciousness? And if so, could we ever know what it thinks of us? Would we have a right to know, or would our access to that knowledge be limited?

  HOW I LEARNED TO STOP WORRYING AND TO LOVE THE MACHINE

  The immersive experience that keeps students so engaged with John Beck’s Interactive Learning Experience spins off mounds of data that his firm can use to improve the program. The developers of video games, the initial inspiration for Beck’s role-playing education platform, harvest the same data and insights. Users generate tremendous amounts of data just by playing, disclosing their location, employing strategies, winning and losing, and paying for the experience. Developers track user decisions and the context within which they make them. They test various in-game experiences to study player reactions. And they use the combination to move closer to the Holy Grail of entertainment—a gaming experience tailored to an individual’s desires, current situation, and financial capacity. Players get a game that can fill and fulfill their day, speak to their cognitive ability to absorb and enjoy it, and take their minds off the burdens of life.

  That loop of deep engagement, data generation, testing, and measuring reactions to improve the experience—and thus heighten engagement and/or spending—has spawned an entire cottage industry, and Bill Grosso sits in the middle of it. He took a circuitous route to get there, twice dodging the siren song of academia. The first time, as a young mathematician in his mid-twenties, he decided a comfortable professorial career at a middle American college wouldn’t do. He hopped over to the software start-up world, where he became intrigued by what he might do in AI. A few years of research at Stanford, and he found himself right back on the academic track again. So, he skipped back out and led a variety of start-ups for the next decade or so.

  By 2012, Grosso noticed an “enormous revolution” emerging. “I was helping run a financial payment processing company with huge data,” he says, “and I realized almost everything we took for granted about our behavior was going to come into question.” He’d recognized three major trends: ubiquitous mobile phones pumping out data; the cloud providing high-powered computing at low cost; and increasingly robust machine learning. “I’d realized the science of measuring behavior just became possible,” he says. “You can measure, can run an experiment and see how people subtly change their behavior, and you can do it all from infrastructure in the cloud.”

  Grosso launched a start-up called Scientific Revenue to help clients increase in-app purchases with a dynamic pricing engine for mobile games. It works entirely with digital goods, like the types of small-value purchases popular in video and online games. If a game has millions of players, and the engine can capture fine-grained measurements on their play, transactions, and context, then it can adjust prices to optimize sales. Offering a player a new weapon or bag of gold at an opportune time sounds simple enough, but it gets much deeper than that, Grosso explains. “We’re collecting somewhere between 750 and 1,000 facts about you every time you play the game—what device, battery level, source level, time, etc.,” Grosso says. “Then, we also capture the information about what you do in the game itself. Did you spend coin? Did you level up or try to level up? How long [was] this session going?”

  The depth of information allows Scientific Revenue to create what Grosso calls “an insanely detailed graph of your behavior.” But he insists the firm doesn’t care about individual data—and, in fact, the firm never stores personally identifiable information. What really matters, he says, are the patterns and irregularities they can draw out of the data of, say, 500,000 players. That allows game companies to shift prices for larger groups of players who might be easily enticed to pay for a virtual item or upgrade. And after that, they can measure the reaction from a granular level up to a collective level, to better understand the most potent causes of consumer decisions. So, players in the game for seventy-three minutes a day—a time that corresponds to a certain level of addiction—might get higher-cost, bundled offerings rather than a low-price, one-off item designed for a novice.
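
  To make that mechanism concrete, here is a minimal sketch of cohort-based offer selection, loosely inspired by the approach Grosso describes. The field names, thresholds, and prices are all invented for illustration; Scientific Revenue’s actual engine is proprietary.

```python
# Hypothetical sketch of cohort-based offer selection. All names, thresholds,
# and prices are invented for illustration, not taken from any real engine.
from dataclasses import dataclass

@dataclass
class PlayerStats:
    player_id: str          # pseudonymous ID; no personally identifiable info
    minutes_per_day: float  # average daily play time
    purchases_30d: int      # purchases in the last thirty days

def pick_offer(stats: PlayerStats) -> dict:
    """Choose a price tier from behavioral patterns, not individual identity."""
    if stats.minutes_per_day >= 73 and stats.purchases_30d > 0:
        # Deeply engaged, paying players see higher-cost bundles.
        return {"tier": "bundle", "price_usd": 19.99}
    if stats.minutes_per_day >= 20:
        return {"tier": "standard", "price_usd": 4.99}
    # Novices get a low-price, one-off item.
    return {"tier": "starter", "price_usd": 0.99}

print(pick_offer(PlayerStats("a1b2", minutes_per_day=75, purchases_30d=2)))
```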

  But what sort of concerns does Grosso have about gathering that much data on so many individuals? Enough, he says, to have a blanket rule to not store any data that might allow an expert hacker to draw out individual information. If someone stole the firm’s data, they could learn a tremendous amount about gaming and consumer behavior, but nothing that shows John Doe playing at 11:32 P.M. on Tuesday night at an all-night coffee shop in Chicago. “I’m not claiming we’re super virtuous,” he says, “but we don’t store ethnicity or legal categorizations, and we don’t store any information that can be used to contact you.”

  Grosso acknowledges the broader risks. As companies classify customers into more and more nuanced buckets, they could start to microslice society with differential pricing on anything for anybody. Discrimination based on a variety of factors, whether critical or mundane, becomes increasingly easy and likely. “We’re already doing this though,” Grosso says. We love that Amazon suggests just the right product at just the right price, but we freak out when Target starts identifying a young woman’s pregnancy before her father is even aware of it.*** There’s no nirvana of fairness as we are assessed minute by minute through the triangulation of our data streams. Companies always will create unequal classes of customers, like airlines providing preferential treatment to frequent, often more affluent, flyers. However, taken to extremes, those discriminations could splinter communities because no one rides in the same boat anymore. The glue between us, the loyalties and allegiances of being in the same situations, walking through life facing similar challenges, may be lessened as we get microsegmented in pseudoscientific ways based on granular differences.

  As the Internet of Things (IoT) becomes more pervasive in the coming years, with more sensors and greater processing power in virtually every kind of device—all streaming our data back to the cloud—the corporations that operate and control the entire network will have an intimate view of our “life patterns.” The AI systems in these networks will make more microdecisions for us. They’ll pick a route to our destination, automatically adjust our calendars, book restaurant reservations, restock our refrigerators, and pick out just the right anniversary gift for our spouse. The air conditioning system in a future BMW will let the thermostat in your home know that you’ve been feeling a bit chilly today, so your home will be at the temperature you prefer when you arrive. Your toilet might let your fridge know it’s time to order more vegetables so you get the right dose of fiber or vitamins. And your phone’s facial and voice recognition software will tell your home stereo the right tune to play for the mood you’re in.

  All of this could make for more enjoyable and, hopefully, less transactional lives. But like the simple Amazon and Target contrast above, it cuts both ways. The machine will need to learn which decisions a person would happily outsource, which they prefer to retain, and how that differs from one person to the next. And as we experiment, we might hit some wobbly stretches of human-machine interaction. John Kao, the man dubbed “Mr. Creativity” by The Economist, said it well: “What will be the collaboration space between human and machine while we figure each other’s intelligence out? My Tesla today does a good job driving autonomously on the highway, but its preferences for where to put itself in the lane, when to bypass a car or take the foot off the throttle do not match my preferences and it doesn’t really ask about them either.”††† As Kao suggests, we will have to adjust to the idea of giving up more control, and doing that requires a higher level of trust in the systems to which we delegate authority. If history is any guide, we will eventually have to grant more trust in the symphony of intelligences that Kevin Kelly describes, and the concert of instruments around us will play a sweeter and richer melody.

  Yet our trust must run deeper than the AI systems that govern more of our daily details and decisions. If the scandals that enveloped Facebook after the 2016 US presidential election and the revelation of Cambridge Analytica’s inappropriate use of user data scraped from the site tell us anything, it’s that we need to be able to trust those who control the entire system. Who or what will monitor integrity, fairness, and equity in the back rooms and far-off data centers we’ll never see?

  MACHINE JUDGMENT

  Consider, for a moment, how much your life has changed from ten years ago. Today, you might scoff at the things that caused such stress back then. Perhaps a growing family has fundamentally changed your priorities. Hopefully, your expectations and experiences have become richer and more fulfilling. Regardless, they’ve changed. Now look ahead ten years. Can a world increasingly pervaded by thinking machines fully grasp your changing preferences and values? In the past, you might have set the preferences on your autonomous driving system to do whatever it could to dodge a dog that runs onto the road. Now, you have your spouse, two kids in the back seat, and elderly parents—a cargo of expectations and responsibilities riding along with you. Does the car know the dog no longer matters nearly as much as it once did? Compared with the responsibility you still feel for the dog, your responsibility for your kids and your partner carries far more weight. And while the AI system that powers the car’s navigation can track changes in your life, how does it translate those behaviors into an accurate map of your values?

  In every area of life, machines are making more decisions without our conscious involvement. Machines recognize our existing patterns and those of apparently similar people across the world. So, we receive news that shapes our opinions, outlooks, and actions based on inclinations we expressed in past actions, or the actions of others in our bubbles. While driving our cars, we share our behavioral patterns with automakers and insurance companies so we can take advantage of navigation and increasingly autonomous driving technologies, which promise to provide us new convenience and safer transportation. And as we continue to opt into more and more conveniences, we choose to trust the machines to “get us right.” In many instances, the machine might get to know us in more honest ways than we know ourselves. But the machine might not readily account for cognitive disconnects between that which we purport to be and that which we actually are. Reliant on real data from our real actions, the machine constrains us to what we have been, rather than what we wish we were or what we hope to become. So, what does the machine really know? Enough to make an actual judgment about who we are and what we believe?

  These days, even the humblest thermostat makes a very simple sort of judgment, regulating the temperature of the home even when it’s devoid of its human inhabitants. Residents set the temperature range they consider comfortable, and then delegate the decision to turn the heater on and off. Yet, it’s a completely physical machine. A spiral of metal, called the bimetallic strip, curls tighter when warm and loosens when cool, tilting a small bulb to the left or right depending on the temperature. If the bulb moves far enough, the bead of liquid mercury inside it connects two bits of metal, completing a circuit and kicking on the heater or air conditioner. More sophisticated thermostats have a clock and calendar integrated in them, regulating a home’s HVAC system based on both temperature and time of day. Still, they’re mechanical, so it’s hard to think of them as making a judgment. Yet, in the simplest terms, that’s what they’re doing: making low-level decisions on our behalf. We’re delegating a decision, however routine or trivial, to a machine.
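
  Restated as software, that mechanical judgment amounts to a few lines of on-off control with hysteresis. The sketch below is illustrative; the comfort band and function name are assumptions, not any particular product’s logic.

```python
# The mechanical thermostat's "judgment" restated as software: an on-off
# controller with hysteresis. The 68-72°F comfort band is an assumed example.
def next_heater_state(temp_f: float, heater_on: bool,
                      low: float = 68.0, high: float = 72.0) -> bool:
    """Return whether the heater should run, given the current temperature."""
    if temp_f < low:
        return True      # too cold: the curled strip would close the circuit
    if temp_f > high:
        return False     # warm enough: the circuit opens and the heater stops
    return heater_on     # inside the band: keep the current state (hysteresis)

print(next_heater_state(66.0, heater_on=False))  # True: the heater kicks on
```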

  A smart thermostat moves a step up the chain. It weighs the same inputs as its predecessors but can amalgamate a wide array of additional factors—whether residents are active in the house, the weather, past heating and cooling patterns. It’s not hard to imagine the Nest thermostat on the wall considering the spot price of natural gas, the near-future effects of a preheating oven, or even the perishable groceries mistakenly left on the counter. From a technical standpoint, the idea of a smart thermostat making a decision for you remains a straightforward optimization process: weigh comfort against cost, or find ways to maximize one’s well-being while minimizing, say, grocery spoilage. This doesn’t require vast amounts of data and appears to involve rational judgments that we can understand, perhaps making it more palatable for us to say the smart thermostat makes judgments on our behalf. After all, if a homeowner sits on the couch all day and does nothing but optimize the balance between comfy temperatures and heating bill costs, he or she might make the same decisions the smart thermostat does.
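
  In code, that trade-off can be sketched as a one-variable optimization. The comfort model, energy price, and candidate setpoints below are assumptions made for illustration, not how Nest or any real thermostat computes it.

```python
# A toy version of the "straightforward optimization" a smart thermostat
# performs: pick the setpoint that best trades comfort against energy cost.
# The quadratic comfort penalty and pricing model are assumed for illustration.
def choose_setpoint(preferred_f: float, price_per_degree: float,
                    outdoor_f: float, comfort_weight: float = 1.0) -> float:
    # Consider setpoints from the preferred temperature down to 4°F cooler.
    candidates = [preferred_f - step * 0.5 for step in range(9)]

    def cost(setpoint: float) -> float:
        discomfort = comfort_weight * (preferred_f - setpoint) ** 2
        energy = price_per_degree * max(setpoint - outdoor_f, 0.0)
        return discomfort + energy

    return min(candidates, key=cost)

# When gas is expensive and it's cold outside, the thermostat shaves off a bit.
print(choose_setpoint(preferred_f=72.0, price_per_degree=0.8, outdoor_f=40.0))
```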

  But what happens when the machine starts incorporating factors beyond a person’s sphere of awareness? Now it reports a home’s autumn heating activity to the cloud, where the local natural gas company uses the data to more accurately predict demand for the winter. The regional geopolitical picture looks a little shaky, and the country that supplies the natural gas has threatened to restrict its exports. The local gas company decides to reduce demand in autumn to build reserves for winter, and it opts to crank back everyone’s comfort just a little bit—hardly noticeable to you, but valuable for the region as a whole. Now, the thermostat and the network of thinking machines to which it connects optimize for an entire community, perhaps nudging the temperature of your home lower than you’d prefer.
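
  Under assumptions like those, the utility’s demand-shaping step could be as simple as broadcasting a small offset that every connected thermostat applies. The half-degree figure below is an invented example, not any real utility’s practice.

```python
# Hypothetical utility-side nudge: the gas company broadcasts a small
# demand-shaping offset, and each connected thermostat shaves it off the
# household's chosen setpoint. The 0.5°F default is an assumed value.
def community_setpoint(household_choice_f: float,
                       utility_offset_f: float = 0.5) -> float:
    """Apply a barely noticeable per-home reduction to build winter reserves."""
    return household_choice_f - utility_offset_f
```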

  Artificial intelligence drives the automated infrastructure of our lives. Increasingly, devices in our cars or our homes are connected to one another. They communicate with us, with each other, and with servers in the far reaches of the Internet. Over the next twenty years, those networked devices will interconnect with “smart” infrastructure, such as highways, airports and train stations, and city control centers for security and environmental protection. This Internet of Things, which started as innocuous machine-to-machine communication in factories, has steadily made its way into all kinds of public spaces. Virtually everything will have an IP address at some point, from your mattress to your trashcan to your shoes—all woven into one big network that fuses the physical and digital worlds.

  AI-powered technology will make it all tick, ideally enabling more productive, richer, and safer lives. It will control the augmented reality (AR) glasses you will wear on a future vacation to Rome, showing you information and interactive graphics in the Colosseum and then recommending a nearby restaurant. Based on your spending patterns and a mood analysis, a future home-care robot will know whether you want (or deserve) a bottle of that expensive Chateau Lafite Rothschild wine that’s been on your bucket list for a while, and have it delivered if you do. All these large and small decisions require the integration of personal data with supply chain information, infrastructure updates, and commercial availability. As such, these AI systems will exercise a certain economic power, itself loaded with inherent value judgments—some right, some wrong.

  Companies already see the potential, of course, and many are rushing in with new technologies to better integrate all the disparate systems we use today. A firm called Brilliant has created a product that coordinates many of the smart systems in the home and makes them available in a more seamless fashion, integrating them into a panel it describes as “the world’s smartest light switch.” Most home systems in 2018 interface with humans through computer or smartphone apps, and the growing use of the Amazon Echo or Google Home has moved home automation to a new phase, says Aaron Emigh, Brilliant’s cofounder and CEO. The next step that Emigh and his colleagues hope to drive is a smart system that’s built in and native to the home. “It’s a naturally evolutionary process with technology after technology, when it becomes part of what you expect for what constitutes a home,” he says. “What we consider a home today is different from what our grandparents valued in theirs.” Given that we interact more with light switches than with almost anything else in our homes, they seemed like the most obvious place to integrate the new technology.

 
