Solomon's Code


by Olaf Groth


  So, we move steadily toward a society of augmented alertness and awareness, one in which AI-powered systems can see and focus on what human eyes and brains are too limited to detect or process. They pick up subtle shifts in presence or patterns while we remain stuck within our narrow field of vision. The machine, with its artificial awareness of our context, could become man’s best friend, like a dog able to sense an earthquake before it happens. We might, in fact, be seeing the emergence of a new type of AI hive consciousness that enhances both our well-being and that of society.

  Not long from now, an AI platform might combine traffic, weather, infrastructure, and other users’ information into a guide far more comprehensive and convenient than Google Maps steering you around a traffic jam. A cloudy evening portends downpours the next morning. Runoff and infrastructure data suggest an 82 percent chance that the heavy rains will overload the sewer lines under repair along your typical route to work. You have several calls but no urgent in-person meetings on the calendar. So, the platform automatically adjusts your schedule to give you the option of staying home or taking an alternate way to the office. Google already aggregates much of this data in 2018; it wouldn’t take a significant technological leap to combine those streams into this sort of analysis.
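
  To make that reasoning chain concrete, here is a toy sketch of how a platform might turn a predicted disruption and a calendar check into a recommendation. The function, threshold, and inputs are invented for illustration; they do not describe Google’s or any other real service.

```python
def commute_recommendation(disruption_risk, has_in_person_meetings, risk_threshold=0.7):
    """Turn a predicted chance of a flooded route and a calendar check
    into a simple morning-commute recommendation."""
    if disruption_risk < risk_threshold:
        return "take the usual route"
    if not has_in_person_meetings:
        return "stay home; the calls can happen from anywhere"
    return "leave early via an alternate route"

# Illustrative inputs echoing the scenario above: an 82 percent chance the
# sewer repairs flood the usual route, and a day of calls with no meetings.
print(commute_recommendation(disruption_risk=0.82, has_in_person_meetings=False))
```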

  With access to more and more data and the imagination to integrate those streams in different ways, companies and developers would deliver even more convenience. The same system easily could spot a new conflict that arises on your spouse’s calendar and adjust yours to pick up the kids and take them to soccer practice. It might capture the news of a likely airline strike in France and suggest you rebook your overnight flight from Paris to Frankfurt, where it can secure a train ticket that would allow you to see your goddaughter on her birthday. And it might adjust your nutrition and sleep plans for the transatlantic flight, ordering you heart-healthy meals and resetting your dose of melatonin to help maintain your blood pressure, cholesterol, and rest regimens.

  None of this should happen without a user’s control over settings for privacy and personal agency, whether for individual services or when agreeing to share data between friends or strangers. We already give up most data ownership to the government, the Digital Barons, and so many other service providers, usually without recourse or understanding of what they do with it. Efforts to secure personal data ownership and control have emerged around the world, and they likely will gain traction as abuses emerge and risks become more apparent. But even if we opt out, we face a different sort of hazard: sleepwalking into a surveilling AI that puts more trust in people with larger digital footprints. That rift will widen over the next decade as more people demand the right to opt in and out of AI-based services. We could see a bifurcation of populations into more and less participatory groups. That could lead to privileges for highly active users or for those who abstain, creating deep new divides that will stir up legal and civil discord. Large portions of the global population would fall slowly but steadily behind, as advantaged markets and their AI systems overlook the needs of the less fortunate. Politicians will see social stability, economic growth, and voter confidence waver. Our institutions will face an entirely new breed of inequality.

  Being predictable and calculable will generate power, while the Luddites of the digital era might need to think hard and fast about entering urban areas—or create digital cloaks or twins to avoid detection. Will we have a moral right to remain ambiguous and unassessed, and how many of us would even want that? It’s not hard to imagine more people opting out of certain AI-powered platforms to avoid what they see as malicious use. Facebook and other social media sites regularly face defections, some more serious than others. It’s also easy to imagine technological countermeasures people might adopt to avoid attacks or unwarranted use of personal data.

  Ultimately, like any new technology, AI provides us with another double-edged sword. The distributed ledger of an established blockchain platform makes transactions more secure and extremely difficult to forge, but it also allows users to remain anonymous and hinders law-enforcement efforts to track criminal activity. Similarly, advances in AI technologies might one day allow school officials to identify a troubled student and intervene before they bring a gun to school, but the same innovations could help students hide from accountability and responsibility.

  THE BOUNCER BOT

  New applications will emerge to safeguard our privacy, make us less transparent and readable, and protect our digital personas. Some of these ventures exist already. “Controlio, for instance, acts as an intermediary between you and the larger internet platforms, issuing RFPs for products or services on your behalf,” says Peter Schwartz, founder of scenario planning consultancy Global Business Network and now Salesforce’s resident chief futurist. “When you transact, you still give up data, but you’re in charge on a case-by-case basis.”

  As AI evolves, we could see a whole new layer of a personalized-data economy, including personal-data vaults that open only when we want them to, rather than today’s model of constant data giveaway the moment we step onto the web. However, we can’t optimize for choice and security simultaneously; we need to strike a balance, one that shifts depending on context: giving mom and dad access to photos of the grandkids, but barring social media sites from manipulating data about those children. Companies and governments already try digital strong-arming, wrapping their wants together with the products and services we crave to pull more out of us than we’d prefer to share. So, perhaps we’ll create new protective agents, what we might call “Bouncer Bots.”

  Bouncer Bots would patrol the velvet ropes around our data, allowing immediate access to those we want, holding off some requests for a more-thorough review, and rejecting certain others. They could act on our behalf, looking for what we desire, offering short bursts of data or virtual currencies in exchange for those things, and then re-raising the wall of personal protection. The Internet giants and most companies would reject the concept, perhaps even refusing services to consumers who bring their own Bouncer Bots along to their platforms. From their perspective, that’s a valid objection. After all, for the better part of twenty years we have benefited from services that were deemed “free of charge” to users. Lately, however, the concept of “free” has morphed into a clearer understanding of what users pay when they turn over their data, and AI will drive that understanding further.
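
  To make the idea concrete, here is a minimal sketch of what a Bouncer Bot’s core decision logic might look like. Everything in it is hypothetical: the requester names, the policy lists, and the DataRequest structure are invented for illustration and do not describe any existing product or API.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"    # wave the requester past the velvet rope
    REVIEW = "review"  # hold the request for a more-thorough look
    REJECT = "reject"  # turn the request away outright

@dataclass
class DataRequest:
    requester: str     # who is asking (hypothetical identifier)
    fields: set        # which personal-data fields they want
    purpose: str       # declared purpose for the data

# Hypothetical owner-defined policy: trusted parties, fields treated as
# sensitive, and purposes the owner never serves automatically.
TRUSTED = {"family_photo_share", "primary_care_portal"}
SENSITIVE = {"location_history", "biometrics", "children"}
BANNED_PURPOSES = {"ad_targeting", "resale"}

def bouncer(request: DataRequest) -> Verdict:
    """Decide whether to allow, hold, or reject a request for personal data."""
    if request.purpose in BANNED_PURPOSES:
        return Verdict.REJECT
    if request.requester in TRUSTED and not (request.fields & SENSITIVE):
        return Verdict.ALLOW
    # Anything else waits until the owner reviews it explicitly.
    return Verdict.REVIEW

print(bouncer(DataRequest("family_photo_share", {"photos"}, "sharing")))       # Verdict.ALLOW
print(bouncer(DataRequest("unknown_broker", {"location_history"}, "resale")))  # Verdict.REJECT
```

  A real agent would also negotiate, log, and revoke access over time; the point here is only that the “allow, hold, reject” behavior described above maps onto a small, user-controlled policy.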

  But past examples, such as the radical transformation of the music industry by digital technologies, have taught us that fighting customers instead of giving them what they want never works for long. The virtual standoff between data-driven companies and individual consumer privacy will settle into some sort of parity with a workable business model. Still, we find regular instances of these technologies crossing the lines of what we find acceptable. We rarely object, but, when we do, the transgression often seems outrageous, like Mattel’s effort to learn from and improve a child’s experience with an advanced version of its iconic Barbie doll. The enhanced doll, since withdrawn from the market, collected and stored information about how children played with, responded to, and spoke with their Barbie, shipping all of it back to Mattel’s servers. From the data, the company could glean insights about behavior and development while offering other targeted services, such as child monitoring for parents. The idea that Mattel might track children and use that data to target products at them chilled many parents, especially the prospect of a commercial interest reaping the data of an unknowing minor. Parents might share a lot of data about themselves and their children; they don’t want Barbie collecting it.

  Yet, they already supply that sort of information in ways that are far more intimate. Every minute of every day, smartphones collect our locations and behaviors. With the right accessories, they even monitor our sleeping patterns. Companies can use those data streams to provide an array of services. Already, Apple and Android phones identify when you have stopped driving and started walking, so they can remind you where you parked your car. Or they can identify that you’re driving and block incoming texts until you stop. Still, smartphones can gather much more granular data to generate powerful insights, including into someone’s medical state, a use that has allowed one company to push mental-health services and monitoring into participating patients’ everyday lives.

  Serious mental illness produces a cyclical pattern of inpatient care, release, relapse, and readmittance. About a third of patients treated for mental illness return to inpatient care within a year. It’s a vicious cycle that few technologies or treatment methods have managed to break in any comprehensive manner. Paul Dagum and his team at Mindstrong hope to change that. Mindstrong uses extremely fine measurements of activity on cell phones and certain other devices to track patient conditions. All told, Dagum says, they generate about 1,000 markers of cognitive capability from things as straightforward as millisecond response times or patterns of finger flicks across a smartphone.

  Machine learning helps compile and analyze the fine-grained patterns, which can show when a patient’s cognitive capabilities start to weaken and alert caregivers or family members who can intervene before the deterioration goes too far. The patient and care-team apps allow the two sides to interact and look at cognitive markers together, identifying potential triggers, recalibrating medications, or just stepping up outpatient therapy. “Now, we’re mostly focused on patients with a mental disorder or at high risk for developing one,” Dagum says. “The response is largely positive because people feel vulnerable, and this gives them a sense of comfort. But that only works because we approach them as health care providers within the health care system.”
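
  To give a rough sense of how passive markers of this kind could work in principle, here is a minimal sketch: one invented timing marker and a simple check for drift from a personal baseline. The marker, window, and threshold are hypothetical illustrations, not Mindstrong’s actual measurements or models.

```python
from statistics import mean

def response_latency_marker(prompt_times_ms, tap_times_ms):
    """One illustrative marker: average delay, in milliseconds, between a
    prompt appearing on screen and the user's first tap."""
    return mean(tap - prompt for prompt, tap in zip(prompt_times_ms, tap_times_ms))

def drift_alert(daily_marker_values, baseline, threshold=0.25, window=7):
    """Flag a sustained drift: the last `window` days average more than
    `threshold` (here 25 percent) slower than the person's own baseline."""
    recent = daily_marker_values[-window:]
    if len(recent) < window:
        return False  # not enough data yet
    return mean(recent) > baseline * (1 + threshold)

# Hypothetical week of slowing response latencies against a 300 ms baseline.
history = [360, 375, 380, 395, 400, 410, 425]
if drift_alert(history, baseline=300):
    print("Marker drifting from baseline; flag for the care team to follow up.")
```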

  Eventually, the cognitive monitoring on the Mindstrong platform could expand to track dementing illnesses, such as Alzheimer’s, or a host of other ailments that affect cognitive function. In time, Dagum hopes, mental health care will become part of everyone’s general health maintenance. Pharmaceutical companies already have approached Mindstrong about using its markers to generate more insight into drug testing. But for now, the work starts with serious mental illness and with meeting patients where they live their everyday lives. Dagum says he expects the technology to cut the readmission rate for patients by half. “This moves care out into the community,” he says. “That will significantly affect the outcome for these patients.”

  We still have to ask ourselves what AI platforms and their owners can measure with our hundreds of keyboard strokes, mouse clicks, and smartphone swipes each day. The technology research firm IDC estimates that people connected to the Internet will increase their average digital interactions from 218 a day in 2015 to almost 4,800 a day by 2025.† Will each of these be subject to psychoanalysis? Many could be, as new medical and well-being applications proliferate. And as the processors and sensors in our mobile phones become increasingly sophisticated, more and more physiological data can be cross-referenced and correlated with digital behaviors. With something so intimate as our physical and mental health, we will want to ensure that such a system can make users aware of the data it’s mining and how it will use that information. We might require that service providers alert users, their families, primary care physicians, or public health officials when new illnesses present themselves, especially when those ailments pose a risk to others. As those choice moments occur, thinking machines can improve health and well-being on both an individual and community level. But in so doing, they will force us to make difficult decisions about the balance between individual privacy and public safety.

  No doubt, there will be breaches of confidence as companies and AI platforms collect too much information and pressure patients and providers with new burdens of responsibility. Paranoia might arise as users see more of their biometric data harvested each day by their smartphones, watches, and other devices without realizing that the benefits of that information-sharing might not come for years. Medical AI providers will harvest the riches of data troves for drug recommendations and advertisements, and some might try to push the regulatory and ethical boundaries established by the Food and Drug Administration or other entities. Those wounds will cut deeply, because compromised medical information is harder to recover from than compromised financial information, but we have a base of prior experience in enforcing data portability, privacy, and insurance regulations. And, as with Facebook and Twitter today, public awareness will rise, popular backlash will increase, and enforcement agencies will start to address the violations.

  CONSTANT BECOMING OR DIGITAL REWINDING?

  As human beings we are notoriously biased, too often unaware of our intellectual and emotional blind spots. A well-crafted artificial intelligence, even with its own shortcomings, could help us make richer, more-objective decisions that improve our lives and communities. Such a system might provide an alternative option for your work commute, offering a plan that balances a sharp reduction in your carbon footprint with enough convenience that you don’t quickly abandon the new travel plan. A look at the divorce rates in most industrialized countries might lead one to believe that a few objective, analytical pointers about partner selection might not hurt. Teachers could use thinking machines to craft more effective curricula tailored for students with different learning profiles that update in real time. American AI experts already are working on systems that can help us avoid food shortages and famines by integrating changes in factors like weather, soil, infrastructure, and markets into complex models to mitigate scarcity.

  Beaconforce may not be solving world hunger, but it has found a way to help alleviate something most of us deal with on a regular basis: the types of workplace stress that keep us from performing at our best. The company’s system tracks clients’ workers along seven pillars that contribute to “intrinsic motivation,” the type of drive we feel when we’re immersed in an engaging and rewarding activity. These pillars, which include feedback, social interaction, and a sense of control, feed into a worker’s ability to stay “in the flow,” as CEO Luca Rosetti describes it. When the balance between a worker’s abilities and the challenges they face tips too far, the Beaconforce dashboard can alert managers.

  It does this with an AI-powered analysis of worker sentiment and certain vital signs measured by a Fitbit, Apple Watch, or similar wearable device. The program asks a client’s employees a couple of quick questions each day on their smartphones, then correlates those answers with information about their current work environment and their heart rate. Managers can’t see their workers’ individual answers or heart rates, but they can see the Beaconforce dashboard, which signals when a worker is starting to feel out of sorts about a project, coworker, or environment.
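
  A crude sketch of that kind of correlation might look like the following; the formula, weights, and thresholds are invented for illustration and are not Beaconforce’s actual model, which the company describes only as an AI-powered analysis of sentiment and vital signs.

```python
def stress_signal(survey_score, resting_hr, observed_hr, hr_weight=0.5):
    """Blend a 1-5 self-report (5 = fully in the flow) with how far the day's
    average heart rate sits above the person's resting rate.
    Returns a 0-1 score; higher means more likely out of sorts."""
    survey_component = (5 - survey_score) / 4  # 0 when flourishing, 1 when struggling
    hr_component = max(0.0, min(1.0, (observed_hr - resting_hr) / resting_hr))
    return (1 - hr_weight) * survey_component + hr_weight * hr_component

# Hypothetical reading: lukewarm survey answers plus an elevated heart rate.
score = stress_signal(survey_score=2, resting_hr=60, observed_hr=78)
print(f"dashboard flag: {'elevated' if score > 0.5 else 'normal'} ({score:.2f})")
```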

  Rosetti shared four testimonials, including a story from a partner at one of the Big Four accounting and consulting firms. The partner noticed that three of his employees had shifted suddenly into the stress range of the dashboard Beaconforce provides. He didn’t pay much attention at first, because moments of stress are commonplace in their line of work. But then a human resources officer came in and said one of the consultants had suffered an anxiety attack that day, breaking down and crying in a meeting but refusing to say why. The partner immediately guessed who it was and started to investigate.

  The Beaconforce platform showed when all three workers’ readings initially started to deteriorate, and the timing aligned with their assignment to the same project leader. It turned out the project leader was consulting for another company and had pressured all three to join him, threatening to make life miserable on their current project if they refused. So, the partner swapped out the project leader and immediately saw the workers’ scores recover. The partner even managed to retain the project leader, who turned out to be extremely talented, according to the case study Rosetti provided.

  That sort of AI deployment can help facilitate greater achievement if we design it well, but that blade cuts both ways. Cognitive computing can increase or decrease our freedom of choice, but the risk of the latter increases with the large-scale collection and manipulation of personal data. We risk tipping the parity between what we know and what the machine knows about us. Artificial intelligence might enhance our abilities, but without a basic parity of awareness between individuals and the entities that control our data it might also limit our fullest potential as we sacrifice our own self-determination. Similarly, replacing human-curated judgment with machine-curated judgment might broaden or narrow our field of vision, and it might reduce or expand our social and economic choices—often without our knowing which way and by how much. Taken individually, the nudges of mercantile and political interests might have little consequence. Collectively, they can transform our lives in powerful ways, like Cambridge Analytica’s deployment of targeted messages to sway millions of US voters in 2016.

  The mere push toward a more equitable balance of awareness will help expose many of the hard-to-define tipping points between the beneficial and manipulative uses of AI systems. This effort should begin with a reset of data transparency and control, allowing each person access to the information collected on them and the ability to expunge it or port it to, say, a new job or health care provider. We might develop a new structure for opt-in agreements that includes temporary opt-out rights, giving users a chance to step back out of the bubble. Workers might have an option to pause productivity nudges, for example, or someone with high cholesterol might stop their alerts and enjoy a nice steak on their birthday. Measurements and data insights, in and of themselves, might not manipulate, abuse, or intrude. But if we ultimately hope to empower each person to live a more potent, productive, and fulfilling life, they must retain the agency to decide how much is enough, without the possible alienation that might come with opting out of a service.

 
