Solomon's Code


by Olaf Groth


  Of course, the subtlety and depth of One2Tribe’s influence on worker behavior naturally spark concerns about manipulation. The platform worked poorly before Ozimek and his colleagues realized it had to be a voluntary option for workers. But even on a voluntary basis, safeguards are needed to ensure that companies don’t deploy similar AI-powered systems without checks and balances. Our future need not include extrinsic incentives and the gamification of rewards that play us like organs—or worse, treat us like machines that produce without fulfillment and purpose. Individuals, society, and the planet need us to think about what’s right from the inside out, not just the outside in.

  Yet we also need to acknowledge that rank-and-file workers will play a key role in creating safeguards. A company that tries to engineer its employees’ mindsets will not create a desirable workplace that draws the best talent. It’s not for nothing that Glassdoor has become a go-to for workers to rate and review workplaces, or that businesses tout their rankings on Fortune’s 100 Best Companies to Work For. But we undoubtedly will experience a lot of push and pull on worker-related AI systems as we calibrate the different types of employee stimulation. Companies will risk crossing the line at certain points, whether purposely or inadvertently.

  Responsible organizations will want to track performance and identify areas of potential abuse. Progressive businesses will want to make that as transparent as possible. This might begin with a deliberate cooperation between labor and employer, collectively creating the rules for systems that influence worker behavior. It might resemble the existing joint efforts of labor and business in Germany, where both sides are working together to guide the deployment of robotics and worker training programs. It might eventually include certification of worker-related AI platforms by a professional group, such as IEEE, or internal and industry-specific labor review boards that can audit such systems. Regardless, companies and governments need to take into account morality and professional codes of conduct to mitigate bottom-line myopia, especially in economies driven by short-term quarterly results and the stock option packages tied to them. Maybe then, as we prepare for this unfolding cognitive revolution, we can assess whether we should include stakeholder ethics in calculations of performance-based pay.

  HUMAN GROWTH IN HEALTH CARE

  For many people, the subtle nudging of our motivation and mindset might seem just a little too intimate, even when only offered in limited settings and on an opt-in basis. The use of AI in our physical and mental health care might feel even more invasive, one of the reasons human doctors have remained central in most of the health-related systems to date. The power of AI in health care lies in the fact that cognitive machines can process myriad data streams and recognize complex patterns much faster and with greater accuracy than human brains can. Image recognition systems now surpass expert human performance on many radiological tests. IBM Watson’s ability to process reams of cancer-research literature in a week, and then learn virtually all the potential therapeutic techniques the next week, lies far beyond any collective human capability. Putting those sorts of cognitive machines into symbiotic partnership with trained doctors and other AI-powered systems—cognitive networks that understand the vast multitude of pharmaceutical treatments and their side effects, for example—could advance medical care and human well-being to unprecedented levels.

  Yet, all of this still maintains the central role of human doctors, nurses, and medical lab professionals, who combine scientific meaning, socioemotional structures, and the mindsets of patients into a harmonious and effective delivery of care. “We’ll still need radiologists to explain things, to take those findings and explain them,” says Clay Johnston, dean of the Dell Medical School at the University of Texas at Austin. “But the vast majority of hours spent by radiologists today will be taken over by machines.” Similarly, smart machines might eventually recognize emotional states by way of facial or voice recognition algorithms, notes Jonathan Gratch, a computer science professor at USC, but accurately interpreting human goals and agendas is harder than many researchers realize. Most current approaches assume that recognizing surface-level expressions of emotion, such as vocal tone or facial expression, will be sufficient for understanding a subject’s mental state. However, Gratch says, systems will need to interpret those surface cues in context, because people often mask or misrepresent their expressions. If a poker player smiles, it doesn’t say much about his or her mental state. If that smile appears after a new card arrives, it might suggest a lot more about how he or she is thinking—or it all might be part of an elaborate bluff. It’s a messy and complex problem.

  Nor can machines truly experience empathy, perhaps one of the most critical dynamics for a successful doctor-patient relationship. An AI might simulate empathy, and sometimes that’s enough to stimulate more frank and open responses from patients, particularly in cases of mental health. But ultimately, doctor-patient relationships rely on a reciprocal trust that patients will honestly explain their ailments and that doctors will maintain the highest practical threshold of care in diagnosis and treatment. That mutual trust embodies the shared experience of millions of years of human evolution. A good doctor knows just how keenly pain, ignorance, and embarrassment might warp a patient’s recitation of symptoms. Most patients rely on the knowledge that physicians share those human foibles, understand them, and know how to dig beyond them to find the core problem at hand. That common human experience allows a deeper person-to-person understanding that a narrow AI—with its findings based on patterns across groups—can’t share.

  Yet, by identifying far more complex or subtle patterns across groups, AI systems can find problems human doctors, radiologists, and other health care professionals can’t spot. Integrating that penetrating objective analysis with human empathy can generate far deeper insights into our health and well-being. But that combination will not happen until AI systems are accepted and put into use by medical professionals. That’s no easy task. First, there are so many human variables that play into our health-related decisions. An adult might decide to just treat their moderate fever at home, because they know how to manage it, but a first-time parent might reasonably run to the ER when their child’s temperature rises. “There’s an infinite number of ways that a finding can be misinterpreted by an individual,” Johnston says. “I think it will take a long time, or a longer time, for computers to grapple with all the nuances of the human interactions with the facts.”

  That might also include the biases and misinterpretations that doctors bring, being humans themselves. Placed into the workflow of a clinical setting, an advanced AI system might notice anomalies in how a physician approaches different patients—anything from a different conception of patients’ pain tolerances to a different expectation for their adherence to a treatment plan. If such a platform was available when Ann and I (Olaf) met with the first cancer specialist in Berlin, it might have recognized that his advice to terminate her pregnancy was based on his experience with losing his own wife to breast cancer. That background might have convinced him of the need to act decisively and quickly, rather than considering a riskier alternative approach.

  Such a platform will take years of additional innovation, but even today’s emerging AI technologies could take years before doctors fully integrate them into their daily workflows. It might seem simple enough, but in complex and heavily regulated health care environments, doctors have shown extreme reluctance to work provably better technologies into common use. Johnston still recalls when oxygen saturation devices came out—simple devices that fit on the end of a finger and measure oxygen levels in the blood. At the time, many physicians said it wasn’t enough. They said doctors needed full blood-gas workups, which you could only do once every day or two, to properly track gases. Yet, the oxygen monitors have had a dramatic impact on saving patients in the years since they were introduced.

  The reluctance gets even more mundane, Johnston says. Something as simple as email remains radically underutilized for patient-doctor communications. That will only begin to change when nonuse hits physicians in the wallet, or when an advanced technology fits into their workflows, rather than requiring that they adapt to it. For all the work on health-care AI systems today, precious little work is being done to make sure these systems work for the physicians who would use them. Johnston and his colleagues at the Dell Medical School have piloted a language-processing system that allows doctors to talk with patients and then not only converts the speech to text but also properly populates the information into standard insurance and clinic forms.

  Other companies have taken similar approaches to making the machine fit the human, including an Israeli start-up called Aidoc (pronounced “aid doc,” despite the unintentional play on words, says CEO Elad Walach). The company has jumped into the rapidly transforming radiological imaging space, but two factors set it apart from competitors, Walach says. First, its approach is comprehensive, identifying a wide range of abnormalities rather than analyzing scans for a single or small set of ailments. Those focused approaches work well, but they don’t allow for Aidoc’s second advantage—that its results are easily integrated into a radiologist’s or physician’s day-to-day activities. Rather than different tools to identify different ailments, this one system red flags a wide range of problems, so it’s easier to integrate into the day-to-day workflow, Walach explains. A decade from now, the health care environment will be far different, and perhaps then doctors in Israel, Europe, and North America will readily adapt to different AI and other advanced technologies. “But now, to penetrate and get traction in the market,” he says, “we have to respect the place of physician and give him added value in his work.”

  THE CYBER BLUES

  We have seen unprecedented waves of cybercrime and online terrorism in recent years. From the Stuxnet worm, widely attributed to the US National Security Agency and Israeli intelligence, which sabotaged Iran’s uranium-enrichment program by driving its centrifuges to destructive speeds, to the theft of millions of personal records at Target and Equifax, these incidents have become an almost commonplace occurrence in our lives. As we brace ourselves for a big, infrastructure-disabling breach of our digital economy, we seek security in an opaque and often shady arms race with illicit individuals and organizations around the world.

  To be sure, the concept of AI safety involves several related pursuits. But while the cybersecurity experts who work on the front lines of this battle believe we can limit the damage and try to reduce the ripple effects, hackers have always had the upper hand and always will. Networks have so many potential access points. The hacker only needs to find one weakness one time; a company or individual has to protect all those points of entry every second of the day. AI-powered cybersecurity applications might provide faster reactions and better coverage, but they are by no means comprehensively effective, experts say. And, meanwhile, nefarious actors will be developing and enhancing their own AI-powered hacks.

  “We realized very early on that the industry as a whole was thinking about preventing attacks from happening,” says Yossi Naar, cofounder of Cybereason, an Israeli cybersecurity start-up. “We realized experienced attackers can always get in, and the trick is finding them inside the environment.” Cybereason patrols the “endpoint” of the network, the outer edges of human-computer interfaces where attacks enter. By watching those edges and using machine learning to analyze fine usage patterns down to an individual and device level, Cybereason can identify anomalies that don’t fit within a chain of events or typical patterns of behavior and more quickly react to them. Watching the far edges of the network provides the most comprehensive data set on usage, but the massive wave of information this produced made such monitoring impractical before cheaper, more-powerful computing emerged to help machine learning systems process the torrent.
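  The underlying pattern is simple to sketch, even though Cybereason’s production models are proprietary and far more sophisticated. The toy example below trains an off-the-shelf anomaly detector on one device’s past behavior and flags new events that fall outside it; every feature name and threshold is invented for illustration.

```python
# Illustrative only, not Cybereason's algorithm: a per-device anomaly detector
# trained on hypothetical endpoint features, using scikit-learn.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# 200 events of one device's normal behavior (all feature names invented):
# [processes spawned/min, KB sent/min, distinct destinations, off-hours flag]
baseline_events = np.column_stack([
    rng.normal(13, 2, 200),
    rng.normal(220, 30, 200),
    rng.normal(4, 1, 200),
    np.zeros(200),
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline_events)  # learn what "normal" looks like for this device

new_events = np.array([
    [13, 220, 4, 0],      # resembles the baseline
    [90, 48000, 60, 1],   # bulk traffic to many destinations, off hours
])
for event, label in zip(new_events, detector.predict(new_events)):
    if label == -1:       # -1 means the event falls outside the learned baseline
        print("anomaly flagged for analyst review:", event.tolist())
```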

  At the Silicon Valley start-up DataVisor, cofounder and CTO Fang Yu doesn’t really expect a bigger or better wall to keep hackers out. But by creating an unsupervised learning system that analyzes the millions of legitimate transactions conducted across a client’s network—and then using that knowledge to identify oddities, including previously unseen types of attacks—the company’s technology can stop more of the bad guys, Yu explains. Other systems require training data or labels to teach them what to watch for, and then they look only for those things. Hackers will launch “massive attacks all of a sudden because they can hit many accounts at once,” she says. “An unsupervised algorithm is able to detect the new pattern forming and say this group of accounts is very similar in terms of behavior and it’s very different from normal users.”

  Hackers can mimic one account or a small group of them, but the large-scale fraud that can take down a company creates its own pattern. Identifying those without prior training allows DataVisor to react more quickly and eliminate the false positives that can run up costs and decrease the effectiveness of cybersecurity measures. In one case study, the firm’s system helped increase detection of account takeovers at one of the world’s largest online payments platforms by 45 percent. False positives dropped to 0.7 percent.
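  As a purely illustrative sketch of that unsupervised idea (not DataVisor’s actual algorithm, whose details are not public), a density-based clustering pass can surface a block of accounts that behave almost identically to one another yet nothing like ordinary users; every feature below is invented.

```python
# Sketch only: dense clusters of near-identical accounts stand out against
# ordinary users, with no labeled attack data needed.
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical per-account features:
# [signups per hour from same IP block, profile similarity score, avg seconds between actions]
normal_users = [[0.1, 0.2, 300], [0.2, 0.1, 420], [0.1, 0.3, 250], [0.3, 0.2, 500]]
fraud_ring = [[9.0, 0.95, 2.0]] * 200   # a coordinated wave of look-alike accounts
accounts = np.array(normal_users + fraud_ring)

labels = DBSCAN(eps=0.5, min_samples=20).fit_predict(accounts)

# Legitimate users rarely form tight behavioral clones, so any cluster dense
# enough to satisfy min_samples is worth an analyst's attention.
for cluster_id in set(labels) - {-1}:
    members = np.where(labels == cluster_id)[0]
    print(f"possible coordinated group of {len(members)} accounts")
```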

  Still, the sad truth of the matter remains that hackers have so many options to get in, both digital and analog. Simple human deviancy and disenchantment will work just fine. Given the asymmetry between potential attack avenues and the difficulty of defending them, a much larger crash is almost inevitable, says Ivan Novikov, CEO of Wallarm, a San Francisco-based cybersecurity firm. Companies typically take two approaches to detecting attacks. Traditionally, they would hire security analysts to analyze samples of malware or malicious traffic and create “signatures” based on those examples. The signature might include a unique section of code or combination of elements that identify it as toxic. When new attacks arrive, the process begins anew.

  Wallarm uses neural networks to do something similar, but it creates statistical profiles instead of signatures, and it can develop and deploy them in real time, Novikov says. That provides greater security, but given the unending and largely unwinnable battle against malicious hackers, he also has resigned himself to the fact that we’ll see far worse attacks in the future. “I can’t predict when and how,” he says, “but I expect to see a global Internet shutdown in the next five to ten years. People already tried last year with botnets. So yeah, we’ll see a global Internet shutdown with a significant amount of Internet service being unavailable to a significant number of users.”
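  The contrast between the two approaches can be reduced to a toy example. The sketch below pits a handful of hand-written signatures against a crude statistical profile of what normal requests look like; Wallarm’s production models are neural networks, which this simple baseline only gestures at, and the example requests are invented.

```python
# Toy contrast: signature matching vs. a learned statistical profile.
import math

SIGNATURES = ["' OR 1=1", "<script>", "../../etc/passwd"]  # hand-written rules

def signature_match(request: str) -> bool:
    """Traditional detection: flag only payloads an analyst has already codified."""
    return any(sig in request for sig in SIGNATURES)

class StatisticalProfile:
    """Learn what normal requests look like (length, symbol density) and flag
    requests that deviate sharply, even when no signature exists yet."""

    def __init__(self, normal_requests):
        feats = [self._features(r) for r in normal_requests]
        n = len(feats)
        self.mean = [sum(f[i] for f in feats) / n for i in range(2)]
        self.std = [
            math.sqrt(sum((f[i] - self.mean[i]) ** 2 for f in feats) / n) or 1.0
            for i in range(2)
        ]

    @staticmethod
    def _features(request):
        symbols = sum(not c.isalnum() and not c.isspace() for c in request)
        return [len(request), symbols / max(len(request), 1)]

    def is_suspicious(self, request, z_threshold=3.0):
        f = self._features(request)
        return any(
            abs(f[i] - self.mean[i]) / self.std[i] > z_threshold for i in range(2)
        )

profile = StatisticalProfile(["GET /home", "GET /search?q=shoes", "POST /login"])
novel_attack = "GET /item?id=1;DROP TABLE users;--%00%00%00%00"
print(signature_match(novel_attack))        # False: no known signature matches
print(profile.is_suspicious(novel_attack))  # True: far outside the learned profile
```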

  Extend that sort of disruption to the interconnected infrastructure, and serious crises could emerge. A blacked-out power grid doesn’t come back up with the flip of a switch. A widespread outage that stretches into a week or more threatens hospital care, food availability, security, heat and air conditioning, and the maintenance of so many other critical systems. Air traffic grinds to a complete halt, and road traffic descends into gridlock. But while nobody is served by minimizing the dangers, we shouldn’t forget the saving graces available to us. DARPA’s initial iteration of the Internet established a naturally resilient communication infrastructure, one that could survive an attack in one part and automatically reroute traffic to another. Every computer connected to it can function as a node—if a million computers are infected, millions of others could pick up the slack.

  Researchers also have started developing defensive AIs that could neutralize malware as it enters our most important network nodes. That said, the threat to the Internet and infrastructure should prompt us, as individuals, to create redundancies in both the digital and institutional facets of our lives, spreading our assets and critical life functions across wider areas of our personal networks. Rather than staring into the abyss, we can prepare and make our own, personal infrastructures more resilient. Many people already do this, if for other reasons, for their own homes, adding solar panels and rainwater capture systems. We might need to build the same backstops for our digital lives, setting kill switches that trigger when the “Trojans” appear, and then rebuilding our bridges after we’ve run them off.

  For example, it might make sense for every smart home to have an “analog island mode” that switches off all its digital connections and seamlessly transitions the home to a safe operating mode without interrupting critical functions. This could safeguard the family’s life, protecting critical hardware such as sleep apnea machines, baby monitors, refrigerators, and home alarms. This type of safety switch might also protect the broader power grid, helping balance electricity load in the event of an attack and avoiding costly blackouts. Communication and cooperation between grid operators and the many homes, commercial spaces, and factories that rely on a secure electricity supply might help contain outages or more quickly recover from those that occur, whether sparked by overgrown vegetation or a nefarious attack.
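  No such standard exists yet, but the control logic might be as simple as a state machine that severs external links and sheds non-critical loads. The sketch below is purely hypothetical, and every device and link name in it is a placeholder.

```python
# Purely hypothetical sketch of an "analog island mode" controller; no such
# standard exists today, and all device and link names are placeholders.
CRITICAL_DEVICES = {"sleep_apnea_machine", "baby_monitor", "refrigerator", "home_alarm"}

class SmartHomeController:
    def __init__(self, devices, network_links):
        self.devices = dict(devices)              # device name -> powered on?
        self.network_links = dict(network_links)  # link name -> connected?
        self.island_mode = False

    def enter_island_mode(self):
        """Sever all external digital connections and shed non-critical loads,
        keeping life-critical hardware powered without interruption."""
        for link in self.network_links:
            self.network_links[link] = False      # drop Wi-Fi, cellular, cloud bridges
        for device in self.devices:
            self.devices[device] = self.devices[device] and device in CRITICAL_DEVICES
        self.island_mode = True

home = SmartHomeController(
    devices={"sleep_apnea_machine": True, "smart_speaker": True, "refrigerator": True},
    network_links={"wifi": True, "cloud_bridge": True},
)
home.enter_island_mode()   # e.g., triggered by an intrusion alert or a grid operator signal
print(home.network_links)  # {'wifi': False, 'cloud_bridge': False}
print(home.devices)        # only the critical devices remain powered
```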

  THE POWER OF CREATING A BETTER TOMORROW

  Israeli entrepreneur Yaron Segal spent the better part of the past decade searching for a better way to help his son, who suffers from familial dysautonomia—a debilitating syndrome that affects the types of nerve cells that control involuntary actions, such as digesting, breathing, and producing tears. As a father and a scientist, Segal felt compelled to discover the fundamental aspects of his son’s malady. That search took him to other brain injuries and neurological disorders that share some of the same basic properties as dysautonomia. And, having identified those commonalities, he began to figure out ways to decrease their effects. BrainQ, an Israel-based start-up, was born.

 
