Solomon's Code

by Olaf Groth


  A couple of days later, we met a second specialist at a small Catholic clinic, and we immediately felt his desire to save both Ann and the baby. He told us about a small body of evidence showing that some pregnant women had undergone chemotherapy without harm to the fetus. That renewed our hopes, which rose again after a successful mastectomy and an initial pathology test that indicated a hormone-driven cancer. That can be an awful diagnosis, but in this case it meant Ann might not need chemotherapy at all. So, together with the doctor, we laid out our plan: She would deliver the baby about a month early and then start a hormone therapy that would combat the cancer but also put her into temporary menopause. Our daughter, Hannah, was born on August 17, 2005.

  Looking back on it now, it’s hard to imagine Watson or any other artificial intelligence suggesting the path we ultimately took. Our radiologist was widely known in Berlin for his ability to find a needle in a haystack, but machines today have far surpassed human ability to detect certain anomalies in radiological images. Had IBM Watson or a similar AI platform existed then, our first doctor might have shown us the array of options it would’ve weighed. He might’ve shown us less-certain options to persuade us to follow his medical advice, but doing so might have given us more information and greater hope about alternative paths, as well. If nothing else, we would have had better questions and responses after his prognosis threw us for a loop.

  On the other hand, if our second doctor had supplemented his advice with a statistical analysis produced by a reliable AI, would we still have decided to go with our hearts and take the riskier course we chose? Even after choosing our direction, an AI might have given the doctor and us more resources to help along the way. Ann’s predisposition, her academic background, and her ability to conduct deep research helped turn up several new tests and therapies our doctor had not encountered before. To his credit, he was happy to acknowledge the limits of his considerable expertise and embraced some of those new approaches.

  Many of the things that happened outside the objective, analytical framework of modern medicine ended up making a huge difference in Ann’s recovery. She reframed the disease to make it seem winnable, visualizing the cancer as misbehaving cells being overtaken by white blood cells. She thrived on the prayers and support she received from friends around the world. And her instincts as a mother to fiercely protect her child’s well-being strengthened her. Can artificial intelligence ever capture these inherently human elements and motivations, especially when they lead to statistical outliers such as Ann’s battle against cancer? Despite the scientific notion that objectivity and truth lie in the data, an AI can offer no guarantee that any treatment will work. Its recommendations are based on past results, and it can predict the future only through statistical generalizations. Sometimes the gut reaction can be the better one.

  For a while after Hannah’s birth, Ann still pored over the various probabilities, trying to find ways to nudge the numbers in her favor. Despite the certainty of her decision, she couldn’t help but occasionally wonder if she’d made a fatal mistake by avoiding chemotherapy. Yet in the end, we once again decided to go against the advice of many experts, who recommended a five-year course of hormone therapy, and instead relied on trial-based studies and other alternative research conducted by a doctor who blended traditional Chinese therapies with modern Western medicine. Armed with his expertise and the knowledge that the hormones produced during a pregnancy typically prevent breast cancers, Ann decided to stop her hormone therapy so we could have another baby. Hannah’s little sister, Fiona, was born on October 24, 2008.

  More than thirteen years after the initial diagnosis, Ann remains cancer free. Her cancer could return; no one knows. But the same risk would remain if she’d chosen a path more closely aligned with the standard treatment protocols. Alternative paths don’t always work, and most standard treatments have become standards because they work as well as or better than other options. But her experience illustrates the sheer breadth of people’s decisions and the possibilities that result, and it shows just how difficult it is to capture the depth of human complexity in a machine. Ann was willing to take a risk and thus became proof positive of a different outcome. An AI-powered platform probably would have been more risk averse—better a bird in hand, a probable way to save a life, than an unproven alternative that a rational machine would find hard to quantify and support statistically.

  THE MACHINE ELEMENT

  In 2016, a panel of doctors at Manipal Comprehensive Cancer Center in India conducted an experiment to compare their cancer treatment plans with recommendations provided by an AI machine. By then, IBM had launched partnerships with dozens of cancer treatment centers around the world—most notably Memorial Sloan Kettering in New York City—feeding their patient data and reams of medical studies, journals, and research into Watson in hopes of teaching it to learn about, diagnose, and recommend remedies for cancers. The specialists in India, who were part of IBM’s global Watson for Oncology network, wanted to see how often the machine would match the decisions of its tumor board, a group of twelve to fifteen oncologists who gathered weekly to review cases.

  In a double-blind study of 638 breast cancer files, Watson proposed a treatment similar to the panel’s recommendations 90 percent of the time, according to a paper released in December 2016 at the San Antonio Breast Cancer Symposium. The match rate dropped for more complicated cancers, including one similar to Ann’s, but the researchers noted that those types of cases open up many more treatment options, so disagreement on those was more common even among human doctors. What stood out, though, was the speed with which Watson generated its conclusions. Once the system had been trained on the relevant case files and supporting research, it could capture patient data, analyze it, and return a therapeutic recommendation in a median of forty seconds. The human panel took an average of twelve minutes per case.

  Watson is by no means a panacea. A September 2017 report by STAT, a leading life sciences news site, questioned its ability to truly transform cancer care, at least in its current incarnation.‡ But despite the occasional press release or bold prediction, neither Watson’s developers at IBM nor the physicians who partner on this research claim that AI will replace physicians and their expertise. Rather, AI serves as a useful complement, a system that might learn from stacks of cancer research with the goal of helping doctors make better decisions. This notion that AI will augment, rather than replace, humans has become a common mantra among proponents of artificial intelligence, and it will hold true for the foreseeable future. While machines have reached or exceeded human abilities in certain diagnostic tasks, such as combing through mountains of medical reports or identifying abnormalities in radiological scans, these systems cannot yet render a trustworthy diagnosis or do so with the empathy required in a patient-doctor relationship. But it’s not hard to imagine the potential in a combined set of systems that better detect anomalies, deliver a concise summary of global research on the ailment, and then put both in the hands of the doctors who make the diagnosis and help the patient make informed decisions about their options.

  A robust AI or a series of such systems will provide a rich source of information to help both doctors and patients decide on the best approach. As patients, most of us still need a deeply human interaction when discussing something as vital and intimate as our health. And, for now, few people put as much faith in machines as they do in doctors—for good reason. Neither will change any time soon. But for those of us inclined toward the benefits of science and technology, artificial intelligence wields an intriguing power in those spaces beyond human expertise and ability. From this perspective, the Indian research report might offer fresh evidence for how a powerful AI begins to supplement and even replace human judgment, which is fraught with its own limits, errors, and biases. Watson might consume millions of patient records, millions more pages of journals and research studies, and integrate the efficacy of treatment options in virtually every case. And it could learn from that mountain of information, improve and fine-tune its recommendations, and render them objectively.

  Regardless of the details of its direction, no expert doubts that artificial intelligence will reshape the entire health-care industry—from pharmaceuticals, to payments and cost controls, to the doctor-patient relationship itself. AliveCor, for example, has created a device about the size of a stick of gum that can measure a person’s electrocardiogram (EKG) and other vital signs and then send the data for a doctor’s review. With it, users at risk of heart problems can check their EKG daily, rather than testing at a doctor’s office only periodically. And since the information gathered goes back into an ever-growing database, AliveCor’s machine-learning engines are trained and retrained to identify subtle heart-rate patterns within the noise of an EKG. These little quirks that human eyes would never notice might signal urgent problems with potassium levels, irregular heartbeats, and a range of other cardiac and health issues. And all that monitoring could eventually fit in the band of your wristwatch, available any time with the touch of a finger.

  These and similar advances portend extraordinary gains in health monitoring, diagnoses, and therapies but, like so many facets of medicine, they come with side effects. Big data does not mean big insights; Watson’s recommendations on cancer care are only as good as the existing data about survival rates, cancer mutations, and treatments. New discoveries can radically change the diagnosis and treatment of cancers and other illnesses. An AI system might offer a predictive element for personal and community health, but any prediction ultimately relies on the quality of the data and algorithms that feed into it, and nothing is perfect. And, as Ann’s story suggests, any number of personal and human preferences can influence care for better or worse.

  Furthermore, important ethical considerations arise as machines become more perceptive and gain broader knowledge of human biology, diseases, and symptoms. The Face2Gene application has made some impressive advances in disease detection by comparing patients’ faces with the facial patterns associated with various ailments. It still relies on a physician and, in some cases, other tests to confirm a diagnosis, but in one validation study it predicted autism spectrum disorder in roughly 380 of 444 toddlers.§ But what’s to stop an insurance company from scanning facial images to identify potentially costly customers? Could employers begin requiring facial photos to weed out less healthy applicants, or could they start doing so surreptitiously with applicants’ Facebook photos? Could immigration officers scan travelers for their propensity to carry certain diseases? How does this affect our power to decide on medical treatments and life paths? And will we trust the medical establishment, insurers, and employers not to use the information inappropriately?

  These balances between tremendous opportunities and acute risks stretch well beyond health care. Some employers have started implementing AI to analyze 15-minute samples of prospective employees’ voices, scanning each one for indicators of a more collaborative worker or a better leader. Other corporations now use AI to integrate disparate data streams from different parts of the organization to figure out who has been a good hire, who needs to receive further training, and who should be fired. Workers get in on the act, too, demanding more purpose and creativity in the workplace and forcing employers to think more about how they will engage and stimulate a new generation of employees.

  Artificial intelligence and its cousins are transforming entire industries, as well. Cars learn to drive themselves, and robots refine their ability to manufacture goods or to react to the emotions of the person sitting next to them. Facebook and Baidu are developing increasingly sophisticated AI applications that feed customized news and commercial advertisements to users, hoping for greater customer satisfaction and spending but also blurring the lines between personalization and manipulation. Both platforms have been criticized for creating homogeneous “bubbles” of like-minded people, but the sites remain extraordinarily popular.

  For better and for worse, AI innovation has changed the way most of us live and work, and it will continue to do so in the years to come. But to understand what that means for the future, we first need to understand the present state of the art.

  AI TODAY

  A colloquial definition for artificial intelligence is simple enough—advanced technologies that mimic human cognitive and physical function—yet what qualifies as AI seems to change with every major breakthrough. In the broadest sense, artificial intelligence is the capacity of machines to learn, reason, plan, and perceive: the primary traits we identify with human cognition (but, notably, not with consciousness or conscience). AI systems not only process data; they learn from it and become smarter as they go, and their ability to adopt and refine newly developed skills has improved markedly since the turn of the century. Accuracy improvements in image recognition, natural language processing, and other pursuits have accelerated from a crawl to a sprint. New neural networks, the computational layers of which mimic the interconnections of neurons in a human brain, now process massive troves of data with vastly increased processing power, all combining to usher in another era of AI investment. Investors poured almost $1.3 billion into machine learning in 2016, and that figure is estimated to reach almost $40 billion by 2025, according to a report by Research and Markets.¶
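  For readers who want a concrete picture of what those “computational layers” are, the sketch below is a purely illustrative toy example rather than anything drawn from this book or from a production system: a tiny neural network, written in plain Python with the NumPy library, that adjusts its internal weights again and again until it has learned a simple function (XOR) from four examples. Every name, size, and number in it is an arbitrary choice made for demonstration.

    import numpy as np

    # Toy network: 2 inputs -> 8 hidden units -> 1 output, trained on XOR.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # four example inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # their target outputs

    W1 = rng.normal(0.0, 1.0, (2, 8))   # weights of the first "computational layer"
    b1 = np.zeros((1, 8))
    W2 = rng.normal(0.0, 1.0, (8, 1))   # weights of the second layer
    b2 = np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    learning_rate = 1.0
    for step in range(10000):
        # Forward pass: each layer transforms the output of the layer before it.
        hidden = sigmoid(X @ W1 + b1)
        output = sigmoid(hidden @ W2 + b2)

        # Backward pass: measure the error and nudge every weight to reduce it.
        error = output - y
        grad_out = error * output * (1 - output)
        grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
        W2 -= learning_rate * hidden.T @ grad_out
        b2 -= learning_rate * grad_out.sum(axis=0, keepdims=True)
        W1 -= learning_rate * X.T @ grad_hid
        b1 -= learning_rate * grad_hid.sum(axis=0, keepdims=True)

    # After training, the network has "learned" XOR from the four examples alone.
    print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))

  The point is not the arithmetic but the pattern: the program is never told the rule; it infers the rule by repeatedly comparing its own guesses against the data, which is the basic mechanic behind the far larger systems described in this chapter.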

  We’ve been here before, albeit not at the same scale. By most accounts, AI’s seminal moment came in 1956 with the Dartmouth Summer Research Project on Artificial Intelligence organized by John McCarthy. “An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves,” the workshop proposal read. “We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.” The gathering eventually set off a burst of investment, research, and hype—the first AI bloom. However, by the early 1970s an “AI winter” set in as the hype dissipated and funding dried up. As it happened, cognitive machines could not aid the Cold War effort by automatically translating between English and Russian. The concept of “connectionism,” which aims to represent human mental phenomena in artificial neural networks, failed to capture knowledge in a universally accessible manner. The following decade, however, brought the rise of “expert systems,” computers that used a body of knowledge and a set of if-then rules to mimic the decision-making ability of a human expert. But this spring, too, gave way to another AI winter, as expert systems proved too brittle and difficult to maintain. While important research continued, investment and interest in AI languished through much of the 1990s.

  Today, though, AI has deeply ingrained itself in our everyday lives, even if we don’t always identify it as such. Advanced learning algorithms already power many of the massive, foundational activities that dictate our behavior, from the newsfeeds we see on Facebook to the results Google returns on our search queries. AI powers the navigation apps on our phones. It recommends products on Amazon. It helps translate foreign languages quickly, accurately, and in increasingly natural language.

  The sharp rise in computing power, memory, and data availability laid the groundwork for the current revival. By early 2017, advanced algorithms could process a few minutes of a speaker’s voice and then create a fabricated audio clip that sounded almost exactly like that person. It wasn’t hard to imagine that complete, untraceable video manipulation would arrive before long. Google applied deep-learning techniques from its DeepMind unit to slash 40 percent off the cost of cooling its massive data centers. The data centers generate enormous amounts of heat and, because their configurations and conditions vary, each one needs a system that can learn, customize, and optimize cooling for its own environment. Some of the same techniques the researchers discovered during the development of AlphaGo, the AI system that defeated world Go champion Lee Sedol in 2016, have since improved the data centers’ energy efficiency by 15 percent, saving millions of dollars annually. If those techniques can scale and work with large industrial systems, the DeepMind division noted, “there’s real potential for significant global environmental and cost benefits.”#

  Machines that continually learn, improve themselves, and optimize toward their goal have become more capable than humans at certain tasks, such as identifying skin cancer from photos and lip-reading.**†† As remarkable as these advances have become, though, they remain distinctly limited to the function at hand. The notable gains have come only in “narrow AI.” While a machine can beat the world’s greatest grandmaster at chess, the same system can’t distinguish between a horse and the armor-clad knight who’s riding it. In fact, it’s largely because these advances occur in such narrowly defined pursuits that we get what’s often called the “AI effect”—abilities we once thought of as artificial intelligence are now considered nothing more than simple data processing, not “intelligence,” per se. We move the goal posts, and then we move them again, and soon enough we’re on an entirely different playing field.

  Those lines will stop moving when a machine develops “artificial general intelligence,” the point at which machines, like humans, display intelligence across an array of fields and take over the job of successively improving their own code. Several major technological breakthroughs will have to occur before AI reaches this point, yet the possibility of artificial general intelligence and what it could produce both fascinate and scare people. The idea conjures up visions from science-fiction movies, where super-intelligent robot overlords enslave humans or, in a thought experiment described by Nick Bostrom in 2003, run roughshod over everyone and everything in a single-minded effort to make more and more paper clips.

 
