Solomon's Code


by Olaf Groth


  Hundreds of millions of people around the world suffer from neural disorders, and the cost of care keeps escalating, typically in the form of physical therapy and living-assistance costs, says CEO Yotam Drechsler. The problem with many of the current treatments, Drechsler says, is that they treat the symptom, addressing a partial paralysis of an arm or leg instead of addressing the injury or limitation that stems from the brain or spine itself. Because the initial team included both traditional scientists and data scientists, they looked at the problem in a fresh way, combining cutting-edge machine learning methods and tools with scientific theory, particularly Hebbian theory, which holds that “cells that fire together, wire together.”

  The company put Hebbian theory to work in a novel way, hoping to re-create neuroconnectivity in damaged or limited portions of the brain. “Why don’t we do physical therapy directly to the brain?” Drechsler says. “We believe there are specific patterns in the brain that are associated with every movement. And once we identify them, our goal is to imitate them with a non-invasive device based on a low-electromagnetic field, and hopefully achieve this neuroplasticity. We learn to imitate the ‘firing’ patterns with advanced AI tools in order to facilitate the ‘wiring.’” While neuroscientists could understand how certain networks controlled certain bodily functions, measuring precise firing rates had been all but impossible with traditional technologies. BrainQ developed a machine learning algorithm that, it says, provides clarity on the disrupted frequencies. The company now claims to have one of the world’s largest sets of electroencephalography (EEG) motor-task data for brain-computer interface (BCI) applications. “The firing rate is not the new story,” Drechsler explains. “The problem is people were trying to learn it for years and did not find much because the signal-to-noise ratio is so low. The data is extremely noisy, so it requires sophisticated tools such as BrainQ’s to interpret it.”
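  Hebbian theory has a compact mathematical core that makes the “firing” and “wiring” language concrete: the connection between two neurons strengthens in proportion to how often they are active at the same time. The sketch below is a generic, minimal illustration of that classic update rule; it is not BrainQ’s proprietary method, and the function and variable names are hypothetical.

```python
import numpy as np

def hebbian_update(weights, pre, post, learning_rate=0.01):
    """Classic Hebbian rule: dw = eta * post * pre.

    Each connection weight grows in proportion to the product of the
    presynaptic and postsynaptic activity, so only connections between
    neurons that fire together get stronger.
    """
    return weights + learning_rate * np.outer(post, pre)

# Toy example: two input neurons driving three output neurons.
w = np.zeros((3, 2))               # connection weights, shape (post, pre)
pre = np.array([1.0, 0.0])         # only the first input neuron fires
post = np.array([1.0, 1.0, 0.0])   # two output neurons fire with it
w = hebbian_update(w, pre, post)
print(w)  # nonzero only where pre and post fired together
```

  Running this, only the weights linking simultaneously active input and output neurons grow: the “wiring” that follows the “firing,” which is the effect BrainQ hopes to encourage by externally re-creating the right firing patterns.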

  With those signals isolated, the company uses a coil, which looks like a salon hair dryer, to deliver a low-intensity signal at the right frequency, reigniting neural firing in hopes of re-creating neural wiring. The company has a video that shows a partially paralyzed rat that can’t move its back end and drags itself around by its front legs. After a month of treatment, tests showed a partial reconnection of the damaged tissue, and the rat moves its back legs, albeit slowly. By day fifty-eight, the rat is climbing up boxes and its back legs appear fully functional again. The treatment uses low-intensity energy, so it poses no known physical risk, and it can treat a variety of brain maladies. BrainQ has run some limited human trials in Israel, and those tests have shown promising results. And, as of this writing, the company was beginning to explore opportunities in the United States, too. Could this help treat the 250,000 to 500,000 people around the world who suffer spinal cord injuries each year?†††

  Developers, scientists, and researchers are using artificial intelligence to tackle some of humanity’s and the planet’s most confounding challenges. Some of them, like BrainQ, are using these powerful AI systems to engineer treatments heretofore impossible, discovering new avenues to solve known challenges. Others, like Yasuo Kuniyoshi, are seeking a deeper understanding of intelligence and learning itself. Kuniyoshi is the director of the Intelligent Systems and Informatics Laboratory at the University of Tokyo, where he and his colleagues experiment with complex neural networks and robotics in hopes of developing more capable AI agents—and they’re learning a little more about human development, as well.

  Kuniyoshi has developed a robot fetus, a complete robot and computer simulation of a developing human body and nervous system. The extremely precise model even floats in a liquid-filled “womb,” in which it wiggles and moves spontaneously. Through sensors and its neural network, the robotic fetus begins to learn about itself and its environment. So, if its limbs touch each other, he explains, the robot recognizes and learns about that physical relationship. “That, in return, changes the output,” he says. “As the baby learns it changes its behavior and changes its input, because the movement changes. If this continues on, we think we can call it a spontaneous development.”

  It’s still in the early stages of development and understanding. The current systems are still relatively small, with a few million neurons. But no one knows what level of complexity would trigger, say, self-reflection or similar human abilities. That would require massive amounts of compute power and further research, but Kuniyoshi is convinced the relationship between the neural and physical—the cognitive computer and the robot—is indispensable to understanding how and when those attributes develop. “I firmly believe that for the future of AI, human likeness is really important,” he says. “A humanlike mind is really grounded in a humanlike body and a humanlike environment, and the initial trajectory is very important.”

  Shreya Nallapati might not choose to follow Kuniyoshi into the depths of advanced AI research, but she is clearly on to something else that is important to our well-being. Before we spoke with her, she’d settled into her senior year of high school in Colorado, thinking about where to go to college and pursue her interests in computer science and cybersecurity. Then, on February 14, 2018, nineteen-year-old Nikolas Cruz killed seventeen students and staff members at Marjory Stoneman Douglas High School in Parkland, Florida. Like previous mass shootings, this one reignited the debate about gun control, but it also sparked a new and larger movement, one that was launched by the victims’ classmates but quickly spread to high-school students and young adults across the country. Inspired by Emma González, one of the faces of the emerging #NeverAgain movement, Nallapati founded #NeverAgainTech, an initiative that applies machine learning, natural language processing, and other AI technologies to identify the root causes of mass shootings and, hopefully, prevent them in the future.

  Supported by organizations such as Amy Poehler’s Smart Girls and Forbes, and by technological contributions from Silicon Valley companies, including Splunk, Nallapati and her collaborators began to identify various factors that contribute to mass shootings. They looked at a variety of data that fit into five primary categories: mental illness in the shooter; the shooter’s socioeconomic status and social/familial background; the motive; the firearm used; and any state policies on firearms. The goal of the all-girl team, Nallapati says, is to come up with an ironclad, evidence-backed argument for national or state policies, one that can overcome entrenched political views and bring together industry professionals, students, and policy makers. “At the end of the day,” she says, “we realize that although we cannot actively prevent the next shooting, we can do our best to identify potential hot spots for such activity and shed some light on a very complicated and emotional issue.”

  The mere mention of artificial intelligence can trigger anxiety about all the ways things could go wrong and about how these subtly powerful technologies could be used to harm humans. Yet, these AI systems are still tools, ones that can make the world a far better place if we proceed with care. We will need to strike a balance of power between individual agency, governments, and the world’s Digital Barons. We will need to shore up the existing institutions that provide the safe and secure flow of electricity, water, and data. And we will need to preserve trust and human-centric values.

  Segal, Drechsler, Kuniyoshi, Nallapati, and thousands of like-minded people around the world are tackling some of these challenges already. Whether each one ultimately succeeds in his or her quest might not matter to the world at large, but the care and passion embodied in their visions for AI will make all the difference for the future of humanity.

  *James Vincent, “Moscow says its new facial recognition CCTV has already led to six arrests,” The Verge (September 28, 2017).

  †David Reinsel, John Gantz, and John Rydning, “Data Age 2025: The Evolution of Data to Life-Critical,” IDC white paper sponsored by Seagate Technology (April 2017).

  ‡Julia Angwin, Noam Scheiber, and Ariana Tobin, “Facebook Job Ads Raise Concerns About Age Discrimination,” New York Times (December 20, 2017).

  §Adapted from the Thomas Hobbes quote “solitary, poor, nasty, brutish, and short” in Leviathan, 1651.

  ¶Cynthia Dwork et al., “Fairness Through Awareness,” Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (November 29, 2011).

  #Cathy O’Neil, Weapons of Math Destruction (New York: Crown, 2016).

  **“Venture Pulse Q4 2017,” KPMG Enterprise report (January 16, 2018).

  ††“Jobs lost, jobs gained: Workforce transitions in a time of automation,” McKinsey & Company (December 2017).

  ‡‡Researchers working in this vein include Carl Frey and Michael Osborne (Oxford University); Wolfgang Dauth, Sebastian Findeisen, Jens Südekum, and Nicole Wössner (Institute for Employment Research, IAB); and David Autor (MIT).

  §§https://www.oreilly.com/ideas/why-well-never-run-out-of-jobs-ai-2016

  ¶¶“21 Jobs of the Future,” Cognizant (November 28, 2017).

  ##“Moody’s: US Corporate Cash Pile Grows to $1.84 Trillion, Led by Tech Sector,” Moody’s Investors Service (July 19, 2017).

  ***Safety in the AI context can refer to three different things: (1) safety-purpose AI, or technology built to solve a safety problem, such as crash avoidance for cars; (2) cybersecure AI, or technology that resists adversarial compromise, unauthorized alteration, or control; and (3) safe AI, which is built from the ground up to avoid flaws and undesirable outcomes.

  †††“Spinal Cord Injury” fact sheet, World Health Organization (November 19, 2013).

  7

  Life and love in 2035

  CONNOR (THE EX-BOYFRIEND)

  Connor reset the plow and stopped to check the harnesses on Whoop and Holler. The mules had been at it all day, and now they lined up toward the dusk gathering beyond the far swath of birch and maple. Though they were tired, their ears pricked up as he came around, and Connor felt his usual compulsion to finish the job. So, he wiped his brow on the back of his leather glove, took a moment to admire the consistency of his lines, and then stepped back behind the plow.

  “Why can’t you just let it go?”

  The voice flittered out from the shadows of his mind, and the queasy mix of anger and longing settled into his stomach. Ava’s voice—he heard it more often lately. At first, the sounds of his ex-girlfriend’s voice infuriated him, spinning him into bilious frenzies of work. The community didn’t know what set him off, but they knew what happened whenever it did. After more than a year of doing as little as possible under community rules, Connor suddenly started checking off the entire list of daily errands by midafternoon, if not earlier. By now, the rest of the Siblings knew he wouldn’t stop until every last bit of each job was done, and done well. Whenever he set off on a fever, they just eased back and let it play itself out.

  It didn’t happen as much lately. Connor still worked as hard as anyone there, but never with the same sort of madness. Even now, as he exhorted Whoop and Holler up the next row, he spit his usual mantra at no one in particular—“Why the hell would you agree to do something and then not do it right?”—but he sighed as soon as he said it. He was back in his own head, and his head was back in the apartment he’d shared with Ava. He’d mutter the same thing to himself, just loud enough to make sure she knew something was annoying him: Why don’t you just put your shoes in the closet where they belong? It was never a big thing, until it was, and by then it was too late anyway.

  “Why can’t you just let it go?” Ava said when Connor brought it up on the way to Leo’s wake. They had very different reactions to their friend’s suicide. Ava saw it as a calling to express herself more honestly; Connor saw it as the worst outcome of the pervasive use of personal AI assistants. He blamed the PAL for aggravating Leo’s depression about his sexual identity. “Leo didn’t kill himself,” he thought, “the PAL did.” And, as they rode over to the celebration of Leo’s life, he couldn’t help himself. Bitter, he asked again: “Do you really think that goddamned thing is good for you?”

  Thinking about it now, behind the plow, the disgust returned. The first time Ava’s PAL suggested that her biometric and behavioral evidence hinted toward bisexuality, Connor laughed it off. As she started to muse more about it, he saw it as pure criticism. By the time the driver dropped them off at Leo’s wake, her two passengers were yelling at each other. “Why can’t you just let it go?” she asked again. It was on that sidewalk, on that horrible night, when he decided to leave.

  He ended up here, at the end of a quarter-mile, ruler-straight row of plowed field in Alberta, Canada. The Community of People had emerged from a crossbreeding of anti-AI humanist intellectualism and the Amish adherence to austerity. Connor had scoffed at the commune in its early days; it went too far, cutting off all connections to the outside world, even electricity and phones, lest some AI-influenced technology leak its way in. But he was always intrigued by the idea. By the time it had suffused through his mind enough for him to suggest it to Ava, the community’s modestly relaxed restrictions (it now allowed certain devices but still restricted digital communications) made the switch more palatable, at least in Connor’s mind.

  He left the day after the wake, after Ava had gone to work. By the time he landed in Canada, he’d reset his PAL to Bouncer Bot mode, instructing it to alert him only when his ailing mother or his autistic but self-reliant brother tried to contact him. For health reasons, he kept the medical and biophysical outputs on, measuring the tiniest nuances of his gestures and app usage, even though he knew it could only upload on the few occasions he got to town, and even though he knew it would send more information than he really wanted to share. And then he set out to convince himself that this was the only authentic, personal, and “real” way to live.

  He rarely left the community during the first, angry year. He’d bunked for a few months with a guy who helped him acclimate while he built his own cabin. The next few months, he hosted a new arrival, with whom he initially clashed but eventually came to trust. It was this guy who convinced him to reengage with PAL’s therapy-bot functions, and it was no coincidence that Connor’s tolerance of his temporary roommate improved afterward. By the time Connor had his cabin to himself permanently, he had settled into and had come to appreciate the routine of communal life. He found himself apologizing for fits of anger, but everyone seemed to shrug them off and laugh about how much work Connor would do when he got so agitated.

  That wasn’t going to happen tonight, though. The gloaming had settled across the plowed fields, and the chorus of insects was humming its song of survival. The rest of it, maybe half a morning’s work, would have to wait until tomorrow. He would let it go this time, he said to himself with a chuckle that felt almost refreshing. He left the plow, led Whoop and Holler back to their feed and water, and, checking his watch, realized he had enough time to take a shower and still catch the end of the community dinner.

  He changed his mind about halfway through a cool shower, when he suddenly felt an urge to go into town the next morning. He’d missed connecting with his brother the prior week, and he figured a little rest would do him good during the hill climbs on the twenty-one-mile bike trek. So, rather than join the card games and the conversation, he noted his plowing progress on the errand board and went back to settle in for the night.

  As usual, his PAL kicked into action during his bike ride, about two miles outside of town. That “Welcome back, Connor, been a while!” always jarred him out of the meditative state of rhythmic pedaling, but never enough to make him want to change it to something else. There was still something he craved about the virtual connection to the outside world, and as he cruised down the last half-mile stretch to the outskirts of town, his PAL would give him an update on the news of the world and the latest mundane details about his mother or brother. Connor stopped at a few stores to fill his pack with provisions and a few special requests from other Siblings, and then headed for the pub for his favorite indulgence: a buttered steak, a beer, and a piece of cheesecake.

  “Connor!” the waiter yelled when he walked in, and Connor strode over and gave him a bear hug. As they exchanged pleasantries, Connor sensed someone staring at him from across the room. The guy just didn’t quite fit, what with his expensive black leather pants and a denim jacket that looked just a bit too clean and pressed. Damn, Connor thought as Tony took the usual order, that guy seems familiar. Now he was walking right toward them.

  “Tony,” Connor said, the epiphany finally upon him, “this sonofabitch right here is Vladimir, one of my closest friends from undergrad. Holy cow, Vlad, what the hell are you even doing here?” The last time Connor had seen him, Vlad was working in IT at a San Francisco nonprofit. They both laughed and jumped into a deep, hearty embrace.

  “Your brother said I’d find you up here,” Vladimir said. “You’d all but dropped off the face of the earth a couple years ago.”

  “Well, there’s a helluva story about that,” Connor said, collecting his beer and nodding back to the table where Vladimir’s double-oaked bourbon awaited.

  The steaks gone, a third beer in hand, and almost two hours gone by, Connor waffled between bitterness and despair as he told Vladimir how things had ended with Ava. Somewhere, his inhibitions warned him that he no longer drank as much as he once did and might want to cool it, but he drifted further into the window of darkness. He told Vladimir how Ava’s PAL had suggested she slow things down with Connor, and how they decided to push through, but it wasn’t the same after that. It got worse and worse, and, after the wake, he’d had enough. He just couldn’t let it go.

  His mind only got darker when Vladimir told him about Ava, who was now seeing Emily, a moderately famous conservative pundit. “A right-winger, eh? Well, that just figures. Maybe we need an AI to split them up, too, for everyone’s good.” It felt good to say it, but Connor regretted it as soon as it left his mouth. He worried even more when Vladimir flashed a thin smile and quickly changed the subject.

 
