Solomon's Code

by Olaf Groth


  Better yet, he expects to use more cognitive computing to enhance the ways I-L-X already injects different types of learning experiences into the scenarios it presents. Beck says that once students are stimulated by the interactive experience, providing them with a range of learning styles helps them understand and retain knowledge far better than identifying the one that suits them and going deep with it. “We can take somebody out of their comfort zone, their preferred thinking and learning style, and hit them with a situation that requires a different one,” he says. (In one game he’s designing, there are eight to choose from.) This can be especially powerful when I-L-X interlaces the business content of a case with a personal relationship narrative, just as in real life. “We can make students feel really uncomfortable and provoke them both intellectually and emotionally, and that’s when the most significant learning happens.”

  So, rather than the intuitive approach of just using AI to figure out the best technique for imparting a lesson to each individual, I-L-X could also use it to, say, find the best order in which to deploy a wide variety of techniques that deliver the same information. And that, Beck says, often depends on the context in which the students find themselves. Inject one methodology or another at exactly the right time, based on what the game senses from the participant, and you deliver the best and deepest educational experience.
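
  To make the idea concrete, this kind of context-sensitive sequencing can be sketched as a simple bandit-style selector. The sketch below is purely illustrative, with invented technique labels and a made-up reward signal; it is not I-L-X’s actual system.

```python
import random

# Hypothetical sketch, not I-L-X's actual system: an epsilon-greedy
# selector that learns which teaching technique works best in each
# scenario context, while occasionally forcing students out of their
# comfort zone by exploring a different technique.

TECHNIQUES = ["visual", "narrative", "socratic", "hands-on"]  # invented labels

class TechniqueSelector:
    def __init__(self, epsilon=0.2):
        self.epsilon = epsilon
        self.stats = {}  # (context, technique) -> [reward total, trials]

    def choose(self, context):
        if random.random() < self.epsilon:
            # Explore: deliberately pick outside the best-known style.
            return random.choice(TECHNIQUES)
        # Exploit: pick the technique with the best average outcome so far.
        return max(TECHNIQUES, key=lambda t: self._mean(context, t))

    def update(self, context, technique, reward):
        total, n = self.stats.get((context, technique), [0.0, 0])
        self.stats[(context, technique)] = [total + reward, n + 1]

    def _mean(self, context, technique):
        total, n = self.stats.get((context, technique), [0.0, 0])
        return total / n if n else 0.0

selector = TechniqueSelector()
technique = selector.choose(context="negotiation-scene")
# ...run the scene, score comprehension on a 0-1 scale, then learn from it:
selector.update("negotiation-scene", technique, reward=0.8)
```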

  The power to change the context in a learning experience through a dynamic interaction between humans, cognitive machines, and the real external environment provides a tremendous opportunity to enhance education around the world. Enio Ohmaye, the chief experience officer of EF Education First and former chief scientist of Apple Japan, has put that kind of smart contextual simulation to work to help young Chinese students learn English. Ohmaye’s young son is growing up in an environment where he’s exposed to four languages—Portuguese, Japanese, English, and German. “There’s not necessarily anything unique about his brain physiologically speaking,” Ohmaye says, by which he means that most other kids in the same environment would pick up the same skills. His son just absorbs them from his surroundings—the Portuguese from his father, the Japanese from his mother, and the German and English from his school. Few youngsters in China have the same experience. “It’s not about tech,” Ohmaye says, “it’s about exposure to language in a way that is relevant, meaningful, and effective for them to learn the language.”

  And that’s where the interaction of AI systems, the young human student, and his environment comes into play. EF Education First develops systems that bridge the physical and digital worlds. Using a variety of machine learning and image recognition techniques, the firm introduced a program in China in early 2018 that’s designed to permeate the life of the young user more deeply, integrating the English lessons with the human elements of his or her young life. The firm is patenting a little robotic pet that recognizes what’s going on in the student’s environment and adapts its English lessons to fit the situation. So, depending on the time of day, past behaviors, and the current situation, it might start interacting with the student using words associated with bedtime routines—phrases for brushing teeth, saying goodnight, singing lullabies, or reading bedtime stories. “We begin to permeate the life of kids with English that’s really relevant,” Ohmaye explains. “The beauty of it is we can, in the process, make it a little fun, and we can involve the parents just by virtue of the fact we’re helping the parents create behaviors and structures they (the kids) need to abide by.”
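
  A minimal sketch of the kind of context-to-lesson mapping described here might look like the following. The context labels, object names, and phrase lists are invented for illustration; EF’s actual product and its recognition models are proprietary.

```python
from datetime import datetime

# Invented labels and phrases for illustration; the real EF system and
# its image-recognition models are proprietary.
LESSONS = {
    "bedtime":  ["brush your teeth", "good night", "sweet dreams"],
    "mealtime": ["I'm hungry", "pass the rice, please", "thank you"],
    "playtime": ["let's play", "your turn", "well done"],
}

def infer_context(hour, seen_objects):
    """Guess the household situation from the time of day and the object
    labels an image-recognition model reports from the pet's camera."""
    if hour >= 20 or "toothbrush" in seen_objects:
        return "bedtime"
    if 11 <= hour <= 13 or "bowl" in seen_objects:
        return "mealtime"
    return "playtime"

context = infer_context(datetime.now().hour, seen_objects={"toothbrush"})
print(f"Practice phrases for {context}: {LESSONS[context]}")
```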

  Yet, it’s not sufficient to just have the toy. EF Education First supplements it with programs that forge a symbiotic ecosystem between the pet, the child, the parents, and the environment. Parents, grandparents, and teachers all play a role in the interactions, often as drivers of content. It’s a combination of device, social support, and a human-machine ecosystem encompassing that community. The back-office AI system can track what kids listen to and what teachers say, and by doing this across thousands of students it can identify better techniques. That’s what gets EF Education First past the roadblock that stalls most online education programs, which attract plenty of sign-ups but see few students finish. “This marriage of physical and digital and the computational power of AI allows us to create a much more immersive experience,” Ohmaye says. “You send a 16-year-old to France, and they come back transformed. That’s what made me fall in love with this company, delivering these transformative experiences.”

  That sort of symbiotic relationship between artificial, human, and other types of natural intelligence can unlock incredible ways to enhance the capacity of humanity and the environment around us. Yet, as humans, especially in Western cultures, we tend to order our existence in terms of hierarchies—a struggle to ascend the food chain, climb the corporate ladder, or remain atop the evolutionary pyramid. We apply the same conceit to artificial intelligence, constantly ranking it against our own intellect and fretting about when it will exceed our capabilities. Humans do this naturally when trying to figure out how to relate to something new, says Genevieve Bell, anthropologist and professor of computer science at the Australian National University and senior fellow at Intel’s New Technologies Group. Bell likens it to the Buddhist concept of “dependent co-emergence.” By comparing artificial intelligence with ourselves, she says, our understanding of each is clarified by the other.

  Still, Kevin Kelly, the founding executive editor of Wired magazine, would appreciate it if we stopped thinking that way, at least in terms of intelligence. “Intelligence is not a single dimension,” Kelly writes in an April 2017 column for his publication, “so ‘smarter than humans’ is a meaningless concept.”* Rather, he argues, the world is full of a wondrous array of intelligences, each having evolved over time to its currently refined state. Even without neurons, colonies of certain slime molds can solve mazes, balance their collective diet, and escape from traps.† Bees exhibit remarkably complex problem-solving capabilities through the collective intelligence of the hive. Whales have significant social intelligence. Humans generalize concepts and imagine new things in ways no other animals can match. And AI systems can perform complex mathematical and memory feats that no human could ever hope to accomplish with gray matter alone. Kelly likens this spectrum of intelligences to symphonies with myriad instruments that “vary not only in loudness, but pitch, melody, color, tempo, and so on. We could think of them as an ecosystem.”

  Stuart Russell, who leads the Center for Human-Compatible Artificial Intelligence (CHAI) at the University of California, Berkeley, finds Kelly’s argument unconvincing. Human intelligence is complex, Russell says, but machines might eventually surpass it in some general manner—for example, by exhibiting the ability to perform almost all human professions as well as people can. Russell and Peter Norvig published the first edition of their now-standard textbook, Artificial Intelligence: A Modern Approach, in 1995.‡ Two decades later, and after thirty years of AI research, Russell began to wonder: “What if we succeed?” He and his colleagues at CHAI aim to reorient the field, essentially working to reestablish it with safety built into its foundations. They want to create what they call “provably beneficial systems” to ensure that any new form of AI, whether its capabilities are mundane or superintelligent, is created with the ideals of human safety and benefit at its core.

  Our fascination with and fear of artificial intelligence might stem in part from our hierarchical view of the world and the idea of a superintelligence subjugating us. But it also arises from the fact that we have long conceived of and modeled AI systems after our own human brains—the idea that the super in that intelligence could mean our brains raised to an exponential power. Yet, despite the fact that our particularly human capabilities have enabled us to dominate other species to the degree that we now live in what many call the Anthropocene Era—a time dominated by human decisions and technologies rather than other natural forces—we can’t really claim to have arrived in this advantageous position with a level of responsibility and care that’s commensurate with the power we currently wield. For most of our history, humanity behaved as a ruthless, adaptive dominator of the natural world around us. Creating tools is one of our signature adaptive techniques; now we find ourselves with a tool that thinks and talks back.

  In the 1970s and 1980s, when nature “talked back” and we recognized some of the irreversible damage we were causing, environmentalism represented an assertion of the power of nature over humanity. Yet neither environmentalism nor humanism provides a satisfying explanation for the evolution of cognitive machines in the 2010s and 2020s. Intelligence is the prerogative of neither humans nor nature. It is an evolutionary act of cross-fertilization, with different types of intelligences interacting and mutating into new types of intelligence altogether. Over the last 2.8 million years or so, humans have used nature’s resources, domesticated many animals, and shaped the evolution of myriad others, exerting influence over different forms of intelligence in nature. Over the last seventy years, humans have used nature’s resources and manipulated physics to make ever-smarter computers. We evolve, we interfere and meddle, we create, and we force other forms of intelligence to coevolve with us. So, why would machine intelligence be any different? Whether legitimately or not, we have come to view artificial intelligence as a threat, rather than as a new intelligence with which we can partner to enhance our ecosystems and enrich our lives. Like the chess grandmasters using computers to raise their games, and like those same experts trying to figure out what AlphaZero will mean for the game they love, we currently struggle to embrace a new world of symbiotic intelligence.

  Symbio-intelligence represents a cohabitation and integration of multiple forms of intelligence—human, natural, and computational—into a coemergent and cocreative partnership that benefits all sides. It produces benefits for each contributing entity that none could enjoy on its own, such that the new partnership exceeds the sum of its parts. Ken Goldberg, an artist and roboticist who leads a research lab at UC Berkeley, proposes the similar idea of “Multiplicity” as an inclusive alternative to the Singularity, the hypothetical point in time when computers surpass human intelligence. Goldberg’s notion of Multiplicity emphasizes the potential for AI to diversify human thought, not replace it.§ “The important question is not when machines will surpass human intelligence, but how humans can work together with them in new ways,” Goldberg writes in a Wall Street Journal op-ed. “Multiplicity is collaborative instead of combative. Rather than discourage the human workers of the world, this new frontier has the potential to empower them.”

  Even within the cognitive realm, human, natural, and machine intelligences display distinct powers thanks to their unique evolutionary pathways. Human brains, for example, process data and handle our complex balance of bodily operations with unprecedented energy efficiency. Millions of years of evolution and “genetic intelligence” have led to an incredibly intricate human ability to manipulate objects with our hands, something that will take massive amounts of computing power and research to replicate in robots. And that doesn’t even begin to scratch the surface of human processing and function. Consider, for example, a professional tennis player reacting to an opponent’s 150-mile-per-hour serve. As Stanford humanities, sciences, and neurobiology professor Liqun Luo explains, the thousands of interconnections in the player’s brain allow him to spot and process the flight of the ball, almost immediately identify its trajectory, and begin moving his legs, torso, shoulders, elbows, wrist, and hand into position to return the serve, all simultaneously and in concert.¶ “This massively parallel strategy is possible because each neuron collects input from and sends output to many other neurons—on the order of 1,000 on average for both input and output for a mammalian neuron,” Luo writes. “By contrast, each [computer] transistor has only three nodes for input and output all together.” This parallelism enables the considerable multifunctional dexterity of the human body, which most animals, and robots to date, find so hard to achieve.
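
  A quick back-of-envelope calculation shows the scale gap Luo points to. The neuron count below is a common estimate rather than a precise measurement, and the fan-out figure is Luo’s average.

```python
# Rough arithmetic behind Luo's comparison. The ~86 billion neuron count
# is a common estimate; the ~1,000 average fan-in/fan-out figure is Luo's.
neurons = 86e9            # approximate neurons in a human brain
connections_each = 1_000  # average inputs/outputs per mammalian neuron
transistor_nodes = 3      # terminals on a transistor (gate, source, drain)

total_synapses = neurons * connections_each
print(f"Estimated synaptic connections: {total_synapses:.1e}")  # ~8.6e+13
print(f"Per-unit connectivity ratio: "
      f"{connections_each // transistor_nodes}x")               # 333x
```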

  Yet, computers already surpass human performance in comparatively new endeavors, such as chess and Go, despite the remarkable efficiency of the gray matter in our heads. The human brain runs on about twenty watts of power, roughly double what an iPad’s mini charger delivers, explains Sean Gourley, the CEO of Primer AI, which builds machines that can read and write.#** Today’s advanced computers consume far more energy, but the sheer speed of their serial, step-by-step processing abilities—combined with some parallel capabilities—allows them to easily outperform humans on a range of tasks, such as image processing, decision-making, and text recognition. Maybe the most severe limitation humans face is in the realm of dimensionality. We struggle to reason beyond three dimensions, whereas computers routinely work with thousands of them. And for many applications, a computer’s massive advantage in processing speed makes all the difference. “That’s why we let computers trade on Wall Street today, rather than humans,” says Gourley, who has advised the US government on the mathematics of war, among other things. “Trading is a narrow task with just a few vectors for decision making, mostly driven by the speed of processing pure economic-financial metrics. That is much better suited to a narrow intelligence of a computer, whereas the human brain is a bit more holistic but a lot slower.”
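
  The dimensionality point is easy to demonstrate: for a computer, the same one-line operation works identically in three dimensions or ten thousand. The sketch below uses NumPy and synthetic data purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
vec_3d = rng.normal(size=3)        # the three dimensions humans can picture
vec_10k = rng.normal(size=10_000)  # a dimensionality computers handle routinely

# Identical code regardless of dimension; the machine never "struggles":
print(np.linalg.norm(vec_3d))
print(np.linalg.norm(vec_10k))
```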

  By contrast, human brains are easily diverted by predispositions, distractions, and confirmation bias, the tendency to favor information that confirms our beliefs and miss critical details in what we observe. The same focus mechanisms that block out sensory distraction so we can make simple decisions also cause most of us to miss the gorilla that walks through the basketball game.†† And yet, this same capability to consider facts from different angles and take a little longer to process decisions may equip humans with a better “provocateur intelligence,” Gourley says. We can critically reflect on decisions and put them into the larger context of societal needs, multiple stakeholders, and developments across multiple domains, not just economic and financial ones.

  Machine algorithms don’t assess the social consequences of merger-and-acquisition activity, such as layoffs, or the externalities these bring to society, unless they are specifically programmed to do so. The human brain can and often does, in part because we possess an empathetic element in our intelligence that leads to moral consideration and deliberation across a broader set of factors. Developers concerned with creating effective and efficient machine algorithms to trade and maximize the value of the equities in our retirement portfolios might view empathy as a risk to their mission, a consideration that slows critical decisions and muddles the picture.
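
  In code, the difference is simply whether an externality term appears in the objective at all. The sketch below is a toy illustration with invented names and weights, not any real trading or deal-scoring system.

```python
# Toy illustration of the point above: an algorithm only "considers"
# externalities if someone writes them into its objective function.
# All figures and weights here are invented.

def deal_score(financial_value, layoffs, community_cost, social_weight=0.0):
    """Score an acquisition. With social_weight=0 (the default, and the
    norm for a purely financial optimizer), layoffs and community costs
    are invisible to the algorithm."""
    externalities = layoffs * 1_000 + community_cost
    return financial_value - social_weight * externalities

# The same deal ranks very differently once externalities carry weight:
print(deal_score(5_000_000, layoffs=300, community_cost=2_000_000))
# -> 5000000.0
print(deal_score(5_000_000, layoffs=300, community_cost=2_000_000,
                 social_weight=1.0))
# -> 2700000.0
```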

  Building an environment of symbio-intelligence requires the ability to recognize the good and the bad parts of various types of intelligence so we can merge them into an optimized partnership. It relies on a deeper sense of trust in the potential of AI and a willingness to accept that human intelligence can be enhanced by the same intellectual powers we once reserved only for science fiction. Yet, around the world, we already are seeing powerful new applications of advanced technologies that automate not just human thinking, but virtually every facet of our identity, our experience, and our well-being.

  STAND-UP COMEDY, RUGBY, AND THE INEXTRICABLE RELATIONSHIP OF MIND, BODY, AND ENVIRONMENT

  Kevin Kelly likens this notion of symbio-intelligent relationships to instruments in a symphony. John Neal sees it in the elite athletes he helps train. Neal, the head of coach development at the England and Wales Cricket Board and a professor of sports business performance at Hult Ashridge Executive Education, serves as a performance coach for various English teams and individuals, including the royal household. Having left behind traditional psychology for a more symbiotic approach that merges emotional insights with physiological data, he aims to bring out what he calls “flow” in his coaches and athletes, and he does so by recognizing the inseparable relationships between mind, body, and environment. It starts with the measurement of physiological and neurological signals, determining how people learn, reflect, recover, and perform. The brain and body, Neal says, provide an honest accounting of their state. An athlete might say one thing, but the data are “absolutely binary, like an autistic response,” he says, meaning black and white, lacking nuance and empathy.‡‡

  That autonomic data becomes a powerful instrument with which to gear training programs for the fractional improvements that often decide contests played at the most elite levels. But those athletes, whether teams or individuals, don’t play in a vacuum. Neal and his colleagues also train athletes to prepare for the types of environments that might keep them from peak performance. When athletes feel confident, he explains, they go into that state of “flow.” The game moves in slow motion. They anticipate what will happen next, staying ahead of their opponents. They typically feel great and perform well. However, if they tip out of flow into a state of challenge or threat, Neal says, actual blood flow to the brain starts to change, and they begin to spend more neurological and physiological effort on strategy and decision-making. They fall back to old patterns.
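
  One might imagine the signal side of this in miniature as a baseline-deviation monitor. The sketch below uses invented heart-rate numbers and an arbitrary threshold; it stands in for the far richer physiological and neurological measurements Neal describes.

```python
import statistics

# Invented resting heart-rate samples (bpm) standing in for an athlete's
# measured baseline; a real system would track many signals at once.
baseline = [62, 64, 61, 63, 65, 62, 60, 64]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def out_of_flow(sample, threshold=3.0):
    """Flag samples more than `threshold` standard deviations from the
    athlete's baseline, a crude stand-in for a challenge/threat state."""
    return abs(sample - mean) / stdev > threshold

for hr in [63, 66, 71, 78]:
    print(hr, "alert" if out_of_flow(hr) else "ok")
```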

  Coaches can see it happening in their athletes, sensing not just a drop-off in performance but a change in the baseline state of a player. The best coaches know when to intervene and how they might snap individuals back into flow. Sir Clive Woodward and his fellow coaches noticed it during the 2003 Rugby World Cup semifinal, Neal recalls. Jonny Wilkinson’s play had dropped off, but one of the coaches realized he’d started moving differently, had changed his usual body language, and appeared almost panicked. Many coaches would’ve taken Wilkinson off the field, but in a quick huddle—maybe forty seconds or so, Neal says—the coaches collectively decided they needed to keep him in and preserve his confidence and mindset for future matches. So, they sent Mike Catt in, substituting him for another player. On his way on, Catt ran by Wilkinson, slapped him on the backside, said something upbeat to him, and laughed. It was enough to snap Wilkinson back into the game, and he went on to contribute to the winning score. “It was the most remarkable piece of coaching I’ve ever seen,” Neal said. “There was a lot of debate between the coaches, and it was heated, but the decision was made in about forty seconds. It was a perfect example of intuition, intelligence, and cognition coming together in a very short period, and it was all on TV.”

 
