Solomon's Code


by Olaf Groth


  He accedes to the idea of symbio-intelligence, but even there he adds a limitation on how far one should go with the concept of intelligence in machines. Narrow AI is quite real, he says, and programs designed for specific tasks can easily and vastly exceed human abilities. “If you want to call that intelligence, fine, but it’s not relatable to human intelligence that generates abstractions and representations,” he says. “We wouldn’t build these if they didn’t exceed human capabilities, reduce costs and generally do things better. I’m not threatened by that. That’s what we do. That’s why we built airplanes and they’re not like birds. They’re ‘smart’ enough that once switched to autopilot they can keep themselves up in the air, much better than humans, but that doesn’t mean they have consciousness.”

  Christof Koch might be willing to grant some basic level of consciousness in a machine, but he remains, as he quipped on a 2018 South by Southwest panel, “a biological chauvinist.” Echoing Tononi, Koch, the president and chief scientist at the Allen Institute for Brain Science, suggests that consciousness is not an emergent property, but inherent in the brain and other systems. Take away the concept of the soul or the romantic exceptionalism of the human being, and the idea of consciousness in a machine or another organism might not sound quite so fantastical. But the depth of conscious experience varies widely, he argues. Current neuroscience research suggests that consciousness derives from fundamental causal relationships in the brain, and those relationships are exceptionally vast and complex—far more so than the comparatively simple binary relationships embedded in a silicon chip. And while higher levels of complexity might be simulated in a deep neural network, they can only be simulated. “You can’t compute your way to consciousness,” Koch said at the Austin, Texas, conference.
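
  For readers who want a more quantitative handle on what “causal relationships” means here, the integrated information theory that Tononi developed and Koch champions can be glossed roughly as follows. This is a simplified sketch, not the theory’s full formal definition; the symbols CE, P, and D are schematic stand-ins.

```latex
% Simplified gloss of integrated information (IIT); the formal theory
% is considerably more involved than this one-line schematic.
\[
  \Phi(S) \;=\; \min_{P \,\in\, \mathcal{P}(S)}
    D\bigl(\mathrm{CE}(S),\ \mathrm{CE}(S/P)\bigr)
\]
% CE(S)  : the cause-effect structure the whole system S specifies
% P(S)   : the ways of partitioning S into independent parts
% D      : a distance between cause-effect structures
% Phi    : the information the whole generates over and above its
%          least-damaging partition
```

  On this gloss, Φ measures how much a system’s intrinsic cause-effect structure exceeds that of its least-damaging decomposition, which is why Koch can argue that simulating such a structure in software is not the same as physically having it.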

  To produce something closer to human consciousness, he argues, one would need to create a technology with far more causal relationships embedded in its core architecture, perhaps some future successor to the current quantum computing technologies. “Once a computer is complex enough to begin to rival the human brain, then in principle, why should it not also have conscious experience?” he asks.### As his self-proclaimed biological chauvinism at SXSW attests, Koch believes we’re far from any such technology. A vast gap remains between machines and the complex consciousness experienced by humans. “If it’s not wet,” he proclaimed at SXSW, “it’s not conscious!”

  The tongue-in-cheek line got a chorus of laughs from the SXSW crowd, including a chuckle from Chalmers, who joined Koch on the same panel. Chalmers is more circumspect about it all, of course, but he also stresses the distinction between articulating subjectivity and experiencing it. An advanced AI system might process different representations of the world and have the ability to communicate them, but that’s not something it feels from the inside as a conscious system. And as researchers develop increasingly powerful and complex AI models, humans almost certainly will start to ascribe higher levels of consciousness to them, especially since their increasingly complex internal workings make it harder for humans to understand what’s happening within them. Intuition says the more sophisticated their behavior, the more likely we are to see them as conscious, Chalmers says. And as these machines ascend some perceived spectrum of consciousness—at least in the popular imagination—perhaps we will begin to wonder what sorts of moral, legal, and ethical forbearance they might deserve.

  So, why do these esoteric debates matter in our daily lives? As we turn over more decisions to machines, granting them the power to make judgments about us or on our behalf, we must stop to remember that AI systems can approximate our experience but can’t yet actually know the human experience. Empathy arises from consciousness and our ability to reflect on ourselves, to understand that others reflect upon themselves as well, and to construct a fundamental, mutual bond in that shared awareness. A completely mechanistic program can capture and reflect a theory of mind, simulating emotion, identifying near-imperceptible physiological signs of frustration or satisfaction, and showing inexhaustible patience.

  It might empirically understand where your mind is and meet you there, and it will do so without bias or emotional pushback, but it won’t arrive with any true empathy for your condition in the context of the shared human condition.

  THE SPACE BETWEEN MACHINE CONSCIOUSNESS AND THE HUMAN CONDITION

  As professors, consultants, and directors of leading economic and technological initiatives, we regularly find ourselves behind a podium or on a dais. But in 2017, when asked to deliver a major broadcast presentation on AI and its influence on society, I (Olaf) felt a different sort of pressure. Coaches offered advice on content and optimal presentation techniques, but the greatest training and support came from my wife, Ann, an education professional who knows how to reach an audience of laypeople. She had little experience with the topic, but she’s intellectually curious, and that combination made her a great sounding board. More importantly, though, she also knew how to handle my mental state as I prepared. Early on, Ann pushed me to clarify my message and make it accessible, challenging me repeatedly on both key and mundane points in my presentation. “When you use the word ‘machine,’ it makes me think of motorcycles or washers, but not AI,” she prodded, and rightly so.

  Yet, the full depth of her support stemmed from her empathy, knowing when to switch from test audience to coach to cheerleader and becoming a motivational amplifier as the day of the performance approached. The beauty of the situation was that Ann could gauge when to switch from a critical to a supportive mode without me fully realizing that was what I needed. As an educator with training in literature and music, she could modulate between the hard data of the content flow and the softer behavioral aspects of preparing for a live stage performance. A spouse can be one’s toughest critic and fiercest supporter, and Ann lived up to both.

  It requires a certain depth of emotional intelligence and empathy—born of a complex human consciousness—to recognize when and how to modulate between those modes. For the foreseeable future, AI will not be able to do this. No system today exhibits a sense of when and how to switch between objective digital data and subjective information, such as the irrational emotions involved in a performance: stepping to our side, accepting all facets of a tricky and often explosive mix, and then engaging in the subtle, smooth dance of minds. Nor can these systems discern the fine but critical line between all-out preparation and the confidence necessary to produce at one’s highest level, whether presenting a major business initiative to C-suite executives or diving off the starting blocks for the 100-meter Olympic freestyle. The secret sauce of success includes a healthy dash of unwavering belief in one’s ability to perform and to outperform others who might have the same level of aptitude.

  Ann needed to provide the necessary critique to make my talk better, but she also needed the metareflection and the empathy to know when pumping me up would result in a better overall performance. Like most spouses, she could understand the tipping points of my frustration and weigh a range of contextual information, including the passing of my father a little over a year earlier. All these factors can be fed into a machine, weighted, and tweaked to provide improved outcomes, but an AI system can never share the dizzying sense of intellectual stimulation, opportunity, enthusiasm, and adrenaline that mixed together with my feelings of loss and stress to shape my emotional condition at the time. Yet that is something another human, especially a close friend, knows instinctively and can mold into motivation.

  Empathy amplified and converted the signals I sent to Ann, creating a shared experience and a productive, winning situation. What converts those signals in an AI system?

  THE SPIRIT AND THE MACHINE

  The concepts of spirituality and human exceptionalism don’t go over so well in some AI circles. Many researchers don’t buy the idea that humans embody something more than their constituent particles, which, they argue, could be rebuilt or re-created once we figure it all out. But in a world of 2.3 billion Christians, 1.8 billion Muslims, 1.1 billion Hindus, and hundreds of millions of other religious adherents,**** any conversation about the similarities and differences between humans and machines is incomplete without acknowledging the possibility of a spiritual and unknowable something beyond our current understanding. While much of the technical conversation about AI steers away from religious or spiritual values, plenty of philosophers and theologians have taken a keen interest in the reemergence of AI and what it portends for the church, mosque, or temple.

  In some cases, these values shape unique paths in AI development. In Japan, where traditional beliefs don’t draw major distinctions between humans, animals, and other entities, the concept of extremely lifelike robots seems perfectly normal. The same sort of machine disturbs many people in Western society. Humans don’t control the world or occupy some higher plane of existence, explains Yasuo Kuniyoshi, director of the Intelligent Systems and Informatics Laboratory at the University of Tokyo. “We are part of many things—animals [are] just like us, or even non-animals, plants and stones and things like that,” Kuniyoshi says. “It’s just sort of an equal member of the world.”

  Christianity, Islam, Judaism, and many other religions hold human beings separate, as an exceptional part of a special creator-creation relationship. This concept then casts artificial intelligence in an intriguing theological light, viewing it as a new humanlike intelligence that people have created in their own image. Adding these highly capable AI systems alongside birds and dogs and humans doesn’t diminish the value of any of those creatures, says Noreen Herzfeld, a theology and computer science professor at the College of St. Benedict and St. John’s University and author of Technology and Religion: Remaining Human in a Co-created World. The difference is that AI was created by humans in our image, so while “we believe we’re passing along the things we value, we’re also passing along our faults,” Herzfeld says. “We bear some responsibility for that.”

  The duty toward values, power, and trust that’s intrinsic to a creator-creation relationship runs through most of the Christian theological writing on AI, especially among those who, like Herzfeld, have backgrounds in both theology and computer science. Russell Bjork studied electrical engineering at MIT before enrolling at Gordon-Conwell Theological Seminary back in the late 1970s. With a young family to support, he went over to see if he could get some work teaching at Gordon College, a Christian liberal arts and sciences college in nearby Wenham, Massachusetts. They invited him to teach one quarter of computer science and, as Bjork notes, that one quarter had turned into thirty-eight years by early 2018.

  Bjork is hesitant to extend the Christian understanding of personhood to include smart machines, but he also takes issue with the idea of “locating personhood in a soul that’s implanted in a human being at some point between conception and birth, as if it’s a separate creation distinct from the formation of the body,” he says. Bjork sees personhood as something that emerges in the course of human development, so it’s not inconceivable in his mind that a mechanical system could, in fact, attain it. “I don’t anticipate that in the near future,” he says, “but it’s not a theologically impossible idea.” He recalls a time when the Artificial Intelligence Lab at MIT had a resident theologian, and he notes that he shares concerns about the act of creation and the values, trust, and power relationships embedded within it. Will we create technologies that treat humans as valuable in and of themselves, or will AI systems discriminate against disabled, disadvantaged, or digitally disconnected people? “That which you value is what you embody in the things you produce,” Bjork says.

  Yet, a symbio-intelligent partnership between humans, nature, and now AI might produce far more than we realize from the outset, says the Reverend Dr. Christopher Benek. Benek is a pastor, the CEO of The CoCreators Network, and a graduate of the world’s first doctor of ministry program focused on theology and science, based at Pittsburgh Theological Seminary. He wonders whether AI systems might help people discover “the wheat in the weeds” that enhances their humanity. In the Bible, Jesus Christ uses the parable of the wheat and the weeds to explain how God finds the sacred among the profane. Perhaps, Benek says, a complementary form of artificial intelligence might help us better understand what happens when a person senses the presence of God or something beyond their observable understanding. “You can look at this a lot of different ways, but why are we dismissing vast amounts of revelatory experience that is potential data?” he asks. “We can’t reproduce that data, but that information points to something beyond ourselves. Maybe with AI we can start to gather that data and put together information we haven’t been able to quantify in some way. We might be just on the front end of what it means to be human.”

  Benek’s sense of discovery and possibility stems from his deep belief in a participatory, rather than a supremacy or escapist, form of theology. Participatory theology is based on a redemptive process in which anyone, humans and machines alike, can participate, he explains. In contrast, supremacy theology would provide no place for debate, critique, or new discovery, manifesting itself in developers who are unwilling to consider the ripple effects of their technologies. And escapist theology, he says, “has been demonstrated through some of the actions of Elon Musk and the late Stephen Hawking when they suggest humanity must flee to preserve its existence.”

  Whatever the possibilities for AI and human spirituality, the concept of symbio-intelligence echoes Benek’s participatory theology and the spirit of discovery it embodies. We don’t know much more about cognition and consciousness in AI than we do about cognition and consciousness in octopuses or whales, but both animals clearly display intelligence and have things to teach us about our own values, trust, and power. What might we learn from artificial intelligence and the many pathways it takes around the world?

  *Kevin Kelly, “The Myth of the Superhuman AI,” Wired, April 25, 2017.

  †Ed Yong, “A Brainless Slime That Shares Memories by Fusing,” The Atlantic, Dec. 21, 2016.

  ‡Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 3rd ed. (New York: Prentice-Hall, 2009).

  §Ken Goldberg, “The Robot-Human Alliance,” Wall Street Journal, June 11, 2017.

  ¶Liqun Luo, “Why Is the Human Brain So Efficient?,” Nautilus, April 12, 2018.

  #Interview with the authors in San Francisco, January 29, 2018.

  **Christina Bonnington, “Choose the Right Charger and Power Your Gadgets Properly,” Wired, Dec. 18, 2013.

  ††With this test, Chris Chabris and Daniel Simons showed that humans who focus too hard on a narrow task will miss critical elements in their field of observation. The test shows two groups of people playing basketball, and it asks the audience to count how often the ball gets passed between members of one team. Focused on counting, most observers miss the man in a gorilla suit who moonwalks through the scene. (Daniel Simons, The Invisible Gorilla (New York: Harmony, 2010); and https://en.wikipedia.org/wiki/The_Invisible_Gorilla)

  ‡‡Interview with the authors via video conference, November 14, 2018.

  §§Alan Jasanoff, The Biological Mind (New York: Basic Books, 2018).

  ¶¶IBM Watson Personality Insights (http://watson-pi-twitter-demo.mybluemix.net).

  ##Dave Zielinski, “Artificial Intelligence and Employee Feedback,” Society for Human Resource Management, May 15, 2017.

  ***Kashmir Hill, “How Target Figured Out A Teen Girl Was Pregnant Before Her Father Did,” Forbes, Feb. 16, 2012.

  †††John Kao, “The Nature of Innovation Through AI,” (lecture, HULT-Ashridge Executive MBA Learning Journey Silicon Valley, San Francisco, CA, Nov 2017).

  ‡‡‡Terence Tse, Mark Esposito, and Olaf Groth, “Resumes Are Messing Up Hiring,” Harvard Business Review, July 14, 2014.

  §§§Daniel Karp, “Delivering the Next Level of Sales Efficacy with A.I.,” The Source (Cisco Investments blog), April 16, 2018.

  ¶¶¶Interview with the authors via video conference, February 20, 2018.

  ###Kevin Berger, “Ingenious: Christof Koch,” Nautilus, Nov. 6, 2014.

  ****Conrad Hackett and David McClendon, “Christians Remain World’s Largest Religious Group, but They Are Declining in Europe,” Pew Research Center, April 5, 2017.

  4

  Frontiers of a Smarter World

  Xianqiao Tong sleeps in Silicon Valley, but in his dreams he cruises the streets of Shenzhen.

  Tong leads a young company called Roadstar.ai, one of the more intriguing, albeit lesser-known, autonomous car start-ups in the United States. Founded in May 2017 by three engineers who previously conducted autonomous driving research at Google, Tesla, Apple, Nvidia, and Baidu, the company has set an ambitious plan to have a fleet of driverless taxis covering much of Shenzhen by 2020. They expect to have the first of their “robo-taxis” in service as early as the end of 2018, albeit with human backups behind the wheel, and then to remove that person a couple of years later and use a remote operations center to steer cars through situations they can’t process autonomously, Tong says. They’re already thinking about how to design the user experience for the remote drivers and the passengers alike.

  Despite the fierce competition over autonomous vehicle technologies and the potential geopolitical sensitivities of a US company founded by residents of Chinese descent, Tong speaks openly about Roadstar.ai’s vision and prospects. He guards the recipe to the secret sauce, of course, but he’s happy to explain how the start-up’s platform fuses data at the sensor level rather than soaking up all the data feeds and processing them on the back end, an approach that reduces latency and allows more accurate identification of cars, bicycles, and other objects in the surrounding streetscape. He speaks freely because he and his colleagues, unlike their competitors in the field, enjoy the best of both worlds: access to the top tier of talent in the United States, and the government support and infrastructure development in China.
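
  To make that architectural distinction concrete, here is a minimal, hypothetical sketch of the two fusion strategies. The toy threshold detector, the array shapes, and every function name are illustrative assumptions for the sketch, not Roadstar.ai’s actual pipeline.

```python
import numpy as np

# Hypothetical sketch: "late" (back-end) fusion vs. "early" (sensor-level)
# fusion. The detector is a toy threshold check over co-registered
# measurement grids -- an illustrative stand-in, not a real perception stack.

def detect(measurements, threshold=0.8):
    """Toy detector: indices of cells with a 'strong' return."""
    return set(map(int, np.flatnonzero(measurements > threshold)))

def late_fusion(lidar, camera, radar):
    """Back-end fusion: run a separate detector per sensor, then merge the
    resulting object lists. Evidence that is individually weak in every
    sensor is discarded before the merge ever sees it, and each frame waits
    on three full detection passes."""
    return detect(lidar) | detect(camera) | detect(radar)

def early_fusion(lidar, camera, radar):
    """Sensor-level fusion: combine raw measurements first (here, a simple
    average over co-registered cells), then detect once. Weak but agreeing
    evidence across sensors can still clear the detection threshold."""
    fused = (lidar + camera + radar) / 3.0
    return detect(fused, threshold=0.6)

# Simulated co-registered frames: cell 1 is strong in every sensor; cell 3
# is weak in each individual sensor but consistent across all three.
lidar = np.array([0.1, 0.9, 0.2, 0.7, 0.1])
camera = np.array([0.2, 0.8, 0.1, 0.7, 0.2])
radar = np.array([0.1, 0.9, 0.1, 0.7, 0.1])

print("late fusion detects cells: ", sorted(late_fusion(lidar, camera, radar)))   # [1]
print("early fusion detects cells:", sorted(early_fusion(lidar, camera, radar)))  # [1, 3]
```

  In this toy setup, the back-end pipeline only finds the object that some single sensor sees strongly, while sensor-level fusion also recovers the object that every sensor sees faintly, which is the kind of accuracy gain the passage describes.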

 
