Despite David Deutsch’s optimism about the capacity of machines to feel boredom and kindness, it is hard for me to see how felt emotional states can be programmed into a computer without some kind of sensual body and experienced sensations. In good old-fashioned artificial intelligence (GOFAI), feeling boredom, joy, fear, or irritation must be turned into a rational process that can be translated into symbols and then fed into the computer. Emotion has to be lifted out of a feeling bodily self. This is not easy to do. When I’m sad, can that feeling be parsed purely through logic?
One may also ask if it is possible to reason well in our everyday lives without feelings. It is now widely acknowledged that emotion plays an important role in human reasoning. Without feeling, we aren’t good at understanding what is at stake in our lives. This is why psychopaths and some frontal lobe patients who lose the ability to feel much for others are profoundly handicapped, despite the fact that a number of them can pass tests showing they have no “cognitive” impairment and can “compute” just fine. They may well be able to follow the sequence of a logical argument, for example, but they suffer from an imaginative emotional deficit, which results in an inability to plan for the future and to protect themselves and others accordingly.
And if this affective imagination is not a conscious act that requires that I tell a story to myself about what it would be like to be you, or about how I will feel tomorrow if I scream at you today, although such thoughts may accompany my gut sense of what to do or not to do, then how can it be programmed into a machine? When this emotional imaginative ability is missing, people often suffer ruinous consequences. Reasoning, it seems, includes more than Hobbes’s addition and subtraction, more than step-by-step calculations. Reasoning is not a pure state of logical calculation but one mixed with emotion.
In his book Descartes’ Error, Antonio Damasio criticizes the modern forms of Cartesian dualism that live on in science. He lashes out at the concept “that mind and brain are related but only in the sense that the mind is the software program run in a piece of computer hardware called brain; or that brain and body are related, but only in the sense that the former cannot survive without the life support of the latter.”243 Damasio is invested in understanding the self and human consciousness through biological processes. In Self Comes to Mind, he addresses the engineering and computational metaphors for the brain and writes, “But the real problem of these metaphors comes from their neglect of the fundamentally different statuses of the material components of living organisms and engineered machines.” The difference, he argues, is fundamental: “Any living organism is naturally equipped with global homeostatic rules and devices; in case they malfunction, the living organism’s body perishes; even more important, every component of the living organism’s body (by which I mean every cell) is, in itself, a living organism, naturally equipped with . . . the same risk of perishability in case of malfunction.”244 He goes on to compare this organic reality to a plane, the 777: “The high-level ‘homeostatics’ of the 777, shared by its bank of intelligent on-board computers and the two pilots needed to fly the aircraft, aim at preserving its entire, one-piece structure, not its micro and macro physical subcomponents.”245 The idea behind “homeostasis,” a word coined by Walter B. Cannon in the 1920s, predates the term itself: both Claude Bernard and Freud argued that organisms are equipped to maintain a physiological equilibrium. Unlike cognitive psychologists, with their emphasis on a computational “mind,” Damasio seeks to explain what we call mind and consciousness through our organic brains, without forgetting that our brains are also in our bodies, which are in the world. He is keenly aware of a difference between cellular and machine structures. “Homeostatics” in a plane and homeostasis in an organism do not function in the same way.
Can a machine feel anything? Could human subjectivity be simulated in cogs and wheels? Research on the role emotion plays in cognition has entered computation. Mostly these researchers are tweaking the computational model of mind, not overturning it. They are working apace to create better simulations of minds with emotion or affect as part of them. The authors of a book titled Emotional Cognitive Neural Algorithms with Engineering Applications: Dynamic Logic; From Vague to Crisp (2011), who have obviously not given up on computational methods and algorithms, tell the story of AI as a blinkered one:
For a long time people believed that intelligence is equivalent to conceptual understanding and reasoning. A part of this belief was that the mind works according to logic. Although it is obvious that the mind is not logical, over the course of the two millennia since Aristotle, and two hundred years since Newton, many people have identified the power of intelligence with logic. Founders of artificial intelligence in the 1950s and 60s . . . believed that by relying on rules of logic they would soon develop computers with intelligence far exceeding the human mind246 (my italics).
This, they acknowledge, did not happen. Therefore these authors are describing alternative computational methods and a dynamic form of logic in the hope that machine intelligence will begin to mimic human thinking more closely.
After all, even Descartes worked hard in The Passions of the Soul to show how mind and body interact and how our emotions are useful to us in living a good life. The very idea of the computational theory of mind (CTM), however, isolates cognition and information processing from bodily movement and the senses. A machine mind can always be given a body, but the idea that a particular body has no effect on the mind’s essential algorithms perpetuates the mind-body divide. There are AI scientists who have abandoned the computational model altogether and turned to bodies for answers.
In his wonderfully titled book Passionate Engines: What Emotions Reveal About the Mind and Artificial Intelligence (2001), Craig DeLancey, in tune with a growing number of others, bemoans the failures of AI and notes that the best work in the field cannot even begin to “aspire to imitate some few features of an ant’s capabilities and accomplishments.” He goes on to argue that beginning “with pure symbol or proposition manipulation” will not result in “autonomous behavior” and declares the approach “a failure.” Not only that, he argues, “it reveals very deep prejudices about the mind . . . that are conceptually confused, unrealistic, and conflict with our best scientific understanding.”247 DeLancey hopes to bring what he calls “deep affect” to AI by other means.
Rodney Brooks, director of MIT’s Computer Science and Artificial Intelligence Laboratory, rejected GOFAI in the 1980s by insisting that intelligence requires a body. In his 1991 essay, “Intelligence Without Reason,” Brooks emphasizes “situatedness” and “embodiment” rather than symbolic representations, a strategy that closely echoes Merleau-Ponty’s phenomenology.248 Brooks is emphatic about where GOFAI went wrong: “Real biological systems are not rational agents that take inputs, compute logically, and produce outputs.”249 Using an embodied model, the scientist has created “mobots” or “creatures” in the MIT artificial intelligence lab. Interestingly, these artificial beings have no “I” or “self” model, no central guiding intelligence. They navigate the environment around them by responding “intelligently” through sensors.
These creatures resemble insects more than human beings. He describes them as “a collection of competing behaviors without a central control.” Indeed, they make me think of Diderot’s swarm of bees and the philosopher’s meditation in D’Alembert’s Dream on whether the swarm is a single being or a mass of separate beings acting in concert. Brooks has created a collection of capabilities in motion. He has evidently read Dreyfus, since he mentions the philosopher, who was heavily influenced by Heidegger, in one of his books, but Brooks is somewhat allergic to philosophy in general. He contends that he is not interested in “the philosophical implications” of his creatures and, despite the resemblance his thought may have to Heidegger’s, he claims his work “was not so inspired” and is based “purely on engineering considerations.”250 Without any direct or perhaps any acknowledged relation to philosophical ideas, Brooks has made significant progress in AI by thinking about artificial bodies and their role in “intelligence.”
In Flesh and Machines: How Robots Will Change Us (2002), he restates Freud’s famous comments about Copernicus and Darwin as assaults on human “self-love.” In Brooks’s rephrasing, self-love becomes human “specialness.” He suggests that the third assault is being delivered not by psychoanalysis (Freud is not mentioned as the originator of this comment) but by machines: “We humans are being challenged by machines.”251 He makes this claim despite the fact that he repeatedly contends that human beings are also machines. “Anything that’s living is a machine. I’m a machine; my children are machines. I can step back and see them as being a bag of skin full of biomolecules that are interacting according to some laws.”252 For Brooks, we biological machines are threatened by nonbiological machines.
I vividly remember my lesson in the fifth grade on simple machines: the lever, wheel and axle, screw, pulley, wedge, and inclined plane. Each one was pictured on the filmstrip the class watched. A simple machine was a device that could alter the magnitude or direction of a force. From these machines one could build complex ones. They were machine building blocks. Can the nervous system as a whole be characterized as a machine? Is the placenta a temporary machine? What about the endocrine system? Harvey’s use of the hydraulic system to characterize the working of the heart was unusually effective. The machine allowed him to understand the organ. But is this true of all anatomical functions? Isn’t Damasio right that there is a difference between the living cell and the machinery involved in building a plane or a car?
There is a continual elision at work—not only in Brooks’s writing, but in many of the texts I have read on AI—between one kind of machine and another and between the living and the simulated. The question is: What is alive and how do we know when something is alive? I liked to pretend my dolls were alive, half wished they would come alive, but I knew they wouldn’t. I know my computer isn’t alive, even though it is a marvelous machine. I don’t worry that it hasn’t the right to vote in elections, despite its “intelligence” and “memory.” If it gets a virus, I don’t sit beside it worrying about how it feels. Distinguishing the living from the nonliving, however, is a genuine philosophical problem. Saint Augustine famously noted that he knew what time was, but when asked to explain it, he found it impossible to put his knowledge into words. I think I recognize what life is, but can I explain what it is? After all, we now have people who are brain-dead, which is not the same as dead-dead. Rodney Brooks is well aware that his “creatures” don’t feel the way human beings do, but then he contends that the difference between “us” and “them” may be insignificant. “Birds can fly. Airplanes can fly. Airplanes do not fly exactly as birds do, but when we look at them from the point of view of fluid mechanics, there are common underlying physical mechanisms that both utilize.”253 Well, yes, but there is still a difference between the nervous system of a bird with its wings open in flight and a jet with a motor and wings that becomes airborne. Isn’t the bird alive and the plane dead, despite the fact that they both move through the air?
Work on simulating emotional responses in robots has been a significant project in the MIT lab. Again, Brooks seems to be uncertain about where to draw the line between internal feeling and the external appearance of feeling. Writing about simulated emotions in robots, he asks, “Are they real emotions or are they only simulated emotions? And even if they are only simulated emotions today, will the robots we build over the next few years come with real emotions?”254 (italics in original) He does not explain how that will happen, but he is inclined to leave the line between simulated emotion and actual emotion blurry. In his preface, he mentions HAL with admiration and admits that for now the robots of science fiction and the machines in our daily life remain distant from each other. This gap, however, will soon be closed: “My thesis is that in just twenty years the boundary between fantasy and reality will be rent asunder. Just five years from now that boundary will be breached in ways that are as unimaginable to most people today as daily use of the World Wide Web was ten years ago.”255 I am writing this in 2015. In 2007, five years after Brooks made his prediction, I did not see the fantasy/reality border collapse in “unimaginable” ways. Prediction, however, has become something of a sport in AI circles.
One of Brooks’s younger colleagues, Cynthia Breazeal, drew inspiration from developmental psychology and infant research to produce her robot Kismet, a big-eyed interactive robot head. In her book Designing Sociable Robots, she cites research on infant development and the infant-mother couple or “dyad” as inspiration for Kismet. She is clearly aware of the research, which has demonstrated that newborns are capable of imitating an adult’s facial expressions. She knows that every child is dependent on interactions with another person to develop normally. Breazeal cites Colwyn Trevarthen, an important infant researcher who coined the term “primary intersubjectivity,” now used to describe an infant’s earliest social interactions with other people.256 She credits this theory as essential to her design of Kismet. And yet, how does one go about giving feelings to computers, metal, wiring, and transistors?
Cynthia Breazeal had her own favorite fictional robots as a girl—not HAL, whom she found “creepy,” but R2-D2 and C-3PO from Star Wars. Breazeal designed Kismet to simulate infantile emotional facial responses to others. The machine does not talk but makes expressive facial movements and sounds that are pitched to mimic emotion in relation to its interlocutor. The robot is an amazing feat of interactive engineering. Kismet, she writes, “connects to people on a physical level, on a social level, and on an emotional level. It is jarring for people to play with Kismet and then see it turned off, suddenly becoming an inanimate object.”257 Along with its camera eyes and sensors, Kismet’s parts have been given the names of physiological systems. The robot has a “synthetic nervous system” and a “motivational system” complete with “drives.” Her use of the word “drive” appears to be more influenced by its use in psychology than in engineering, but Breazeal seems wholly unaware of the history of the word, which comes from the German Trieb, has its roots in philosophy, and played such a prominent role in Freud’s theory. Kismet’s parts are not organic but mechanical, and although the machine has been programmed to simulate “six basic emotions”—anger, disgust, fear, happiness, sadness, and surprise, each of which results in a facial expression—can this moving head be called an emotional machine?258
The question is not purely one of semantics but of vocabulary, similar to Damasio’s comment about “homeostasis” in relation to organisms and planes, as well as the problem Karl Pribram identified as crucial to the way his colleagues in neuroscience received Freud’s Project. To give a mechanical system a biological name—“synthetic nervous system”—obfuscates the fact that it does not function at all like an organic human nervous system. There is no artificial brain in Kismet with billions of neurons, no limbic system in that brain, no enteric nervous system, no nerve endings, no endocrine system, but once the machinery is assigned a biological name preceded by the word “synthetic,” it becomes a kind of nervous system.
If Rodney Brooks seems unsure about the difference between felt emotions and the appearance of emotions, so does Breazeal. The goals she envisions for her creature have a tendency to merge with what she has actually produced:
Humans are the most socially advanced of all species. As one might imagine, an autonomous humanoid robot that could interpret, respond, and deliver human-style social cues even at the level of a human infant is quite a sophisticated machine. Hence, this book explores the simplest kind of human-style social interaction and learning, that which occurs between a human infant with its caregiver. My primary interest in building this kind of sociable, infant-like robot is to explore the challenges of building a socially intelligent machine that can communicate with and learn from people.259
First, characterizing interactions between an infant and “caregiver” as “the simplest kind of human-style social interaction and learning” glosses over what is actually involved in the exchanges between them. Researchers continue to analyze the enormously complex relations that take place within the parent-infant dyad, and the intricate physiology of these interactions is by no means simple, nor is it by any means fully understood. The parent-infant dyad consists of two sentient beings who are engaged with each other. While an infant may not be reflectively self-conscious, she is certainly prereflectively conscious. She possesses precisely what Kismet does not—experienced feelings.
“Synchrony” is a word used to identify the dynamic and reciprocal physiological and behavioral adaptations that take place between a parent and baby over time. Scientists research gaze, vocal, and affective or emotional synchronies. For example, Ruth Feldman and her colleagues studied infants and mothers who engaged in face-to-face interactions but did not touch each other. During these “episodes of interaction synchrony,” mothers and infants had coordinated heart rhythms: “Results of the present study demonstrate that human mothers and infants engage in a process of bio-behavioral synchrony as it was initially defined in other mammals—the regulation of infant physiology by means of social contact. During face-to-face interactions mothers adapt their heart rhythms to those of their infant’s [sic] and infants, in turn, adapt their rhythms to those of the mother’s within lags of less than 1 s [one second], forming biological synchrony in the acceleration and deceleration of heart rate.”260 It has become clear that such synchronies are vital to an infant’s development, including brain development. Obviously, neither mother nor baby is aware of coordinating their heartbeats, but how does one go about imitating such subtle interactions if one of the partners has no heart—in fact, no biological systems whatsoever?