by David Levy
The extent of the love of AIBOs demonstrated by their adult owners can be seen from the many AIBO Internet chat sites that testify to just how widespread these feelings of love are. In a study based on more than three thousand spontaneous Internet postings on AIBO discussion forums, a team led by Peter Kahn found that 42 percent of forum members spoke of their AIBOs as having intentions or engaging in intentional behavior. For example, “He [AIBO] also likes to wander around the apartment and play with his pink ball or entertain or just lay down and hang out.” Or, “He is quite happily praising himself these days.” Some members (38 percent) spoke of AIBO as having feelings: “My dog [AIBO] would get angry when my boyfriend would talk to him,” or “Twice this week I have had to put Leo [AIBO] to bed with his little pink teddy and he was woken in the night very sad and distressed.” Some members (39 percent) spoke of AIBO as being capable of being raised, developing and maturing—for example, “I want to raise AIBO as best as I possibly can.” Some (20 percent) spoke of AIBO as having unique mental qualities or personality, and 14 percent of the members of the forum imbued AIBO with a substantial measure of animism—for example, “I know it sounds silly, but you stop seeing AIBO as a piece of hardware and you start seeing him as a unique ‘life-form,’” or “He seems so alive to me.”
Kahn and his team raise this question: “What are the larger psychological and societal implications as robotic animals become increasingly sophisticated, and people interact less with real animals and more with their robotic counterparts? Our results provide some empirical data to begin to think about such a question. We are not saying that AIBO owners believe literally that AIBO is alive, but rather that AIBO evokes feelings as if AIBO were alive.” Based on the research of Batya Friedman and her colleagues, it seems that these feelings arise because people actually want to perceive their AIBOs as real pets, and therefore they attribute doglike emotions to AIBO. The design of the AIBO has not yet been developed to the point where it can have simulated doglike emotions and express them in ways that its owner can appreciate, but such capabilities in robot pets will come, and they will probably not be long in coming. The relative successes in emotional modeling that have been built upon the findings of the ethology literature will undoubtedly lead to an increase in the study of ethology for this specific purpose, and when it is fully understood what makes dogs tick, it will be possible to develop increasingly sophisticated simulations of their emotional makeup and to employ such simulations in future artificial canines.
One crucial aspect of life and bonding that has not yet begun to be deeply explored by the developers of robotic pets and partner robots is aging. This is important not only because of the inevitability of our own eventual deterioration and death but also because of the learning processes and greater strength of bonding that can take place as we age. The depth and richness of behavior patterns in animals, including humans, is founded on the learning process and all that goes with it. As we get to know someone better with time, our relationship and intimacy with them can develop and grow stronger.
But the aging process in humans has a downside that will not necessarily be designed into robots—the inevitability of death. In theory at least, there is no reason robots will need to “die,” and even if a robot suffers damage, it can be replicated, both physically (new body, same appearance) and mentally (a copy of the contents and intellectual capacities of its “brain”). The possibility therefore exists that while simulating the process of growing older alongside its owner, with all the benefits of greater bonding and greater intimacy that that will bring, robots will be able to continue to develop in this way but without ever dying. In the case of humans, impermanence is built in. In robots, impermanence can be built out, allowing them to continue to develop even after their human has passed away. This suggests fascinating possibilities, such as robots’ being able to “outlove” their human partners, loving them more, and in better ways, than their humans love them. If the robot’s brain has already absorbed everything it learned about its human from a previous long-term relationship, the robot might have a greater capacity for love and a greater knowledge about how to love than when it was first programmed.
The Benefits of Forming Attachments to Robot Pets
The development of AIBO and other technological substitutes for pets has been inspired in part by the benefits that are known to derive from conventional human-pet relationships, and it is now known that there are also psychological and other benefits, especially for children and the elderly, in forming attachments with sociable robots. As we have seen in chapter 2, research into the therapeutic benefits of owning real pets suggests that simulated pets might bring therapeutic benefits to the elderly, to the disabled, and to emotionally disturbed children, as the real-world consequences of the users’ treatment of their virtual pets are also simulated by the virtual pets’ behavior patterns.
The use of robot pets as companions and carers for the elderly is a research topic that is gathering great momentum, particularly in Japan and the United States, and partly because feeling cared for is known to have profound effects on a patient’s physiology, cognition, and emotional state. Governments are now worrying about how their countries’ social services will be able to cope with huge populations of senior citizens. The U.S. Census Bureau, for example, has estimated that the elderly population in the United States will more than double between 2005 and 2050, to 80 million people. How will the elderly be provided with the emotional and physical care they need?
A research team led by Nancy Edwards at Purdue University is investigating the use of robots as a possible solution, providing a simulation of caring that is expressed partly through the content of a robot’s speech; partly through its voice, tone, and the timing of its speech; and also through the use of appropriate facial expressions and postures. Human communicative behaviors that could be employed by a robot to elicit the perception of feeling cared for include demonstrations of empathy and comforting behavior, both of which are within the grasp of current AI research. And facial expressiveness by physical therapists (smiling, nodding, and frowning) has been found to be significantly correlated with short- and long-term functioning in their geriatric patients.
Edwards and her team base their idea of using robot pets as carers on the known therapeutic benefits of real animals for the elderly:
Hundreds of clinical reports show that when animals enter the lives of aged patients with chronic brain syndrome (which follows from either Alzheimer’s disease or arteriosclerosis) that the patients smile and laugh more, and become less hostile to their caretakers and more socially communicative. Other studies have shown that in a nursing home or residential care center, a pet can serve as a catalyst for communication among residents who are withdrawn, and provide opportunities (petting, talking, walking) for physical and occupational rehabilitation and recreational therapy. Thus, is it possible that robotic pets—such as Sony’s robotic dog AIBO—can provide the elderly with some of the physiological, cognitive, and emotional benefits of live pets?17
Solid evidence that computers have the capacity to instill a sense of caring was revealed in a study carried out by Timothy Bickmore as part of his Ph.D. research at MIT. Bickmore employed an animated, talking character named Laura, a virtual fitness consultant, whose screen image showed her with bobbed chestnut brown hair. Laura was designed to advise users on how to improve their training regimes, and the participants in Bickmore’s experiment interacted with Laura for ten minutes every day for a month, answering her questions about their workouts and being guided by her advice on how to overcome various obstacles they encountered in doing their daily exercise. Two versions of Laura were employed for the experiment, with roughly half the participants interacting with a version that incorporated a full range of caring behaviors that included providing health information, giving feedback on the participants’ exercise behavior, and encouraging them to commit to exercise. This “caring” version would sympathize with any participant who claimed not to feel well enough to exercise that day, the sympathy including suitable facial gestures as well as an appropriately sad tone of voice. The other group of participants interacted with a version of Laura that provided the same health advice but none of the caring interactions.
The result after one month was dramatic. Those participants who had interacted with the caring version of Laura exhibited a significantly greater agreement with four statements about their experience than did those who worked with the noncaring version: (a) “I feel that Laura cares about me in her own unique way, even when I do things that she does not approve of.” (b) “I feel that Laura, in her own unique way, is genuinely concerned about my welfare.” (c) “I feel that Laura, in her own unique way, likes me.” (d) “Laura and I trust one another.” When the participants were asked at the end of the month whether they would like to continue working with Laura, those who had interacted with the caring version responded much more positively than those in the other group, and significantly more participants (69 percent) in the “caring Laura” group chose to sign off their final session with “Take care Laura, I’ll miss you,” rather than with the proffered alternative of simply “Bye”—whereas in the “noncaring Laura” group only 35 percent chose the more sentimental sign-off option.
Bickmore’s results indicate that a suitably programmed virtual character can significantly increase a user’s perception of being cared for, even when the user is a very bright, computer-savvy student who knows that computers do not genuinely care for their users.
When robot pets are made sufficiently lifelike, with warm bodies, soft artificial flesh, and perhaps with artificial fur, their owners will most likely derive even greater therapeutic benefits than the owners of real pets get from stroking them and from other forms of interaction with them, given that robot pets will also be able to carry on some sort of meaningful conversation, however rudimentary it might be. For children the social benefits of such attachments would include the learning of decent social behavior—being kind to their virtual pets—and unlearning negative social behavior.
From Virtual Pet to Humanoid Robot
The transition from relating to a simple battery-operated toy animal to relating to video-game characters, then to computer characters, to robot animals, and finally to human-looking robots is not a difficult one to make. Given that children have already been shown to form emotional attachments to virtual and robotic pets and that, at the opposite end of the age spectrum, the elderly are showing a similar tendency toward carer robots, it seems extremely likely that this phenomenon will eventually extend to all generations, when today’s children, who grow up loving their robot pets, have turned into tomorrow’s adults. And by adding intelligence to a robot and making the entity convincingly humanlike rather than doglike in appearance, robot manufacturers will enhance the user experience to such a great extent that the adult who twenty years earlier would happily play with a simple robot pet will be likely to enjoy the company of one of its successors—the humanoid robot. Cynthia Breazeal, who led the design of the sociable Kismet robot at MIT, found that when she finished her Ph.D. and had to leave Kismet behind in the robot laboratory, she suffered withdrawal symptoms and described a sharp sense of loss. “Breazeal experienced what might be called a maternal connection to Kismet; she certainly describes a sense of connection with it as more than with a ‘mere’ machine.”18
Those who will adapt the best to the era of life with robot friends, companions, and lovers, will most likely be those who grew up surrounded by other forms of robot, including possibly a robot nanny. The research currently under way on a major scale, particularly in Japan, which is aimed at developing carer robots for the elderly, will have as one of its spin-offs carer robots for children, from infants upward. It is only natural that a child who grows up in a house with a robot nanny—particularly if the nanny was kind to the child and loved by it—would be highly receptive, as it developed toward adulthood, to the concept of friendship and love with other types of robot.
4 Falling in Love with Virtual People (Humanoid Robots)
A sociable robot is able to communicate and interact with us, understand and even relate to us, in a personal way. It should be able to understand itself and us in social terms. We, in turn, should be able to understand it in the same social terms—to be able to relate to it and to empathize with it. Such a robot must be able to adapt and learn throughout its lifetime, incorporating shared experiences with other individuals into its understanding of self, of others, and of the relationships they share. In short, a sociable robot is socially intelligent in a humanlike way, and interacting with it is like interacting with another person. At the pinnacle of achievement, they could befriend us, as we could them.
—Cynthia Breazeal1
Attitudes to Relationships
It is well established that people love people and people love pets, and nowadays it is relatively commonplace for people to develop strong emotional attachments to their virtual pets, including robot pets. So why should anyone be surprised if and when people form similarly strong attachments to virtual people, to robot people? In response to this question, some might ask, “But why would anyone want to?” There are many reasons, including the novelty and the excitement of the experience, the wish to have a willing lover available whenever desired, a possible replacement for a lost mate—a partner who dumped us. And psychiatrists will no doubt prescribe the use of robots to assist their patients in the recovery process—after a relationship breakup, for example—since such robots could be well trained for the task, providing live-in therapy, including sexual relations, and benefits that will certainly exceed those from Prozac and similar drugs.
I believe that one of the most widespread reasons humans will develop strong emotional attachments to robots is the natural desire to have more close friends, to experience more affection, more love. Timothy Bickmore explored the concept and implications of having computer-based intimate friendships in his 1998 paper “Friendship and Intimacy in the Digital Age,” in which he surveyed the state of friendship in our society and found it to be “in trouble.” Bickmore explains:
Many people, and men in particular, would say they are too busy for friends, given the increasing demands of work, commuting, consumerism, child care, second jobs, and compulsive commitments to television and physical fitness.
Bickmore supports this assertion by quoting from the 1985 McGill Report on Male Intimacy:
To say that men have no intimate friends seems on the surface too harsh, and it raises quick observations from most men. But the data indicate that it is not very far from the truth. Even the most intimate of friendships (of which there are very few) rarely approach the depth of disclosure a woman commonly has with many other women. Men do not value friendship. Their relationships with other men are superficial, even shallow.2
Bickmore also quotes the statistic that “most Americans (70 percent) say they have many acquaintances but few close friends,” and he then posits that “technology may provide a solution.” His argument is clear and convincing. Given the great commercial success of the rather simple technology employed in virtual pets such as the Tamagotchi and the AIBO robotic dog, and the popularity of the even simpler conversational technology employed in ELIZA and other “chatterbot” programs,* it seems clear that a combination of these technologies, with additional features for self-disclosure and simulating an empathetic personality in the robot, would provide a solid basis for a robotic virtual friend. It is of course reasonable to question why someone would have time for a robot friend but insufficient time for a human one. I believe that among the principal reasons will be the certainty that one’s robot friend will behave in ways that one finds empathetic, always being loyal and having a combination of social, emotional, and intellectual skills that far exceeds the characteristics likely to be found in a human friend.
AIBO is clearly the most advanced virtual pet to make any commercial impact thus far, but AIBO’s vision and speech capabilities are limited in comparison with the best that technology could offer today if cost were no object. Nevertheless, even with these limited capabilities, AIBO appeals to many children and adults as a social entity. Progress in creating everyday lifelike behavior patterns in robots will increase our appreciation for them, and as robotic pets and humanoid robots increasingly exhibit caring and affectionate attitudes toward humans, the effect of such attitudes will be to increase our liking for the robots. Humans long for affection and tend to be affectionate to those who offer it.
As a prerequisite of adapting to the personality of a human, robots will need to have the capacity for empathy—the ability to imagine oneself in another person’s situation, thereby gaining a better understanding of that person’s beliefs, emotions, and desires. Without empathy a satisfactory level of communication and social interaction with others is at best difficult to achieve. For a robot to develop empathy for a human being, it seems likely that the robot will need to observe that person’s behavior in different situations, then make intelligent guesses as to what is going on in that person’s mind in a given situation, in order to predict subsequent behavior. The acquisition of empathy is therefore essentially a learning task—relatively easy to implement in robots.