Each of the participants in the experiment went through sessions with three computers, each running a different program. There was a tutor program, a program that tested the participants on the topics taught by the tutor program, and finally a program that evaluated both the participants’ test results and the teaching abilities of the tutor computer. Both groups, men and women, regarded the female-voiced evaluator as significantly less friendly than the male, supporting the stereotypical view that an evaluation by a man is more acceptable than exactly the same evaluation by a woman. In addition, both groups treated praise from the “male” computer more seriously than exactly the same praise from the “female” computer and believed the tutor computer to be significantly more competent after it had been praised by the “male” evaluator computer, compared to when it had been praised by the “female” evaluator. Finally, the “male” computer was perceived as being more informative than the “female” computer on the subject of computers (a “masculine” subject), while the “female” computer was considered to be the more informative when tutoring in love and relationships (a “feminine” topic).
The clear evidence from these experiments confirms that both men and women tend to carry over stereotypical views on human gender to their interactions with computers. Yet when they were questioned after the experiments, the participants uniformly agreed that there was no difference other than voice between the “male” and “female” computers and that it would be ludicrous to think of computers in gender stereotypes!
Another series of experiments was devoted to an investigation into whether people are polite to computers, as they are to other people. Research in social psychology has revealed that when someone is asked to comment on another person in a face-to-face social situation (for example, “How do you like my new haircut?”), the resulting comments tend to be positively biased, even when the genuine evaluation might be negative. This is because people are inherently polite to other people. Nass and his team replicated this type of situation by having participants work with a computer on a task, then asking each participant to evaluate the computer’s performance. These evaluations were conveyed by each participant in one of three ways: to the computer itself; to another computer, which the participant knew to be another computer but which was identical for all practical purposes to the computer being evaluated; and as a pencil-and-paper questionnaire. The evaluations presented by the participants to the collaborating computer itself were found to be significantly more positive than those given to the second computer or recorded on paper (the latter two methods produced identical, and presumably truthful, responses). The clear conclusion here is that people are polite to computers, this despite a uniform denial by the participants that computers have feelings or that they deserve to be treated politely.
In yet another series of experiments, Nass’s team investigated the psychological phenomenon of reciprocal self-disclosure. Research psychologists have confirmed something that is intuitively obvious: the general reluctance of people to talk about their innermost feelings to anyone other than their nearest and dearest. The one pronounced exception to this rule is that people will often disclose their secrets to strangers if the strangers first disclose secrets about themselves.9 Does this reciprocity of self-disclosure apply to people who are in conversation with a computer? In the experiment designed to answer this question, the participants were interviewed by a computer on a variety of topics. In some of the interviews the computer disclosed something about itself before asking each question; where there was no self-disclosure by the computer, the questions were asked in a different manner, without suggesting in any way that the computer had feelings and without the computer’s referring to itself as “I.” Typical of the difference between the two forms of question was
What has been your biggest disappointment in life?
in which there is no self-disclosure, and
This computer has been configured to run at speeds of up to 266 MHz. But 90% of computer users don’t use applications that require these speeds. So this computer rarely gets used to its full potential. What has been your biggest disappointment in life?
in which the computer’s question is preceded by an explanation of one of its “disappointments.” A less technically oriented example from the same experiment was:
Are you male or female?
and
This computer doesn’t really have a gender. How about you: are you male or female?
The results demonstrated that when the computer reciprocated, by first disclosing something about itself before asking the question, the participants’ responses evidenced more intimacy, in terms of both the depth and the breadth of the participants’ self-disclosure, than when the computer disclosed nothing about its virtual persona.10 So once again the evidence points to a human tendency to relate to computers in much the same way as the same human would relate to other humans in comparable social situations.
The weight of the evidence found by Nass and his colleagues from these and other experiments* leads to the conclusion that people subconsciously employ the same social “rules” when interacting with computers as they do when interacting with other people. And this despite
the fact that the participants in our experiments were adult, experienced computer users. When debriefed, they insisted that they would never respond socially to a computer, and vehemently denied the specific behaviors they had in fact exhibited during the experiments.11
It seems perfectly reasonable to explain this phenomenon on the basis of a combination of attachment and anthropomorphism (more the latter in these experiments, because the participants did not interact with the computers for long enough for attachment to become the dominant factor). Nass and his group disagree, basing their arguments on a subtly but importantly different definition of anthropomorphism from the customary one.* Instead they prefer to treat such behavior by computer users as ethopoeia: responding to an entity as though it were human while knowing that the entity does not warrant human treatment or attribution. I feel that the line between subconscious anthropomorphism (as I and many others use the word) and ethopoeia is too fine, if it exists at all, to cause us any concern in this discussion.
The Development of Social Relationships with Computers
Computers are increasingly being regarded as our social partners, and with the evidence amassed by Nass and his group it is not difficult to understand why. In addition to the examples of their experimental research described above, Reeves and Nass have also discovered that people prefer interacting with computers that have identifiable personalities, more so when a computer’s personality matches their own and especially when the user actually experiences the process of the computer’s adapting its own personality and style of communication to be increasingly like that of the user.† Yet another supporting argument for the view of computers as social entities is the liking that people develop for computers that praise them, preferring these computers to ones that offer no such compliments.
One area in which social interaction between humans and computers is often evident is the realm of games. The history of game playing by computers is littered with evidence that many humans anthropomorphize when competing against a computer program, for example Michael Stean’s exclamation “bloody iron monster” and his dubbing the computer “a genius.”* In an experiment designed to investigate the manner in which human game players are emotionally stimulated by computers, two social psychologists, Karl Scheibe and Margaret Erwin, arranged for forty students to play five different computer matching games against a machine,† while a tape recorder was left running to record the students’ comments. Almost all of the students referred to the computer as they might a human opponent, making comments such as, “It’s just waiting for me to do it.” Interestingly, the vocabulary the students employed for the machine often included the words “he,” “you,” and “it,” but never “she.”
While game playing is perhaps one of the most sociable activities in which computers can participate and demonstrate their sociability, the breadth of computer applications in which software can be socially responsive is almost limitless. One increasingly common reason for interacting with computer technology is the availability for purchase, via the Internet, of just about every type of product. When we buy something from an Internet shop, the owners of that shop want us to return to buy more, so customer loyalty and commitment are important to them. In order to engender such feelings in us, these shops often use software designed to learn more about us from our shopping habits, information that might be used at a later date to engage our interest and encourage us to buy. A relatively simple example of this can be seen in the way that the Amazon site operates. When I buy a book from Amazon, it remembers my purchase and tells me what other books the software believes might interest me. The software on the site knows‡ who else has bought the book I have just purchased, and it knows what other books those same people have purchased from Amazon, so it is able to deduce that I might have similar tastes to those other people and recommends to me the other books most often bought by that group (a kind of logic sketched below). Translating this crude (but presumably effective) approach to a world with robots, when I ask my butler robot to bring me a glass of a particular chardonnay, it will remember, and in the future it might ask if I would like it to go to the wine store to buy a similar wine that it knows is on special offer. In this way my butler robot will endear itself to me, just as Amazon hopes to do.
But relating to technology does not always bring its emotional rewards in the form of an interactive process, such as the way I might interact with my robot butler. We can love our Furby, but the Furby does not love us. We care about the Furby, but we do so without needing the relationship to become two-sided. In a sense this is analogous to sex with a prostitute: the needs of the client do not include the requirement that the prostitute love him.
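The “customers who bought this also bought” logic described in the Amazon example above can be sketched very simply. The following Python fragment is a minimal illustration only: the data, the customer names, and the recommend function are all invented for this sketch, and Amazon’s real system is of course far more elaborate.

from collections import Counter

# Hypothetical purchase histories: customer -> set of books bought.
purchases = {
    "alice": {"Book A", "Book B", "Book C"},
    "bob":   {"Book A", "Book C"},
    "carol": {"Book A", "Book D"},
}

def recommend(just_bought, purchases, top_n=3):
    # Count the other books bought by customers who also bought `just_bought`,
    # then suggest the books that this group bought most often.
    counts = Counter()
    for books in purchases.values():
        if just_bought in books:
            for other in books - {just_bought}:
                counts[other] += 1
    return [book for book, _ in counts.most_common(top_n)]

print(recommend("Book A", purchases))  # e.g. ['Book C', 'Book B', 'Book D']

The robot butler’s chardonnay suggestion would work on the same principle: remember what the user has chosen before and propose the nearest popular match.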
Why, then, do some humans develop social relationships with their computers, and how will robots in future decades replicate the benefits of human-human relationships in their own relationships with humans? To help us answer these questions, we should first consider exactly what emotional benefits human-human friendships provide and then determine whether these benefits might similarly be provided by computers.
In his book Understanding Relationships, Steve Duck has summarized the four key benefits of human friendships as:
(1) A sense of dependability, a bond that can be trusted to provide support for one of the partners when they need it.
A dramatic example of human trust in computers, and of dependence on them, can be seen in the progress made during recent years in the field of computer psychotherapists. For four decades researchers attempted, without very much success, to replicate in software the experience of psychotherapy encounters, replacing a human therapist with a computer. But then a team at King’s College London, led by Judy Proudfoot, developed a successful therapy program called Beating the Blues, for dealing with anxiety and depression. Their most important finding was that computer therapy, using their software, reduced anxiety and depression in a sample of 170 patients “significantly and substantially,” to levels that were barely above normal.
The relevance of this progress to the subject of human-computer emotional relationships derives from the nature of the patient-psychotherapist relationship. In making the initial decision to visit a therapist, and in deciding to continue with the course of therapy after the first few visits, a patient places great trust in the therapist. This trust encourages the patient to divulge personal and intimate confidences to the therapist and to take the therapist’s advice on sensitive emotional and other intimate problems in their lives. The fact that patients willingly divulge the same confidences, and take the same advice, when interacting with a computer therapist demonstrates an inherent willingness to develop emotional relationships on a trusting and intimate level with computers.* Furthermore, as we saw in chapter 1, the act of divulging intimate confidences is one of the ingredients that can quickly turn a relationship into love.
(2) Emotional stability—reference points for opinions, beliefs and emotional responses.
Endowing a robot with opinions and beliefs is, at the simplest level, merely a matter of programming it with the necessary data, which could take a form such as this:
OPINION: The Red Sox will lose to the Yankees tomorrow.
EXPLANATION: Their top four players are ill with the flu. They have lost to the Yankees in the last seven games between them. The Yankees have recently purchased the two best players in the country.
And as software is developed that can argue a case logically—for use in robot lawyers, for example—it will become possible for robots to argue in defense of their opinions and beliefs by making use of such explanations.
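As a rough indication of how little is needed at this simplest level, the OPINION/EXPLANATION form above could be stored and recited by a program along the following lines. The Opinion class and its defend method are names invented here purely for illustration; this is a sketch of the idea, not a claim about how such a robot would actually be built.

from dataclasses import dataclass, field

@dataclass
class Opinion:
    statement: str                                      # the opinion itself
    reasons: list[str] = field(default_factory=list)    # its explanation

    def defend(self) -> str:
        # State the opinion, then recite the supporting reasons.
        return self.statement + " Because: " + " ".join(self.reasons)

red_sox = Opinion(
    statement="The Red Sox will lose to the Yankees tomorrow.",
    reasons=[
        "Their top four players are ill with the flu.",
        "They have lost to the Yankees in the last seven games between them.",
        "The Yankees have recently purchased the two best players in the country.",
    ],
)

print(red_sox.defend())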
Giving a robot the means to express appropriate emotional responses is a task that falls within the development of a software “emotion module.” Robot emotions are discussed briefly in the section “Emotions in Humans and in Robots” in chapter 4, and more fully in Robots Unlimited,* with the Oz emotion module, Juan Velasquez’s Cathexis program, and the work of Cynthia Breazeal’s group at MIT among the best-known examples created to date. Research and development in this field is burgeoning, within both the academic world and commercial robot manufacturers, and especially in Japan and the United States. I am convinced that by 2025 at the latest there will be artificial-emotion technologies that can not only simulate the full range of human emotions and their appropriate responses but also exhibit nonhuman emotions that are peculiar to robots. This will make it possible for robots to respond to some human emotions in interestingly different ways from those exhibited by humans, ways that some people will most likely find to be more appealing in some sense than the emotional responses they experience from humans.
(3) Providing physical support (doing favors), psychological support (showing appreciation of the other and letting the other person know that his or her opinion is valued), and emotional support (affection, attachment and intimacy).
Physical support from robots will be a question only of engineering, of designing and building robots to have the necessary physical capability to perform whatever task is being asked of them. If the favor consists of mowing the lawn or vacuuming the carpet, such robots are already on sale. As time goes on, more and more tasks will be undertaken by special-purpose robots, of which the lawn mower and vacuum cleaner are merely the first domestic examples. Eventually there will be not only a vast range of robots, each of which can perform its own specified task, but also robots that can operate these robots and others, making it possible for us to ask one robot to accomplish all manner of tasks simply by commanding the relevant special-purpose robots to do their own thing.
Psychological support from robots will most likely be provided by robot therapists, programmed with software akin to that employed in the program Beating the Blues.
Emotional support will be an ancillary by-product of a robot’s emotion module, one for which artificial empathy will be a prerequisite. It has been shown that so long as a computer appears to be empathetic, understanding and responding to the user’s expressions of emotion and giving appropriate feedback, it can engender significant behavioral effects in a user, similar to those that result from genuine human empathy. Empathy in robots will be achieved partly by measuring the user’s psychophysiological responses, as described in the next chapter. By converting this empathy into emotional support, robots will be laying the foundations for behavior patterns that will enhance their relationships with their users.
(4) Providing reassurance about one’s worth as a person.
Our friends contribute to our self-evaluation and self-esteem by giving us compliments and repeating to us the nice things that other people have said about us. Friends also raise our self-esteem by listening, asking our advice, and valuing our opinions. All of this will be accomplished by a robot’s conversational module, backed by scripts and other conversational technologies that teach a robot how to talk in a reassuring manner.
In considering the potential of robots to provide these various benefits of friendship, Yeager asks whether it is likely or even inevitable that we should entertain some doubts in the backs of our minds: to what extent will people in the middle of this century be saying to themselves, “But this thing is still only a machine”? To what extent will those whose strongest friendships are all or mostly with robots miss the angst of human-to-human relationships? It is my belief that such doubts and feelings will by then have dissipated almost entirely, partly because robots will be so convincing in their appearance and behavior and partly because people who grow up in an era in which robots are even more commonplace than pet cats and dogs will relate to robots as people nowadays relate to their friends.
Sustaining Social Relationships with Computers
Timothy Bickmore and Rosalind Picard have conducted an extensive review of the research into the social psychology of human-human relationships and human-human communication, research that is also relevant to human-computer relationships. They found that people use many different behaviors to establish and sustain relationships with one another and that most of these behaviors could be used by computer programs to manage their relationships with their users.
One of the key elements of relationships—an element that until recently at least has been missing from the software designed to create relationships between a computer and a human—is the importance of maintaining the interest, trust, and enjoyment of the human. Maintaining interest can be a side effect of doing everyday tasks together on a regular basis, the collaboration on these tasks acting as a bonding agent. Maintaining the trust in a relationship can be achieved by “metarelational communication”—talking about the relationship in order to establish the expectations of each partner and to ensure that all is well in the relationship. Other contributing factors to maintaining trust are: (a) confiding in one’s partner as to one’s innermost thoughts and feelings—this increases both trust and closeness; (b) emphasizing commonalities and deemphasizing differences—this behavior is associated with increasing solidarity and rapport with one’s partner; and (c) “lexical entrainment”—using a partner’s choice of words in conversation.
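Lexical entrainment, the last of these behaviors, is particularly easy to imagine in software. The sketch below is a deliberately crude illustration, assuming a hand-made table of interchangeable terms and the invented functions note_user_words and entrain; a real conversational program would handle this far more subtly.

# Groups of words treated as interchangeable, plus the user's observed choices.
SYNONYMS = [{"sofa", "couch"}, {"photo", "picture"}, {"film", "movie"}]
preferred = {}  # maps a synonym group to the variant the user last used

def note_user_words(utterance):
    # Remember which variant of each synonym group the user chose.
    for word in utterance.lower().split():
        for group in SYNONYMS:
            if word in group:
                preferred[frozenset(group)] = word

def entrain(reply):
    # Rewrite the program's reply so that it reuses the user's preferred terms.
    words = []
    for w in reply.split():
        for group, choice in preferred.items():
            if w.lower() in group:
                w = choice
        words.append(w)
    return " ".join(words)

note_user_words("I spilled coffee on the couch")
print(entrain("Shall I order a new sofa cover?"))  # -> Shall I order a new couch cover?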