If Tara can “be herself” only with a robot, she may grow up believing that only an object can tolerate her truth. What Tara is doing is not “training” for relating to people. For that, Tara needs to learn that you can attach to people with trust, make some mistakes, and risk open conversations. Her talks with the inanimate are taking her in another direction: to a world without risk and without caring.
Automated Psychotherapy
We create machines that seem human enough to tempt us into conversation, and then we treat them as though they can do the things humans do. This is the explicit strategy of a research group at MIT that is trying to build an automated psychotherapist by “crowdsourcing” collective emotional intelligence. How does this work? Imagine that a young man enters a brief (one- to three-sentence) description of a stressful situation or painful emotion into a computer program. In response, the program divides up the tasks of therapy among “crowd workers.” The only requirement to be employed as a crowd worker is a command of basic English.
The authors of the program say they developed it because the conversations of psychotherapy are a good thing but are too expensive to be available to everyone who needs them. But in what sense does this system provide conversation? One worker sends a quick “empathic” response. Another checks whether the problem statement distorts reality and may encourage a reframing of the problem or a reappraisal of the situation. These responses, too, are brief, no more than four sentences long. There are people in the system, but you can’t talk to them. Each crowd worker is simply given an isolated piece of a puzzle to solve. And indeed, the authors of the program hope that someday the entire process—already a well-oiled machine—will be fully automated and you won’t need people in the loop at all, not even piecemeal.
This automated psychotherapist, Tara’s conversations with Siri, and the psychiatrist who looks forward to the day when a “smarter” Siri could take over his job say a lot about our cultural moment. Missing in all of them is the notion that, in psychotherapy, conversation cures because of the relationship with the therapist. In that encounter, what therapist and patient share is that they both live human lives. All of us were once children, small and dependent. We all grow up and face decisions about intimacy, generativity, work, and life purpose. We face losses. We consider our mortality. We ask ourselves what legacy we want to leave to a next generation. When we run into trouble with these things—and that kind of trouble is a natural part of every life—that is something a human being would know how to talk to us about. Yet as we become increasingly willing to discuss these things with machines, we prepare ourselves, as a culture, for artificial psychotherapists and children laying out their troubles to their iPhones.
When I voice my misgivings about pursuing such conversations, I often get the reaction “If people say they would be happy talking to a robot, if they want a friend they can never disappoint, if they don’t want to face the embarrassment or vulnerability of telling their story to a person, why do you care?” But why not turn this question around and ask, “Why don’t we all care?” Why don’t we all care that when we pursue these conversations, we chase after a fantasy? Why don’t we think we deserve more? Don’t we think we can have more?
In part, we convince ourselves that we don’t need more—that we’re comfortable with what machines provide. And then we begin to see a life in which we never fear judgment or embarrassment or vulnerability as perhaps a good thing. Perhaps what machine talk provides is progress—on the path toward a better way of being in the world? Perhaps these machine “conversations” are not simply better than nothing but better than anything?
There Are No People for These Jobs
A cover story in Wired magazine, “Better than Human,” celebrated both the inevitability and the advantages of robots substituting for people in every domain of life. Its premise: Whenever robots take over a human function, the next thing that people get to do is a more human thing. The story was authored by Kevin Kelly, a self-declared techno-utopian, but his argument echoes how I have heard people talk about this subject for decades. The argument has two parts. First, robots make us more human by increasing our relational options, because now we get to relate to them as a new “species.”
Second, whatever people do, if a robot can take over that role, it was, by definition, not specifically human. And over time, this has come to include the roles of conversation, companionship, and caretaking. We redefine what is human by what technology can’t do. But as Alan Turing put it, computer conversation is “an imitation game.” We declare computers intelligent if they can fool us into thinking they are people. But that doesn’t mean they are.
I work at one of the world’s great scientific and engineering institutions. This means that over the years, some of my most brilliant colleagues and students have worked on the problem of robot conversation and companionship. One of my students used his own two-year-old daughter’s voice as the voice of My Real Baby, a robot doll that was advertised as so responsive it could teach your child socialization skills. More recently, another student developed an artificial dialogue partner with whom you could practice job interviews.
At MIT, researchers imagine sociable robots—when improved—as teachers, home assistants, best friends to the lonely, both young and old. But particularly to the old. With the old, the necessity for robots is taken as self-evident. Because of demography, roboticists explain, “there are no people for these jobs.”
The trend line is clear: too many older people, not enough younger ones to take care of them. This is why, roboticists say, they need to produce “caretaker machines” or, as they are sometimes called, “caring machines.”
In fairness, it’s not only roboticists who talk this way. In the past twenty years, the years in which I’ve been studying sociable robotics, I’ve heard echoes of “There are no people for these jobs” in conversations with people who are not in the robot business at all—carpenters, lawyers, doctors, plumbers, schoolteachers, and office workers. When they say this, they often suggest that the people who are available for “these jobs” are not the right people. They might steal. They might be inept or even abusive. Machines would be less risky. People say things like, “I would rather have a robot take care of my mother than a high school dropout. I know who works in those nursing homes.” Or, “I would rather have a robot take care of my child than a teenager at some day-care center who really doesn’t know what she’s doing.”
So what are we talking about when we talk about conversations with machines? We are talking about our fears of each other, our disappointments with each other. Our lack of community. Our lack of time. People go straight from voicing reservations about a health-care worker who didn’t finish high school to a dream of inventing a robot to care for them, just in time. Again, we live at the robotic moment, not because the robots are ready for us, but because we are counting on them.
One sixteen-year-old considered having a robot as a friend and said it wasn’t for her, but thought she understood at least part of the appeal:
There are some people who have tried to make friends and stuff like that, but they’ve fallen through so badly that they give up. So when they hear this idea about robots being made to be companions, well, it’s not going to be like a human and have its own mind to walk away or ever leave you or anything like that.
Relationship-wise, you’re not going to be afraid of a robot cheating on you, because it’s a robot. It’s programmed to stay with you forever. So if someone heard the idea of this and they had past relationships where they’d always been cheated on and left, they’re going to decide to go with the robot idea because they know that nothing bad is going to happen from it.
The idea has passed to a new generation: Robots offer relationship without risk and “nothing bad is going to happen” from having a robot as a friend or, as this girl imagines it, a romantic partner. But it’s helpful to challenge the simple salvations of robot companionship. We will surely confront a first problem: The time we spend with robots is time we’re not spending with each other. Or with our children. Or with ourselves.
And a second problem: Although always-available robot chatter is a way to never feel alone, we will be alone, engaged in “as-if” conversations. What if practice makes perfect and we forget what real conversation is and why it matters? That’s why I worry so much about the “crowdsourced” therapist. It is presented as a path toward an even more automated stand-in, and it is not afraid to use the words “therapist” and “conversation” to describe what it offers.
Smart Toys: Vulnerability to the As-If
In the late 1970s, when I began my studies of computers and people, I started with children. A first generation of electronic toys and games (with their assertive displays of smarts) was just entering the mass market. In children’s eyes, the new toys shared intelligence with people, but as the children saw it, people, in contrast to computers, had emotions. People were special because they had feelings.
A twelve-year-old said, “When there are computers who are just as smart as the people, the computers will do a lot of the jobs, but there will still be things for the people to do. They will run the restaurants, taste the food, and they will be the ones who will love each other, have families, and love each other. I guess they’ll still be the only ones who will go to church.” And in fact, in the mid-1980s and early 1990s, people of all ages found a way of saying that although simulated thinking might be thinking, simulated feeling is never feeling, simulated love is never love.
And then, in the late 1990s, there was a sea change. Now computer objects presented themselves as having feelings. Virtual pets such as Tamagotchis, Furbies, and AIBOs proposed themselves as playmates that asked to be cared for and behaved as though it mattered. And it was clear that it did matter to the children who cared for them. We are built to nurture what we love but also to love what we nurture.
Nurturance turns out to be a “killer app.” Once we take care of a digital creature or teach or amuse it, we become attached to it, and then behave “as if” the creature cares for us in return.
Children become so convinced that sociable robots have feelings that they are no longer willing to see people as special because of their emotional lives. I’ve interviewed many adults who say of children’s attachment to as-if relationships: “Well, that’s cute, they’ll grow out of it.” But it is just as likely, more likely in fact, that children are not growing out of patterns of attachment to the inanimate, but growing into them.
What are children learning when they turn toward machines as confidants? A fifteen-year-old boy remarks that every person is limited by his or her life experience, but “robots can be programmed with an unlimited amount of stories.” So in his mind, as confidants, the robots win on expertise. And, tellingly, they also win on reliability. His parents are divorced. He’s seen a lot of fighting at home. “People,” he says, are “risky.” Robots are “safe.” The kind of reliability they will provide is emotional reliability, which comes from their having no emotions at all.
An Artificial Mentor
To recall Marvin Minsky’s student, these days we’re not trying to create machines that souls would want to live in but machines that we would want to live with.
Thomas, now seventeen, says that from earliest childhood he used video games as a place of emotional comfort, “a place to go.” Thomas came to the United States from Morocco when he was eight. His father had to stay behind, and now Thomas lives with his mother and sister in a town that is more than an hour from his suburban private school. He has family all over the world, and he keeps up with them through email and messaging. His relationship with his mother is quite formal. She holds down several jobs, and Thomas says he doesn’t want to upset her with his problems. Now, he says, when he has a problem, the characters in his video games offer concrete advice.
Thomas provides an example of how this works. One of his friends at school gave him a stolen collector’s card of considerable value. Thomas was tempted to keep it but remembered that a character in one of his favorite games was also given stolen goods. In the game, Thomas says, the character returned the stolen items and so he did too. “The character went and did the right thing and returned it. And in the end, it would turn out good. So I just said, ‘Yeah, that’s good. I should probably return it, yeah.’”
Inspired by the character’s actions, Thomas returned the stolen card to its rightful owner. The game helped Thomas do the right thing, but it did not offer a chance to talk about what had happened or how to move forward with his classmates, who steal apparently without consequence and who now have reason to think he steals as well. Thomas says that at school he feels “surrounded by traitors.” It’s a terrible feeling, and one where talking to a person might help. But Thomas doesn’t see that happening any time soon. On the contrary, in the future, he sees himself increasingly turning to machines for companionship and advice. When he says this, I feel that I’ve missed a beat. How did he make the leap to artificial friendship? Thomas explains: Online, he plays games where he sometimes can’t tell people and programs apart.
Thomas has a favorite computer game in which there are a lot of “non-player characters.” These are programmed agents that are designed to act as human characters in the game. These characters can be important: They can save your life, and sometimes, to proceed through the game, you have to save theirs. But every once in a while, those who designed Thomas’s game turn its world upside down: The programmers of the game take the roles of the programmed characters they’ve created. “So, on day one, you meet some characters and they’re just programs. On day two, they are people. . . . So, from day to day, you can’t keep the robots straight from the people.”
When we meet, Thomas is fresh from an experience of mistaking a program for a person. It’s left a big impression. He’s wondering how he would feel if a “true bot”—that is, a character played by a computer program—wanted to be his friend. He cannot articulate any objection. “If the true bot actually asked me things and acted like a natural person,” says Thomas, “then I would take it as a friend.”
In the Turing “imitation game,” to be considered intelligent, a computer had to communicate with a person (via keyboards and a teletype) and leave that person unable to tell if behind the words was a person or a machine. Turing’s test is all about behavior, the ability to perform humanness. Thomas lives in this behaviorist world. There is a “Thomas test” for friendship. To be a friend, you have to act like a friend, like a “natural person.”
For Thomas makes it clear: He is ready to take the performance of friendship for friendship itself. He tells me that if a bot asked him, “How are you? What are you feeling? What are you thinking?” he would answer. And from there Thomas has an elaborate fantasy of what personalities would be most pleasing in his machine friends. Unlike the kids he doesn’t get along with at school, his machine friends will be honest. They will offer companionship without tension and difficult moral choices. The prospect seems, as he puts it, “relaxing.”
This is the robotic moment, “relaxing” to a seventeen-year-old who has been befriended by young thugs. If Thomas accepts programs as confidants, it is because he has so degraded what he demands of conversation that he will accept what a game bot can offer: the performance of honesty and companionate interest.
And then there is the question of how much we value “information.” By the first decade of the 2000s, it was easy to find high school students who thought it would be better to talk to computer programs about the problems of high school dating than to talk to their parents. The programs, these students explained, would have larger databases to draw on than any parent could have. But giving advice about dating involves identifying with another person’s feelings. So that conversation with your father about girls might also be an occasion to discuss empathy and ethical behavior. If your father’s advice about dating doesn’t work out, hopefully you’ll still learn things from talking to him that will help things go better when you have your next crush.
Saying that you’ll let a machine “take care” of a conversation about dating means that this larger conversation won’t take place. It can’t. And the more we talk about conversation as something machines can do, the more we can end up devaluing conversations with people—because they don’t offer what machines provide.
I hear adults and adolescents talk about infallible “advice machines” that will work with masses of data and well-tested algorithms. When we treat people’s lives as ready to be worked on by algorithm, when machine advice becomes the gold standard, we learn not to feel safe with fallible people.
When I hear young people talk about the advantages of turning to robots instead of their parents, I hear children whose parents have disappointed them. A disengaged parent leaves children less able to relate to others. And when parents retreat to their phones, they seem released from the anxieties that should come from ignoring their children. In this new world, adding a caretaker robot to the mix can start to seem like not that big a deal. It may even seem like a solution. Robots appeal to distracted parents because they are already disengaged. Robots appeal to lonely children because the robots will always be there.