Socializing Artificial Intelligence

A DISCUSSION OF

Artificial Intelligence for a Social World

What is a social interaction? What is a relationship? What is a friendship? Justine Cassell, the author of “Artificial Intelligence for a Social World” (Issues, Summer 2019), built her research program in artificial intelligence around a theoretical model grounded in linguistics, psychology, and computer science. I’ve taken a different approach, examining these questions from the perspectives of developmental psychology, communications, and computer science. In particular, children’s longstanding experiences with media characters provide a window into how children treat nonliving entities: their feelings about those characters, called parasocial relationships, and their parasocial interactions with them, in which a “conversation” is created by having a character ask a question, pause for a reply, and then act as if it heard what the child said. For both of us, socially contingent interactions are a defining quality of what it means to be a virtual human, and they are modeled after linguistic and behavioral exchanges with actual people, particularly children’s friends.

A key question for Cassell and colleagues is how artificial intelligence can help us understand social interaction. They find that a virtual peer designed to function interdependently with children and to align with who children are (e.g., by sharing their dialect patterns) leads to beneficial social and academic outcomes in science and math. Similarly, our team finds that virtual characters are effective learning companions when children interact with them and feel stronger parasocial relationships with them, defined as the perception that a character is a trusted friend who makes them feel safe. Why would that not be the case? We create artificial beings based on who we are, on what our needs are. We model their actions and behaviors on us, and we, in turn, treat them as if they are human.

One place where our results differ is in the role of rapport, Cassell’s “chit chat” that takes place at the “water cooler.” For Cassell and colleagues, rapport is linked to learning. For our team, parasocial “math talk” improves children’s math skills, but “small talk” does not. Perhaps these outcomes differ because Cassell’s virtual peers are novel, whereas our virtual character is well known to children, one with which they have already established a parasocial relationship. Indeed, the conversational patterns that children share with one another, which Cassell uses to build virtual peers, vary depending on whether the children are friends. Nevertheless, both lines of research point to the importance of trust and friendship in children’s learning from virtual companions.

Although the promise of intelligent entities to teach social and academic skills is very real, so too is the risk that they will somehow replace something that is fundamentally human. In truth, intelligent beings reflect what is best and worst in us. A robot feigning fear of the dark elicits empathy from children. A virtual peer who acts differently from children can elicit abuse. Building virtual learning companions that respond contingently to children in socially sensitive ways is the challenge before us, one that will influence children’s developmental outcomes as their virtual and physical worlds become increasingly intertwined and interdependent.

Sandra L. Calvert
Professor of Psychology
Georgetown University
Director, Children’s Digital Media Center
