Future of Artificial Intelligence

A DISCUSSION OF

Artificial Intelligence for a Social World

In the midst of all the hype around artificial intelligence and the danger it may or may not pose, Justine Cassell’s article, “Artificial Intelligence for a Social World” (Issues, Summer 2019), clearly and intelligently makes the important point that AI will be what we choose it to be, and that autonomy, rather than being the only possible goal, is actually a distraction.

This fixation on autonomy has never made sense to me as an AI researcher focused on conversation. My colleagues and I work toward creating systems that can talk with us, carry on sensible conversations, provide easier access to information, and help with tasks when another human collaborator is not practical. Examples of this type of work include the NASA Clarissa Procedure Navigator, designed to help astronauts on the understaffed International Space Station perform procedures more efficiently by acting like a human assistant, and the virtual coach for the Oakley RADAR Pace sports glasses, which provides interactive conversational coaching to distance cyclists and runners. These systems are sophisticated AI, but they are focused on collaboration rather than autonomy. Why are our conversational AI systems helpful, friendly, and collaborative? Because we chose to apply our scientific investigation and technology to problems that required collaboration. We were interested in interaction rather than autonomy, as are Cassell and her colleagues.

In her article, Cassell shows what a different, and I think better, kind of AI we can have. She demonstrates, through her fascinating research on social interaction, how AI can be a scientific tool to answer social science questions and how social science results can feed back into the AI technology.

By using virtual humans, Cassell and colleagues are able to control variables in conversation that would be impossible to control with human subjects. They were, for example, able to create virtual children that differed only in whether they spoke a marginalized dialect or the “standard school dialect.” With these controlled virtual children, Cassell and colleagues were able to compare the effect of brainstorming about a science project in the children’s home dialect with that of brainstorming in the standard school dialect. The children who brainstormed with the virtual child in their home dialect had better discussions. This is an example of using AI technology, the virtual human, to answer a scientific question about children’s learning outcomes.

Then Cassell and colleagues discovered that it was not the dialect itself that mattered but the rapport that the dialect fostered. Studies of this elusive concept of rapport have yielded data used to build a predictive model, as well as algorithms that can be deployed in a system that attempts to build rapport with its human interlocutor, thus feeding the social science back into the technology.
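Cassell’s article does not spell out the implementation, so purely as an illustration of the loop, here is a minimal sketch in Python: hypothetical conversational features annotated with rapport ratings are used to fit a simple regression model, and its predictions then drive an equally hypothetical choice of conversational strategy. The feature names, the data, and the strategy rules are my own stand-ins, not the actual system built by Cassell and colleagues.

```python
# Illustrative sketch of the "social science -> technology" loop:
# fit a predictor of rapport from annotated conversational features,
# then use its predictions to choose a conversational strategy.
# All features, data, and rules below are hypothetical stand-ins.

import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical per-turn features extracted from annotated dialogues:
# [shared_laughter, self_disclosure, praise, norm_violation]
X_train = np.array([
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 0, 1],
], dtype=float)

# Human-annotated rapport ratings (say, on a 1-7 scale) for those turns.
y_train = np.array([5.5, 4.0, 6.5, 2.0, 3.5])

# The "predictive model" built from the annotated data.
model = Ridge(alpha=1.0).fit(X_train, y_train)

def choose_strategy(turn_features):
    """Pick a conversational move based on predicted rapport (toy rules)."""
    predicted_rapport = model.predict(np.array([turn_features], dtype=float))[0]
    if predicted_rapport < 3.0:
        return "stay polite and task-focused"           # low rapport: play it safe
    elif predicted_rapport < 5.0:
        return "offer praise or mild self-disclosure"   # build rapport gradually
    else:
        return "use playful teasing or shared references"  # high rapport allows more

# Example: a turn with shared laughter but no other rapport cues.
print(choose_strategy([1, 0, 0, 0]))
```

In a deployed conversational agent, such predictions would be updated turn by turn, letting the system judge, for instance, when teasing or self-disclosure is likely to strengthen rather than strain the interaction.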

Cassell’s article is filled with examples of solid and creative research, and she makes important points about the nature of artificial intelligence. It is recommended reading for anyone looking for a path to a positive and constructive AI future.

Chief Technology Officer
BAHRC Language Tech Consulting
