This group discussion activity helps students explore how people respond socially to communication technology by explaining and applying the Media Equation and the Computers Are Social Actors (CASA) paradigm for the study of human-technology interaction. Students learn how to evaluate and apply CASA by discussing agents and technologies portrayed in science fiction movie trailers that feature virtual agents with social characteristics.
Instructors should read the longer and more detailed discussion of The Media Equation and Computers Are Social Actors, and if they are unfamiliar with The Media Equation and research on this topic, they should consult other resources before facilitating discussion. To approach the material in a culturally competent way, instructors should be especially familiar with findings on racial and gender stereotypes and how these are reproduced in technology. Because discussion of race, gender, and other stereotypes is sensitive, instructors should refer to resources provided by the National Education Association [3] and the Southern Poverty Law Center [5]. Instructors have considerable discretion over how long to spend on the activity and which science fiction trailers to use. For instance, computer science students may struggle to apply social science findings to their more traditional engineering and design work, so the instructor may want to explain the roles of social science theory and/or experimental design; students with limited knowledge of technologies such as machine learning may need additional materials explaining the technologies that inform even fanciful and futuristic portrayals of robots and agents.
Reeves and Nass state in The Media Equation [4] that human-technology interaction is fundamentally social and natural: people tend to treat interactive technologies the way they treat people. The Media Equation is tested using the Computers Are Social Actors (CASA) paradigm. Under CASA, you first pick a social science finding regarding human behaviors or attitudes. Second, using a previous research study of this finding in human-human interaction, you substitute the word “computer” (or some other form of interactive media) for the second “human” in the Theory and Methods sections of the original study. Third, you attempt to replicate the human-human interaction study as a human-computer interaction study and demonstrate that the rule still applies. Finally, you draw out implications for theory and design.
This activity provides students with an introduction to The Media Equation and how it bridges media psychology and computer science by considering examples from science fiction. Learning objectives include explaining and applying the idea that interactive media systems and devices are treated as humanlike entities, in both conscious and nonconscious ways, and reflecting on what this means for design. Because computers, robots, and agents are often given a gendered voice and/or appearance, and often carry a racial or ethnic coding, these issues are an integral part of the discussion plan. Students will take from this lesson an understanding of at least some of the issues inherent in interaction with agents, and of the considerations necessary for effective and ethical agent design.
Clifford Nass and colleagues summarized CASA research for a lay audience in three books: The Media Equation [4], focusing on early CASA experiments on basic social interaction; Wired for Speech [1], focusing on voice interfaces; and The Man Who Lied to His Laptop [2], focusing on later CASA experiments and on what human-computer interaction can teach about human-human interaction. CASA findings range from how the consequences of different human attributes apply to computer interfaces (e.g., interface gender, interface personality, race coding) to how different human social behaviors apply to computer interfaces (e.g., the norm of reciprocity, social identity). Frequently, these findings dealt with issues of homophily (people like others like themselves, such as extroverts liking extroverts and introverts liking introverts) and consistency (people dislike contradictions, such as preferring that the gender of a product’s spokesperson match the stereotypical gender associated with the product).
This activity is relevant to communication, computer science, psychology, media studies, design, engineering, and information students: it uses popular media to illustrate issues that are important to the design of human-computer interfaces, and it can also explain the social effects of technologies that are not on their face social. This lesson has previously been taught by contrasting the virtual agent Samantha from Her with the robot Baymax from Big Hero 6. These social agents differ in gender, specific knowledge and capacities, emotional depth, and the type of voice they have: Samantha is voiced by Scarlett Johansson, while Baymax has a synthetic male voice, and both have white-sounding voices and American accents. In terms of visual representation, Samantha has no physical form (except in a scene where she guides a white, female human avatar), while Baymax is a white, inflated, semi-humanoid robot. These two characters exhibit a number of characteristics discussed in the CASA research and can foster a wide-ranging discussion.

Issues of gender and race coding are also of great importance, and students should be engaged in discussing these issues with respect to the portrayal of agents and robots and what these factors mean for interaction with humans. One example that illustrates gender stereotyping in agents is the choice to cast the feminine Samantha as an assistant, emotional support figure, and romantic partner for the male lead: while this follows current cultural stereotypes, and thus would be considered appropriate under the “consistency” tenet of the CASA framework mentioned above, the portrayal may further entrench those stereotypes. With respect to race, Samantha is clearly portrayed with a white American voice, while Baymax has a more ambiguous presentation given his synthetic-sounding voice (albeit still with a stereotypically white American accent) and semi-anthropomorphic inflated body. This can be a jumping-off point for a discussion of race and gender issues in robotics and human-computer interaction, in terms of whether media portrayals support or subvert stereotypes that are widespread in popular culture.