This Interaction Metrics OER consists of two group projects focused on teaching students how to create validated metrics for measuring human-computer interactions. If we want to measure how good a team is at teamwork, we might count communication utterances by members and see if they’re equally distributed. But is that measure predictive of team success? Probably not. If we want to measure how much a person likes an app, we might count the number of uses per day or the number of taps per usage session. While these metrics are countable, they’re not accurate predictors of fondness for an app. These two projects ask students to create objective, useful metrics for real-world human-technology interactions and to validate them with predictive models and collected data. I tell students these projects are about “developing metrics for things that are hard to measure” and ask them to consider whether the proliferation of inexpensive sensors, AI, and IoT might make fuzzy constructs like “team trust” or being a “good leader” more measurable.
The first project, Game Analysis, is a “warm-up” project to get students used to these concepts and the methodology. They’re asked to choose a single-player video game to analyze. They are usually excited about this but don’t realize how much work is involved. By the end of this project, having documented the timing of all game entities, established players’ attentional zones on the screen, and recorded and compared data from novice and expert players, students are often exhausted but proud of their work.
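To give a concrete sense of what the novice-versus-expert comparison step can look like, here is a minimal sketch in Python. It assumes teams have logged a per-trial measure such as reaction time to a CSV file; the file name, column names, and choice of measure are illustrative only and not part of the assignment.

```python
# Hypothetical sketch: comparing novice and expert players on one measure.
# Assumes a CSV with columns "group" ("novice"/"expert") and "reaction_ms";
# the file, columns, and measure are invented examples, not assignment requirements.
import pandas as pd
from scipy import stats

data = pd.read_csv("gameplay_log.csv")
novice = data.loc[data["group"] == "novice", "reaction_ms"]
expert = data.loc[data["group"] == "expert", "reaction_ms"]

# Welch's t-test (no equal-variance assumption) on mean reaction time
t_stat, p_value = stats.ttest_ind(novice, expert, equal_var=False)
print(f"novice mean = {novice.mean():.1f} ms, expert mean = {expert.mean():.1f} ms")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

In practice, teams choose whatever measures fall out of their own game analysis; the point is simply that the comparison itself takes only a few lines of standard statistics.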
In the second, larger project, the Interaction Metrics Project, students are asked to apply the same techniques to a real-world workplace environment. They define a work task, the employee roles involved, and the interactions that occur. They define at least three new metrics of interest that don’t already exist. For example, one student team analyzed hospital shift changes, when one set of nurses hands off to the next, and established a metric for the quality of the shift change. Another team analyzed English as a Second Language professionals’ interactions with Google Translate in the workplace and established a metric for fluency. Students gather the workplace data required for their metrics, build a simple statistical predictive model, and then discuss with their contacts whether the interactions rated highly by their model are also perceived as high quality by workplace experts. Students who are unable to gain access to a workplace have the option of programming a realistic task simulation of a workplace activity and using that simulation instead. In this approach, they still have to ground their design decisions in real-world details of the workplace.
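For the “simple statistical predictive model” step, something as lightweight as an ordinary least-squares regression is usually enough. The sketch below is loosely based on the hospital shift-change example; the candidate metrics, the expert rating scale, and the file and column names are invented placeholders for whatever a team actually defines.

```python
# Hypothetical sketch: a simple predictive model linking student-defined metrics
# to expert quality ratings. All column and file names are invented examples.
import pandas as pd
import statsmodels.api as sm

obs = pd.read_csv("shift_change_observations.csv")
X = sm.add_constant(obs[["info_items_transferred", "interruptions", "duration_min"]])
y = obs["expert_quality_rating"]  # e.g., a 1-7 rating from experienced nurses

model = sm.OLS(y, X).fit()
print(model.summary())  # which candidate metrics predict perceived quality?

# Teams can then ask workplace experts whether the interactions the model
# rates highly are also the ones the experts consider high quality.
obs["predicted_quality"] = model.predict(X)
print(obs.sort_values("predicted_quality", ascending=False).head())
```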
These projects were designed as a significant portion (60% of course grade) of an interdisciplinary graduate class on cognitive engineering within a human factors / human computer interaction curriculum. The Game Analysis project has no prerequisites, though students with experience collecting and analyzing data will have an advantage. The Interaction Metrics project builds more on the topics covered in the course, which include cognitive task analysis and cognitive work analysis*, visual and auditory perception, mental models and knowledge representation*, cognitive workload*, attention, human error, design of alerts, adaptive automation, user interfaces and controls, decision making*, team dynamics, computer-supported collaborative work and social computing, and data visualization. The topics with asterisks* are most useful to the Interaction Metrics project.
The Game Analysis project requires 4 weeks and the Interaction Metrics Project requires 7 weeks. However, these projects are somewhat scalable in complexity. The Game Analysis project can be shortened by removing the requirement to collect actual data from human players; students could instead analyze game-play videos on YouTube/Twitch. The students might also focus simply on characterizing the game and player dynamics rather than taking the next step of distinguishing novices from experts. The Interaction Metrics project can be slightly shortened if only one metric is required rather than three. If an instructor wanted students to grapple briefly with the concept of developing good metrics without developing metrics themselves, they could have students read the examples in the Interaction Metrics assignment, ask them to argue why these are good or bad metrics, and discuss how they could be improved.
Both projects can be extended, if desired, by having students use a discrete event simulation tool like Simio, Arena, or NetLogo to model their game or workplace and make predictions. I have done this successfully, but only with extra time set aside to teach students the basics of the simulation tool through online videos and tutorials.
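To show what this extension involves without committing to a particular commercial tool, here is a minimal discrete event simulation sketch using the open-source Python library SimPy rather than Simio, Arena, or NetLogo. The task, timing parameters, and novice/expert means below are invented for illustration only.

```python
# Hypothetical sketch of the discrete-event-simulation extension, using SimPy
# in place of Simio/Arena/NetLogo so the idea can be shown in code.
# The task and all numbers are invented for illustration.
import random
import simpy

HANDOFF_TIME = {"novice": 9.0, "expert": 5.0}  # assumed mean minutes per patient handoff

def shift_change(env, role, n_patients, log):
    """One nurse handing off n_patients at a shift change."""
    for _ in range(n_patients):
        # exponentially distributed service time around the role's mean
        yield env.timeout(random.expovariate(1.0 / HANDOFF_TIME[role]))
    log.append(env.now)  # simulated clock time when this nurse finishes

def run(role, n_patients=8, reps=100):
    """Replicate the shift change many times and return the mean finish time."""
    times = []
    for _ in range(reps):
        env = simpy.Environment()
        log = []
        env.process(shift_change(env, role, n_patients, log))
        env.run()
        times.extend(log)
    return sum(times) / len(times)

print("mean novice shift-change time:", round(run("novice"), 1), "min")
print("mean expert shift-change time:", round(run("expert"), 1), "min")
```

Running many replications and comparing the resulting distributions lets students make the same kinds of novice/expert or before/after predictions discussed above.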
These projects are long, offer a great deal of freedom of choice, and often require more work than students anticipate, so it is useful to scaffold students throughout their efforts with check-in meetings and discussions. At each class meeting I ask for project questions, and because some of my students are online, I maintain a dedicated Discussions forum for each project where they can ask questions.
Also, during lectures I try to relate the projects to the course content. For example, in a lecture on mental models and how they might be represented in software, I point out that in the game project, students are trying to build simple representations of novices’ and experts’ mental models. In a lecture enumerating types of human error, I point out that these types should apply to their Interaction Metrics project as well, since they’re effectively trying to reduce human error in the workplace setting they chose.

It is worth noting that using games as classroom content could possibly lead to controversy or promote problematic social dynamics, given that some popular games are quite violent and/or misogynistic. This article discusses these issues well: https://doi.org/10.1080/08838151.2014.999917. The issue is largely avoided by the constraint in the game project assignment, “The game is not so violent/obscene/misogynistic that some of us would be disgusted/embarrassed to see screenshots/videos presented in class,” and by requiring instructor approval of student game choices. Interpretation of this constraint is, of course, subjective; the instructor can apply it however they feel will keep their classroom a safe space for learning. In 8 years of using the game project in class, I have not encountered any issues along these lines, especially since there are so many games to choose from, and because simpler games like Fruit Ninja or Frogger require much less analysis than a more complex first-person shooter. I did reject one game choice edging into this controversial category, with a statement that focused on the comfort of the student’s classmates and the workload the choice would imply, e.g., “Can you choose a different game? I’m a little worried about the content of this game making some classmates uncomfortable. Plus, this game would be pretty complex to analyze with the complicated maps and different dependencies.” If an instructor wanted to avoid such discussions, they could require the game to be rated Everyone or Everyone 10+. Or, on the other hand, if the instructor wanted to tie such topics into class discussion, the project could present an excellent opportunity to discuss equity and inclusion, the identity formation of technology users, and the impact of games on marginalized groups.
These projects are highly engaging and use several techniques described in the NCWIT Engaging Practices Framework. Students find them motivating, challenging, and gratifying to complete. I tell new students that last year’s students felt this way, as a way of acknowledging the work they’re about to undertake (effective encouragement). The projects “make it matter” by asking students to analyze an existing workplace activity and seek ways to improve it. Students who are gamers also find the warm-up Game Analysis task meaningful and relevant. Non-gamer students often contribute data analysis skills to this project and become interested in trying to distinguish experts from novices; they may also serve as novice participants. These projects make interdisciplinary connections between CS, engineering, and psychology by using principles from each as students model the game or workplace and the humans’ actions and cognition within them. Because students can choose their game and workplace, these projects incorporate student choice. For each project, students first write a proposal (e.g., “We’re going to analyze XYZ and establish metrics for A, B, C.”). I encourage instructors to offer very personalized feedback on these proposals, with both encouragement and cautionary notes about scope or infeasibility. I check in weekly with teams to see if they have questions and whether the team is performing (not just forming, storming, and norming). This approach provides opportunities for interaction with faculty.

To promote an inclusive community, I assign group members using two principles: 1) I assign based on skillset (rather than letting students choose their own groups), and 2) I try to ensure that women or underrepresented students are not alone on teams (e.g., have two women on a team, not one). Regarding skillset, it is often useful to have at least one person with programming skills on each team. For the game project, it is useful, though not critical, to have at least one regular gamer on each team. If the class is interdisciplinary and some students have statistical analysis or experimental design skills, it is good to distribute them among teams as well. I do sometimes accept petitions from students who want to work together based on a common time zone, since logistics for team meetings are easier.