B. Nojavanasghari, T. Baltrušaitis, C. Hughes, and L.-P. Morency. EmoReact: A Multimodal Approach and Dataset for Recognizing Emotional Responses in Children. In Proceedings of the ACM International Conference on Multimodal Interaction (ICMI), 2016.
With the increasing use of computer technology in science education, a research agenda is emerging to understand the opportunities and challenges of designing and adopting technologies in school classrooms. While emerging technologies have enabled new forms of learning media, little research has investigated the dynamics influencing social interactions between children or their responses to different instructional approaches.
Our goal in this body of work is to deepen our understanding of students' learning behaviors in classrooms by building computational models of children's engagement, collaborative behavior, curiosity, and affective state, leveraging visual, vocal, and verbal cues.
EmoReact (emotion recognition in children): Although there has been a considerable amount of research on automatic emotion recognition in adults, emotion recognition in children remains understudied. The problem is more challenging because children tend to fidget and move around more than adults, leading to more self-occlusions and non-frontal head poses. In this work, we introduce a newly collected multimodal emotion dataset of children between the ages of four and fourteen. The dataset contains 1102 audio-visual clips annotated for 17 emotional states: six basic emotions, neutral, valence, and nine complex emotions including curiosity, uncertainty, and frustration. Our experiments compare unimodal and multimodal emotion recognition baseline models to enable future research on this topic, and we provide a detailed analysis of the behavioral cues most indicative of emotion in children.
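One common way to build a multimodal baseline of the kind compared above is late fusion: train a separate classifier per modality and combine their per-class scores. The sketch below is a minimal, hypothetical illustration of weighted late fusion over visual and acoustic score matrices, not the exact baseline used in the EmoReact paper; the toy score values and modality names are assumptions for illustration only.

```python
import numpy as np

def late_fusion(scores_by_modality, weights=None):
    """Fuse per-modality class-probability scores by weighted averaging.

    scores_by_modality: list of (n_clips, n_classes) arrays, one per modality.
    Returns the predicted class index for each clip. (Hypothetical sketch,
    not the EmoReact authors' exact fusion method.)
    """
    stacked = np.stack(scores_by_modality)          # (n_modalities, n_clips, n_classes)
    if weights is None:                             # default: uniform modality weights
        weights = np.full(len(scores_by_modality), 1.0 / len(scores_by_modality))
    fused = np.tensordot(weights, stacked, axes=1)  # (n_clips, n_classes)
    return fused.argmax(axis=1)

# Toy per-modality scores for 2 clips and 3 emotion classes (illustrative values).
visual = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.3, 0.6]])
acoustic = np.array([[0.6, 0.3, 0.1],
                     [0.2, 0.5, 0.3]])

preds = late_fusion([visual, acoustic])  # → array([0, 2])
```

Late fusion is attractive for child data because modalities can drop out independently (e.g., a non-frontal head pose degrades only the visual stream), and the per-modality classifiers can still be compared as unimodal baselines.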
Curiosity: Curiosity is a vital socio-emotional skill in educational contexts, and curiosity and exploratory behavior are widely believed to be tightly intertwined. In this body of work, we focus on identifying visual, acoustic, and verbal behavioral indicators of curiosity, and on developing techniques for recognizing them automatically.