
A Multi-label Convolutional Neural Network Approach to Cross-Domain Action Unit Detection

Publications

S. Ghosh, E. Laksana, S. Scherer, and L.-P. Morency. A Multi-label Convolutional Neural Network Approach to Cross-Domain Action Unit Detection. In Proceedings of the International Conference on Affective Computing and Intelligent Interaction (ACII), 2015.

As an important channel of nonverbal communication, the face reveals a wealth of information about an individual's affective and cognitive state (e.g., emotions, intentions, and engagement). Automated analysis of facial behavior therefore has the potential to enhance fields ranging from human-computer interaction and consumer electronics to science and healthcare.

Our work explores all aspects of facial behavior, including head pose, head motion, facial expression, and eye gaze. For example, automatic detection and analysis of facial Action Units is an essential building block in nonverbal behavior and emotion recognition systems. In addition to Action Units, head pose and head gestures play a significant role in how emotions and social signals are expressed and perceived. Finally, gaze direction matters when assessing attentiveness, social skills, and mental health, as well as the intensity of emotions. In our work, we develop methods to analyze the modalities mentioned above, especially in challenging real-world environments.
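To illustrate the multi-label formulation behind Action Unit detection, the sketch below trains a small convolutional network that produces one independent binary prediction per AU. This is an illustrative sketch only: the layer sizes, input resolution (48x48 grayscale face crops), and the number of AUs (12) are placeholders, and it is not the architecture from the cited paper; PyTorch is used purely as a convenient reference framework.

```python
import torch
import torch.nn as nn

class MultiLabelAUNet(nn.Module):
    """Small CNN that predicts multiple Action Units per face image (illustrative only)."""
    def __init__(self, num_aus: int = 12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 256), nn.ReLU(),
            nn.Linear(256, num_aus),  # one logit per AU (multi-label output)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = MultiLabelAUNet()
# Because several AUs can be active in the same frame, each output is treated as
# an independent binary classifier via a sigmoid / binary cross-entropy loss.
criterion = nn.BCEWithLogitsLoss()

faces = torch.randn(8, 1, 48, 48)                  # dummy batch of face crops
au_labels = torch.randint(0, 2, (8, 12)).float()   # 0/1 occurrence of each AU
loss = criterion(model(faces), au_labels)
loss.backward()
```

The key design choice is the loss: a per-AU sigmoid allows any subset of Action Units to co-occur, whereas a softmax over classes would force exactly one label per face.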

Facial expressions are a rich source of information and an important communication channel in human interaction. People use them to reveal intent, display affection, and express emotion. Automated tracking and analysis of such visual cues would greatly benefit human-computer interaction. A crucial initial step in many affect sensing, face recognition, and human behavior understanding systems is the detection of facial feature points such as the eyebrows, eye corners, and lips.
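To make the landmark-detection step concrete, here is a minimal sketch using dlib's off-the-shelf face detector and 68-point shape predictor. The model file name and input image path are assumptions (the predictor must be downloaded separately from the dlib model zoo), and this is a generic baseline rather than the method developed in our work.

```python
import dlib
import cv2

# Assumed file names: the 68-point predictor comes from the dlib model zoo,
# and "face.jpg" stands in for any input image.
PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"
IMAGE_PATH = "face.jpg"

detector = dlib.get_frontal_face_detector()       # HOG-based frontal face detector
predictor = dlib.shape_predictor(PREDICTOR_PATH)  # 68-point landmark model

img = cv2.imread(IMAGE_PATH)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for face_rect in detector(gray, 1):               # 1 = upsample once to find smaller faces
    shape = predictor(gray, face_rect)
    # Collect (x, y) coordinates covering the jawline, eyebrows, nose, eyes, and lips.
    points = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    print(f"Detected {len(points)} landmarks; left eye outer corner at {points[36]}")
```

Landmarks like these typically feed downstream steps such as face alignment, Action Unit detection, and gaze estimation.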

While facial landmark detection algorithms have made considerable progress in recent years, they still struggle under occlusion, in adverse lighting conditions, and in the presence of extreme pose variations. Our work specifically focuses on addressing such challenging scenarios using a range of computer vision and machine learning techniques.