Emotion and Affect Recognition

Psychologists believe that facial expressions and verbal messages are among the primary channels of human communication. In recent years, automatic emotion recognition has received considerable attention, and while the technology has advanced rapidly, many open problems remain.

Early work focused mostly on emotion analysis from single static facial images captured under constrained conditions, yet recognition in real-world settings is considerably harder. Because human emotions unfold dynamically over time, research is shifting toward recognition from video and image sequences. In our work, we develop multimodal machine learning methods for both static and temporal emotion and affect recognition.


S. Ghosh, E. Laksana, L.-P. Morency and S. Scherer. Representation Learning for Speech Emotion Recognition. In Proceedings of the Annual Conference of the International Speech Communication Association (Interspeech), 2016.

S. Ghosh, E. Laksana, L.-P. Morency and S. Scherer. Learning Representations of Affect from Speech. In Proceedings of the International Conference on Learning Representations Workshop (ICLR-W), 2016.

L.-P. Morency. The Role of Context in Affective Behavior Understanding. In Social Emotions in Nature and Artifact: Emotions in Human and Human-Computer Interaction, Jonathan Gratch and Stacy Marsella, Editors, Oxford University Press, 2014.

J.-C. Levesque, C. Gagne and L.-P. Morency. Sequential Emotion Recognition using Latent-Dynamic Conditional Neural Fields. In Proceedings of the IEEE Conference on Automatic Face and Gesture Recognition (FG), 2013.

Y. Song, L.-P. Morency and R. Davis. Learning a Sparse Codebook of Facial and Body Expressions for Audio-Visual Emotion Recognition. In Proceedings of the ACM International Conference on Multimodal Interaction (ICMI), 2013.

D. Ozkan, S. Scherer and L.-P. Morency. Step-wise Emotion Recognition using Concatenated-HMM. In Proceedings of the 2nd International Audio/Visual Emotion Challenge and Workshop (AVEC), in conjunction with the ACM International Conference on Multimodal Interaction (ICMI), Santa Monica, October 2012.