D. Ozkan, S. Scherer and L.-P. Morency. Step-wise Emotion Recognition using Concatenated-HMM. In Proceedings of the 2nd International Audio/Visual Emotion Challenge and Workshop (AVEC), in conjunction with the International Conference on Multimodal Interfaces (ICMI), Santa Monica, October 2012
Psychologists regard facial expressions and verbal messages as among the primary channels of human communication. In recent years, automatic emotion recognition has received considerable attention, and the underlying technologies have advanced rapidly, yet substantial research challenges remain.
In its early stages, research focused mostly on emotion analysis from single static facial images captured under constrained conditions, whereas recognition in real-world settings is considerably more difficult. Because human emotions unfold dynamically over time, research is shifting toward recognition from video and image sequences. In our work, we develop multimodal machine learning methods for both static and temporal emotion and affect recognition.