MultiSense Live


MultiSense is the sensing and analysis framework created during the SimSensei project, spanning a multidisciplinary range of topics including multimodal interaction, social psychology, computer vision, machine learning, and artificial intelligence. Built on this technology, SimSensei is a virtual interviewer designed to help clinicians diagnose psychological distress. Using a Microsoft Kinect depth sensor together with facial-tracking software, SimSensei “interviews” patients while analyzing their smile, gaze, and fidgeting behavior. This data is then provided to doctors, who can use it as a more objective way to identify indicators of psychological distress such as depression, anxiety, and PTSD. These technologies are being integrated into ICT’s virtual human applications to provide healthcare support.
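To make the analysis step concrete, the sketch below shows one way per-frame nonverbal signals (smile intensity, gaze aversion, fidgeting) could be aggregated into session-level indicators for a clinician. All function names, field names, and values are illustrative assumptions, not the project's actual API or thresholds.

```python
# Hypothetical sketch of aggregating per-frame nonverbal signals into
# session-level summary indicators, in the spirit of MultiSense's analysis.
# Field names and values are illustrative assumptions only.
from statistics import mean

def summarize_session(frames):
    """frames: list of per-frame measurements, each a dict with
    'smile' (intensity 0.0-1.0), 'gaze_away' (bool), and
    'fidget' (motion magnitude). Returns session-level summaries."""
    n = len(frames)
    return {
        # average smile intensity over the interview
        "mean_smile_intensity": mean(f["smile"] for f in frames),
        # fraction of frames where the patient looked away
        "gaze_aversion_ratio": sum(f["gaze_away"] for f in frames) / n,
        # average fidgeting magnitude
        "mean_fidget": mean(f["fidget"] for f in frames),
    }

# Toy three-frame session
session = [
    {"smile": 0.1, "gaze_away": True,  "fidget": 0.4},
    {"smile": 0.3, "gaze_away": False, "fidget": 0.2},
    {"smile": 0.2, "gaze_away": True,  "fidget": 0.6},
]
print(summarize_session(session))
```

In practice such summaries would be computed from tracker output over thousands of frames; the point here is only the shape of the pipeline: raw sensor frames in, a small set of interpretable indicators out.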


G. Stratou and L.-P. Morency, MultiSense – Context-Aware Nonverbal Behavior Analysis Framework: A Psychological Distress Use Case, IEEE Transactions on Affective Computing, 2016

G. Stratou, L.-P. Morency, D. DeVault, A. Hartholt, E. Fast, M. Lhommet, G. Lucas, F. Morbini, K. Georgila, S. Scherer, J. Gratch, S. Marsella, D. Traum and A. Rizzo, A Demonstration of the Perception System in SimSensei, a Virtual Human Application for Healthcare Interviews, Demo Paper at the Sixth International Conference on Affective Computing and Intelligent Interaction (ACII), 2015

D. DeVault, A. Rizzo and L.-P. Morency, SimSensei: A Virtual Human Interviewer for Healthcare Decision Support, In Proceedings of the Thirteenth International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2014