The Multimodal Communication and Machine Learning Laboratory (MultiComp Lab) is headed by Dr. Louis-Philippe Morency at the Institute for Creative Technologies (ICT) of the University of Southern California (USC). At the core of this field is the need for new computational models of human interaction that capture the multimodal, multi-participant, and multi-behavior nature of human social communication.
Call for papers for the first workshop on Computational Modeling of Human Multimodal Language @ACL
Dr. Morency receives Finmeccanica Career Development Professorship in Computer Science
Commencing two reading groups for Spring 2018
We build probabilistic models and real-time algorithms for automatically recognizing and modeling the interdependence between linguistic symbols (e.g., words) and nonverbal signals (e.g., gestures and prosody) during human social interactions.
We develop new machine learning techniques tailored to the challenges of integrating and adapting multi-view symbol and signal streams online, as needed to recognize and interpret high-level human communicative behaviors.
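To make the multi-view integration idea concrete, the sketch below contrasts two standard fusion strategies for combining a linguistic view with an acoustic view. This is a minimal illustration of the general technique, not the lab's actual models; all feature names, dimensions, and weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative per-utterance features, one matrix per modality
# (names and dimensions are hypothetical): 4 utterances each.
text_feats = rng.normal(size=(4, 3))     # e.g., word-embedding summary
prosody_feats = rng.normal(size=(4, 2))  # e.g., pitch/energy statistics

def early_fusion(views):
    """Early fusion: concatenate each utterance's per-view features
    into one joint feature space before any prediction is made."""
    return np.concatenate(views, axis=1)

def late_fusion(scores, weights):
    """Late fusion: combine independent per-view prediction scores
    with a weighted sum."""
    return sum(w * s for w, s in zip(weights, scores))

# Early fusion yields a single joint representation per utterance.
fused = early_fusion([text_feats, prosody_feats])
print(fused.shape)  # (4, 5)

# Late fusion first scores each view separately (random linear
# scorers here, purely for illustration), then mixes the scores.
text_scores = text_feats @ rng.normal(size=3)
prosody_scores = prosody_feats @ rng.normal(size=2)
combined = late_fusion([text_scores, prosody_scores], [0.6, 0.4])
print(combined.shape)  # (4,)
```

Early fusion lets a model learn cross-modal interactions directly, while late fusion keeps per-view models independent and is more robust when one modality is missing or noisy.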
This research has wide applicability, including the recognition of human social behaviors, the synthesis of natural animations for robots and virtual humans, improved multimedia content analysis, and the diagnosis of social and behavioral disorders.