The Multimodal Communication and Machine Learning Laboratory (MultiComp Lab) is headed by Dr. Louis-Philippe Morency at the Institute for Creative Technologies (ICT) of the University of Southern California (USC). At the core of this research field is the need for new computational models of human interaction that emphasize the multimodal, multi-participant, and multi-behavior aspects of human social communication.

Latest News


Call for papers for the first workshop on Computational Modeling of Human Multimodal Language @ACL


Dr. Morency receives Finmeccanica Career Development Professorship in Computer Science


Commencing two reading groups for Spring 2018


Human Communication Dynamics

  • Facial and body gestures
  • Verbal content
  • Prosodic signals

We build probabilistic models and real-time algorithms for automatically recognizing and modeling the interdependence between linguistic symbols (e.g., words) and nonverbal signals (e.g., gestures and prosody) during human social interactions.
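As an illustration of this kind of probabilistic symbol–signal integration (a generic sketch, not the lab's actual models), the sketch below performs naive late fusion: per-modality class posteriors for a verbal cue and a prosodic cue are combined under a conditional-independence assumption. The labels "agree"/"disagree" and the posterior values are hypothetical.

```python
# Illustrative sketch only: combine per-modality posteriors p(class | modality)
# assuming modalities are conditionally independent given the class.

def late_fusion(posteriors_by_modality, prior):
    """Fuse class posteriors from several modalities into one distribution."""
    fused = {}
    for c in prior:
        score = prior[c]
        for post in posteriors_by_modality:
            # Each modality contributes a likelihood ratio relative to the prior.
            score *= post[c] / prior[c]
        fused[c] = score
    z = sum(fused.values())          # renormalize to a proper distribution
    return {c: s / z for c, s in fused.items()}

# Hypothetical example: fusing a verbal-content model with a prosody model.
prior = {"agree": 0.5, "disagree": 0.5}
words = {"agree": 0.7, "disagree": 0.3}     # posterior from word features
prosody = {"agree": 0.6, "disagree": 0.4}   # posterior from prosodic signals
fused = late_fusion([words, prosody], prior)
```

Here both modalities favor "agree", so the fused posterior is more confident than either modality alone; real models additionally capture dependence between words and nonverbal signals rather than assuming independence.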

Read More >

Multimodal Machine Learning

  • Latent-variable models
  • Multi-view learning
  • Domain adaptation

We develop new machine learning techniques tailored to the challenges of multimodal research: the online integration of multi-view symbols and signals, and the adaptation needed to recognize and interpret high-level human communicative behaviors.
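To make the domain-adaptation idea concrete (a toy sketch under simplifying assumptions, not a method from the lab), the example below aligns the per-dimension mean of source-domain features to a target domain; practical methods such as correlation alignment also match covariance structure.

```python
import numpy as np

def mean_align(source, target):
    """Shift source features so their per-dimension mean matches the target's.
    A first-order form of feature-level domain adaptation."""
    return source - source.mean(axis=0) + target.mean(axis=0)

# Hypothetical data: the source domain is offset from the target domain.
rng = np.random.default_rng(0)
source = rng.normal(loc=2.0, size=(100, 3))   # e.g. features from one speaker pool
target = rng.normal(loc=0.0, size=(100, 3))   # e.g. features from a new population
aligned = mean_align(source, target)          # source mean now matches target mean
```

A classifier trained on `aligned` features sees inputs whose first moment matches the deployment domain, which is the basic intuition behind feature-alignment adaptation.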

Read More >

Health Behavior Informatics

  • Medical diagnosis
  • Training and education
  • Multimedia retrieval

This research has wide applicability, including the recognition of human social behaviors, the synthesis of natural animations for robots and virtual humans, improved multimedia content analysis, and the diagnosis of social and behavioral disorders.

Read More >