This year, the Workshop on Multimodal Artificial Intelligence features a lineup of invited speakers discussing the frontiers of multimodal AI.
1. Scope and Related Areas
2. Keynotes
3. Workshop Schedule
4. Organizing Committee
The NAACL 2022 Workshop on Multimodal Artificial Intelligence (MAI-Workshop) offers a unique opportunity for interdisciplinary researchers to study and model interactions between (but not limited to) the modalities of language, vision, and acoustics. Advances in multimodal learning allow the field of NLP to take a leap towards better generalization to the real world (as opposed to being limited to textual applications) and better downstream performance in Conversational AI, Virtual Reality, Robotics, HCI, Healthcare, and Education.
We invite researchers from NLP, Computer Vision, Speech Processing, Robotics, HCI, and Affective Computing to attend the workshop, which covers a broad range of topics across these areas.
Keynotes

Devi Parikh – FAIR at Meta, Georgia Tech
Yonatan Bisk – Carnegie Mellon University
Aida Nematzadeh – Google DeepMind
Victor Zhong – University of Washington
Danna Gurari – University of Colorado Boulder
Alane Suhr – Cornell
Wei-Ning Hsu – Meta AI
Drew A. Hudson – Stanford
Workshop Schedule

(All times are in the PDT timezone.)
Organizing Committee

Amir Zadeh – Alexa AI, Carnegie Mellon University
Louis-Philippe Morency – Language Technologies Institute, Carnegie Mellon University
Paul Pu Liang – Machine Learning Department, Carnegie Mellon University
Kelly Shi – Carnegie Mellon University
Alex Wilf – Carnegie Mellon University
Ruslan Salakhutdinov – Carnegie Mellon University
Soujanya Poria – Singapore University of Technology and Design
Erik Cambria – Nanyang Technological University