Multimodal Deep Reinforcement Learning

Eligible: Undergraduate and Masters students

Mentor: Paul Pu Liang

Description: Many real-world agents interact with their environment through a variety of sensors and modalities. This project will develop models that integrate information from multiple modalities. We will explore models that are robust to noisy or missing modalities, experiment with environments that require agents to take actions grounded in language and vision, and study settings where multiple agents communicate to complete a task.
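As one minimal illustration of the robustness idea above (not the project's actual method), a hypothetical agent might fuse per-modality embeddings by averaging only over the modalities that are currently available, so a missing or dropped modality degrades the representation gracefully instead of corrupting it:

```python
import numpy as np

def fuse(vision, language, present):
    """Late fusion: average the embeddings of whichever modalities
    are present (hypothetical scheme for handling missing inputs)."""
    feats = np.stack([vision, language])     # (2, d) modality embeddings
    mask = np.asarray(present, dtype=float)  # (2,) availability flags
    # Weighted average over available modalities only.
    return (mask[:, None] * feats).sum(axis=0) / mask.sum()

# Toy 4-dim embeddings standing in for real encoder outputs.
v = np.array([1.0, 0.0, 2.0, 0.0])
l = np.array([0.0, 2.0, 0.0, 4.0])

both = fuse(v, l, present=[1, 1])         # both modalities available
vision_only = fuse(v, l, present=[1, 0])  # language channel missing

print(both)         # → [0.5 1.  1.  2. ]
print(vision_only)  # → [1. 0. 2. 0.]
```

The fused vector falls back to the vision embedding alone when language is absent; real systems would learn the fusion weights rather than averaging uniformly.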

Skills/Experience: Prior experience in deep learning is an advantage but not a requirement.

Contact: Interested students should send an email to Paul Pu Liang with their CV and a description of their experience in machine learning.