Amir Zadeh is an Artificial Intelligence Ph.D. student at Carnegie Mellon University. His research focuses on multimodal deep learning, both theoretical and applied. From a theoretical perspective, he is interested in building the foundations of multimodal machine learning. From an application perspective, he is keen on giving computers the capability to understand human communication as a multimodal signal involving language, gestures, and voice. His research spans several areas of natural language processing, computer vision, and speech processing. He started his Ph.D. in January 2015. Prior to that, he received his Bachelor of Science from the ECE department at the University of Tehran, where he was a member of the Advanced Robotics Laboratory.