Michal Muszynski is a postdoctoral research associate (SNSF fellowship holder) carrying out interdisciplinary research at the intersection of computer science, neuroscience, medicine, and psychology. He received his Ph.D. in Computer Science from the University of Geneva in 2018. His research interests are in the areas of affective computing, affective neuroscience, deep multimodal machine learning, pattern recognition, signal processing, and big data. As part of the MultiComp lab, Michal brings his work experience in physiological and behavioural signal analysis.
Jeffrey Girard is a postdoctoral research associate working in the interdisciplinary space between psychology, medicine, and computer science. He completed his PhD in Clinical Psychology at the University of Pittsburgh in 2018 and has been collaborating with computer scientists at CMU since 2010. He is interested in how internal factors (e.g., emotion, personality, and psychopathology) and external factors (e.g., context, culture, and group processes) influence human behavior. As part of the MultiComp lab, Jeffrey brings expertise in facial computing, statistical analysis, and psychological theory.
Elif Bozkurt was a post-doctoral associate at the Language Technologies Institute, Carnegie Mellon University. Her primary research interests include nonverbal human behavior analysis and speech and multimodal signal processing, particularly for healthcare applications. She received both her PhD and MS degrees in Electrical and Electronics Engineering from Koc University. Her PhD research focused on affective speech-driven gesture synthesis. During her MS she worked on emotion recognition from speech.
Tadas Baltrušaitis was a post-doctoral associate at the Language Technologies Institute, Carnegie Mellon University. His primary research interests lie in the automatic understanding of non-verbal human behaviour, computer vision, and multimodal machine learning. In particular, he is interested in the application of such technologies to healthcare settings, with a particular focus on mental health. Before joining CMU, he was a post-doctoral researcher at the University of Cambridge, where he also received his Ph.D. and Bachelor's degrees in Computer Science. His Ph.D. research focused on automatic facial expression analysis in especially difficult real-world settings.
Vasu is presently pursuing his Master's in Language Technologies from the School of Computer Science at CMU. His primary areas of interest are multimodal machine learning, dialog systems, question answering, and building explainable and robust deep learning models via adversarial training.
He is presently working on decoding and reconstructing the neural basis of real-world social perception by finding relations between intracranial EEG signals and human expressions, pose, actions, emotions, etc. He is also designing adversarial attacks to test the robustness of Visual Question Answering models and building training pipelines that make these models robust to such attacks. He is also interested in various facets of facial expression analysis.
Peter is a Masters student in the Machine Learning Department. His current research focuses on multimodal learning, audio processing, and natural language processing. Other areas he’s worked on include federated learning and machine learning for healthcare.
Hongliang Yu was a Master's student in the Language Technologies Institute. His research interests lie in multimodal machine learning and natural language processing. He received his B.S. and M.S. from the School of Electronics Engineering and Computer Science at Peking University.
Muqiao is a graduate student at CMU. His research interest is in understanding real-world problems and mechanisms with machine learning techniques. Prior to joining CMU, he received his bachelor's degree from The Hong Kong Polytechnic University.
Tianjun (TJ) Ma
TJ is currently a fifth-year master's student in the School of Computer Science, advised by Louis-Philippe Morency. His research interests include multimodal signal processing, assistive technologies, and visual QA.
Tejas is a second-year Master’s student in the Language Technologies Institute at CMU. Tejas is currently working on Multimodal Co-learning to build models that are robust to missing modalities during inference time. He has worked on a number of research projects in the space of NLP, including multimodal speech recognition, dialog, and machine translation. Prior to joining CMU, he worked on end-to-end speech translation at the Indian Institute of Technology, Bombay.
Zeeshan Ashraf is a graduate student at Carnegie Mellon University. His research interests lie in multimodal machine learning, computer vision, and generative modelling. Currently he is also working on low-resource machine translation and VQA using unstructured knowledge. He hopes to learn more about and pursue research in Bayesian learning. Previously, he worked on gradient estimation in stochastic neural networks at the International Institute of Information Technology, Hyderabad.
Chengfeng Mao is a second-year master's student in Computer Science at Carnegie Mellon University. He is passionate about machine learning and applying it to solve real-world problems. During his study at CMU, he has also been working on multimodal machine learning applied to video sentiment analysis and multimodal generation with Professor L.P. Morency. Prior to attending CMU, he worked at Yahoo, Cask, and Google Cloud as a software engineer, specializing in big data infrastructure development. He received his bachelor's degree in Computer Engineering from the University of Illinois at Urbana-Champaign.
Qingtao is a first-year master's student in the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University. His research interests lie in deep learning, probabilistic graphical models, data mining, and information extraction. In particular, he is interested in disentangling and reconstructing phoneme and emotion information in speech using deep learning approaches. Prior to joining Carnegie Mellon, he obtained his B.S. in Computer Engineering from the University of Illinois at Urbana-Champaign, where he worked on analyzing how different user roles on Stack Exchange sites affect their sizes.
Anirudha Rayasam is a Master's student in the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University. His interests span various topics in natural language processing, computer vision, deep learning, and reinforcement learning. He is currently working on human pose forecasting conditioned on rich natural language descriptions.
Atabak Ashfaq is a first-year Master's student in the Language Technologies Institute at the School of Computer Science at CMU. He is working on active learning techniques to optimize the annotation collection process. His research interests lie in the fields of active learning and natural language processing. Prior to CMU, he worked at a financial giant to automate production management using time series modelling and active learning.
Ying is a second-year master's student in the Language Technologies Institute of the School of Computer Science at Carnegie Mellon University. Her research interests lie in deep learning, multimodal machine learning, natural language processing, and computer vision. In particular, she is interested in understanding human multimodal language and developing agents that feature a more human-like intelligence with the help of deep learning. Prior to joining Carnegie Mellon, she received her Bachelor's degree in Software Engineering from Fudan University.
Irene Li is a Machine Learning master's student at Carnegie Mellon University. Her research interests lie in speech recognition and natural language processing. She has worked on speech representation learning and removing social biases from sentence embeddings. She is currently working on multilingual style transfer. Irene obtained her B.S. in Computer Science from Carnegie Mellon University.
Zhun is a second-year master's student in the Language Technologies Institute at Carnegie Mellon University. His research interests lie in deep learning, natural language processing, and multimodal machine learning. In particular, he is interested in understanding the representation and reasoning capabilities of deep learning models through the lens of concrete language and multimodal tasks. Prior to joining Carnegie Mellon, he obtained his Bachelor's degree in Applied Mathematics at Wuhan University, where he spent his last semester working on hybrid neural/graphical models for linguistic structured prediction.
Sharath is a second-year master’s student in the Electrical and Computer Engineering program; his primary research interest lies in Machine Learning and Signal Processing. He is currently working on recognition and generation of backchannels in speech, and has previously worked on topics such as speech processing, emotion recognition, and graph convolutional networks.
Yao Chong Lim
Yao Chong is a Master’s Student in Computer Science. His research interests include interpretable multimodal representations and models, computer vision, and natural language processing, with a focus on applications in human communications. Previously, he worked on facial landmark detection methods. He holds a Bachelor’s Degree in Computer Science from Carnegie Mellon University.
Minghai is a Master's student in the Intelligent Information Systems (MIIS) program at the Language Technologies Institute. His interests lie in deep learning for computer vision and natural language processing. Previously he worked on image captioning, and more recently he has worked on multimodal sentiment analysis under Prof. Morency. In the summer of 2017 he worked with a machine learning team at Google. He holds a bachelor's degree in software engineering from Tsinghua University, China.
Sen is a Master's student in the Intelligent Information Systems (MIIS) program at the Language Technologies Institute. His interests focus on computer vision, natural language processing, and deep reinforcement learning. Previously he worked on suicidality inclination analysis, advised by Tadas and Prof. Morency. In the summer of 2017, he worked on the discovery engineering team at Google. He holds a bachelor's degree in Interdisciplinary Information Science from Tsinghua University, China.
Supriya Vijay was a Master's student in the Intelligent Information Systems program at the Language Technologies Institute. Her interests lie in applied machine learning in the domain of healthcare. Her research centered on the analysis of non-verbal behavior for identifying symptoms of schizophrenia, and she worked on Visual Question Answering as a capstone project under Prof. Morency. She holds a Bachelor's degree in Computer Science from PES Institute of Technology, Bangalore, India.
Christy Yuan Li was a master's student in the Computational Data Science program at Carnegie Mellon University. Her research interests lie in computer vision and machine learning. Before joining CMU, she earned a bachelor's degree in Information Engineering from The Chinese University of Hong Kong.
Jayanth was a student in the Computational Data Science program at LTI, working with Prof. Morency on the program’s capstone project. His interests are in computer vision, deep learning, and machine learning. He holds a bachelor’s degree in computer science from BITS Pilani, India.
Deepak was a Master's student in the Computational Data Science (MCDS) program at the Language Technologies Institute. His interests lie in machine learning, particularly deep learning for computer vision and language modeling. He worked with Prof. Morency on a video captioning problem for his MCDS capstone project. He holds a bachelor's degree in computer science from BITS Pilani, India. He has worked with the machine learning teams at Facebook and Amazon and the search team at LinkedIn.
Ryo Ishii is a visiting scholar at the Language Technologies Institute. He is a senior research scientist in NTT Media Intelligence Laboratories, NTT Corporation. He received his PhD in Informatics from Kyoto University, and his research focuses on multimodal interaction, multimodal machine learning, conversation analysis, and social signal processing.
Yukiko Nakano is a visiting scholar from April to September 2019. She is a professor in the Department of Computer and Information Science at Seikei University, Japan, where she leads the Intelligent User Interface Laboratory (IUI lab). With the goal of enabling more natural human-computer interaction, she has addressed issues in modeling conversations by analyzing human verbal and nonverbal communicative behaviors and developing multimodal conversational interfaces based on these empirical models.
Qinglan Wei was a visiting scholar at the Language Technologies Institute, Carnegie Mellon University. Her interests focus on developing multimodal machine learning methods for emotion expression. Qinglan Wei is a PhD candidate at the College of Information Science and Technology at Beijing Normal University, supervised by Prof. Bo Sun. She holds a master's degree in communication engineering from the Communication University of China.
Liandong Li was a visiting scholar at the Language Technologies Institute, Carnegie Mellon University. His research focuses on developing multimodal machine learning methods for the analysis of facial action units and expressions. Liandong is a PhD candidate at the College of Information Science and Technology at Beijing Normal University, supervised by Prof. Bo Sun. He received his B.S. degree in computer science from Beijing Normal University.
Wenjie Pei was a visiting scholar at the Language Technologies Institute, Carnegie Mellon University. His research focuses on time series modelling, including time series classification, time series similarity embedding learning, and time-series related applications. Wenjie is a PhD candidate at the Pattern Recognition Laboratory at Delft University of Technology, supervised by Dr. David Tax and Dr. Laurens van der Maaten.
Behnaz Nojavanasghari was a visiting scholar at the Language Technologies Institute, Carnegie Mellon University. Her research interests are at the intersection of affective computing, computer vision, and machine learning, with a particular focus on multimodal emotion recognition for children. Behnaz is a PhD candidate in computer science at the University of Central Florida. Before starting her graduate studies, she received her B.S. in computer science from Amirkabir University of Technology (Tehran Polytechnic).
Shreya is a senior in the Carnegie Mellon School of Computer Science, minoring in Machine Learning. She is currently working on audio source separation and is broadly interested in audio/speech-related tasks and reinforcement learning. Previously, she worked on using machine learning to identify related ideas in scientific text and patents.
Gayatri Shandar is an undergraduate in the School of Computer Science at Carnegie Mellon University. She works with Dr. Girard, where her current research interests include developing machine learning models to advance facial computing and the understanding of human emotion and its influence on facial behavior.
Holmes Wu is an undergraduate student at Carnegie Mellon University. His current research project involves 3D convolutional neural networks. He hopes to gain an in-depth understanding of the underlying principles of machine learning, especially deep learning.
Alex Schneidman is an undergraduate student in Computer Science at Carnegie Mellon University. His research involves designing tools that collect and organize datasets for training machine learning models on social interactions. He hopes to continue studying machine learning to explore its applications in artificial intelligence and computational finance.
Michael Chan is an undergraduate student at Carnegie Mellon University. His current work concerns using artificial intelligence to identify qualities of human social interaction through body language and speech from video clips. Michael hopes to learn more about machine learning overall and intends to use his knowledge to challenge the artistic and creative capabilities of machine learning models.
Azaan is in his senior year as a Computer Science undergraduate. His research interests lie in multimodal machine learning and natural language processing, and his current goal is to produce a system that can communicate and interact with, and adapt to, humans in everyday life.
Jonathan VanBriesen was an undergraduate pursuing a major in Computer Science at Carnegie Mellon University. He conducted research under Amir Zadeh focused on multimodal sentiment analysis.
Edmund Tong was an undergraduate pursuing a Computer Science major and a Language Technologies minor at Carnegie Mellon University. He was an undergraduate researcher with the Multimodal Communication and Machine Learning Lab from October 2015. Notably, the work that Edmund did with Amir Zadeh and Prof. Louis-Philippe Morency on applying deep multimodal networks to combat human trafficking was published at ACL 2017. Edmund's research interests include multimodal representations and theoretical bases for deep learning, and he can be contacted at edtong AT cmu DOT edu.