Alex Wilf is a doctoral student in the Language Technologies Institute in the School of Computer Science. He is interested in multimodal representation learning, specifically for tasks involving how people express themselves, both alone and in groups, and across modalities and languages. He is currently interested in the promise of self-supervised learning and graph neural network architectures for designing novel multimodal networks. He completed his B.S. in Computer Science at the University of Michigan, where he worked with Emily Mower Provost on building robust and generalizable models for speech emotion recognition.