Eligible: Undergraduate and Master's students
Mentor: Paul Pu Liang
Description: Quantifying what neural networks don't know, and when they should abstain from making predictions, is an important goal for safe real-world decision-making. This project will involve designing algorithms that quantify uncertainty in neural networks and exploring their applications to learning with noisy labels, outlier detection, interpretability, and robustness.
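As a flavor of the kind of method involved, here is a minimal sketch (not part of the project itself) of one common baseline: Monte Carlo dropout to estimate predictive uncertainty, abstaining when the predictive entropy exceeds a threshold. The model architecture, threshold, and inputs below are placeholders chosen only for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallClassifier(nn.Module):
    """Toy classifier with dropout so stochastic forward passes differ."""
    def __init__(self, in_dim=20, hidden=64, num_classes=3, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def predict_with_abstention(model, x, n_samples=20, entropy_threshold=0.5):
    """Average softmax over several stochastic forward passes (MC dropout);
    abstain on inputs whose predictive entropy exceeds the threshold."""
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
        ).mean(0)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)
    preds = probs.argmax(-1)
    abstain = entropy > entropy_threshold
    return preds, entropy, abstain

if __name__ == "__main__":
    model = SmallClassifier()
    x = torch.randn(8, 20)  # dummy inputs
    preds, entropy, abstain = predict_with_abstention(model, x)
    print(preds, entropy, abstain)
```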
Skills/Experience: Prior experience in deep learning is an advantage but not a requirement.
Contact: Interested students should send an email to Paul Pu Liang with their CV and a description of their experience in machine learning.