Tim G. J. Rudner

Data Science Assistant Professor, Faculty Fellow, and Instructor, Center for Data Science, New York University
AI Fellow, Center for Security & Emerging Technology, Georgetown University



tim.rudner [at] nyu.edu


The goal of my research is to create trustworthy machine learning models by developing methods and theoretical insights that improve the reliability, safety, and transparency of machine learning systems deployed in safety-critical settings. In pursuit of this goal, my research uses probabilistic methods to enable reliable uncertainty quantification [1,2,3], robustness to distribution shifts [1,2,3], interpretability [1,2], and sequential decision-making [1,2,3], with an emphasis on problems in healthcare [1,2,3].

Bio: I am a Data Science Assistant Professor, Faculty Fellow, and Instructor at New York University. Before joining New York University, I conducted PhD research on probabilistic machine learning in the Department of Computer Science at the University of Oxford, where I was advised by Yarin Gal and Yee Whye Teh. For my work on safe decision-making under uncertainty, I received the 2021 Qualcomm Innovation Fellowship. I care deeply about equitable access to education and was an Equality, Diversity & Inclusion Fellow at the University of Oxford.

I am also an AI Fellow at Georgetown’s Center for Security & Emerging Technology and a Rhodes Scholar.

Mentoring: I was the first in my family to attend college, and I know that navigating higher education can be challenging for first-generation, low-income students. If you identify as a first-generation, low-income student and are looking for mentorship, please feel free to get in touch using this form.

I am on the academic job market this year (2023-24). Please feel free to reach out if you think I would be a good fit for your department. You can find my CV here.


News

Apr '24 Our work on language-guided control won an outstanding paper award at the GenAI4DM Workshop!
Mar '24 I was awarded a $700,000 Foundational Research Grant to improve the trustworthiness of LLMs!
Feb '24 I’m leading New York University’s effort to help establish the United States AI Safety Institute!
Dec '23 I will be co-organizing the 6th Symposium on Advances in Approximate Bayesian Inference (AABI)!
Dec '23 I published four papers at NeurIPS 2023 (including a spotlight): [1,2,3,4]!

Representative Papers

For a complete list, please see: [Publications] or [Google Scholar]


Reliable Uncertainty Quantification

  1. Uncertainty-Aware Priors for Fine-Tuning Pre-trained Vision and Language Models
    Tim G. J. Rudner, Xiang Pan, Yucen Lily Li, Ravid Shwartz-Ziv, and Andrew Gordon Wilson
    Preprint, 2024
  2. Function-Space Regularization in Neural Networks: A Probabilistic Perspective
    Tim G. J. Rudner, Sanyam Kapoor, Shikai Qiu, and Andrew Gordon Wilson
    International Conference on Machine Learning (ICML), 2023
  3. Tractable Function-Space Variational Inference in Bayesian Neural Networks
    Tim G. J. Rudner, Zonghao Chen, Yee Whye Teh, and Yarin Gal
    Advances in Neural Information Processing Systems (NeurIPS), 2022

Robustness to Distribution Shifts

  1. Mind the GAP: Improving Robustness to Subpopulation Shifts with Group-Aware Priors
    Tim G. J. Rudner, Ya Shi Zhang, Andrew Gordon Wilson, and Julia Kempe
    International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
  2. Plex: Towards Reliability Using Pretrained Large Model Extensions
    Dustin Tran, Jeremiah Liu, Michael W. Dusenberry, Du Phan, Mark Collier, Jie Ren, Kehang Han, Zi Wang, Zelda Mariet, Huiyi Hu, Neil Band, Tim G. J. Rudner, Karan Singhal, Zachary Nado, Joost van Amersfoort, Andreas Kirsch, Rodolphe Jenatton, Nithum Thain, Honglin Yuan, Kelly Buchanan, Kevin Murphy, D. Sculley, Yarin Gal, Zoubin Ghahramani, Jasper Snoek, and Balaji Lakshminarayanan
    ICML Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward, 2022
  3. Benchmarking Bayesian Deep Learning on Diabetic Retinopathy Detection Tasks
    Neil Band*, Tim G. J. Rudner*, Qixuan Feng, Angelos Filos, Zachary Nado, Michael W. Dusenberry, Ghassen Jerfel, Dustin Tran, and Yarin Gal
    Advances in Neural Information Processing Systems (NeurIPS), 2021

Interpretability

  1. Should We Learn Most Likely Functions or Parameters?
    Shikai Qiu*, Tim G. J. Rudner*, Sanyam Kapoor, and Andrew Gordon Wilson
    Advances in Neural Information Processing Systems (NeurIPS), 2023
  2. Visual Explanations of Image-Text Representations via Multi-Modal Information Bottleneck Attribution
    Ying Wang*, Tim G. J. Rudner*, and Andrew Gordon Wilson
    Advances in Neural Information Processing Systems (NeurIPS), 2023

Probabilistic Sequential Decision-Making

  1. Continual Learning via Sequential Function-Space Variational Inference
    Tim G. J. Rudner, Freddie Bickford Smith, Qixuan Feng, Yee Whye Teh, and Yarin Gal
    International Conference on Machine Learning (ICML), 2022
  2. Outcome-Driven Reinforcement Learning via Variational Inference
    Tim G. J. Rudner*, Vitchyr H. Pong*, Rowan McAllister, Yarin Gal, and Sergey Levine
    Advances in Neural Information Processing Systems (NeurIPS), 2021
  3. On Pathologies in KL-Regularized Reinforcement Learning from Expert Demonstrations
    Tim G. J. Rudner*, Cong Lu*, Michael A. Osborne, Yarin Gal, and Yee Whye Teh
    Advances in Neural Information Processing Systems (NeurIPS), 2021

Clinical Decision-Making and Drug Discovery

  1. Protein Design with Guided Discrete Diffusion
    Nate Gruver, Samuel Stanton, Nathan C. Frey, Tim G. J. Rudner, Isidro Hotzel, Julien Lafrance-Vanasse, Arvind Rajpal, Kyunghyun Cho, and Andrew Gordon Wilson
    Advances in Neural Information Processing Systems (NeurIPS), 2023
  2. Informative Priors Improve the Reliability of Multimodal Clinical Data Classification
    Julian Lechuga Lopez, Tim G. J. Rudner, and Farah Shamout
    Machine Learning for Health Symposium Findings (ML4H), 2023
  3. Drug Discovery under Covariate Shift with Domain-Informed Prior Distributions over Functions
    Leo Klarner, Tim G. J. Rudner, Michael Reutlinger, Torsten Schindler, Garrett M. Morris, Charlotte Deane, and Yee Whye Teh
    International Conference on Machine Learning (ICML), 2023

Policy Reports & Issue Briefs

  1. Key Concepts in AI Safety: Reliable Uncertainty Quantification in Machine Learning
    Tim G. J. Rudner and Helen Toner
    CSET Issue Briefs (Forthcoming), 2024
  2. OECD Framework for the Classification of AI Systems
    OECD (as a contributing author)
    OECD Digital Economy Papers, 2022
  3. Key Concepts in AI Safety: Specification in Machine Learning
    Tim G. J. Rudner and Helen Toner
    CSET Issue Briefs, 2021
  4. Key Concepts in AI Safety: Interpretability in Machine Learning
    Tim G. J. Rudner and Helen Toner
    CSET Issue Briefs, 2021
  5. Key Concepts in AI Safety: Robustness and Adversarial Examples
    Tim G. J. Rudner and Helen Toner
    CSET Issue Briefs, 2021
  6. Key Concepts in AI Safety: An Overview
    Tim G. J. Rudner and Helen Toner
    CSET Issue Briefs, 2021