Tim G. J. Rudner

Faculty Fellow, Center for Data Science, New York University
AI Fellow, Center for Security & Emerging Technology, Georgetown University


 

tim.rudner [at] nyu.edu


The goal of my research is to create trustworthy machine learning models, with a focus on developing methods and theoretical insights that improve the reliability, safety, and transparency of machine learning systems deployed in safety-critical and high-stakes settings. In pursuit of this goal, my research uses probabilistic methods to improve uncertainty quantification [1,2,3], robustness to distribution shifts [1,2,3], interpretability [1,2], and sequential decision-making [1,2,3], with an emphasis on problems in generative AI, healthcare, and scientific discovery [1,2,3,4].

Bio: I am a Data Science Faculty Fellow at New York University. Before joining New York University, I conducted PhD research on probabilistic machine learning in the Department of Computer Science at the University of Oxford, where I was advised by Yarin Gal and Yee Whye Teh. For my work on safe decision-making under uncertainty, I received the 2021 Qualcomm Innovation Fellowship. I care deeply about equitable access to education and was an Equality, Diversity & Inclusion Fellow at the University of Oxford. For further details, please see my CV.

I am also an AI Fellow at Georgetown’s Center for Security & Emerging Technology and a Rhodes Scholar.

Mentoring: I was the first in my family to attend college, and I know that navigating higher education can be challenging for first-generation, low-income students. If you identify as a first-generation, low-income student and are looking for mentorship, please feel free to get in touch using this form.


News

Sep '24 I was selected as a Rising Star in Generative AI!
Jun '24 I was awarded a $700,000 Foundational Research Grant to improve the trustworthiness of LLMs!
May '24 Our work on group-aware priors won a notable paper award at AISTATS 2024!
Apr '24 Our work on language-guided control won an outstanding paper award at the GenAI4DM Workshop!

Selected Papers

For a complete list, please see: [Publications] or [Google Scholar]

  1. Pre-trained Text-to-Image Diffusion Models Are Versatile Representation Learners for Control
    Gunshi Gupta, Karmesh Yadav, Yarin Gal, Dhruv Batra, Zsolt Kira, Cong Lu, and Tim G. J. Rudner
    Advances in Neural Information Processing Systems (NeurIPS), 2024
    Outstanding Paper Award, ICLR 2024 Workshop on GenAI for Decision-Making
    Spotlight Talk, NeurIPS 2024
  2. Mind the GAP: Improving Robustness to Subpopulation Shifts with Group-Aware Priors
    Tim G. J. Rudner, Ya Shi Zhang, Andrew Gordon Wilson, and Julia Kempe
    International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
    AISTATS Notable Paper Award
  3. Domain-Aware Guidance for Out-of-Distribution Molecular and Protein Design
    Leo Klarner, Tim G. J. Rudner, Garrett M. Morris, Charlotte Deane, and Yee Whye Teh
    International Conference on Machine Learning (ICML), 2024
  4. Non-Vacuous Generalization Bounds for Large Language Models
    Sanae Lotfi*, Marc Finzi*, Yilun Kuang*, Tim G. J. Rudner, Micah Goldblum, and Andrew Gordon Wilson
    International Conference on Machine Learning (ICML), 2024
  5. Uncertainty-Aware Priors for Fine-Tuning Pre-trained Vision and Language Models
    Tim G. J. Rudner, Xiang Pan, Yucen Lily Li, Ravid Shwartz-Ziv, and Andrew Gordon Wilson
    ICML Workshop on Structured Probabilistic Inference & Generative Modeling, 2024
  6. Visual Explanations of Image-Text Representations via Multi-Modal Information Bottleneck Attribution
    Ying Wang*, Tim G. J. Rudner*, and Andrew Gordon Wilson
    Advances in Neural Information Processing Systems (NeurIPS), 2023
  7. Function-Space Regularization in Neural Networks: A Probabilistic Perspective
    Tim G. J. Rudner, Sanyam Kapoor, Shikai Qiu, and Andrew Gordon Wilson
    International Conference on Machine Learning (ICML), 2023
  8. Tractable Function-Space Variational Inference in Bayesian Neural Networks
    Tim G. J. Rudner, Zonghao Chen, Yee Whye Teh, and Yarin Gal
    Advances in Neural Information Processing Systems (NeurIPS), 2022
  9. Plex: Towards Reliability Using Pretrained Large Model Extensions
    Dustin Tran, Jeremiah Liu, Michael W. Dusenberry, Du Phan, Mark Collier, Jie Ren, Kehang Han, Zi Wang, Zelda Mariet, Huiyi Hu, Neil Band, Tim G. J. Rudner, Karan Singhal, Zachary Nado, Joost van Amersfoort, Andreas Kirsch, Rodolphe Jenatton, Nithum Thain, Honglin Yuan, Kelly Buchanan, Kevin Murphy, D. Sculley, Yarin Gal, Zoubin Ghahramani, Jasper Snoek, and Balaji Lakshminarayanan
    ICML Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward, 2022
    Contributed Talk, ICML 2022 Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward
  10. Outcome-Driven Reinforcement Learning via Variational Inference
    Tim G. J. Rudner*, Vitchyr H. Pong*, Rowan McAllister, Yarin Gal, and Sergey Levine
    Advances in Neural Information Processing Systems (NeurIPS), 2021
  11. Benchmarking Bayesian Deep Learning on Diabetic Retinopathy Detection Tasks
    Neil Band*, Tim G. J. Rudner*, Qixuan Feng, Angelos Filos, Zachary Nado, Michael W. Dusenberry, Ghassen Jerfel, Dustin Tran, and Yarin Gal
    Advances in Neural Information Processing Systems (NeurIPS), 2021
    Spotlight Talk, NeurIPS 2021 Workshop on Distribution Shifts: Connecting Methods and Applications

AI Governance

  1. Not Oracles of the Battlefield: Safety Considerations for AI-Based Military Decision Support Systems
    Emilia Probasco, Matthew Burtell, Helen Toner, and Tim G. J. Rudner
    AAAI Conference on Artificial Intelligence, Ethics, and Society (AIES), 2024 (forthcoming)
  2. Evaluating Explainability Claims is Not Self-Explanatory
    Mina Narayanan, Christian Schoeberl, and Tim G. J. Rudner
    CSET Issue Briefs, 2024 (forthcoming)
  3. Key Concepts in AI Safety: Reliable Uncertainty Quantification in Machine Learning
    Tim G. J. Rudner and Helen Toner
    CSET Issue Briefs, 2024
  4. OECD Framework for the Classification of AI Systems
    OECD (as a contributing author)
    OECD Digital Economy Papers, 2022
  5. Key Concepts in AI Safety: Specification in Machine Learning
    Tim G. J. Rudner and Helen Toner
    CSET Issue Briefs, 2021
  6. Key Concepts in AI Safety: Interpretability in Machine Learning
    Tim G. J. Rudner and Helen Toner
    CSET Issue Briefs, 2021
  7. Key Concepts in AI Safety: Robustness and Adversarial Examples
    Tim G. J. Rudner and Helen Toner
    CSET Issue Briefs, 2021
  8. Key Concepts in AI Safety: An Overview
    Tim G. J. Rudner and Helen Toner
    CSET Issue Briefs, 2021