Tim G. J. Rudner

Faculty Fellow, Center for Data Science, New York University
AI Fellow, Center for Security & Emerging Technology, Georgetown University


 

tim.rudner [at] nyu.edu


The goal of my research is to create robust and transparent machine learning models that enable the safe, reliable, and fair use of machine learning systems deployed in safety-critical and high-stakes settings. In pursuit of this goal, I develop probabilistic machine learning models that are robust to distribution shifts [1, 2, 3], able to reliably estimate their uncertainty [1, 2, 3], and interpretable [1, 2, 3], with an emphasis on problems in generative LLMs [1, 2], healthcare [1, 2], and biomedical discovery [1, 2, 3, 4].

Short bio: I am a Faculty Fellow at New York University, an AI Fellow at the Georgetown Center for Security & Emerging Technology, and an incoming Visiting Researcher at Google DeepMind. I did my PhD in computer science and statistics at the University of Oxford, where I was advised by Yee Whye Teh and Yarin Gal. I received a master’s degree in statistics from the University of Oxford and an undergraduate degree in mathematics and economics from Yale University. I was selected as a 2024 Rising Star in Generative AI, and I am a Qualcomm Innovation Fellow and Rhodes Scholar.

For further details, please see my CV.

Mentoring: I was the first in my family to attend college, and I know that navigating higher education can be challenging for first-generation low-income students. If you identify as a first-generation low-income student and are looking for mentorship, please feel free to get in touch using this form.


News

Oct '24 I was invited to give a contributed talk on fair prediction with group-aware priors at INFORMS 2024!
Sep '24 I was selected as a Rising Star in Generative AI!
Jun '24 I was awarded a $700,000 Foundational Research Grant to improve the trustworthiness of LLMs!
May '24 Our work on group-aware priors won a notable paper award at AISTATS 2024!
Apr '24 Our work on language-guided control won an outstanding paper award at the GenAI4DM Workshop!

Selected Papers

For a complete list, please see: [Publications] or [Google Scholar]

  1. Pre-trained Text-to-Image Diffusion Models Are Versatile Representation Learners for Control
    Gunshi Gupta, Karmesh Yadav, Yarin Gal, Dhruv Batra, Zsolt Kira, Cong Lu, and Tim G. J. Rudner
    Advances in Neural Information Processing Systems (NeurIPS), 2024
    Outstanding Paper Award, ICLR 2024 Workshop on GenAI for Decision-Making
    NeurIPS Spotlight Talk
  2. Mind the GAP: Improving Robustness to Subpopulation Shifts with Group-Aware Priors
    Tim G. J. Rudner, Ya Shi Zhang, Andrew Gordon Wilson, and Julia Kempe
    International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
    AISTATS Notable Paper Award
  3. Function-Space Regularization in Neural Networks: A Probabilistic Perspective
    Tim G. J. Rudner, Sanyam Kapoor, Shikai Qiu, and Andrew Gordon Wilson
    International Conference on Machine Learning (ICML), 2023
  4. Fine-Tuning with Uncertainty-Aware Priors Makes Vision and Language Foundation Models More Reliable
    Tim G. J. Rudner, Xiang Pan, Yucen Lily Li, Ravid Shwartz-Ziv, and Andrew Gordon Wilson
    ICML Workshop on Structured Probabilistic Inference & Generative Modeling, 2024
  5. Domain-Aware Guidance for Out-of-Distribution Molecular and Protein Design
    Leo Klarner, Tim G. J. Rudner, Garrett M. Morris, Charlotte Deane, and Yee Whye Teh
    International Conference on Machine Learning (ICML), 2024
  6. Non-Vacuous Generalization Bounds for Large Language Models
    Sanae Lotfi*, Marc Finzi*, Yilun Kuang*, Tim G. J. Rudner, Micah Goldblum, and Andrew Gordon Wilson
    International Conference on Machine Learning (ICML), 2024
  7. Tractable Function-Space Variational Inference in Bayesian Neural Networks
    Tim G. J. Rudner, Zonghao Chen, Yee Whye Teh, and Yarin Gal
    Advances in Neural Information Processing Systems (NeurIPS), 2022
  8. Plex: Towards Reliability Using Pretrained Large Model Extensions
    Dustin Tran, Jeremiah Liu, Michael W. Dusenberry, Du Phan, Mark Collier, Jie Ren, Kehang Han, Zi Wang, Zelda Mariet, Huiyi Hu, Neil Band, Tim G. J. Rudner, Karan Singhal, Zachary Nado, Joost van Amersfoort, Andreas Kirsch, Rodolphe Jenatton, Nithum Thain, Honglin Yuan, Kelly Buchanan, Kevin Murphy, D. Sculley, Yarin Gal, Zoubin Ghahramani, Jasper Snoek, and Balaji Lakshminarayanan
    ICML Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward, 2022
    Contributed Talk, ICML 2022 Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward
  9. Outcome-Driven Reinforcement Learning via Variational Inference
    Tim G. J. Rudner*, Vitchyr H. Pong*, Rowan McAllister, Yarin Gal, and Sergey Levine
    Advances in Neural Information Processing Systems (NeurIPS), 2021
  10. Benchmarking Bayesian Deep Learning on Diabetic Retinopathy Detection Tasks
    Neil Band*, Tim G. J. Rudner*, Qixuan Feng, Angelos Filos, Zachary Nado, Michael W. Dusenberry, Ghassen Jerfel, Dustin Tran, and Yarin Gal
    Advances in Neural Information Processing Systems (NeurIPS), 2021
    Spotlight Talk, NeurIPS 2021 Workshop on Distribution Shifts: Connecting Methods and Applications

AI Governance

  1. Evaluating Explainability Claims is Not Self-Explanatory
    Mina Narayanan, Christian Schoeberl, and Tim G. J. Rudner
    CSET Issue Briefs (Forthcoming), 2024
  2. Not Oracles of the Battlefield: Safety Considerations for AI-Based Military Decision Support Systems
    Emilia Probasco, Matthew Burtell, Helen Toner, and Tim G. J. Rudner
    AAAI Conference on Artificial Intelligence, Ethics, and Society (AIES), 2024
  3. Key Concepts in AI Safety: Reliable Uncertainty Quantification in Machine Learning
    Tim G. J. Rudner and Helen Toner
    CSET Issue Briefs, 2024
  4. OECD Framework for the Classification of AI Systems
    OECD (as a contributing author)
    OECD Digital Economy Papers, 2022
  5. Key Concepts in AI Safety: Specification in Machine Learning
    Tim G. J. Rudner and Helen Toner
    CSET Issue Briefs, 2021
  6. Key Concepts in AI Safety: Interpretability in Machine Learning
    Tim G. J. Rudner and Helen Toner
    CSET Issue Briefs, 2021
  7. Key Concepts in AI Safety: Robustness and Adversarial Examples
    Tim G. J. Rudner and Helen Toner
    CSET Issue Briefs, 2021
  8. Key Concepts in AI Safety: An Overview
    Tim G. J. Rudner and Helen Toner
    CSET Issue Briefs, 2021