Tim G. J. Rudner

Assistant Professor, Department of Statistical Sciences, University of Toronto
Faculty Member, Vector Institute for Artificial Intelligence



tim.rudner[at]utoronto.ca

Google Scholar

My research interests span machine learning, AI safety, and AI governance. The goal of my research is to create well-specified, robust, and transparent machine learning models that can be deployed in safety-critical and high-stakes settings. I am particularly interested in (i) understanding and expanding the statistical foundations of generative models [1,2] (with a focus on robustness to domain shifts [1,2], reliable uncertainty quantification [1,2,3], and interpretability [1,2,3]), (ii) creating trustworthy AI agents [1,2,3], and (iii) designing regulatory approaches that enable the effective governance of frontier AI models [1,2].

Short bio: I am an Assistant Professor of Statistical Sciences at the University of Toronto, a Faculty Member at the Vector Institute for Artificial Intelligence, and a Faculty Affiliate at the Schwartz Reisman Institute for Technology and Society. I am also a Junior Research Fellow of Trinity College at the University of Cambridge, an Associate Member of the Department of Computer Science at the University of Oxford, a Faculty Associate at Harvard University’s Berkman Klein Center for Internet and Society, and an AI Fellow at Georgetown University’s Center for Security & Emerging Technology. Before joining the University of Toronto, I was an Assistant Professor and Faculty Fellow at New York University. I hold a PhD in Computer Science from the University of Oxford, where I was a Qualcomm Innovation Fellow and Rhodes Scholar.

I am hiring PhD students at the University of Toronto: I will be recruiting several PhD students at the University of Toronto to start in the fall of 2026. Please apply to both the Department of Statistical Sciences and the Department of Computer Science. I will post information for prospective applicants here by mid-September.

Mentoring: I was the first in my family to attend college, and I know that navigating higher education can be challenging for first-generation low-income students. If you identify as a first-generation low-income student and are looking for mentorship, please feel free to get in touch using this form.


News

≫≫ I have joined the University of Toronto as an Assistant Professor of Statistical Sciences!
Sep '25 I was appointed a Junior Research Fellow of Trinity College at the University of Cambridge!
Sep '25 I was appointed a Faculty Associate at Harvard University’s Berkman Klein Center for Internet and Society!
Jan '25 NYU published articles about my work on robust and transparent generative AI and reliable LLMs.
Dec '24 I was awarded a $30,000 Apple Seed Grant! Thank you, Apple!
Sep '24 I was selected as a Rising Star in Generative AI!
Jun '24 I was awarded a $700,000 Foundational Research Grant to improve the trustworthiness of LLMs!
May '24 Our work on group-aware priors won a notable paper award at AISTATS 2024!
Apr '24 Our work on language-guided control won an outstanding paper award at the GenAI4DM Workshop!

Selected Papers

For a complete list, please see: [Publications] or [Google Scholar]

  1. Fine-Tuning with Uncertainty-Aware Priors Makes Vision and Language Foundation Models More Reliable
    T. G. J. Rudner, X. Pan, Y. L. Li, R. Shwartz-Ziv, A. G. Wilson
    International Conference on Artificial Intelligence and Statistics (AISTATS), 2025
  2. MetaFaith: Faithful Natural Language Uncertainty Expression in LLMs
    G. K. Liu, G. Yona, A. Caciularu, I. Szpektor, T. G. J. Rudner, A. Cohan
    Conference on Empirical Methods in Natural Language Processing (EMNLP), 2025
  3. Mind the GAP: Improving Robustness to Subpopulation Shifts with Group-Aware Priors
    T. G. J. Rudner, Y. S. Zhang, A. G. Wilson, J. Kempe
    International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
    AISTATS Notable Paper Award
  4. Non-Vacuous Generalization Bounds for Large Language Models
    S. Lotfi, M. Finzi, Y. Kuang, T. G. J. Rudner, M. Goldblum, A. G. Wilson
    International Conference on Machine Learning (ICML), 2024
  5. SCIURus: Shared Circuits for Interpretable Uncertainty Representations in Language Models
    C. Teplica, Y. Liu, A. Cohan, T. G. J. Rudner
    Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics (NAACL), 2025
  6. Pre-trained Text-to-Image Diffusion Models Are Versatile Representation Learners for Control
    G. Gupta, K. Yadav, Y. Gal, D. Batra, Z. Kira, C. Lu, T. G. J. Rudner
    Advances in Neural Information Processing Systems (NeurIPS), 2024
    Outstanding Paper Award, ICLR 2024 Workshop on GenAI for Decision-Making
    NeurIPS Spotlight Talk
  7. Domain-Aware Guidance for Out-of-Distribution Molecular and Protein Design
    L. Klarner, T. G. J. Rudner, G. M. Morris, C. Deane, Y. W. Teh
    International Conference on Machine Learning (ICML), 2024
  8. Function-Space Regularization in Neural Networks: A Probabilistic Perspective
    T. G. J. Rudner, S. Kapoor, S. Qiu, A. G. Wilson
    International Conference on Machine Learning (ICML), 2023