Publications

For an up-to-date list, please see my Google Scholar page.

2024

  1. Domain-Aware Guidance for Out-of-Distribution Molecular Design
    Leo Klarner, Tim G. J. Rudner, Garrett M. Morris, and Yee Whye Teh
    International Conference on Machine Learning (ICML), 2024
  2. Non-Vacuous Generalization Bounds for Large Language Models
    Sanae Lotfi, Marc Finzi, Yilun Kuang, Tim G. J. Rudner, Micah Goldblum, and Andrew Gordon Wilson
    International Conference on Machine Learning (ICML), 2024
  3. Position Paper: Bayesian Deep Learning in the Age of Large-Scale AI
    Theodore Papamarkou, Maria Skoularidou, Konstantina Palla, Laurence Aitchison, Julyan Arbel, David Dunson, Maurizio Filippone, Vincent Fortuin, Philipp Hennig, Jose Miguel Hernandez Lobato, Aliaksandr Hubin, Alexander Immer, Theofanis Karaletsos, Mohammad Emtiyaz Khan, Agustinus Kristiadi, Yingzhen Li, Stephan Mandt, Christopher Nemeth, Michael A. Osborne, Tim G. J. Rudner, David Rügamer, Yee Whye Teh, Max Welling, Andrew Gordon Wilson, and Ruqi Zhang
    International Conference on Machine Learning (ICML), 2024
  4. Pre-trained Text-to-Image Diffusion Models Are Versatile Representation Learners for Control
    Gunshi Gupta, Karmesh Yadav, Yarin Gal, Dhruv Batra, Zsolt Kira, Cong Lu, and Tim G. J. Rudner
    ICLR Workshop on Generative Models for Decision Making, 2024
  5. Mind the GAP: Improving Robustness to Subpopulation Shifts with Group-Aware Priors
    Tim G. J. Rudner, Ya Shi Zhang, Andrew Gordon Wilson, and Julia Kempe
    International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
  6. A Study of Bayesian Neural Network Surrogates for Bayesian Optimization
    Yucen Lily Li, Tim G. J. Rudner, and Andrew Gordon Wilson
    International Conference on Learning Representations (ICLR), 2024
  7. Uncertainty-Aware Priors for Fine-Tuning Pre-trained Vision and Language Models
    Tim G. J. Rudner, Xiang Pan, Yucen Lily Li, Ravid Shwartz-Ziv, and Andrew Gordon Wilson
    Preprint, 2024

2023

  1. Should We Learn Most Likely Functions or Parameters?
    Shikai Qiu*, Tim G. J. Rudner*, Sanyam Kapoor, and Andrew Gordon Wilson
    Advances in Neural Information Processing Systems (NeurIPS), 2023
  2. Visual Explanations of Image-Text Representations via Multi-Modal Information Bottleneck Attribution
    Ying Wang*, Tim G. J. Rudner*, and Andrew Gordon Wilson
    Advances in Neural Information Processing Systems (NeurIPS), 2023
  3. Protein Design with Guided Discrete Diffusion
    Nate Gruver, Samuel Stanton, Nathan C. Frey, Tim G. J. Rudner, Isidro Hotzel, Julien Lafrance-Vanasse, Arvind Rajpal, Kyunghyun Cho, and Andrew Gordon Wilson
    Advances in Neural Information Processing Systems (NeurIPS), 2023
  4. An Information-Theoretic Perspective on Variance-Invariance-Covariance Regularization
    Ravid Shwartz-Ziv, Randall Balestriero, Kenji Kawaguchi, Tim G. J. Rudner, and Yann LeCun
    Advances in Neural Information Processing Systems (NeurIPS), 2023
  5. Informative Priors Improve the Reliability of Multimodal Clinical Data Classification
    Julian Lechuga Lopez, Tim G. J. Rudner, and Farah Shamout
    Machine Learning for Health Symposium Findings (ML4H), 2023
  6. Function-Space Regularization in Neural Networks: A Probabilistic Perspective
    Tim G. J. Rudner, Sanyam Kapoor, Shikai Qiu, and Andrew Gordon Wilson
    International Conference on Machine Learning (ICML), 2023
  7. Drug Discovery under Covariate Shift with Domain-Informed Prior Distributions over Functions
    Leo Klarner, Tim G. J. Rudner, Michael Reutlinger, Torsten Schindler, Garrett M. Morris, Charlotte Deane, and Yee Whye Teh
    International Conference on Machine Learning (ICML), 2023
  8. Attacking Bayes: Are Bayesian Neural Networks Inherently Robust?
    Yunzhen Feng, Tim G. J. Rudner, Nikolaos Tsilivis, and Julia Kempe
    Symposium on Advances in Approximate Bayesian Inference (AABI), 2023
  9. Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations
    Cong Lu, Philip J. Ball, Tim G. J. Rudner, Jack Parker-Holder, Michael A. Osborne, and Yee Whye Teh
    Transactions on Machine Learning Research (TMLR), 2023
  10. Can Active Sampling Reduce Causal Confusion in Offline Reinforcement Learning?
    Gunshi Gupta, Tim G. J. Rudner, Rowan Thomas McAllister, Adrien Gaidon, and Yarin Gal
    Conference on Causal Learning and Reasoning (CLeaR), 2023

2022

  1. Tractable Function-Space Variational Inference in Bayesian Neural Networks
    Tim G. J. Rudner, Zonghao Chen, Yee Whye Teh, and Yarin Gal
    Advances in Neural Information Processing Systems (NeurIPS), 2022
  2. A Neural Tangent Kernel Perspective on Function-Space Regularization in Neural Networks
    Zonghao Chen, Xupeng Shi, Tim G. J. Rudner, Qixuan Feng, Weizhong Zhang, and Tong Zhang
    NeurIPS Workshop on Optimization for Machine Learning, 2022
  3. Continual Learning via Sequential Function-Space Variational Inference
    Tim G. J. Rudner, Freddie Bickford Smith, Qixuan Feng, Yee Whye Teh, and Yarin Gal
    International Conference on Machine Learning (ICML), 2022
  4. Plex: Towards Reliability Using Pretrained Large Model Extensions
    Dustin Tran, Jeremiah Liu, Michael W. Dusenberry, Du Phan, Mark Collier, Jie Ren, Kehang Han, Zi Wang, Zelda Mariet, Huiyi Hu, Neil Band, Tim G. J. Rudner, Karan Singhal, Zachary Nado, Joost Amersfoort, Andreas Kirsch, Rodolphe Jenatton, Nithum Thain, Honglin Yuan, Kelly Buchanan, Kevin Murphy, D. Sculley, Yarin Gal, Zoubin Ghahramani, Jasper Snoek, and Balaji Lakshminarayanan
    ICML Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward, 2022

2021

  1. Outcome-Driven Reinforcement Learning via Variational Inference
    Tim G. J. Rudner*, Vitchyr H. Pong*, Rowan McAllister, Yarin Gal, and Sergey Levine
    Advances in Neural Information Processing Systems (NeurIPS), 2021
  2. On Pathologies in KL-Regularized Reinforcement Learning from Expert Demonstrations
    Tim G. J. Rudner*, Cong Lu*, Michael A. Osborne, Yarin Gal, and Yee Whye Teh
    Advances in Neural Information Processing Systems (NeurIPS), 2021
  3. Benchmarking Bayesian Deep Learning on Diabetic Retinopathy Detection Tasks
    Neil Band*, Tim G. J. Rudner*, Qixuan Feng, Angelos Filos, Zachary Nado, Michael W. Dusenberry, Ghassen Jerfel, Dustin Tran, and Yarin Gal
    Advances in Neural Information Processing Systems (NeurIPS), 2021
  4. Uncertainty Baselines: Benchmarks for Uncertainty & Robustness in Deep Learning
    Zachary Nado, Neil Band, Mark Collier, Josip Djolonga, Michael W. Dusenberry, Sebastian Farquhar, Angelos Filos, Marton Havasi, Rodolphe Jenatton, Ghassen Jerfel, Jeremiah Liu, Zelda Mariet, Jeremy Nixon, Shreyas Padhy, Jie Ren, Tim G. J. Rudner, Yeming Wen, Florian Wenzel, Kevin Murphy, D. Sculley, Balaji Lakshminarayanan, Jasper Snoek, Yarin Gal, and Dustin Tran
    NeurIPS Workshop on Bayesian Deep Learning, 2021
  5. On Signal-to-Noise Ratio Issues in Variational Inference for Deep Gaussian Processes
    Tim G. J. Rudner*, Oscar Key*, Yarin Gal, and Tom Rainforth
    International Conference on Machine Learning (ICML), 2021

2020

  1. Inter-domain Deep Gaussian Processes
    Tim G. J. Rudner, Dino Sejdinovic, and Yarin Gal
    International Conference on Machine Learning (ICML), 2020

2019

  1. VIREL: A Variational Inference Framework for Reinforcement Learning
    Matthew Fellows, Anuj Mahajan, Tim G. J. Rudner, and Shimon Whiteson
    Advances in Neural Information Processing Systems (NeurIPS), 2019
  2. The Natural Neural Tangent Kernel: Neural Network Training Dynamics under Natural Gradient Descent
    Tim G. J. Rudner, Florian Wenzel, Yee Whye Teh, and Yarin Gal
    NeurIPS Workshop on Bayesian Deep Learning, 2019
  3. A Systematic Comparison of Bayesian Deep Learning Robustness in Diabetic Retinopathy Tasks
    Angelos Filos, Sebastian Farquhar, Aidan N. Gomez, Tim G. J. Rudner, Zachary Kenton, Lewis Smith, Milad Alizadeh, Arnoud Kroon, and Yarin Gal
    NeurIPS Workshop on Bayesian Deep Learning, 2019
  4. Multi³Net: Segmenting Flooded Buildings via Fusion of Multiresolution, Multisensor, and Multitemporal Satellite Imagery
    Tim G. J. Rudner, Marc Rußwurm, Jakub Fil, Ramona Pelich, Benjamin Bischke, Veronika Kopackova, and Piotr Bilinski
    AAAI Conference on Artificial Intelligence (AAAI), 2019
  5. The StarCraft Multi-Agent Challenge
    Mikayel Samvelyan, Tabish Rashid, Christian Schroeder Witt, Gregory Farquhar, Nantas Nardelli, Tim G. J. Rudner, Chia-Man Hung, Philip H. S. Torr, Jakob Foerster, and Shimon Whiteson
    International Conference on Autonomous Agents and MultiAgent Systems (AAMAS), 2019


Policy Reports & Issue Briefs

2024

  1. Key Concepts in AI Safety: Reliable Uncertainty Quantification in Machine Learning
    Tim G. J. Rudner and Helen Toner
    CSET Issue Briefs (Forthcoming), 2024

2022

  1. OECD Framework for the Classification of AI Systems
    OECD (as a contributing author)
    OECD Digital Economy Papers, 2022

2021

  1. Key Concepts in AI Safety: Specification in Machine Learning
    Tim G. J. Rudner and Helen Toner
    CSET Issue Briefs, 2021
  2. Key Concepts in AI Safety: Interpretability in Machine Learning
    Tim G. J. Rudner and Helen Toner
    CSET Issue Briefs, 2021
  3. Key Concepts in AI Safety: Robustness and Adversarial Examples
    Tim G. J. Rudner and Helen Toner
    CSET Issue Briefs, 2021
  4. Key Concepts in AI Safety: An Overview
    Tim G. J. Rudner and Helen Toner
    CSET Issue Briefs, 2021