Incoming Assistant Professor & Faculty Fellow, New York University
PhD Candidate, Department of Computer Science, University of Oxford
Oxford Applied & Theoretical Machine Learning Group (OATML)
Oxford Computational Statistics & Machine Learning Group (OxCSML)
tim.rudner [AT] cs.ox.ac.uk
I am a PhD Candidate in the Department of Computer Science at the University of Oxford, where I conduct research on probabilistic machine learning with Yarin Gal and Yee Whye Teh.
My research draws on tools from variational Bayesian inference and reinforcement learning to develop methods and theoretical insights that enable the safe deployment of machine learning systems in safety-critical settings. I am particularly interested in Bayesian uncertainty quantification in deep learning, probabilistic inference in reinforcement learning and control, and AI safety.
For my work on safe decision-making under uncertainty, I received the 2021 Qualcomm Innovation Fellowship. I care deeply about equitable access to education and serve as an Equality, Diversity & Inclusion Fellow at the University of Oxford.
I am also an AI Fellow at Georgetown University's Center for Security and Emerging Technology and a Rhodes Scholar.
I obtained an undergraduate degree in mathematics and economics from Yale University, where I received the Charles E. Clark Memorial Award for Academic Excellence. Subsequently, I earned a master's degree in statistics from the University of Oxford, where I was advised by Dino Sejdinovic. During my PhD, I was fortunate to work with Sergey Levine at the University of California, Berkeley, and with Sekhar Tatikonda at Yale University. Prior to coming to Oxford, I conducted research on game-theoretic equilibria in digital goods markets, systemic risk in financial markets, and drivers of financial crises. I am a member of the Oxford Centre for Doctoral Training in Autonomous Intelligent Machines & Systems (AIMS) and the OECD Network of Experts, as well as a Fellow of the German Academic Scholarship Foundation.
Most recent publications on Google Scholar.
Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations
Cong Lu*, Philip J. Ball*, Tim G. J. Rudner, Jack Parker-Holder, Michael A. Osborne, Yee Whye Teh
Preprint. 2022.
RSS '22 Workshop on Learning from Diverse, Offline Data. 2022. (Outstanding Paper Award)
ICML '22 Workshop on Decision Awareness in Reinforcement Learning. 2022.
Plex: Towards Reliability Using Pretrained Large Model Extensions
Dustin Tran, Jeremiah Liu, Michael W. Dusenberry, Du Phan, Mark Collier,
Jie Ren, Kehang Han, Zi Wang, Zelda Mariet, Huiyi Hu, Neil Band,
Tim G. J. Rudner, Karan Singhal, Zachary Nado, Joost van Amersfoort,
Andreas Kirsch, Rodolphe Jenatton, Nithum Thain, Honglin Yuan,
Kelly Buchanan, Kevin Murphy, D. Sculley, Yarin Gal, Zoubin Ghahramani,
Jasper Snoek, Balaji Lakshminarayanan
Preprint. 2022.
ICML '22 Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward. 2022. (Contributed Talk)
ICML '22 Workshop on Principles of Distribution Shift. 2022.
Tractable Function-Space Variational Inference in Bayesian Neural Networks
Tim G. J. Rudner*, Zonghao Chen*, Yee Whye Teh, Yarin Gal
NeurIPS '22 Conference on Neural Information Processing Systems. 2022.
Continual Learning via Sequential Function-Space Variational Inference
Tim G. J. Rudner, Freddie Bickford Smith, Qixuan Feng, Yee Whye Teh, Yarin Gal
ICML '22 International Conference on Machine Learning. 2022.
Uncertainty Baselines: Benchmarks for Uncertainty & Robustness in Deep Learning
Zachary Nado*, Neil Band*, Mark Collier, Josip Djolonga, Michael W. Dusenberry,
Sebastian Farquhar, Qixuan Feng, Angelos Filos, Marton Havasi, Rodolphe Jenatton,
Ghassen Jerfel, Jeremiah Liu, Zelda Mariet, Jeremy Nixon, Shreyas Padhy, Jie Ren,
Tim G. J. Rudner, Faris Sbahi, Yeming Wen, Florian Wenzel, Kevin Murphy,
D. Sculley, Balaji Lakshminarayanan, Jasper Snoek, Yarin Gal, Dustin Tran
Technical Report. 2022.
NeurIPS '21 Workshop on Bayesian Deep Learning. 2021.
Outcome-Driven Reinforcement Learning via Variational Inference
Tim G. J. Rudner*, Vitchyr H. Pong*, Rowan McAllister, Yarin Gal, Sergey Levine
NeurIPS '21 Conference on Neural Information Processing Systems. 2021.
On Pathologies in KL-Regularized Reinforcement Learning from Expert Demonstrations
Tim G. J. Rudner*, Cong Lu*, Michael A. Osborne, Yarin Gal, Yee Whye Teh
NeurIPS '21 Conference on Neural Information Processing Systems. 2021.
Benchmarking Bayesian Deep Learning on Diabetic Retinopathy Detection Tasks
Neil Band*, Tim G. J. Rudner*, Qixuan Feng, Angelos Filos, Zachary Nado,
Michael W. Dusenberry, Ghassen Jerfel, Dustin Tran, Yarin Gal
NeurIPS '21 Conference on Neural Information Processing Systems. 2021.
On Signal-to-Noise Ratio Issues in Variational Inference for Deep Gaussian Processes
Tim G. J. Rudner*, Oscar Key*, Yarin Gal, Tom Rainforth
ICML '21 International Conference on Machine Learning. 2021.
Inter-domain Deep Gaussian Processes
Tim G. J. Rudner, Dino Sejdinovic, Yarin Gal
ICML '20 International Conference on Machine Learning. 2020.
VIREL: A Variational Inference Framework for Reinforcement Learning
Matthew Fellows, Anuj Mahajan, Tim G. J. Rudner, Shimon Whiteson
NeurIPS '19 Conference on Neural Information Processing Systems. 2019. (Spotlight Talk)
The Natural Neural Tangent Kernel: Neural Network Training Dynamics under Natural Gradient Descent
Tim G. J. Rudner, Florian Wenzel, Yee Whye Teh, Yarin Gal
NeurIPS '19 Workshop on Bayesian Deep Learning. 2019. (Contributed Talk)
A Systematic Comparison of Bayesian Deep Learning Robustness in Diabetic Retinopathy Tasks
Angelos Filos, Sebastian Farquhar, Aidan N. Gomez, Tim G. J. Rudner, Zachary Kenton, Lewis Smith, Milad Alizadeh,
Arnoud de Kroon, Yarin Gal
Technical Report. 2019.
NeurIPS '19 Workshop on Bayesian Deep Learning. 2019.
Multi3Net: Segmenting Flooded Buildings via Fusion of Multiresolution, Multisensor, and Multitemporal Satellite Imagery
Tim G. J. Rudner, Marc Rußwurm, Jakub Fil, Ramona Pelich, Benjamin Bischke, Veronika Kopackova, Piotr Bilinski
AAAI '19 Conference on Artificial Intelligence. 2019.
The StarCraft Multi-Agent Challenge
Mikayel Samvelyan, Tabish Rashid, Christian Schroeder de Witt, Gregory Farquhar, Nantas Nardelli,
Tim G. J. Rudner, Chia-Man Hung, Philip H. S. Torr, Jakob Foerster, Shimon Whiteson
AAMAS '19 International Conference on Autonomous Agents and Multiagent Systems. 2019.
On the Connection between Neural Processes and Gaussian Processes with Deep Kernels
Tim G. J. Rudner, Vincent Fortuin, Yee Whye Teh, Yarin Gal
NeurIPS '18 Workshop on Bayesian Deep Learning. 2018.
Key Concepts in AI Safety: Specification in Machine Learning
Tim G. J. Rudner and Helen Toner
CSET Issue Briefs. 2021.
Key Concepts in AI Safety: Interpretability in Machine Learning
Tim G. J. Rudner and Helen Toner
CSET Issue Briefs. 2021.
Key Concepts in AI Safety: Robustness and Adversarial Examples
Tim G. J. Rudner and Helen Toner
CSET Issue Briefs. 2021.
Key Concepts in AI Safety: An Overview
Tim G. J. Rudner and Helen Toner
CSET Issue Briefs. 2021.
OECD Framework for the Classification of AI Systems
OECD (contributing author: Tim G. J. Rudner)
OECD Publishing. 2022.