Tim G. J. Rudner
Faculty Fellow, Center for Data Science, New York University
AI Fellow, Center for Security & Emerging Technology, Georgetown University
tim.rudner [at] nyu.edu
The goal of my research is to create robust and transparent machine learning models that can be deployed safely, reliably, and fairly in safety-critical and high-stakes settings.
In pursuit of this goal, I develop probabilistic machine learning models that are robust to distribution shifts [1, 2, 3], reliably estimate their uncertainty [1, 2, 3], and are interpretable [1, 2, 3], with an emphasis on applications in generative LLMs [1, 2], healthcare [1, 2], and biomedical discovery [1, 2, 3, 4].
Short bio: I am a Faculty Fellow at New York University, an AI Fellow at the Georgetown Center for Security & Emerging Technology, and an incoming Visiting Researcher at Google DeepMind. I did my PhD in computer science and statistics at the University of Oxford, where I was advised by Yee Whye Teh and Yarin Gal.
For further details, please see my CV.
Mentoring: I was the first in my family to attend college, and I know that navigating higher education can be challenging for first-generation, low-income students. If you identify as a first-generation, low-income student and are looking for mentorship, please feel free to get in touch using this form.
News
| Date | News |
|---|---|
| Oct '24 | I was invited to give a contributed talk on fair prediction with group-aware priors at INFORMS 2024! |
| Sep '24 | I was selected as a Rising Star in Generative AI! |
| Jun '24 | I was awarded a $700,000 Foundational Research Grant to improve the trustworthiness of LLMs! |
| May '24 | Our work on group-aware priors won a notable paper award at AISTATS 2024! |
| Apr '24 | Our work on language-guided control won an outstanding paper award at the GenAI4DM Workshop! |