Nate Gruver

I am a machine learning PhD student at NYU Courant advised by Andrew Gordon Wilson and working closely with Kyunghyun Cho.

I received a BS/MS in computer science from Stanford University, where I worked with Stefano Ermon, Mykel Kochenderfer, and Chris Piech.

I spent my last two summers working on generative modeling for applications in chemistry and biology at FAIR. I've also spent time at Waymo, Apple, and Google.

Starting October 2024 I will be on the job market. I am primarily looking for industry research positions, but I will also consider engineering roles and postdoc positions with an exciting team. Please send me an email if you see a good fit!

Email  /  Twitter  /  Google Scholar

Research

I work on deep learning and generative modeling with the following themes:

  1. Understanding the relationship between large-scale pretraining and inductive biases [1, 2]
  2. Generative modeling for protein and materials design [3, 4]
  3. Combining generative models with uncertainty estimates [5, 3]
Publications

Large Language Models Must Be Taught to Know What They Don't Know
Sanyam Kapoor*, Nate Gruver*, Manley Roberts, Katherine Collins, Arka Pal, Umang Bhatt, Adrian Weller, Samuel Dooley, Micah Goldblum, Andrew Gordon Wilson
Under Review
Fine-Tuned Language Models Generate Stable Inorganic Materials as Text
Nate Gruver, Anuroop Sriram, Andrea Madotto, Andrew Gordon Wilson, C. Lawrence Zitnick, Zachary Ulissi
ICLR, 2024 (Poster)
Large Language Models Are Zero-Shot Time Series Forecasters
Nate Gruver*, Marc Finzi*, Shikai Qiu*, Andrew Gordon Wilson
NeurIPS, 2023 (Poster)
Protein Design with Guided Discrete Diffusion
Nate Gruver*, Samuel Stanton*, Nathan Frey, Tim Rudner, Isidro Hotzel, Julien Lafrance-Vanasse, Arvind Rajpal, Kyunghyun Cho, Andrew Gordon Wilson
NeurIPS, 2023 (Spotlight)
The Lie Derivative for Measuring Learned Equivariance
Nate Gruver*, Marc Finzi*, Micah Goldblum, Andrew Gordon Wilson
ICLR, 2023 (Oral)
On Feature Learning in the Presence of Spurious Correlations
Pavel Izmailov*, Polina Kirichenko*, Nate Gruver*, Andrew Gordon Wilson
NeurIPS, 2022
Accelerating Bayesian Optimization for Biological Sequence Design with Denoising Autoencoders
Samuel Stanton, Wesley Maddox, Nate Gruver, Phillip Maffettone, Emily Delaney, Peyton Greenside, Andrew Gordon Wilson
ICML, 2022 (Short Talk)
Deconstructing the Inductive Biases of Hamiltonian Neural Networks
Nate Gruver, Marc Finzi, Samuel Stanton, Andrew Gordon Wilson
ICLR, 2022 (Spotlight)
Effective Surrogate Models for Protein Design with Bayesian Optimization
Nate Gruver, Samuel Stanton, Polina Kirichenko, Marc Finzi, Phillip Maffettone, Vivek Myers, Emily Delaney, Peyton Greenside, Andrew Gordon Wilson
ICML Workshop on Computational Biology, 2021
Epistemic Uncertainty in Learning Chaotic Dynamical Systems
Nate Gruver, Sanyam Kapoor, Miles Cranmer, Andrew Gordon Wilson
ICML Uncertainty in Deep Learning Workshop, 2021
Disagreement-Regularized Imitation of Complex Multi-Agent Interactions
Nate Gruver, Jiaming Song, Stefano Ermon
NeurIPS, Machine Learning for Autonomous Driving Workshop, 2020
Multi-agent Adversarial Inverse Reinforcement Learning with Latent Variables
Nate Gruver, Jiaming Song, Mykel Kochenderfer, Stefano Ermon
AAMAS, 2020
Online Stochastic Planning for Multimodal Sensing and Navigation under Uncertainty
Nate Gruver, Shushman Choudhury, Mykel Kochenderfer
ICAPS, 2020
Additional Projects
Incentives in Choosing Academic Research Projects
Yair Carmon, Ingerid Fosli, Nate Gruver
CS269I course project, 2018