About Me
I am a Ph.D. candidate in Applied Mathematics at the University of Washington, advised by Prof. Lillian Ratliff (ECE) and Prof. Eric Shea-Brown (AMATH). My research focuses on generative models, neural network generalization, reinforcement learning, and computational neuroscience.
Research Overview
My work explores fundamental questions about how intelligent systems learn, represent uncertainty, and generalize. I investigate:
- Diffusion Language Models: Developing ultra-fast inference methods via reinforcement learning to make diffusion models competitive with autoregressive approaches
- Neural Network Theory: Establishing connections between loss landscape geometry (sharpness) and representation robustness
- Probabilistic Computation: Understanding how recurrent neural networks can implement probabilistic sampling and uncertainty quantification
- Robotics & Vision-Language Models: Applying generative modeling techniques to robot learning and foundation models
Background
Before joining UW, I completed my B.Sc. in Mathematics and Computer Science at the University of Wisconsin–Madison, where I was nominated for the Dean's Prize (awarded to the top 0.1% of the graduating class) and competed as an ICPC World Finalist.
I have industry experience at Ai2 (robotics foundation models), Meta (neural interface systems), and Facebook (enterprise software). My research has been published in venues including NeurIPS, TMLR, and iScience.
Skills & Expertise
I work primarily with PyTorch for large-scale deep learning, with expertise in:
- Generative models (diffusion models, language models)
- Reinforcement learning (policy optimization, reward design)
- Mathematical modeling and theoretical analysis
- Distributed training and efficient inference systems
Feel free to reach out if you’re interested in collaboration or discussing research ideas!
