About me

I am a research scientist at Facebook Reality Labs, working at the intersection of computational cognitive science and machine learning to make the future of augmented/virtual reality awesome. I did my postdoc with Jon Cohen at the Princeton Neuroscience Institute, and my Ph.D. at the University of Michigan with Rick Lewis and Satinder Singh. I got into science working as an RA for Colin Phillips at the University of Maryland Cognitive Neuroscience of Language Lab, and writing my undergrad linguistics thesis with Maria Piñango at Yale, in between which I had a brief stint managing a multimillion-dollar emerging-markets product for Gartner.

I turn theory into computational models and tools that make scientists more effective. Sometimes that scientist is just me, but I try to multiply my impact by helping other people do better science. I'm passionate about understanding how humans make sense of the world and have done work on eye movement control, language and music processing, decision making, neuroimaging, and most recently psychophysics. While having a lot of data is great, I often gravitate to data-poor problems where domain knowledge and structure can do the heavy lifting.

My modeling toolkit is focused on dynamical systems and probabilistic models (especially hierarchical models and Bayesian nonparametrics), though I have also published on neural network models and organize a deep learning conference. While I primarily do theoretical/modeling work now, I also have end-to-end expertise in methods for studying human brain and behavior (choice and response time data, eyetracking, and MEG/EEG), from experiment design to analysis.