About Me
Hello! I’m a Ph.D. student at MIT in the Operations Research Center (ORC) where I am extremely fortunate to be advised by Professor Dimitris Bertsimas. I’m currently focused on building interpretable, multi-modal, and robust machine learning systems for high-stakes decision making, especially in the healthcare domain.
Before joining MIT, I was a Machine Learning Engineer at Enolink, a healthcare AI start-up based in Cambridge, MA. There, I had the opportunity to work with a great group of people on designing and deploying clinical decision support systems. Prior to that, I was a software engineer in machine learning at Capital One, working on applied NLP projects and building intelligent search and recommendation systems. I received a B.A. in Mathematics from Cornell in spring 2020, with concentrations in Applied Mathematics and Computer Science.
As a developer, I use Python as my primary general-purpose language for backend development, data engineering, and machine learning. My tech stack includes Python, Go, SQL, AWS (SA Certified), Kubernetes, Docker, Ansible, GitLab, Flask/Django, and Vue. For data engineering, I’ve worked with Airflow and Dask. For ML/DS work, I’ve used (and developed) many Python libraries, relying primarily on PyTorch for deep learning, along with a host of other libraries for specific applications such as NLP, survival analysis, explainable and interpretable ML, computer vision, and more. Check out my CV for more details!
New Paper! Optimal Predictive-Policy Trees (OP2T)
Given a collection of machine learning (ML) models, which one should practitioners use in a particular scenario, and when? When are the models likely to be error-prone? Should we use a black-box model or an interpretable one? Our new paper proposes a tree-based methodology that produces interpretable policies for adaptive model selection. The result? Improved performance and deeper model insight. Check out the pre-print here!
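To give a flavor of the idea (this is a toy illustration of learned model routing, not the algorithm from the paper): suppose we have two candidate regressors, an interpretable linear model and a black-box random forest, and we fit a shallow decision tree to predict, per input, which candidate makes the smaller error. The shallow tree then acts as a human-readable routing policy. All model choices and data below are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic data: the target is linear in one region of the
# input space and nonlinear in the other.
X = rng.uniform(-2, 2, size=(1000, 2))
y = np.where(X[:, 0] < 0, 3 * X[:, 1], np.sin(3 * X[:, 1]))
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two candidate models: interpretable vs. black-box.
linear = LinearRegression().fit(X_train, y_train)
forest = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Label each training point with the candidate that errs less on it.
err_lin = np.abs(linear.predict(X_train) - y_train)
err_rf = np.abs(forest.predict(X_train) - y_train)
best = (err_rf < err_lin).astype(int)  # 0 = linear, 1 = forest

# A shallow tree over the inputs is an interpretable routing policy.
policy = DecisionTreeClassifier(max_depth=2, random_state=0)
policy.fit(X_train, best)

# Route each test point to whichever model the policy selects.
route = policy.predict(X_test)
y_hat = np.where(route == 0, linear.predict(X_test), forest.predict(X_test))

mse_routed = float(np.mean((y_hat - y_test) ** 2))
mse_linear = float(np.mean((linear.predict(X_test) - y_test) ** 2))
```

Inspecting the fitted `policy` (e.g. with `sklearn.tree.export_text`) shows, in plain threshold rules, where each model is trusted, which is the kind of insight a routing policy can offer beyond raw accuracy.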