I am the Robustness Research Lead at Hugging Face 🤗. My interests and expertise lie at the intersection of Model Evaluation, Robustness, and Interpretability. My long-term research agenda includes: 1. Dataset curation for understanding training and evaluation data, 2. Human-in-the-loop training and evaluation for model robustness, 3. Repurposing LLMs and PLMs to make them more usable in the real world. Check out my HF Spaces to see what I am building.
Before joining Hugging Face, I was a Senior Research Scientist at Salesforce Research, where I worked with Richard Socher and Caiming Xiong on commonsense reasoning and interpretability in NLP. I led a small team focused on building robust natural language generation models. Prior to Salesforce, I completed my Ph.D. in the Department of Computer Science at The University of Texas at Austin, as part of the Machine Learning Research Group. I worked with my advisor, Prof. Ray Mooney, on problems in NLP, Vision, and at the intersection of the two. I have also worked on problems in Explainable AI (XAI), where I proposed a scalable approach to generating visual explanations for ensemble methods using the localization maps of the component systems. Since evaluating explanations is itself a challenging problem, I also proposed two novel evaluation metrics that do not require human-generated ground truth.
I completed my M.S. in Computer Science with a thesis, advised by Jason Baldridge, on detecting new topics in tweets using topical alignment based on their author and recipient.