Nazneen Fatema Rajani


News

  • Honored to serve on the United Nations' AI Advisory Board [link]
  • Featured in the NYT cover article on RLHF [link]
  • Zephyr 7B performs as well as ChatGPT [paper]
  • Transformers United seminar lecture [slides]
  • Invited talk at Stanford's NLP Seminar [slides]
  • Invited talk at the UC Berkeley Decentralization Summit [slides]
  • Keynote at the AI Conference [link]
  • Tutorial on Trustworthy Generative AI at ICML '23 and FAccT '23 [slides]
  • Fireside chat with Pear VC on Generative AI [blog]
  • Guest lectures at UT Austin, UMich Ann Arbor, and Georgia Tech, March-April '23 [slides]
  • Invited talk at NIST on March 2, '23 [recording]
  • Keynote at the EMNLP '22 Industry Track

I am a Research Lead at Hugging Face 🤗. I am currently part of the H4 team, which is building an open-source alternative to ChatGPT. While building such a powerful LLM, I think deeply about evaluation, red-teaming, alternatives to RLHF, and model interpretability in the era of LLMs. Check out my blog post for musings as well as empirical results on these topics.

My interests and expertise lie at the intersection of model evaluation, robustness, and interpretability. My long-term research agenda includes (1) dataset curation for understanding training and evaluation data, (2) human-in-the-loop training and evaluation for model robustness, and (3) repurposing LLMs and PLMs to make them more usable in the real world. Check out my HF Spaces to see what I am building.

Before joining Hugging Face, I was a Senior Research Scientist at Salesforce Research, where I worked with Richard Socher and Caiming Xiong on commonsense reasoning and interpretability in NLP. I led a small team focused on building robust natural language generation models. Prior to Salesforce, I completed my Ph.D. in the Department of Computer Science at the University of Texas at Austin, in the Machine Learning Research Group. I worked with my advisor, Prof. Ray Mooney, on problems in NLP, in vision, and at their intersection. I have also worked on Explainable AI (XAI), proposing a scalable approach to generating visual explanations for ensemble methods using the localization maps of the component systems. Because evaluating explanations is itself a challenging problem, I also proposed two novel evaluation metrics that do not require human-generated ground truth.

I completed my MS in CS with a thesis, advised by Jason Baldridge, on detecting new topics in tweets using topical alignment based on their authors and recipients.