Nazneen Fatema Rajani


  • I will be giving an invited talk at the Toronto Machine Learning Summit (TMLS) on using language models to generate explanations for commonsense reasoning.
  • I will be speaking at Dreamforce 2019, both as part of the Research keynote and in a session of my own.
  • I will be giving a talk at UTCS for their FAI series.
  • I am hiring a Research Scientist to work broadly on XAI. If you are interested, please apply here.
  • My research and talk on AI and fairness are featured on the UNESCO page.
  • Giving an invited talk in the AI and Fairness session at the Deep Learning Indaba organized by UNESCO in Nairobi, Kenya.
  • My work was featured in various media outlets: VentureBeat, Datanami, ZDNet, and SiliconANGLE.
  • Released the Commonsense Explanations dataset on GitHub.
  • Our paper on leveraging explanations for commonsense reasoning was accepted at ACL 2019!

I have been a Research Scientist at Salesforce Research since January 2019. My primary research interests are in Natural Language Understanding, Machine Learning, and Explainable Artificial Intelligence (XAI). I am currently working on projects in language modeling with commonsense reasoning, language grounding with vision, gender bias in language modeling, and explainable reinforcement learning. I consider myself very lucky to be able to collaborate on these exciting projects with really smart people.

Before joining Salesforce Research, I defended my Ph.D. thesis in the Machine Learning Research Group of the Department of Computer Science at the University of Texas at Austin. I worked with my advisor, Prof. Ray Mooney, on problems in NLP, Vision, and at the intersection of the two. My research proposed a new ensembling algorithm called Stacking With Auxiliary Features (SWAF). The idea behind SWAF is that an output is more reliable if multiple systems not only agree on it but also agree on “why” that output was predicted. SWAF effectively leverages component models by integrating such relevant information from context to improve ensembling. We demonstrated our approach on challenging structured prediction problems in NLP and Vision, including Information Extraction, Object Detection, and Visual Question Answering.
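The SWAF idea above can be sketched roughly as follows. This is a minimal, hypothetical illustration and not the published implementation: the function names, the use of Jaccard overlap as the auxiliary agreement signal, and the linear meta-scorer are all my assumptions.

```python
# Hypothetical SWAF-style sketch: each candidate output gets a feature
# vector combining the component systems' confidences with auxiliary
# "agreement" features measuring how much their supporting evidence
# overlaps, and a meta-classifier scores the candidate.

def swaf_features(confidences, contexts):
    """Build a SWAF-style feature vector for one candidate output.

    confidences: per-system confidence in the candidate
                 (0.0 for systems that did not propose it).
    contexts:    per-system sets of auxiliary evidence supporting the
                 candidate (e.g. provenance tokens or image regions).
    """
    feats = list(confidences)
    # Pairwise Jaccard overlap of auxiliary evidence: high overlap means
    # the systems agree not just on the output but on *why*.
    n = len(contexts)
    for i in range(n):
        for j in range(i + 1, n):
            union = contexts[i] | contexts[j]
            inter = contexts[i] & contexts[j]
            feats.append(len(inter) / len(union) if union else 0.0)
    return feats

def swaf_score(feats, weights):
    # Linear meta-classifier; in practice the weights would be learned
    # (e.g. via logistic regression or an SVM) on held-out data.
    return sum(w * f for w, f in zip(weights, feats))
```

A candidate proposed by two of three systems with overlapping evidence would thus score higher than one backed by the same confidences but disjoint evidence.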

I have also worked on problems in Explainable AI (XAI), wherein I proposed a scalable approach for generating visual explanations for ensemble methods using the localization maps of the component systems. Evaluating explanations is itself a challenging problem, and I proposed two novel evaluation metrics that do not require human-generated ground truth.
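As a rough illustration of combining component localization maps into an ensemble-level visual explanation (a hypothetical sketch, not the published method: the function name, confidence weighting, and peak normalization are my assumptions):

```python
# Hypothetical sketch: build an ensemble visual explanation by combining
# the component systems' localization (saliency) maps, weighted by each
# system's confidence in the ensemble's output.

def ensemble_localization(maps, weights):
    """maps: equally sized 2-D heatmaps (lists of lists of floats);
    weights: per-system confidences. Returns the weighted sum of the
    maps, rescaled so the hottest cell is 1.0."""
    rows, cols = len(maps[0]), len(maps[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for m, w in zip(maps, weights):
        for r in range(rows):
            for c in range(cols):
                out[r][c] += w * m[r][c]
    # Normalize to [0, 1] so explanations from different ensembles
    # are visually comparable.
    peak = max(max(row) for row in out) or 1.0
    return [[v / peak for v in row] for row in out]
```

Regions highlighted by several confident components dominate the combined map, so the explanation reflects where the ensemble's evidence agrees.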

I completed my MS in CS with a thesis, advised by Jason Baldridge, on new-topic detection from tweets using topical alignment based on their authors and recipients. In the past I have also worked with Maytal Saar-Tsechansky on rumor detection in tweets, and with Inderjit Dhillon on the analysis of time series of tweets as well as fast parallel graph mining.