Announcement: In July 2024, I will join the UCLA Samueli School of Engineering as an Assistant Professor 🌴! I am actively looking for motivated students to start in Fall 2024. If you're interested, apply to UCLA's CS PhD program and mention me as a potential advisor in your application. You can also send me an email (firstname.lastname@example.org), though I can't promise individual responses and will not consider applications until after December. You can find out more about my research agenda here. At NYU and UCLA, I'll be running the Misinformation, AI and Responsible Society (MARS) Lab.
I am an NYU Data Science Faculty Fellow affiliated with ML2, the Center for Responsible AI (RAI) and the Alignment Research Group (ARG). I also work with the wonderful Prof. Marzyeh Ghassemi as an MIT CSAIL Postdoctoral Fellow. Previously, I received my PhD from the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where I was very fortunate to be advised by Prof. Yejin Choi and Prof. Franziska Roesner. My work focuses on measuring the factuality and intent of human-written language. Two key dimensions of machine reasoning that excite me are social commonsense reasoning and fairness in NLP. During my PhD, I interned at SRI, the AI2 Mosaic group and MSR.
December 2023: Tutorial co-chair for NeurIPS 2023.
November 2023: Invited talk at NYU CDS Seminar.
November 2023: Guest lecture in NLP at MIT.
November 2023: Invited talk at Northeastern.
November 2023: Presenting at NYU-KAIST Inclusive AI Workshop.
October 2023: Invited talk at Mount Holyoke College.
October 2023: Guest lecture on AI Ethics at Oakton College.
September 2023: Co-teaching my first class as a professor (NYU Data Science Capstone).
September 2023: Thank you to MIT (Generative AI Impact Award) and Cohere for $61,000 in grant support over the summer. I look forward to discussing the funded projects!
August 2023: New paper on LLMs for mental health prediction.
August 2023: New paper and dataset (Socratis) exploring capabilities of multimodal models for understanding emotional reactions to images.
June 2023: Panelist at CHIL 2023 on LLMs for healthcare.
June 2023: Talk at Spotify NYC.
April 2023: Invited talks at UCLA, MIT and Princeton.
March 2023: Guest lectures at the University of Washington (Undergraduate NLP, CSE 447) and Carnegie Mellon University (Computational Ethics, CS 11-830).
March 2023: Invited talks at the University of Chicago, Northeastern and Cornell.
February 2023: Invited talks at the University of Pittsburgh, University of Michigan, UMass Amherst, Boston University and Johns Hopkins.
January 2023: Invited talks at Heriot-Watt and Emory.
October 2022: New paper on testing the robustness of NLI and hate speech classifiers with generated adversaries accepted to EMNLP Findings!
August 2022: Guest lecture in UW Intro to Machine Learning course (CSE 416).
July 2022: Named an outstanding reviewer for NAACL 2022.
July 2022: Socio-Cultural Inclusion co-chair for NAACL 2022.
May 2022: Our team's proposal to investigate misinformation and social biases will be part of a new TACC high-performance computing program initiative.
April 2022: Invited talk at Cornell JEDI dialogues seminar.
February 2022: Two papers accepted to ACL 2022 main conference!
February 2022: DARPA SemaFor keynote talk on Misinfo Reaction Frames.
December 2021: Invited talk at Stanford NLP seminar.
October 2021: Presenting at MIT EECS Rising Stars Workshop.
July 2021: Co-organizing Safety for E2E Conversational AI at SIGDIAL 2021.
May 2021: Work on evaluating the effectiveness of factuality metrics for summarization (GO FIGURE) accepted to ACL 2021 Findings!
April 2021: New preprint on defending against misinformation.
January 2021: Invited talk at UMass Amherst Rising Stars Seminar.
December 2020: Paragraph-level Commonsense Transformers accepted to AAAI 2021.
December 2020: Presenting at NeurIPS 2020 Resistance AI Workshop.
October 2020: Presented on Social and Power Implications of Language at UW colloquium.
September 2020: Presented on summarization with cooperative generator-discriminator networks and detection of implicit social biases in text at BBN Technologies.
July 2020: Presented as part of Voice Tech Global panel on implicit bias towards the Black community and conversational AI.
Mental-LLM: Leveraging Large Language Models for Mental Health Prediction via Online Text Data
Xuhai Xu, Bingsheng Yao, Yuanzhe Dong, Saadia Gabriel, Hong Yu, James Hendler, Marzyeh Ghassemi, Anind K. Dey, Dakuo Wang.
Can Machines Learn Morality? The Delphi Experiment
Liwei Jiang, Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jenny Liang, Jesse Dodge, Keisuke Sakaguchi, Maxwell Forbes, Jon Borchardt, Saadia Gabriel, Yulia Tsvetkov, Oren Etzioni, Maarten Sap, Regina Rini, Yejin Choi.
Socratis: Are large multimodal models emotionally aware?
Katherine Deng, Arijit Ray, Reuben Tan, Saadia Gabriel, Bryan Plummer, Kate Saenko.
ICCV 2023 WECEIA.
NaturalAdversaries: Can Naturalistic Adversaries Be as Effective as Artificial Adversaries?
Saadia Gabriel, Hamid Palangi, Yejin Choi.
EMNLP 2022 Findings.
GO FIGURE: A Meta Evaluation of Factuality in Summarization
Saadia Gabriel, Asli Celikyilmaz, Rahul Jha, Yejin Choi, Jianfeng Gao.
ACL 2021 Findings.
Discourse Understanding and Factual Consistency in Abstractive Summarization
Saadia Gabriel, Antoine Bosselut, Jeff Da, Ari Holtzman, Jan Buys, Kyle Lo, Asli Celikyilmaz, Yejin Choi.
Detecting and Tracking Communal Bird Roosts in Weather Radar Data
Zezhou Cheng, Saadia Gabriel, Pankaj Bhambhani, Daniel Sheldon, Subhransu Maji, Andrew Laughlin, David Winkler.
The Risk of Racial Bias in Hate Speech Detection
Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, Noah A. Smith.
ACL 2019. Best Paper Nominee.
Early Fusion for Goal Directed Robotic Vision
Aaron Walsman, Yonatan Bisk, Saadia Gabriel, Dipendra Misra, Yoav Artzi, Yejin Choi, Dieter Fox.
IROS 2019. Best Paper Nominee.