publications
2025
- SnuggleSense: Empowering Online Harm Survivors Through a Structured Sensemaking Process. Sijia Xiao, Haodi Zou, Amy Mathews, Jingshu Rui, Coye Cheshire, and Niloufar Salehi. CSCW, 2025.
Online interpersonal harm, such as cyberbullying and sexual harassment, remains a pervasive issue on social media platforms. Traditional approaches, primarily content moderation, often overlook survivors’ needs and agency. We introduce SnuggleSense, a system that empowers survivors through structured sensemaking. Inspired by restorative justice practices, SnuggleSense guides survivors through reflective questions, offers personalized recommendations from similar survivors, and visualizes action plans as interactive sticky notes. A controlled experiment demonstrates that SnuggleSense significantly enhances survivors’ sensemaking compared to an unstructured reflection process. We argue that SnuggleSense fosters community awareness, cultivates a supportive survivor network, and promotes a restorative justice-oriented approach toward restoration and healing. We also discuss design insights, such as tailoring informational support and providing guidance while preserving survivors’ agency.
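The abstract does not specify how the personalized recommendations are computed. Purely as an illustration, one plausible mechanism is nearest-neighbor retrieval over embedded reflections; the data, helper names, and use of sentence-transformers below are all assumptions, not the authors' design.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical store of past reflections and the action plans that helped.
past_cases = [
    ("I was harassed repeatedly in a group chat",
     ["document the messages", "ask a moderator to intervene"]),
    ("Someone shared my photos without my consent",
     ["request a platform takedown", "reach out to a trusted friend"]),
]

def recommend_actions(reflection: str, k: int = 1) -> list[str]:
    """Surface action plans from the k most similar past reflections."""
    texts = [desc for desc, _ in past_cases]
    emb = model.encode(texts + [reflection], normalize_embeddings=True)
    sims = emb[:-1] @ emb[-1]  # cosine similarity of each past case to the new reflection
    top = np.argsort(-sims)[:k]
    return [action for i in top for action in past_cases[i][1]]

print(recommend_actions("People keep posting mean comments about me in a chat"))
```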
2024
- SAGE: System for Accessible Guided Exploration of Health Information. Sabriya M. Alam, Haodi Zou, Reya Vir, and Niloufar Salehi. AAAI Workshop on Public Sector LLMs, 2024.
The Centers for Disease Control and Prevention estimates that six in ten adults in the United States currently live with a chronic disease such as cancer, heart disease, or diabetes. Yet most patients lack sufficient access to comprehensible information and guidance for effective self-management of chronic conditions and remain unaware of gaps in their knowledge. To address this challenge, we introduce SAGE, a System for Accessible Guided Exploration of healthcare information. SAGE is an information system that leverages Large Language Models (LLMs) to help patients identify and fill gaps in their understanding through automated organization of healthcare information, generation of guiding questions, and retrieval of reliable and accurate answers to patient queries. While LLMs may be a powerful intervention for these tasks, they pose risks and lack reliability in such high-stakes settings. One approach to address these limitations is to augment LLMs with Knowledge Graphs (KGs) containing well-structured and pre-verified health information. Thus, SAGE demonstrates how LLMs and KGs can complement each other to aid in the construction and retrieval of structured knowledge. By integrating the flexibility and natural language capabilities of LLMs with the reliability of KGs, SAGE seeks to create a collaborative system that promotes knowledge discovery for informed decision-making and effective self-management of chronic conditions.
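A minimal sketch of the KG-grounding pattern the abstract describes, with hypothetical structures (the toy triples, `retrieve_facts`, and the prompt template are assumptions, not SAGE's implementation): pre-verified facts are retrieved from a knowledge graph and prepended to the prompt so the LLM answers from vetted health information rather than its parametric knowledge alone.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: str
    relation: str
    obj: str

# Toy knowledge graph of pre-verified facts (subject, relation, object).
HEALTH_KG = [
    Triple("type 2 diabetes", "managed_by", "regular blood glucose monitoring"),
    Triple("type 2 diabetes", "risk_factor", "high blood pressure"),
    Triple("metformin", "treats", "type 2 diabetes"),
]

def retrieve_facts(query: str, kg: list[Triple]) -> list[Triple]:
    """Naive retrieval: keep triples whose subject or object appears in the question."""
    q = query.lower()
    return [t for t in kg if t.subject in q or t.obj in q]

def build_grounded_prompt(query: str) -> str:
    """Assemble an LLM prompt constrained to the retrieved, verified facts."""
    facts = retrieve_facts(query, HEALTH_KG)
    fact_lines = "\n".join(f"- {t.subject} {t.relation} {t.obj}" for t in facts)
    return (
        "Answer using ONLY the verified facts below.\n"
        f"Verified facts:\n{fact_lines}\n\n"
        f"Patient question: {query}"
    )

print(build_grounded_prompt("How is type 2 diabetes managed?"))
```

A production system would replace the substring match with entity linking or embedding retrieval over the KG, but the division of labor is the same: the KG supplies reliability, the LLM supplies natural-language flexibility.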
- ALOHa: A New Measure for Hallucination in Captioning Models. Suzanne Petryk*, David Chan*, Anish Kachinthaya, Haodi Zou, John Canny, Joseph E. Gonzalez, and Trevor Darrell. NAACL, 2024.
Despite recent advances in multimodal pre-training for visual description, state-of-the-art models still produce captions containing errors, such as hallucinating objects not present in a scene. The existing prominent metric for object hallucination, CHAIR, is limited to a fixed set of MS COCO objects and synonyms. In this work, we propose a modernized open-vocabulary metric, ALOHa, which leverages large language models (LLMs) to measure object hallucinations. Specifically, we use an LLM to extract groundable objects from a candidate caption, measure their semantic similarity to reference objects from captions and object detections, and use Hungarian matching to produce a final hallucination score. We show that ALOHa correctly identifies 13.6% more hallucinated objects than CHAIR on HAT, a new gold-standard subset of MS COCO Captions annotated for hallucinations, and 30.8% more on nocaps, where objects extend beyond MS COCO categories.
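The matching step lends itself to a compact illustration. The sketch below is not the authors' implementation: sentence-transformers embeddings stand in for the paper's similarity measure, hand-listed object sets stand in for the LLM extraction and object-detection stages, and SciPy's Hungarian solver performs the assignment; the min-over-objects aggregation is one plausible caption-level score.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def aloha_like_score(candidate_objects, reference_objects):
    """Match each candidate object to at most one reference object and
    score it by embedding similarity; low scores flag likely hallucinations."""
    cand = model.encode(candidate_objects, normalize_embeddings=True)
    ref = model.encode(reference_objects, normalize_embeddings=True)
    sim = cand @ ref.T  # cosine similarities (rows: candidates, cols: references)
    # Hungarian matching maximizes total similarity (negate for min-cost form).
    rows, cols = linear_sum_assignment(-sim)
    matched = {r: float(sim[r, c]) for r, c in zip(rows, cols)}
    # Candidates left unmatched (more candidates than references) score 0.
    per_object = [matched.get(i, 0.0) for i in range(len(candidate_objects))]
    # Caption-level score dominated by the least-supported object.
    return min(per_object), per_object

caption_score, object_scores = aloha_like_score(
    ["dog", "frisbee", "umbrella"],        # objects an LLM might extract from the caption
    ["dog", "frisbee", "grass", "person"], # objects from references and a detector
)
print(caption_score, object_scores)
```

Here "umbrella" has no close match among the reference objects, so its low similarity drags down the caption-level score, which is exactly the behavior an open-vocabulary hallucination metric needs.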