Synthia: Visually Interpreting and Synthesizing Feedback for Writing Revision
Chao Zhang, Kexin Ju, Zhuolun Han, Yu-Chun Grace Yen, and Jeffrey M. Rzeszotarski
Feedback, Sensemaking, Writing, Revision, Visual Interfaces, Human-AI Interaction
UIST 2025
While recent advances in HCI and generative AI have improved authors' access to feedback on their work, the abundance of critiques can overwhelm writers and obscure actionable insights. We introduce Synthia, a system that visually scaffolds feedback-based writing revision with LLM-powered synthesis. Synthia helps authors strategize their revisions by breaking down large feedback collections into interactive visual bubbles that can be clustered, colored, and resized to reveal patterns and highlight valuable suggestions. Bidirectional highlighting links each feedback unit to its original context and relevant parts of the text. Writers can selectively combine feedback units to generate alternative drafts, enabling rapid, parallel exploration of revision possibilities. These interactions support feedback curation, interpretation, and experimentation throughout the revision process. A within-subjects study (N=12) showed that Synthia helped participants identify more helpful feedback, explore more diverse revisions, and revise with greater intentionality and transparency than a GPT-4-based writing interface.
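To make the bubble-clustering idea concrete, here is a minimal sketch, assuming sentence-transformers and scikit-learn; it groups feedback units by semantic similarity, the kind of clustering that could back a bubble view. It is not Synthia's actual pipeline, and the model name and feedback text are illustrative assumptions.

```python
# Illustrative sketch only (not Synthia's actual pipeline): grouping
# feedback units by semantic similarity. Model name and feedback text
# are assumptions chosen for the example.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

feedback_units = [
    "The introduction buries the thesis statement.",
    "State your thesis in the first paragraph.",
    "Paragraph two repeats the claim from paragraph one.",
    "The conclusion restates earlier points without adding anything.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(feedback_units)            # one vector per unit
labels = KMeans(n_clusters=2, n_init=10).fit_predict(embeddings)

for unit, label in zip(feedback_units, labels):
    print(label, unit)  # units sharing a label would share a bubble cluster
```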
Friction: Deciphering Writing Feedback into Writing Revisions through LLM-Assisted Reflection
Chao Zhang, Kexin Ju, Peter Bidoshi, Yu-Chun Grace Yen, and Jeffrey M. Rzeszotarski
Feedback, Reflection, Sensemaking, Writing, Revision, Creativity, Large Language Models
CHI 2025
This paper introduces Friction, a novel interface designed to scaffold novice writers in reflective, feedback-driven revision. Effective revision requires mindful reflection upon feedback, but the scale and variability of feedback can make it challenging for novice writers to decipher it into actionable, meaningful changes. Friction leverages large language models to break down large feedback collections into manageable units, visualizes their distribution across sentences and issues through a co-located heatmap, and guides users through structured reflection and revision with adaptive hints and real-time evaluation. Our user study (N=16) showed that Friction helped users allocate more time to reflective planning, attend to more critical issues, develop more actionable and satisfactory revision plans, iterate more frequently, and ultimately produce higher-quality revisions compared to a baseline system. These findings highlight the potential of human-AI collaboration to strike a balance between maximum efficiency and deliberate reflection, supporting the development of creative mastery.
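As a rough illustration of the co-located heatmap idea, the toy sketch below tallies hypothetical feedback counts per (sentence, issue) pair and renders them with matplotlib; the sentences, issue types, and counts are invented, and this is not the paper's implementation.

```python
# Toy example of the heatmap idea, not the paper's implementation:
# counts[i][j] = number of feedback units tying sentence i to issue j.
import numpy as np
import matplotlib.pyplot as plt

sentences = ["S1", "S2", "S3"]                 # hypothetical draft sentences
issues = ["clarity", "evidence", "structure"]  # hypothetical issue types
counts = np.array([[2, 0, 1],
                   [0, 3, 0],
                   [1, 1, 2]])

fig, ax = plt.subplots()
ax.imshow(counts, cmap="Reds")
ax.set_xticks(range(len(issues)))
ax.set_xticklabels(issues)
ax.set_yticks(range(len(sentences)))
ax.set_yticklabels(sentences)
ax.set_title("Feedback density per sentence and issue")
plt.show()
```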
More AI Assistance Reduces Cognitive Engagement: Examining the AI Assistance Dilemma in AI-Supported Note-Taking
Xinyue Chen, Kunlin Ruan, Kexin Phyllis Ju, Nathan Yap, Xu Wang
AI Assistance Dilemma, Note-taking, Cognitive Load, Desirable AI Assistance
CSCW 2025 🏆 Best Paper Honorable Mention
As AI tools become increasingly integrated into cognitively demanding tasks like note-taking, questions remain about whether they enhance or compromise cognitive engagement. This paper investigates the "AI Assistance Dilemma" in note-taking, examining how varying levels of AI support impact user engagement and comprehension. In a within-subjects experiment, we asked participants (N=30) to take notes during lecture videos under three conditions: AutomatedAI (high assistance with structured notes), IntermediateAI (moderate assistance with real-time summaries), and MinimalAI (low assistance with a transcript). Results reveal that IntermediateAI yields the highest post-test scores and AutomatedAI the lowest. Participants, however, preferred the automated setup for its perceived ease of use and lower cognitive effort, suggesting a discrepancy between preferred convenience and cognitive benefit. Our study provides insights into designing AI assistance that preserves cognitive engagement, with implications for moderate AI support in cognitive tasks.
AcaMate: Supporting Novice A Cappella Singers in Iterative Individual Practice
Kexin Phyllis Ju, Ting-Yu Pan, and Hao-Wen Dong
A cappella, Feedback, Music Visualizations, Deliberate Practice
UIST 2025 - Poster
Novice singers in collegiate A cappella groups often struggle with individual practice due to limited guidance and the challenges of asynchronous rehearsal. We propose AcaMate, a system designed to support individual practice by integrating group recordings, visualizing musical patterns across voice parts, and providing intuitive feedback to guide iterative and deliberate practice.
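As one way the "visualizing musical patterns across voice parts" idea could be realized, here is an illustrative sketch using librosa's pYIN pitch tracker to plot one pitch contour per voice part; the file names and parameters are hypothetical, and this is not AcaMate's code.

```python
# Illustrative sketch, not AcaMate's code: one pitch contour per voice
# part via librosa's pYIN tracker. File names are hypothetical.
import librosa
import matplotlib.pyplot as plt

for part in ["soprano.wav", "bass.wav"]:  # hypothetical part recordings
    y, sr = librosa.load(part)
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )
    plt.plot(librosa.times_like(f0, sr=sr), f0, label=part)

plt.xlabel("time (s)")
plt.ylabel("f0 (Hz)")
plt.legend()
plt.show()
```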
ACappellaSet: A Multilingual A Cappella Dataset for Source Separation and AI-assisted Rehearsal Tools
Ting-Yu Pan, Kexin Phyllis Ju, and Hao-Wen Dong
NeurIPS 2025 - AI4Music Workshop (To Appear)
A cappella music presents unique challenges for source separation due to its diverse vocal styles and the presence of vocal percussion. Current a cappella datasets are limited in size and diversity, hindering the development of robust source separation models. In this paper, we present ACappellaSet, a collection of 55 professionally recorded a cappella songs performed by three professional groups. In addition, we present experimental results showing that fine-tuning Demucs on ACappellaSet substantially improves vocal percussion (VP) separation, raising VP SDR from 5.22 dB to 7.62 dB. Finally, we discuss future work on AI-driven dataset augmentation and supporting tools for asynchronous a cappella rehearsals.
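For context on the reported numbers, the signal-to-distortion ratio in its plain textbook form can be computed as below; note that the paper's evaluation may use a BSS-eval or SI-SDR variant rather than this basic definition.

```python
# Reference definition only; the paper's evaluation may use a BSS-eval
# or SI-SDR variant rather than this plain form.
import numpy as np

def sdr(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Signal-to-distortion ratio in dB: 10*log10(||s||^2 / ||s - s_hat||^2)."""
    noise = reference - estimate
    return 10 * np.log10(np.sum(reference**2) / np.sum(noise**2))
```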
SceneGenA11y: How Can Runtime Generative Tools Improve the Accessibility of a Virtual 3D Scene?
Xinyun Cao, Kexin Phyllis Ju, Chenglin Li, and Dhruv Jain
CHI 2025 - Late Breaking Work
Prior work on runtime generative tools allows users to generate scene-specific, personalized modifications or explanations through natural-language prompts. One application of this method is to enable personalized accessibility improvements of a virtual 3D scene, like changing colors to highlight an object. However, the capability and usability of such tools for accessibility are underexplored and not formally evaluated. In this work, we propose SceneGenA11y, a system that combines accessibility-specific prompt engineering, multi-modal item identification, and an LLM-powered query and modification loop. The system includes detailed documentation of the categories to query and modify within a 3D scene to enhance accessibility. We conducted a preliminary evaluation of our system with three blind and low-vision people and three deaf and hard-of-hearing people. The results show that our system is intuitive to use and can successfully improve accessibility. We discuss usage patterns of the system, potential improvements, and integration into apps, and conclude by highlighting plans for future work.
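As a hypothetical stand-in for the query-and-modification loop described above, the sketch below routes a natural-language request to either a scene query or a property change; the `llm` stub, the scene dictionary, and the command format are all invented for illustration and are not the system's actual API.

```python
# Hypothetical stand-in for the query/modification loop described above;
# the `llm` stub, scene dictionary, and command format are all invented
# for illustration, not the system's actual API.
scene = {"door": {"color": "brown"}, "lamp": {"brightness": "0.4"}}

def llm(request: str) -> str:
    # A real system would call a chat model here; we hard-code a reply.
    return "MODIFY door color high-contrast-yellow"

def handle(request: str) -> str:
    action, obj, prop, value = llm(request).split(maxsplit=3)
    if action == "QUERY":
        return f"{obj}.{prop} = {scene[obj][prop]}"
    scene[obj][prop] = value  # apply the accessibility modification
    return f"set {obj}.{prop} to {value}"

print(handle("Highlight the door so I can find it"))
```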