Friction: Deciphering Writing Feedback into Writing Revisions through LLM-Assisted Reflection
Chao Zhang, Kexin Ju, Peter Bidoshi, Yu-Chun Grace Yen, and Jeffrey M. Rzeszotarski
To Appear in Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI 2025)
This paper introduces Friction, a novel interface designed to scaffold novice writers in reflective feedback-driven revisions. Effective revision requires mindful reflection upon feedback, but the scale and variability of feedback can make it challenging for novice writers to decipher it into actionable, meaningful changes. Friction leverages large language models to break down large feedback collections into manageable units, visualizes their distribution across sentences and issues through a co-located heatmap, and guides users through structured reflection and revision with adaptive hints and real-time evaluation. Our user study (N=16) showed that Friction helped users allocate more time to reflective planning, attend to more critical issues, develop more actionable and satisfactory revision plans, iterate more frequently, and ultimately produce higher-quality revisions, compared to the baseline system. These findings highlight the potential of human-AI collaboration to foster a balanced approach between maximum efficiency and deliberate reflection, supporting the development of creative mastery.
SceneGenA11y: How can Runtime Generative tools improve the Accessibility of a Virtual 3D Scene?
Xinyun Cao, Kexin Phyllis Ju, Chenglin Li, and Dhruv Jain
To Appear in Extended Abstracts of the 2025 CHI Conference on Human Factors in Computing Systems (CHI EA 2025)
Prior work on runtime generative tools allows users to issue natural language prompts to generate scene-specific and personalized modifications or explanations. One application of this method is enabling personalized accessibility improvements to a virtual 3D scene, such as changing colors to highlight an object. However, the capability and usability of such tools for accessibility are underexplored and have not been formally evaluated. In this work, we propose SceneGenA11y, a system that combines accessibility-specific prompt engineering, multi-modal item identification, and an LLM-powered query and modification loop. The system includes detailed documentation of the categories that can be queried and modified within a 3D scene to enhance accessibility. We conducted a preliminary evaluation of our system with three blind and low-vision people and three deaf and hard-of-hearing people. The results show that our system is intuitive to use and can successfully improve accessibility. We discuss usage patterns of the system, potential improvements, and integration into apps, and conclude by highlighting plans for future work.