VRodoro: Designing an Ambient Time Management Tool for Productive and Healthy Work in VR (Poster Paper)
The 24th IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2025)
https://drive.google.com/file/d/1IMz50RbjbxSoKp5VF0DcmwjcRVcX3-oO/view?usp=drive_link
Dang Tran-Hai, Donghyeon Ko
Virtual Reality (VR) offers deep immersion and enhances user focus and productivity, making it a promising next-generation platform for productivity tools. At the same time, this immersion can lead to "time compression," causing users to underestimate session duration and resulting in prolonged exposure, fatigue, and discomfort.
To explore how time management tools can support productive and healthy VR work, we preliminarily compared physical and VR-integrated Pomodoro timers. We identified a key design trade-off: the VR timer improved time awareness but introduced distractions, while the physical timer supported sustained focus yet caused discomfort due to abrupt alerts and limited continuous awareness.
Based on these findings, we propose VRodoro, an ambient VR Pomodoro timer that subtly communicates session progress through natural environmental transitions, aiming to balance immersive continuity with unobtrusive time awareness.
DoodleSnap: Enhancing Photography for Blind and Visually Impaired Users Through 3D Pen Interactions (Poster Paper)
The 24th IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2025)
https://drive.google.com/file/d/1Kwigc4QFqa4lWbJBe6j_Y5BqbS7oYWI4/view?usp=sharing
Dakyeong Yoon, Seoyeon Hwang, Donghyeon Ko
Photography has become an essential part of daily life, enabling individuals to preserve memories, communicate, and express themselves.
Blind and visually impaired (BVI) users also engage with photography to document experiences and gather visual information, but they frequently encounter substantial challenges, often failing to achieve their goals despite existing voice guidance.
In this paper, we explore the challenges faced by BVI users and introduce DoodleSnap, a novel method using tactile markers created with a 3D pen. DoodleSnap enables BVI users to proactively define specific targets or regions of interest before capturing images. Throughout the paper, we present findings from interviews with seven BVI users, outline key design objectives, and illustrate representative usage scenarios of DoodleSnap.
Typing Haptically: Towards Enabling Non-auditory Smartphone Text Entry with Haptic Feedback for Blind and Low Vision Users
In Proceedings of the 38th Annual ACM Symposium on User Interface Software and Technology (UIST 2025)
https://dl.acm.org/doi/10.1145/3746059.3747801
Jisu Yim, Donghyeon Ko, Taeho Kim, Taejun Kim, Jonggi Hong, and Geehyuk Lee
Text entry on smartphones remains challenging for Blind and Low Vision (BLV) users, particularly in environments where audio feedback is impractical due to noise, privacy, or social stigma.
We present TypeHap, a new system that enables BLV users to type confidently on smartphones using only haptic feedback, without relying on audio. Through formative interviews (N=20), we identified key user needs and iteratively designed a compact, attachable system combining phoneme-based haptic cues, delivered through piezo actuators embedded on both sides of the smartphone, with a tactile overlay on the touchscreen for differentiating keyboard rows. In a four-day study (N=11), BLV participants trained with TypeHap achieved text entry speeds and accuracies comparable to typing with conventional audio feedback.
Participants described TypeHap as liberating in public, noisy, and private contexts where audio feedback falls short. Our findings highlight haptic feedback as a promising alternative to audio-based interaction for enabling more private, accessible smartphone use of BLV users in diverse everyday contexts.
StringTouch: A Non-occlusive 3DoF Haptic Interface Using String Structures for Modulating Finger Sensations
In Proceedings of the 38th Annual ACM Symposium on User Interface Software and Technology (UIST 2025)
https://doi.org/10.1145/3746059.3747658
YoungIn Kim, Jisu Yim, Yohan Yun, Donghyeon Ko, and Geehyuk Lee
To provide realistic and diverse tactile feedback during interactions with objects in virtual and augmented reality, various studies have explored the use of tangible proxies.
However, tangible proxies face limitations due to their fixed physical properties, restricting the expression of varied stiffness, weight, and shape. To address these issues, we propose StringTouch, a device that modulates sensations from proxies without obstructing the fingers, preserving finger sensitivity. StringTouch modulates sensations using a tactor of 0.2 mm thin nylon threads to deform the fingers with 3 DoF. In a user study (n = 12), our string structure performed better at distinguishing orientation, roughness, and weight than a condition using a 0.1 mm latex finger cot, and was comparable to bare fingers in some of the discrimination tasks.
Another experiment (n = 12) verified the device’s capability to modulate orientation, stiffness, and weight perceptions. Finally, in a user study (n = 10) in proxy-based VR scenarios (pouring water, touching a teddy bear, touching a bottle), participants preferred StringTouch over bare finger interactions, with most of them reporting enhanced presence.
We present FlexBoard, an interaction prototyping platform that enables rapid prototyping with interactive components such as sensors, actuators and displays on curved and deformable objects.
FlexBoard offers the rapid prototyping capabilities of traditional breadboards but is also flexible enough to conform to different shapes and materials. FlexBoard's bendability is enabled by replacing the rigid body of a breadboard with a flexible living hinge that holds the metal strips from a traditional breadboard while maintaining the standard pin spacing. In addition, FlexBoards are shape-customizable: they can be cut to a specific length and joined together to form larger prototyping areas.
We discuss FlexBoard’s mechanical design and present a technical evaluation of its bendability, adhesion to curved and deformable surfaces, and holding force of electronic components. Finally, we show the usefulness of FlexBoard through 3 application scenarios with interactive textiles, curved tangible user interfaces, and VR.
We demonstrate FlexBoard, a flexible breadboard that enables interaction prototyping with electronic components such as sensors, actuators, and displays on curved and deformable objects.
We show how FlexBoard offers flexible, bidirectional bending to conform to different shapes and materials while retaining the rapid prototyping capabilities of the traditional breadboard. FlexBoard's bendability is enabled by a flexible living hinge that replaces the rigid body of a traditional breadboard. FlexBoard holds the same metal strips as a traditional breadboard, maintaining the standard pin spacing for compatibility. In addition, FlexBoards are shape-customizable: users can cut a FlexBoard to a specific length and join pieces together to cover prototyping areas of various sizes.
We present the fabrication process of FlexBoard and three application scenarios with interactive textiles, curved tangible user interfaces, and VR devices.
We propose TF-Shell, a thermoformable shell that allows repeatable thermoforming. Due to the low thermal conductivity of typical printing materials such as polylactic acid (PLA), thermoforming 3D-printed objects is largely limited. By embedding TF-Shell, users can thermoform target parts in diverse ways. Moreover, the deformed structures can be restored by reheating.
In this demo, we introduce the TF-Shell and demonstrate four thermoforming behaviors with the TF-Shell embedded figure. With our approach, we envision bringing the value of hands-on craft to digital fabrication.
The 3D pen has become a popular crafting tool in which hands-on deformations play a large role. However, because malleable states are invisible, users might get burned, or their fabrication might fail.
We designed a thermochromic 3D filament, ChromoFilament, that displays the malleable states in three different colors according to the associated temperatures.
From color design workshops, we identified proper stages of malleability and design considerations for color combinations, which are applied to ChromoFilament.
Next, we describe how to fabricate ChromoFilament, from customizing thermochromic ink to extruding filament from the coated pellets.
Finally, we illustrate users' distinctive behaviors with ChromoFilament to show the effects of visible malleable states. We believe that our material-perspective approach, design process, and series of findings could inspire not only creativity support through thermoforming but also heat-based processing in 3D printing more broadly.
We propose SensorViz, a visualization tool that supports novice makers during different stages of prototyping with sensors.
SensorViz provides three modes of visualization: (1) visualizing datasheet specifications before buying sensors, (2) visualizing sensor interaction with the environment via AR before building the physical prototype, and (3) visualizing live/recorded sensor data to test the assembled prototype.
SensorViz includes a library of visualization primitives for different types of sensor data and a sensor database builder, which, once a new sensor is added, automatically creates a matching visualization by composing visualization primitives.
Our user study with 12 makers shows that users are more effective in selecting sensors and configuring sensor layouts using SensorViz compared to traditional prototyping using datasheets and manual testing on the prototype. Our post hoc interviews indicate that SensorViz reduces trial and error by allowing makers
As young children’s screen time has significantly increased in recent years, their healthy use of digital media is a topic of interest. To support children’s healthy screen use, we are developing Romi, a physical screen peripheral device interface.
Romi was designed to help children transition from on-screen to off-screen activities by reducing negative experiences when ending screen time.
To specify the design concept, we conducted preliminary interviews and experience prototyping focusing on the screen-time transition. Based on three design attributes (peripheral presence, a connection between the screen and the device, and a friendly character), we implemented the Romi interface. We believe our approach will provide motivation and inspiration for designing physical interfaces that support children's self-regulation.
Defining basic archetypes of an intelligent agent’s behaviour in an open-ended interactive environment
Digital Creativity, 31(2), 2020.
Richard Chulwoo Park, Donghyeon Ko, Hyunjung Kim, Seung Hyeon Han, Jee Bin Lim, Jiseong Goo, Geehyuk Lee & Woohun Lee
Despite the emergence of advanced technologies, the behavioural complexities of intelligent agents remain underexplored, especially from the viewpoint of their potential in interactive installations. This study aims to explore the future interactivity of an agent's behaviour by envisioning speculative future human operations using a human-controlled interactive installation called LumiLand.
This empirical study reveals that an agent can craft a user experience by (1) controlling the plot, (2) cocreating content with a user in a social manner, and (3) promptly adjusting undefined behaviours and situations. Based on Janlert and Stolterman’s interaction model, we present an interaction flow with four primary classifications: Leading, Responding, Poking, and Linking. We believe that a human-centered approach for understanding agent interactivity could help create entertaining human–computer interactions (HCI) by improving the agent design and exploring the challenges that will be faced during such interactions.
Recently, adopting a hands-on approach to conventional 3D fabrication has been attracting attention due to its advantages for design activity. In this context, we aim to support hands-on design activity in digital fabrication by designing internal structures that alleviate the issues of external heating for shape deformation.
As a first step, we simulate four simple structures with Computational Fluid Dynamics (CFD) simulation to investigate effective structural parameters, such as the cavity ratio, cavity geometry, and exposure to the heat source, that influence thermal properties and deformation in a malleable state.
Through a pilot experiment, we found that the simulation results of the basic structures are valid, that the structures are stable in a malleable state, and that the parameters are effective. In the future, we will design functional structures based on the explored parameters and embed them into various topologies.
Investigating the effect of digitally augmented toys on young children’s social pretend play
Digital Creativity, 30(3), 2019.
Jiwoo Hong, Donghyeon Ko, and Woohun Lee
Taking an interaction design approach, we explore how children perceive augmented toys, assign symbolic meaning, and perform pretence socially in technology-mediated play.
We developed a system with three kinds of toys—each with distinct abstract appearances and audiovisual augmentation—based on several design decisions.
An observational user study with thirty-two young children aged 3–7 years revealed that children utilized digital augmentation as a facilitator of pretending behaviour with the possibility for subjective interpretation while substituting an object for another. Digital augmentation was not only a social cue for children to respond by gathering together and negotiating socially for mutual pretence—it was also a hindrance by amplifying conflicts in a shared space. Our study empirically clarifies children’s cognition and behaviour in an interactive system for pretend play and provides broader insight for the design of an interface enriching symbolic interpretation and social interaction.
Despite the popularity of fish as pets, little is known about the fishkeeping experience and its related interactions. This study therefore examines the fishkeeping experience by supporting people's actions through a technology-mediated system.
Based on the results, we developed an interactive system called BubbleTalk that helps people convey their actions into a fish tank using bubbles.
A user study conducted with BubbleTalk showed that interaction through the system changed people's behavior, prolonged their interactions, and thus reshaped their relationship with their fish. Beyond the implications for fishkeeping, we believe our findings can serve as insight and further motivation for overcoming interactions limited by physically disconnected environments.
Nowadays, concepts for connecting mobile devices to view photos together are easy to find. However, despite the increasing interest in multi-device single display groupware (multi-device SDG), most existing research is limited to rectangular arrays that merely enlarge the display.
We suggest a new way of assembling mobile devices to create unique forms, such as rings, bars, and radial displays, and develop three games with them. During the development process, we conducted generative workshops to understand the inter-device interaction characteristics of uniquely formed multi-device SDG. Directionality, inter-device space, and tangible interactions emerged from the workshops.
Through a user study of the developed games, we refined these characteristics and found additional design issues: sitting and ownership. The inter-device interaction characteristics and issues of multi-device SDG obtained from this study could be applied generally to collocated multi-mobile interactions.