Towards Personalized Navigation in XR: Design Recommendations to Accommodate Individual Differences
J. Lee, W. Stuerzlinger (2025). Towards Personalized Navigation in XR: Design Recommendations to Accommodate Individual Differences, IEEE LocXR '25, 4 pages. 2025-03.
Abstract: Navigation interfaces in Extended Reality (XR) have traditionally targeted universal solutions that perform well for all users. However, research has shown that users exhibit distinct preferences and performance patterns when using different navigation techniques. This position paper argues for the necessity of and strategies for designing personalized navigation interfaces that accommodate individual differences in spatial abilities, navigation strategies, and individual needs. Drawing from empirical findings from previous work investigating locomotion and wayfinding techniques and research in spatial cognition and navigation, we demonstrate how different user groups respond uniquely to navigation interface components. Based on these insights, we propose design recommendations for developing adaptive navigation interfaces that cater to individual user characteristics while maintaining usability. Furthermore, we discuss opportunities for standardization in user assessment, interface adaptation, and inclusive design. This approach could lead to more inclusive and effective navigation solutions for XR environments.
Scaling Technique for Exocentric Navigation Interface in Multiscale Virtual Environments
J. Lee, W. Stuerzlinger (2025). Scaling Technique for Exocentric Navigation in Multiscale Virtual Environments, IEEE TVCG '25, 9 pages. 2025-03
Abstract: Navigating multiscale virtual environments necessitates an interaction method to travel across different levels of scale (LoS). Prior research has studied various techniques that enable users to seamlessly adjust their scale to navigate between different LoS based on specific user contexts. We introduce a scroll-based scale control method optimized for exocentric navigation, targeted at scenarios where speed and accuracy in continuous scaling are crucial. We pinpoint the challenges of scale control in settings with multiple LoS and evaluate how distinct designs of scaling techniques influence navigation performance and usability. Through a user study, we investigated two pivotal elements of a scaling technique: the input method and the scaling center. Our findings indicate that our scroll-based input method significantly reduces task completion time and error rate and enhances efficiency compared to the most frequently used bi-manual method. Moreover, we found that the choice of scaling center affects the ease of use of the scaling method, especially when paired with specific input methods.
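The interplay of input method and scaling center described above can be illustrated with a minimal sketch. This is a hypothetical illustration of the general idea, not the paper's implementation; all names and constants are assumptions.

```python
# Sketch: continuous scaling of a viewpoint about a chosen scaling center.
# Hypothetical illustration; names and constants are not from the paper.

def scale_about_center(position, center, factor):
    """Scale a 3D position about a pivot point by `factor`.

    Keeping `center` fixed means the pivot stays visually anchored,
    while everything else moves toward (factor < 1) or away from
    (factor > 1) it.
    """
    return tuple(c + factor * (p - c) for p, c in zip(position, center))

def scroll_to_factor(ticks, base=1.1):
    """One scroll 'tick' multiplies the scale factor, so repeated ticks
    give exponential (perceptually uniform) control across levels of scale."""
    return base ** ticks

viewpoint = (2.0, 0.0, 0.0)
pivot = (0.0, 0.0, 0.0)           # e.g., the point the user is looking at
f = scroll_to_factor(-1)          # one scroll tick to zoom in
new_viewpoint = scale_about_center(viewpoint, pivot, f)
```

The choice of `pivot` is exactly the "scaling center" design variable: anchoring it at the gaze point versus the user's body changes how the environment appears to move during scaling.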
Designing Viewpoint Transition Techniques in Multiscale Virtual Environments
J. Lee, P. Asente, W. Stuerzlinger (2023). Designing Viewpoint Transition Techniques in Multiscale Virtual Environments, IEEE VR '23, 9 pages. 2023-03
Abstract: Viewpoint transitions have been shown to improve users' spatial orientation and help them build a cognitive map when they are navigating an unfamiliar virtual environment. Previous work has investigated transitions in single-scale virtual environments, focusing on trajectories and continuity. We extend this work with an in-depth investigation of transition techniques in multiscale virtual environments (MVEs). We identify challenges in navigating MVEs with nested structures and assess how different transition techniques affect spatial understanding and usability. Through two user studies, we investigated transition trajectories, interactive control of transition movement, and speed modulation in a nested MVE. We show that some types of viewpoint transitions enhance users' spatial awareness and confidence in their spatial orientation and reduce the need to revisit a target point of interest multiple times.
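Speed modulation of the kind studied above can be sketched with a standard easing function that slows the camera near both endpoints of the trajectory. This is a hypothetical illustration of one common approach, not the specific technique evaluated in the paper.

```python
# Sketch: speed-modulated viewpoint transition along a straight trajectory.
# Hypothetical illustration; not the paper's evaluated technique.

def smoothstep(t):
    """Cubic easing: zero velocity at t = 0 and t = 1, so the camera
    accelerates out of the start and decelerates into the target."""
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

def interpolate(start, end, t):
    """Camera position at normalized time t with eased timing."""
    s = smoothstep(t)
    return tuple(a + s * (b - a) for a, b in zip(start, end))
```

Replacing the straight-line interpolation with an orbit-then-approach path, or letting the user scrub `t` interactively, corresponds to the trajectory and interactive-control variables the studies compare.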
A Comparison of Zoom-In Transition Methods for Multiscale VR
J. Lee, P. Asente, W. Stuerzlinger (2022). A Comparison of Zoom-In Transition Methods for Multiscale VR, ACM SIGGRAPH '22, Poster, 2 pages. 2022-08
Abstract: When navigating within an unfamiliar virtual environment in VR, transitions between pre-defined viewpoints are known to facilitate a user's spatial awareness. Previously, different viewpoint transition techniques have been investigated, but mainly for single-scale environments. We present a comparative study of zoom-in transition techniques, where the viewpoint of a user is smoothly transitioned from a large level of scale (LoS) to a smaller LoS in a multiscale virtual environment (MVE) with a nested structure. We find that orbiting first before zooming in is preferred over the alternatives when transitioning to a viewpoint at a small LoS.
Evaluating Automatic Parameter Control Methods for Locomotion in Multiscale Virtual Environments
J. Lee, P. Asente, B. Kim, Y. Kim, W. Stuerzlinger (2020). Evaluating Automatic Parameter Control Methods for Locomotion in Multiscale Virtual Environments, ACM VRST '20, 10 pages. 2020-11
Abstract: Virtual environments with a wide range of scales are becoming commonplace in Virtual Reality applications. Methods to control locomotion parameters can help users explore such environments more easily. For multiscale virtual environments, point-and-teleport locomotion with a well-designed distance control method can enable mid-air teleportation, which makes it competitive with flying interfaces. Yet, automatic distance control for point-and-teleport has not been studied in such environments. We present a new method to automatically control the distance for point-and-teleport. In our first user study, we used a solar system environment to compare three methods: automatic distance control for point-and-teleport, manual distance control for point-and-teleport, and automatic speed control for flying. Results showed that automatic control significantly reduces overshoot compared with manual control for point-and-teleport, but the discontinuous nature of teleportation made users prefer flying with automatic speed control. We conducted a second study to compare automatic-speed-controlled flying and two versions of our teleportation method with automatic distance control, one incorporating optical flow cues. We found that point-and-teleport with optical flow cues and automatic distance control was more accurate than flying with automatic speed control, and both were equally preferred to point-and-teleport without the cues.
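The idea of automatic distance control for point-and-teleport can be sketched as a mapping from the pointing gesture to a jump distance that also accounts for the user's current level of scale. This is a hypothetical mapping for illustration only; the function, gain curve, and constants are assumptions, not the paper's method.

```python
# Sketch: automatic teleport-distance control for point-and-teleport in a
# multiscale environment. Hypothetical mapping, not the paper's method.

def teleport_distance(pitch_deg, user_scale, base=5.0, max_mult=20.0):
    """Map controller pitch (0 = horizontal, 90 = straight up) to a jump
    distance. Raising the controller lengthens the jump nonlinearly;
    multiplying by `user_scale` keeps jumps proportionate whether the
    user is at planetary or human scale."""
    t = max(0.0, min(1.0, pitch_deg / 90.0))
    multiplier = 1.0 + (max_mult - 1.0) * t * t   # quadratic gain curve
    return base * multiplier * user_scale
```

Because the distance is computed automatically from the gesture and scale, the user never dials it in manually, which is the property the study credits with reducing overshoot.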
Moving Target Selection: A Cue Integration Model
B. Lee, S. Kim, A. Oulasvirta, J. Lee, E. Park (2018). Moving Target Selection: A Cue Integration Model, ACM CHI '18, 10 pages. 2018-05
Abstract: This paper investigates a common task requiring temporal precision: the selection of a rapidly moving target on display by invoking an input event when it is within some selection window. Previous work has explored the relationship between accuracy and precision in this task, but the role of visual cues available to users has remained unexplained. To expand modeling of timing performance to multimodal settings, common in gaming and music, our model builds on the principle of probabilistic cue integration. Maximum likelihood estimation (MLE) is used to model how different types of cues are integrated into a reliable estimate of the temporal task. The model deals with temporal structure (repetition, rhythm) and the perceivable movement of the target on display. It accurately predicts error rate in a range of realistic tasks. Applications include the optimization of difficulty in game-level design.
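The MLE cue-integration principle the model builds on can be sketched for two Gaussian cues: estimates are combined with inverse-variance weights, and the combined (lower-variance) estimate predicts the chance of responding inside the selection window. The variable names and numbers below are illustrative, not the paper's parameters.

```python
import math

# Sketch of MLE integration of two noisy timing cues (e.g., rhythm and
# visual motion). Illustrative only; not the paper's fitted model.

def integrate_cues(mu1, var1, mu2, var2):
    """Optimally combine two Gaussian cue estimates (maximum likelihood):
    inverse-variance weighting of the means, and a combined variance that
    is always at most the smaller of the two."""
    w1 = (1 / var1) / (1 / var1 + 1 / var2)
    mu = w1 * mu1 + (1 - w1) * mu2
    var = (var1 * var2) / (var1 + var2)
    return mu, var

def error_rate(mu, var, window_center, window_half_width):
    """P(miss): probability the response time falls outside the window."""
    sd = math.sqrt(var)

    def cdf(x):
        return 0.5 * (1 + math.erf((x - mu) / (sd * math.sqrt(2))))

    hit = (cdf(window_center + window_half_width)
           - cdf(window_center - window_half_width))
    return 1 - hit
```

The reliability gain from integration (`var` shrinking below either cue's variance) is what lets multimodal cues, such as rhythm plus visible motion, lower the predicted error rate.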
Reflector: Distance-Independent, Private Pointing on a Reflective Screen
J. Lee, S. Kim, M. Fukumoto, B. Lee (2017). Reflector: Distance-Independent, Private Pointing on a Reflective Screen, ACM UIST '17, 10 pages. 2017-10
Abstract: Reflector is a novel direct pointing method that utilizes hidden design space on reflective screens. By aligning a part of the user’s onscreen reflection with objects rendered on the screen, Reflector enables (1) distance-independent and (2) private pointing on commodity screens. Reflector can be implemented easily in both desktop and mobile conditions through a single camera installed at the edge of the screen. Reflector’s pointing performance was compared to today’s major direct input devices: eye trackers and touchscreens. We demonstrate that Reflector allows the user to point more reliably, regardless of distance from the screen, compared to an eye tracker. Further, due to the private nature of an onscreen reflection, Reflector shows a shoulder surfing success rate 20 times lower than that of touchscreens for the task of entering a 4-digit PIN.