Virtual agents are common assistance tools for navigation and interaction in Virtual Reality (VR) applications such as tours, training, and education. It has been demonstrated that the gaits, gestures, gazes, and positions of virtual agents are major factors affecting the user's perception and experience in seated and standing VR. In this paper, we present a novel position-aware virtual agent locomotion method, called PAVAL, that performs virtual agent positioning (position+orientation) in real time for room-scale VR navigation assistance. We first analyze design guidelines for virtual agent locomotion and model the problem using the positions of the user and the surrounding virtual objects. We then conduct a one-off preliminary study to collect subjective data and present a model that predicts virtual agent positioning for a fixed user position. Based on this model, we propose an algorithm that optimizes the object of interest, the virtual agent position, and the virtual agent orientation in sequence. As a result, during user navigation in a virtual scene, the virtual agent automatically moves in real time and introduces virtual object information to the user. We evaluate PAVAL and two alternative methods via a user study with humanoid virtual agents in various scenes, including a virtual museum, a factory, and a school gym. The results reveal that our method outperforms the baseline condition.
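To make the sequential structure of the optimization concrete, the following is a minimal sketch of a three-stage pipeline of the kind the abstract describes: first choose an object of interest, then a position for the agent, then an orientation. All scoring terms, weights, the candidate-sampling strategy, and helper names here are illustrative assumptions, not the actual model from the paper.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float
    y: float
    theta: float  # heading in radians

def select_object_of_interest(user, objects):
    # Assumption: interest falls off with distance and with angular offset
    # from the user's heading (a crude stand-in for gaze direction).
    def score(obj):
        dx, dy = obj[0] - user.x, obj[1] - user.y
        dist = math.hypot(dx, dy)
        off = abs(math.atan2(dy, dx) - user.theta)
        return -dist - 2.0 * min(off, 2 * math.pi - off)
    return max(objects, key=score)

def select_agent_position(user, obj, n_samples=64, radius=1.2):
    # Sample candidate positions on a circle around the user and score each:
    # the agent should stand off to the side rather than between the user and
    # the object, yet stay close enough to the object to "present" it.
    # The weights below are made up for illustration.
    to_obj = math.atan2(obj[1] - user.y, obj[0] - user.x)
    best, best_score = None, -math.inf
    for i in range(n_samples):
        a = 2 * math.pi * i / n_samples
        px = user.x + radius * math.cos(a)
        py = user.y + radius * math.sin(a)
        blocking = max(math.cos(a - to_obj), 0.0)  # 1.0 = directly in the way
        obj_dist = math.hypot(obj[0] - px, obj[1] - py)
        s = -1.5 * blocking - 0.3 * obj_dist
        if s > best_score:
            best, best_score = (px, py), s
    return best

def select_agent_orientation(agent_xy, user):
    # Face the user so the agent can address them while introducing the object.
    return math.atan2(user.y - agent_xy[1], user.x - agent_xy[0])

user = Pose2D(0.0, 0.0, 0.0)
objects = [(3.0, 0.5), (-2.0, 4.0)]
obj = select_object_of_interest(user, objects)
pos = select_agent_position(user, obj)
theta = select_agent_orientation(pos, user)
print(f"object={obj}, position=({pos[0]:.2f}, {pos[1]:.2f}), heading={theta:.2f} rad")
```

In a real-time setting, a pipeline like this would be re-run each frame (or whenever the user moves beyond a threshold), with the resulting pose fed to a locomotion controller that animates the agent toward it.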